Monday, January 25, 2010

Create a Learning Culture

Want to unlock the potential of your employees? Want to harness the power of their ideas? Need to gain their cooperation and active participation in making the company better?

Create a Learning Organization.

What is a learning organization? A learning organization is one that places a high value on the knowledge and experience of individuals, but understands the vulnerability of not transferring that knowledge into a system through which others can share it and benefit from it. A learning organization seeks out the best practices of individuals, recognizes them, and integrates them into the process for all to use, thereby spreading the benefit of those best practices across the company. A learning company recognizes that its people are its greatest asset and the source of its wealth and success.

There are several key steps that need to be taken to facilitate the transition to a learning organization. Upfront planning and commitment are key. A primary concern, early after the decision to create a learning culture, is whether we will be properly prepared to support the interest in learning that we create. If we are not ready, apathy will result, so before we go forward with grand communication plans, goals, and objectives for learning, we need to spend some time setting up the infrastructure for learning. One approach is to visualize and graphically represent the various career paths through the organization. This can be done through a career mapping initiative. The goal of this initiative is to thoroughly understand the key skill needs within the organization by position, streamline and standardize the position criteria, and map all of the potential career paths through the organization. A sample of how this might look is included here:


In this example, you can see that there are four main tracks of progression through the organization, but there are cross-connections along the way, allowing people to move between tracks into other areas of interest.
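A career map with cross-connections is effectively a directed graph, which makes it easy to store and query. Here is a sketch using hypothetical position titles (none of these come from a real organization chart):

```python
# Hypothetical career map: position -> positions reachable from it.
# All titles here are illustrative assumptions, not a real org chart.
CAREER_MAP = {
    "Technician": ["Senior Technician", "Quality Inspector"],
    "Senior Technician": ["Manufacturing Engineer"],
    "Quality Inspector": ["Quality Engineer"],
    "Quality Engineer": ["Six Sigma Black Belt", "Quality Manager"],
    "Manufacturing Engineer": ["Six Sigma Black Belt", "Engineering Manager"],
    "Six Sigma Black Belt": ["Quality Manager", "Engineering Manager"],
    "Quality Manager": [],
    "Engineering Manager": [],
}

def paths_to(target, start, graph):
    """Return every path from start to target through the career map."""
    if start == target:
        return [[start]]
    paths = []
    for nxt in graph.get(start, []):
        for tail in paths_to(target, nxt, graph):
            paths.append([start] + tail)
    return paths

# Show an employee every route from Technician to Black Belt.
for p in paths_to("Six Sigma Black Belt", "Technician", CAREER_MAP):
    print(" -> ".join(p))
```

A map stored this way can back the kind of career planning discussion described below: an employee picks a target role and sees exactly which intermediate positions, and therefore which skills, lie on each route.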
Let's take the Six Sigma Black Belt position as an example. For someone to be successful in that position, they need several critical skills and character traits. First they need training in the essentials of what a Black Belt does. They would need a firm understanding of the DMAIC process, statistics, change psychology, team building, data collection, project management, influencing skills, and a desire to excel. Some of those boil down to quantifiable, measurable things; others are character traits, which, for the most part, you either have or you don't. This information, along with the roadmap, gives employees the essential information needed to judge whether they are cut out for a role as a Black Belt. It also gives them the path to success if they decide to pursue the position. They would know that they need to get some training in statistics, learn how to lead projects and people, learn how to tactfully challenge the status quo, and learn how to handle team dynamics.

As you can see from the example, quite a bit of detail goes into this analysis. The benefit to the organization is finally being able to charge individuals with some responsibility for their own careers and give them a roadmap to success. If the philosophy is that career development is an individual responsibility, then you have to give people the tools to manage their own careers. This is the first piece of the puzzle. Once this is done, spend some time evaluating training programs and educational offerings in the marketplace and establishing, through purchase or internal development, the catalog of available training. One key factor in this effort is to ensure that the educational offerings we elect to support align with our organizational values and goals. For instance, if we use a DMAIC process for problem solving, we should not be sponsoring training around other problem solving methods. That would contradict our aim. These are the two key elements needed to start the first phase of our transition to a learning organization. Once they are completed and ready for communication, we can roll out the career management program to everyone and begin encouraging managers to sit with their people to discuss career planning, create career plans, and act on those plans. This activity will in turn enable progress in another major focus area: promotion from within.

Promotion from within is an important part of creating a learning culture, but it has to be managed properly. Talking up a promote-from-within value should be done in the context of promoting people with the right skill set for the job. Promotion from within is not an exercise in saluting the flag; we shouldn't do it for its own sake, but rather make it a key component of the larger aim: raising the skill level of the workforce. Promotion opportunity is a tremendous incentive for growth and learning. Poorly managed, promotion from within will only result in a frustrated workforce submitting resumes for jobs they are not remotely qualified for, then complaining when they are not selected.

The third major area of focus is the knowledge sharing activity. We have to create systems through which knowledge can be pulled by the individual who has a need. A major vehicle for this could be the company intranet. Some recent experiences of mine indicate that as much as 25% of our resources are wasted repeating things we already learned once before, and that many of the tough problems causing quality issues have already been solved, just not shared. This is a tremendous waste of company resources. A Knowledge Management System allows for the collection of, cataloguing of, and collaboration on knowledge so that it is learned once and applied many times, to the greater benefit of the company. Many times a process improvement is achieved in only one location, through six sigma or other means, but could be shared with other locations that have the same process, multiplying the benefit of what was learned. Worse yet are examples where that same improvement was "discovered" by one of those other locations months or years later, wasting valuable time when we could have been collecting the savings. The idea of a knowledge management system is to accelerate learning by avoiding repeating the same experiments time and again, and instead building on previous knowledge to achieve new, higher levels of knowledge.

These actions only address part of the issue, however. One of the key factors in our success is how well we create a desire to improve within the workforce. Everything we have discussed so far goes largely toward creating a desire to learn for extrinsic reasons (promotion, leadership, salary, position, etc.). We need to drill down further in order to create intrinsic motivation to improve our own work every day.

Many of the ideas and tools of lean can and will help with that. First and foremost is making things visible. Creating visual management systems that allow workers to see how they are doing is a basic first requirement. I start here because I believe that if we make things visible, that will create some of the initial drive to improve, which will force the other changes needed. Those changes are: giving control to the worker, enabling good decision making through well planned instructions, training on the tools of improvement, allowing workers to control the quality of their own documentation and procedures, educating them on the technology of our product and/or process, and rewarding the right behaviors. I believe making things visible is the key to creating motivation. Everything else will follow. The final action here is to determine how to effectively marry the motivation and results of improving our work every day with the global knowledge sharing piece, so that we can quickly and easily share with each other the best practices that come from our improvements.

With these steps taken, we will be far down the road to building a learning organization and fully leveraging the most valuable asset we have, our people.

Thursday, January 21, 2010

Lean Tool of the Month-Value Stream Mapping

Value Stream Mapping is an exercise to identify the value-added steps of a process and, more importantly, the non-value-added steps and delays. The reason to do this is to identify where waste, rework, and bureaucracy occur in a process and to seek ways to streamline it by removing unnecessary steps, loops, and approvals so that only the value is left. By the way, a word on value. Value is defined by the customer. The customer is not always the paying customer at the end of a process waiting for the product or service they ordered; it can be the next process owner down the line toward fulfilling the paying customer's needs. Value, generally speaking, consists of the steps that move the product or service toward the customer. Anything that does not contribute to that value chain is waste. Some might say that product inspection is valuable because it ensures that the customer gets what they want. While that is true from one perspective, from the value-chain perspective inspection is not something that moves the product toward the customer. In an ideal world, where high quality is present, inspection is not necessary. Inspection is, therefore, a non-value-added activity. Since we don't live in an ideal world, some non-value-added activities are required. View them as necessary evils and seek to reduce them whenever possible.

The first premise of value stream mapping is that it starts with the customer. Since mapping in reverse is a mentally tough exercise, we typically map starting with the supplier, through all of the work process steps and data exchanges, to the customer. Once the map is built, however, we must look critically at the steps of the process through the eyes of the customer and determine what is valued and what is not. During the mapping exercise, capture any wait times, delays, rework loops, or approvals that occur along the way. Capture data exchanges and manual or electronic forms that get filled out, all the way back to the start of the process, including the supplier. As you complete this exercise, the non-value-added (NVA) steps will jump off the page, and it will be obvious where the improvement opportunities lie.
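Once the steps, times, and value judgments are captured, the arithmetic is simple. Here is a sketch with made-up steps and times (every name and number below is illustrative, not from a real map) showing how small the value-added fraction typically turns out to be:

```python
# Illustrative value stream data: (step name, minutes, value-added?).
# Steps and times are invented for the example.
steps = [
    ("Receive order",       10, True),
    ("Wait in queue",      480, False),
    ("Enter order data",    15, True),
    ("Approval loop",      960, False),
    ("Machine part",        45, True),
    ("Inspect and rework", 120, False),
    ("Ship",                30, True),
]

value_added = sum(t for _, t, va in steps if va)
total = sum(t for _, t, _ in steps)
print(f"Value-added time: {value_added} min of {total} min "
      f"({100 * value_added / total:.1f}%)")
```

Even in this small invented example, the value-added work is a single-digit percentage of the total elapsed time; the waits and approval loops dominate, which is exactly where the streamlining effort should go.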

Value stream mapping is an effective tool. I have used it to remove significant chunks of time from processes that I was improving, and so can you.


Below is an example of a value stream map of the "As Found" process:


And here is the improved process:



This improvement saved 26 days of cycle time.

Monday, January 18, 2010

Worker Satisfaction at an All Time Low

A recent news story caught my attention, and I thought it was worth commenting on here. Here's the story. The gist of the story is that worker satisfaction is at its lowest since data has been collected on the subject. The major idea of the article was that this low satisfaction has long-term implications for innovation and excellence. Of course this is true; the issue is risk. When people do not feel safe, they don't take risks. Risk taking is what produces innovation, excitement, and a feeling of satisfaction with the job. Accomplishing something hard is very rewarding. Three things that workers said in the survey were:

1. Fewer workers consider their jobs to be interesting.

2. Incomes have not kept up with inflation.

3. The soaring cost of health insurance has eaten into workers' take-home pay.

The second and third items are certainly not trivial, but they are a sign of the times given the width and depth of the recession we are in. The first item is what interests me, however, because it fits with the way the workplace has changed over the past two years. The idea is survival. If you have a job, thank your lucky stars and do whatever you have to to keep it as long as you can. The problem with this is that we can't keep playing whack-a-mole with our workforce and expect them to stick their necks out and take a risk for the business. Risk taking is necessary for businesses to thrive and survive. The role of the leader is to create an environment where risk taking is allowed. Let me be clear on what this means. Risk is a two-sided coin. Doing something hard is, and should be, very rewarding, but taking a risk and failing can be rewarding too, as long as the risk of failure does not put the risk taker out on the street. A risk taken but failed is a learning experience. Thomas Edison failed hundreds of times in his quest for the incandescent light bulb. After his success, someone asked him about his many failures, to which he replied that they were not failures at all; in fact, he had learned many hundreds of ways NOT to make a light bulb. As you read this post, probably under a light of some kind, think about where we would be had Edison laid himself off after failing to invent the light bulb in 5 or 10 attempts. Is there an Edison at your workplace, keeping his or her head down, playing the survival game?

Thursday, January 14, 2010

Six Sigma Tool of the Month-Gage R&R

In this series of posts, I review the purpose, use, interpretation, and limitations of various six sigma tools.

This post is about Gage R&R, a critical tool in the Measure and Control phases of DMAIC. There are many aspects of Gage Repeatability and Reproducibility; I'll attempt to cover the major points here, starting with the purpose(s). When a decision is made to charter a six sigma project, one of the major concerns early in the project is the reliability of the data used to determine root causes. It is very important that the data analyzed about the project problem be accurate and consistent, so that the analysis is sound and good conclusions can be drawn from it. This ensures that improvement plans are effective at addressing the root causes and can deliver real improvement. Gage R&R comes up again in the Control phase of DMAIC to help ensure that the significant parts of the improvement plan can be accurately measured. There is another, and in fact primary, purpose: Gage R&R is first and foremost a tool for determining whether the measurement systems used to evaluate the quality aspects of a product will produce reliable results. In either situation, the intent of Gage R&R is to give an indication of the proportion of the variation present in our system that comes from the measurement system itself. At its most basic level, a measurement system must be able to distinguish good product from bad product. Understanding the ability of the measurement system to do that is the purpose of a Gage R&R study.


There are several aspects to a Gage R&R study. Among the most important things to consider are:
-Reproducibility
-Repeatability
-Accuracy
-Precision
-Bias
-Linearity
-Sample Selection

Let's start with Reproducibility. Reproducibility is the portion of the variation in the measurement system that comes from differences between people. Most measurement systems contain two primary components of variation: people-induced variation and instrument-induced variation. Reproducibility tells us about differences between the ways that people perform the steps of a measurement method and how much those differences matter.

Repeatability is the portion of the measurement system variation that comes from the instrument itself. Repeatability is a measure of the ability of the measurement device to deliver a consistent result over several measurements.
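Taken together, repeatability and reproducibility are usually reported as variance components, with a %GRR statistic summarizing how much of the total observed variation the measurement system consumes. Here is a minimal sketch; the variance numbers are made up for illustration, and a real study would estimate them from an ANOVA of the measurement data:

```python
import math

# Illustrative variance components from a gage study (units squared).
# These numbers are assumptions for the example, not real study output.
repeatability = 0.04    # instrument (equipment) variation
reproducibility = 0.02  # appraiser (people) variation
part_to_part = 2.00     # true part-to-part variation

grr = repeatability + reproducibility  # measurement system variance
total_var = grr + part_to_part         # total observed variance

# %GRR is conventionally reported on the standard deviation scale.
pct_grr = 100 * math.sqrt(grr / total_var)
print(f"%GRR = {pct_grr:.1f}%")
```

A common rule of thumb treats %GRR under 10% as acceptable and over 30% as unacceptable, with the range in between marginal; the made-up numbers above land in that marginal zone.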

Accuracy and Precision can be taken together. Accuracy is a measure of the measured result compared to the true result. Remember that our measurement system is intended to give us high confidence in the data we use to decide on product quality or to determine the root causes and improvement plans for six sigma. Precision is a measure of the variation in the results seen. Think of these two like a bullseye target. See below for a visual example showing the relationship between Accuracy and Precision.


Bias is the difference between the measured result for a sample and the actual result for that same sample; it is the error that exists in the measurement system. In the example below we are looking at a car speedometer. If we compare the measured result of the speedometer at three speeds (30, 50, and 70 mph), and we know the actual speed the car is going, we can determine the bias, or error, across the range of interest of the measurement system. The red arrow indicates the measured speed on the speedometer, and the yellow arrow is the actual speed as measured by some other device (a GPS, for example). We see that at an indicated 30 mph we are actually traveling at 25 mph, a negative bias of 5 mph. At 50 mph, we are actually traveling at 50 mph, so there is no bias at this speed. At 70 mph, however, we are actually traveling at 85 mph! This is a positive bias of 15 mph. Our local police officer would be very interested in this result. In an ideal world bias would not exist, but since we don't live in an ideal world, we know it does, and we would like the bias to be predictable. That leads us to the next measurement characteristic: Linearity.

Linearity is the measure of the bias over the range of interest of the measured samples. In our example, we see that at 30 mph there is a negative bias of 5 mph, and that as we proceed up the scale of measurement, the bias grows to plus 15 mph at 70 mph. This is NOT a linear response. If you look at the chart below the speedometers, you will see that the actual speed is not a straight line; it is more quadratic (curved) than straight. This is useful information. First, it tells us that we cannot apply a single correction factor for bias across the range of measurement. If we were to apply a correction factor based on either end of the measurement range, results at the opposite end would not be accurate. Second, the linearity tells us that the error grows as speed increases, so measurements at the high-speed end of the range are more suspect and risky than measurements at the low-speed end.
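That nonlinearity can be checked numerically: fit a straight line to the bias values and inspect the residuals. Here is a sketch using the three speedometer points from the example (pure-Python least squares; with only three points this is an illustration, not a proper linearity study):

```python
# Speedometer example data: indicated vs. actual speed (mph).
indicated = [30, 50, 70]
actual    = [25, 50, 85]
bias      = [a - i for i, a in zip(indicated, actual)]  # [-5, 0, 15]

# Ordinary least-squares fit of bias = b0 + b1 * indicated_speed.
n = len(indicated)
mx = sum(indicated) / n
my = sum(bias) / n
b1 = sum((x - mx) * (y - my) for x, y in zip(indicated, bias)) \
     / sum((x - mx) ** 2 for x in indicated)
b0 = my - b1 * mx

# Residuals from the straight-line fit.
residuals = [y - (b0 + b1 * x) for x, y in zip(indicated, bias)]
print(f"fit: bias = {b0:.2f} + {b1:.2f} * speed")
print("residuals:", [round(r, 2) for r in residuals])
```

A perfectly linear bias would leave residuals near zero; here the straight line misses the middle point by more than 3 mph, confirming the curved response described above.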




Finally, let's talk a bit about sample selection. Hopefully through this discussion you have seen that one of the most important aspects of setting up a Gage R&R study is the choice of samples to measure. Remember the purpose of our study from earlier: determine whether our measurement system can produce reliable results that can be used in decision making, either for our six sigma project or in the actual measurement of quality. In order to know about things like bias and linearity of response, we must measure samples that span the range of interest of our measurement. What does this mean? Let's say, using our speedometer example from above, that we have an upper specification of 65 mph and a lower specification of 30 mph. If we were to choose to measure at 50 mph because that result is in the middle of the range of interest, we would get a very different picture of our capability to measure speed than if we measured at either end of, and outside, the range of specification. If we only measured at 50 mph, we would incorrectly conclude that our speedometer is accurate and precise, with no bias. We would not be able to comment on linearity, and the result would be our surprise at getting a speeding ticket for going about 12 mph over the limit at an indicated 65 mph. If we measure across the range of interest, we can add linearity to our understanding and know that we should not be confident in results near the upper specification of 65 mph. This tells us that we should set our upper limit for the speedometer somewhere in the area of 58 mph (measured) to always be under 65 mph (actual).
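Measuring across the range also lets us fit the curved response and invert it to pick a safe indicated limit. Here is a sketch using the three speedometer calibration points; the quadratic below is the exact fit through just those three points, so it is only an illustration (a real study would fit more samples), and it lands near, though not exactly at, the 58 mph figure quoted above:

```python
def predict_actual(indicated):
    """Newton-form quadratic through the three calibration points
    from the speedometer example: 30 -> 25, 50 -> 50, 70 -> 85 mph."""
    return (25
            + 1.25 * (indicated - 30)
            + 0.0125 * (indicated - 30) * (indicated - 50))

# Highest indicated speed (0.1 mph steps, 30.0..70.0) whose predicted
# actual speed stays at or below the 65 mph limit.
safe = max(s / 10 for s in range(300, 701)
           if predict_actual(s / 10) <= 65)
print(f"Keep the needle at or below an indicated {safe} mph")
```

With this particular three-point fit the safe indicated limit comes out just above 59 mph, in the same neighborhood as the 58 mph rule of thumb; the point is the method, not the exact number.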

Gage R&R is a very useful tool in your six sigma toolbox. It is also vital to ensuring that customers receive good product that meets their needs. Gage R&R studies are constructed to tell us how much confidence we can have in the measurement system, and they can tell us where we need to improve it. Through analysis of the statistics that come with the study, we can determine whether person-to-person variation is causing issues or whether the device itself is the source of variation. In any case, the gage study is a versatile tool for identifying improvement needs and improving quality.

Monday, January 11, 2010

Got a Tough Problem-Work It Out

I'm reading a new book that everyone should go out and buy. It's called The Brain Advantage: Become a More Effective Business Leader Using the Latest Brain Research. A friend of mine is a co-author, so I got a copy from him to read through. I've made several connections to my own experiences in leadership.


I'm going to share one of them with you here, then you go right out and buy the book.

We've all been there. You have a tough problem that you just can't figure out. You have the best experts in the field working on it with you, the best collective brainpower around, and still the answer eludes you. What do you do? The authors of The Brain Advantage suggest: walk away for a while, go do something else, take a nap, have a workout, chit-chat with friends, whatever. Do anything except focus on that problem; anything that allows you to relax, that is. So The Brain Advantage strikes again; more folk wisdom is confirmed through science. I think we all know instinctively that when we need to solve a tough problem, it's best to cool off and do something "mindless," and magically the Eureka moment comes to us. For me, I do my best thinking on the bike during a long ride. I usually come back from a long ride outside with several new ideas, and I routinely bring back notes on my cell phone with ideas for posts on this blog. The sleep thing usually does not work for me; if I try to go to sleep with a problem on my mind, I'm awake for a significant part of the night.

In The Brain Advantage, the authors discuss research showing that creative people tend to be more flexible in their thinking, and more relaxed, than people not considered creative. Neuroscientist Mark Jung-Beeman discovered that people who were in a good mood solved more problems with insight than those who were not. Dr. Jung-Beeman observed that this may help explain why people find solutions to their problems in the shower, during a short nap, or during some other activity that is relaxing and allows the brain to unwind. So the next time you have a sticky problem that is haunting you, just walk away, go have some fun, and the answer may come to you.

You can pick up a copy of The Brain Advantage here.

Thursday, January 7, 2010

ISO Stuff: The Audit

I went through my third party ISO 9001 assessment recently. That got me thinking a little bit about the experience. I thought I'd share some of what I've learned over the years about this unique experience. First, some background. There are three "registration" avenues that a company can follow. Those are:

Self-Declaration, 2nd Party Assessment, and 3rd Party Assessment.

The difference between these is dramatic so let me explain a little about each.

Self-Declaration is just what it sounds like. If you believe that you have a good, solid Quality Management System that, in your opinion and assessment, meets the intent of the relevant quality standard, you can self-declare that your organization complies with the standard. The benefit of this is mainly cost and effort: no cost associated with hiring a registrar to come in and spend several days assessing your system, and no lost time associated with the aforementioned audit visit. The drawback of this approach is that no one will believe the declaration. It's akin to the old "fox watching the henhouse" saying.

Second Party assessment is assessment by a customer. The benefits of this approach are similar to those of Self-Declaration above: no cost. However, there is a time component in play; you will have to spend time preparing for and hosting the assessment. The downsides of this approach are many, starting with the portability issue. If one customer assesses you, will other customers accept the results, or will you find several other customers at your door wanting to conduct assessment visits of their own? An additional negative is that you won't want to reveal anything except the best parts of your QMS to any customer, for fear of a negative impact on purchasing decisions. This inhibits improvement of the QMS.

Third Party registration is by far the most common approach around the world. There are several reasons why third party is the preferred approach. Third party registrars are independent and accreditted by an international body that ensures their competence, thoroughness, and impartiality. Third party registrations are pretty much universally accepted by any customer. Third party registrars have a unique relationship that presents either a downside or an opportunity, depending on the maturity of the QMS, the attitude of management towards the QMS and the confidence of the Quality Manager. Let me explore this area in more detail.

The relationship with the registrar is a strangely symbiotic one. The registrar is hired by the company seeking registration, which from a business standpoint creates a desire to continue the business relationship. After all, we are all in business to make money, even registrars. So the registrar must walk a fine line between "finding" too many things, which might harm the relationship, and letting the QMS off too easy. This is a big challenge for auditors, again depending on the maturity of the organization being audited. Third party registrars are prohibited from "consulting," as this is a conflict of interest that undermines their impartiality. Individual auditors can, and do, share information one on one about potential improvement opportunities that they observe but may not include in their report.

Let's spend a few minutes on maturity. What I mean by this is that different organizations are motivated by different things to achieve registration. Many are interested in getting the certificate on the wall to satisfy a customer demand. Once this demand is met, some organizations stop there and really don't mature much further. On the other hand, some companies enter into a registered quality management system with the motivation of improving their business, or perhaps start from a customer demand and mature into a more enlightened attitude about the value of their QMS. Those with a mature approach to the registrar relationship understand that the registrar works for them and that the third party audit is a tool in their toolbox to help drive improvement in the QMS. The Quality Manager is the key player in making this happen by sharing information with the registrar about weaknesses in the system. By enabling the third party auditor, the Quality Manager can leverage their findings to force improvement in areas that may have been resistant to participating.

Third party registration is by far the most common approach used. Customers and competitors universally recognize an independent registrar's findings. And because of the business relationship that exists, registrars are interested in adding value for the registered company by helping it mature and improve the QMS beyond just getting a certificate to hang on the wall.

Monday, January 4, 2010

Total Cost of Quality for the Total Picture

Happy New Year! Here's wishing everyone a healthy, happy 2010. Now on to business.

What is the financial impact of quality on our organization? Is it money well spent? What should we be spending our money on in regards to quality? Is inspection the right place to spend our precious capital, or should we invest in automation? To what degree should we inspect for conformance? If any or all of these questions sound familiar, read on for the answers.

Enter Total Cost of Quality. Total Cost of Quality (CoQ) is a financial model of the costs incurred to operate and maintain the quality function in a business. The CoQ model takes into account all of the activities that any typical company would perform in the name of providing good products or services to customers. The CoQ model, also known as the Economic Conformance Model, shows the rising costs associated with proactive management of quality compared against the decreasing costs associated with poor quality. The graphic below gives a visual representation of the CoQ model.


Before we get into that too much, though, let's define the four components of the Cost of Quality model.


The first two categories of cost are associated with putting systems and processes in place to reduce the likelihood of a failure. First is prevention. Prevention is the category for those costs associated with preventing a quality problem from occurring in the first place. Typical costs in this category are: training, procedure writing, ISO-related costs, and process or equipment automation.

Appraisal is the next category. Appraisal is where we capture our inspection costs. Any activity that inspects the quality of the product or service falls in this category. Typical costs here are: calibration, instrumentation, and inspection and test personnel.

Internal Failure is the first of two categories associated with poor quality. Internal failure costs are those associated with recognizing that a poor quality characteristic exists BEFORE the product leaves the factory. The most common cost in this category is scrap, followed closely by rework.

External Failure is the worst of all possible situations. External failure is failure of a product or service at the customer's delivery point or point of use. I say this is the worst of all possible situations for two reasons. One, the product is fully burdened with cost, including transportation and storage costs. Two, reputation is impacted here: the customer experienced the failure, damaging the company's reputation and hindering future sales.

Ok, now that we have that out of the way, let's talk about what this means to us. In the model above, the total cost of quality is represented by a bowl-shaped curve. The low point of that curve is called the economic conformance point. This point represents the lowest possible cost of quality that a company can expect to see; it is the balance between the costs of preventing a problem from occurring and the costs of dealing with the problems that do occur. So we might look at that graph and say: great, this is easy, all we have to do is balance costs in the four categories to achieve the economic conformance point, and then we're done. Easy! Not quite. The thing to remember is that the economic conformance point can be moved lower and to the right through effort. The graph below shows the traditional view of Cost of Quality in the top left section and, in the lower right section, the effect of applying an improvement methodology such as Lean or Six Sigma.



In the traditional view, there is a fixed minimum cost associated with the quality function. In a Lean or Six Sigma company, we can lower the costs associated with prevention and appraisal by building higher quality products. Higher quality products with less variation allow us to reduce our inspection routines to ever-decreasing sampling strategies. Higher quality products also reduce the need for expensive automation systems and complicated work routines to ensure a mistake is not made. With higher quality comes higher confidence, and higher confidence brings lower costs.
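The economic conformance point itself is just the minimum of the total-cost curve, which is easy to locate numerically. Here is a sketch with made-up cost functions; the dollar figures and formulas are illustrative assumptions, not taken from the article:

```python
# Illustrative Cost of Quality model. Conformance q runs from 0 to 1.
# Prevention/appraisal spending climbs steeply near perfect conformance;
# failure costs shrink as conformance improves. All numbers are invented.
def prevention_appraisal(q):
    return 50 * q / (1 - q)

def failure(q):
    return 2000 * (1 - q)

# Brute-force search over conformance levels 50.0% .. 99.9%.
levels = [q / 1000 for q in range(500, 1000)]
total_cost = {q: prevention_appraisal(q) + failure(q) for q in levels}
ecp = min(total_cost, key=total_cost.get)  # economic conformance point
print(f"ECP at {ecp:.1%} conformance, total cost {total_cost[ecp]:.0f}")
```

Lowering the coefficient in prevention_appraisal, which is what Lean and Six Sigma effort effectively does, moves the minimum down and to the right: exactly the shift the lower-right section of the graph describes.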


Graphics excerpted from:
Cost of Quality: Not Only Failure Costs
by: Arne Buthman
isixsigma.com