Sunday, November 28, 2010

What's Better: Satisfied Customers or Loyal Customers?

If you are running a business, which would you rather have: satisfied, happy customers or loyal customers? I suspect many might say that the idea of a satisfied customer seems like the right visual in their mind's eye: someone who is pleased with the experience they have just been through, leaving the store with a smile on their face and contentment with their purchase. The trouble with that is that you're in business to make money. I know, I know, the standard mantra is that satisfied customers will come back again and again. That may be true, or maybe it's not.

Loyalty and satisfaction differ in one significant respect: action. Satisfaction is a feeling about a past interaction or set of interactions. Satisfaction is fleeting and does not translate to action on the part of the customer. This is where loyalty differs. Loyalty is focused on the buying behavior of the customer. Research by the Corporate Executive Board (an industry benchmarking research company) discovered that loyal customers were more likely to behave differently in three ways:

Loyal Customers

1. Recommend the supplier to others more often

2. Increase purchases

3. Partner with the supplier on new opportunities

By contrast, Satisfied Customers

1. Purchase based on price

2. Purchases remain steady or decline

3. Will readily jump to competitor based on price, availability, or negative experience

4. No partnership on new development

So which of those sounds like the customers you want? I don't know about you but give me loyal customers any day.

So what's the difference? Is there a difference? It turns out there is. A satisfied customer is merely a spectator in the operation of your business. What I mean by this is that their involvement is more superficial, more circumstantial, and more subject to change at a moment's notice than the loyal customer's.

Ever had a conversation with a die-hard car brand enthusiast? There is no swaying them from their brand, and if you press hard enough, strong emotions and maybe some fists will fly. That's loyalty. They know all of the gory little details that make their brand the best. They are in it up to their eyeballs. Satisfied customers really do not know much about the brand they have purchased, and as such, do not have a strong attraction to it.

I'll relate a personal experience. My wife owns a General Motors car. The now-defunct brand has been problematic from the beginning. I have owned various General Motors vehicles over the years. Some have been pretty good, but none have been great; none have wowed me in the areas that matter most to me. I have been a satisfied customer, but not a loyal customer. Now back to my wife's car. Like I said, problematic from the first, but that's not the end of the story. We have owned problematic cars before from other brands (not GM), and I would consider owning one again if I did my homework and found their quality and engineering to be good. I will never allow another GM product into my garage again, though. To say that I am a dissatisfied customer of GM is a vast understatement, on par with saying that World War 2 was a minor disagreement between friends. Never, I say, NEVER will I own another GM. Why such strong emotions on this brand? It was not the problems alone that caused it. It was the problems and the fact that they could not be solved in 6 (count them, 6!) tows to the dealer. It was the problems and the tows and the lack of fixes, and the moronic engineering that makes it so I cannot change a burnt-out headlamp, and the poor structure that required suspension work normally reserved for cars older than a decade.

In short, my several ownership experiences with this car have led me to conclude that as a company, GM is not competent at what it does: design and sell cars. The entire experience has been awful, thoroughly and completely awful. Interestingly enough, though, I may never have arrived at this conclusion were I not a die-hard enthusiast for another brand. About 8 years ago, I bought my first Japanese car, a Subaru. Now let me say that I didn't get the whole "Subaru Love" thing for the first couple of years, but after we bought my wife's car, I started to notice all of the things that I did NOT have to do to my car. In 8 years, nothing but regularly scheduled preventive maintenance. If the headlamp goes out (which it has), I can replace it myself in about 10 minutes for about $20. No ball joints needing replacement after 60,000 miles, no intermittent electrical outages. No problems at all. In my entire adult car-buying life, I have never owned a car this well put together, this well thought out. In short, I am a very loyal Subaru customer.

So this little story brings us to the essential element of loyalty versus satisfaction. I'll give you one guess which brand my next car will be, and one guess which brand it will not be. Subaru will be rewarded with my money, in short, because I see that they are competent at what they do and I am loyal to them for it.

So, which kind of customers do you want?

Thursday, October 28, 2010

What takes more time? And what costs more?

I am having a debate. For those who know me, that won't come as a surprise. I love to debate about things that I'm passionate about. The current debate that I'm having is about product design. The specific topic is about how to approach determining part tolerances for a new design.

The debate is prompted by problems we are having getting a supplier to achieve the tolerances we specified for the parts they make for us. We have been through three rounds of back and forth with this supplier trying to get good parts, only to finally decide to do an experiment and learn that we could have given the supplier more tolerance to begin with.

Of course, my suggestion to rectify this problem is a well-designed and planned Design of Experiments (DOE) for the purpose of determining the edges of the performance window. The counterpoint is that DOE takes too long and we don't have the time or the samples to do it. This post is not a how-to on DOE; I can cover that another time. This post is about what the best business decision is.

So, let's analyze the sides of this debate. On one side, the argument is that DOE takes too long, we don't have the samples, and we can't test everything. I would agree with one of those viewpoints: we cannot test everything. We should only test the important things. How do we determine the important things? For that you need a QFD (Quality Function Deployment) tool. We'll save that for another time also.
The other side of this debate is that NOT doing DOE takes too long, costs more money, and reduces the chances of success of the design.
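At its core, the QFD prioritization mentioned above is a weighted matrix: each customer requirement gets an importance weight, each technical characteristic gets a relationship strength against each requirement (conventionally scored 0/1/3/9), and the products are summed to rank the characteristics. A minimal sketch in Python; the requirement and characteristic names here are hypothetical, not from any real project:

```python
def qfd_priorities(importance, relationships):
    """Score technical characteristics by summing
    (requirement importance) x (relationship strength) over all requirements."""
    priorities = {}
    for requirement, row in relationships.items():
        for characteristic, strength in row.items():
            priorities[characteristic] = (
                priorities.get(characteristic, 0)
                + importance[requirement] * strength
            )
    return priorities

# Hypothetical example: two customer requirements, two part characteristics.
importance = {"smooth actuation": 5, "low unit cost": 3}
relationships = {
    "smooth actuation": {"bore diameter tolerance": 9, "surface finish": 3},
    "low unit cost": {"bore diameter tolerance": 1, "surface finish": 9},
}
print(qfd_priorities(importance, relationships))
# bore diameter tolerance: 5*9 + 3*1 = 48; surface finish: 5*3 + 3*9 = 42
```

The characteristic with the highest total is the one worth spending experimental samples on first.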

What are the facts? DOE can take a while to complete IF too much is thrown into the mix to test. DOE matrices can grow to hundreds of replicates and samples if not contained. There are strategies to whittle the many factors down to the critical few; among the best I have used are Taguchi designs. Taguchi design matrices allow for testing many factors at two or three levels for the purpose of screening factors for importance. I have completed simple two-factor, two-level designs in less than a week from start to finish, as recently as this summer. Can DOE take a long time? Yes, if we don't plan it well.

So what is the cost in time and money associated with NOT doing Designed Experiments? Those costs are real and painful. In the real-life debate I am having, we have been working with a supplier since April of 2010 to try to get good parts; now, in October, we still don't have good parts. This has delayed the intended release of the design several times and forced us to decide between another delay and accepting a 40% scrap rate for these parts. This is only half of the story, though. By not doing experiments to determine the real needed specification, we have asked the supplier to make a part that costs more to produce because of the extra care required to hold the tight tolerances we ask for. Additionally, the tight tolerances require that we measure the parts on a sophisticated, highly precise measurement machine that takes quite a few minutes to measure one part, and finally, because we cannot get good parts, we must measure every part, further adding to cost and slowing the process. The final cost associated with this decision is in the manufacture of the design. Due to all of these issues, assemblers will probably have to rework these products to make sure they function properly before leaving the factory, taking time that could be spent doing other things.
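To give a sense of scale, the run matrix for the simple two-factor, two-level design mentioned above is only 2^2 = 4 runs before replication. A quick sketch of generating such a matrix (the factor names are invented for illustration):

```python
from itertools import product

def full_factorial(factors):
    """Build the run matrix: one run per combination of factor levels.

    factors: dict mapping factor name -> list of levels.
    """
    names = list(factors)
    return [dict(zip(names, levels)) for levels in product(*factors.values())]

# Two factors at two levels each -> 4 runs; three factors -> 8, and so on.
runs = full_factorial({
    "mold temperature": ["low", "high"],
    "hold pressure": ["low", "high"],
})
for run in runs:
    print(run)
```

This is why screening down to the critical few factors matters: the matrix doubles with every added two-level factor, and replicates multiply it again.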

To me, the decision is easy. Determine the proper specification upfront by building samples to test the boundaries of performance on critical dimensions, incorporate those learnings into the tolerance for the design, give the suppliers as much room as possible to make good parts, and measure it as simply as possible with acceptable precision. Seems worth the investment of a few weeks effort to me.

Monday, May 17, 2010

What's the difference between flavors of Six Sigma?

Six Sigma for Manufacturing; Six Sigma for Healthcare; Six Sigma for Public Sector; Six Sigma for Education.

There are a lot of marketing materials out there hyping specialty classes and programs for Six Sigma or Lean for a particular industry sector or application. So, what is really different between these approaches? In short: nothing.

Let's start with what Six Sigma is and what it is not. Six Sigma is a universally applicable, logic-based improvement process; it is not specific to manufacturing environments. Six Sigma is flexible in the use of specific tools for a given problem; it is not so flexible that the logical problem-solving process can be avoided. Six Sigma is a scientific method of discovery; it is not a substitute for knowledge.

One thing that may not be universally recognized is that the specific tools endorsed and needed to make the Six Sigma method work effectively are not unique to Six Sigma. The early practitioners of Six Sigma did not invent Gage R&R, Capability Analysis, Process Flow Diagrams, Brainstorming, Regression, or Designed Experiments. What they did do was recognize how those tools could add value and strengthen the Six Sigma method. Why is that important to understand? Because understanding that the "inventors" of Six Sigma borrowed good tools from other areas where they were already in use tells us that we too can adapt tools and apply them in the Six Sigma method without violating the "spirit" of the method. For instance: the generally accepted statistic used to express capability in a Six Sigma project is the Z score. However, if you are working on a transactional or office process and continuous data is not available to derive a Z score, that's OK. Just count occurrences of your defect, whatever it is (time, errors, missed deliverables, rework), and report a percentage of failure against the requirement. That's the same thing in "spirit" as a Z score. Some dogmatic practitioners might insist that the only acceptable metric is a Z score. I assure you this is not correct; what is required is that we measure our baseline capability against the requirement of the process (the specification) and express that in some appropriate way.
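As an illustration of the "same in spirit" point: a counted failure percentage can always be converted back into a Z score with the inverse normal distribution. A minimal sketch; whether to add the conventional 1.5-sigma long-term shift is a reporting choice, so it is shown here as a parameter rather than presented as the only correct convention:

```python
from statistics import NormalDist

def z_score(defect_fraction, shift=1.5):
    """Z score implied by a long-term defect fraction.

    The inverse normal CDF gives the long-term Z; the conventional
    1.5-sigma shift converts it to the short-term figure usually reported.
    """
    return NormalDist().inv_cdf(1 - defect_fraction) + shift

# A process missing its requirement 5% of the time:
print(round(z_score(0.05), 2))   # long-term Z of ~1.64, reported as ~3.14
```

Either number tells the same story: measure the baseline against the specification and express it consistently.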

The other area of flexibility in the Six Sigma method is in the specific process steps that apply. I have led and mentored many project teams, and in some of those cases we have had to abandon efforts to complete certain steps because the process we were working on did not lend itself to effectively completing that particular step. In other cases there have been steps that were not needed. One of the typical steps that some teams struggle to complete is the Gage R&R requirement. In some cases it is not practical or necessary to conduct a proper Gage R&R, but it is always necessary to ensure that the data is reliable. See this post for a further discussion of Gage R&R. The step that most often gets skipped is the Designed Experiment step in Analyze. Many times this step is skipped intentionally and quite properly, because earlier analysis of historical data via regression has produced a conclusive list of reliable root causes and no experimentation is needed to confirm or screen them. If you know from historical data what needs to be fixed, why waste time and money on a confirmation experiment? Conversely, recognizing that your suspected root causes are weakly correlated to the problem should drive the design of an experiment to determine conclusively what the causes are. Of course, in an office or transactional project, a designed experiment may not be possible.

Six Sigma is a flexible, universally applicable process improvement method that follows a logical thought pattern: Define the problem, Baseline performance, Discover root causes, Implement improvements, and Standardize the new process. These concepts apply to any process, anywhere, in any professional setting. What is required to use the method successfully in any setting is an understanding of the improvement process, the reasons why each step is important, how all of the steps fit together, and a concerted effort to apply each step. Finally, if you are doing a Six Sigma project and have used a particular tool effectively in the past, there is no limitation on the use of new tools within the Six Sigma process. If it works for you and is effective at helping the team move forward, use it.

Monday, May 10, 2010

Improve your Lean with some Six Sigma (and vice versa)

With the advent in recent times of a combined Lean and Six Sigma approach to continuous improvement, one might think that the dilemma of which method to choose has been resolved. However, confusion still exists over how to move forward with a combined approach to Lean and Six Sigma.

First, why a combined approach to Lean and Six Sigma? Isn't Lean or Six Sigma good enough on its own? Not in all cases, in my experience. Every problem is like a snowflake: unique, with some similarities to other problems, but still unique. For some types of problems Six Sigma is good enough on its own; those are typically variation-related issues. For other types of problems Lean is fine; those are typically waste-related issues. Combining them leaves nothing on the table in terms of improvement opportunities. Lean and Six Sigma each complement weaknesses in the other method. For instance, Lean is weak in the area of data-based analysis and decision making; this is a strength of Six Sigma. Six Sigma is a top-down improvement approach, which risks not involving the right people in solving the problem; Lean complements this by being driven bottom-up, ensuring the process users are involved in the improvement.

Add some Lean to your Six Sigma, or Six Sigma to your Lean. In studying both Lean and Six Sigma, we find that they operate on multiple levels. Lean and Six Sigma are at once a tool set, a methodology, and a culture. Much hand-wringing has occurred over the question of creating another initiative when a Six Sigma or Lean culture already exists. Concerns usually center on creating confusion in the ranks. Avoid this confusion by folding the supporting tools and method into your existing improvement culture. If Six Sigma is your way of working, incorporate Lean tools into the method to capture waste in addition to variation reduction. If you have a Lean culture, add the Six Sigma data-gathering and analysis tools and steps to strengthen your decision making.

I led a couple of projects that are good examples of combining approaches to achieve the best results. Without going into much detail, I can share that in the first project we improved process accuracy by over 40% while reducing cycle time from 45 to 19 days. This project can be reviewed here for those interested in more detail. In the other project we reduced cycle time by 40 hours by using regression statistics to identify which equipment was producing the longest cycle times, then applying Lean concepts to improve it.

The point here is that combining lean and six sigma approaches together improves results by reducing variation and reducing waste at the same time. This is the best use of our precious improvement resources, gaining the maximum benefit for the time and effort spent focused on a particular problem.

Monday, May 3, 2010

Hiring Authentic People-Brain Advantage

I'm reading a new business book that everyone should go out and get a copy of. It's called The Brain Advantage: Become a More Effective Business Leader Using the Latest Brain Research. A friend of mine is a co-author, so I got a copy from him to read through. I've made several connections to my own experiences in leadership.

In The Brain Advantage, the authors share research showing that when in the company of others, people modify their reactions to seeing difficult images to match the other person's reaction. Moreover, if the other person's reaction did not match the expected reaction, this had a negative physiological impact on them and the others. Involuntary indicators of stress such as blood pressure increased or decreased in reaction to other people's reactions, or when someone did not react in the way others expected. The research also showed that these negative effects were lessened if the difficult images were viewed with consideration of the potential positive aspects that might come from the situation. The authors suggest that leaders have to be true to themselves in every situation, but that this does not mean airing your dirty laundry or being negative about everything you disagree with. If leaders can find the potential positives in a situation, they have an opportunity to have a positive impact on those around them, as well as on themselves. The conclusion the authors come to is that managers should hire authentic people. Easier said than done, I think. I can visualize the efforts that would have to be undertaken to understand what authentic is. It's hard to get through the facade of what candidates want you to think they are to see the real person. Of course, hiring authentic people requires leaders to BE authentic themselves and to determine what authentic characteristics are in alignment with the organization's culture. That's much more work than most companies put into their hiring processes today. Maybe that says something about the state of business today.

This insight from The Brain Advantage lines up nicely with a couple of key leadership messages from another of my favorite business books, Good To Great by Jim Collins. In Good To Great, Collins shares two of the key elements of success for Good To Great companies: the Stockdale Paradox and Level 5 Leadership. I've talked about the Stockdale Paradox before, so I won't go into detail here, but click here for a review. The Stockdale Paradox is the presence of two personal principles that run counter to each other: first, the ability to see the present situation for what it is, and second, an unwavering faith that the future will be better than today. The other leadership key to success for Good To Great companies is Level 5 Leadership. Level 5 leaders are characterized by two principles: personal humility and a strong drive for the company's success. These sound a lot like authenticity to me. Level 5 leaders tend to shy away from personal credit for the success of the company, assigning much of the credit to luck while taking most of the blame for failures. Level 5 leaders tend to attract other Level 5 leaders, just as divisive leaders tend to create that culture around them. Leaders who possess the Stockdale Paradox have the ability to see the problems of today without being overly negative about them, instead focusing on the future.

What if managers hired people who were authentic? Sounds like a good idea. I think it goes beyond hiring decisions, though. Hiring authentic people is a good first step, but I believe that leaders have to allow people to BE authentic once they have hired them. This should not be an issue if leadership acts authentically as well. In a business book reference trifecta, Harvey J. Coleman, in Empowering Yourself: The Organizational Game Revealed, theorizes that individual success in business is made up of three elements: Performance, Image, and Exposure. The author suggests that Image and Exposure are more important than Performance (results). At least he is honest, but I believe he is wrong in the proportion of the PIE that is assigned to Performance. While people's varied experiences on this point would create endless debate, suffice it to say that my personal experiences would suggest the P can be worth even less than the 10% of the PIE asserted by the author. In an organization where people are not authentic, performance is not nearly as important as the image or the exposure of the individual. In other experiences from my past, the P was far more important.

So what can we take from these three books on the subject of authenticity? When results are what matters to the organization, authenticity is the recipe for success. When people are authentic, less effort must be spent managing an image, and more time can be spent helping the company move to the next level of performance.

Monday, April 19, 2010

Total Customer Focus-Too Much of a Good Thing

I remember watching the news a while back when there was a report of a health study in which researchers concluded that drinking red wine was good for reducing heart and stroke risk, something about the compounds quercetin, epicatechin, and resveratrol found in the skins of the grapes. Great news! Several weeks later, another study showed that drinking white wine was also beneficial, because these same compounds are found in the meat of the grape as well. Studies like these are always being published, telling us to have more of this and less of that. Conflicting messages sometimes come from these overlapping studies, creating confusion: more red wine, more white wine, less wine, more sunshine, less sunshine, eat certain kinds of yogurt. So what's the point of this discussion in relation to customer focus? The key message in all of this is moderation. Too much of a good thing is a bad thing. Drink SOME red or white wine to get the health benefits; if we drink too much, our heads and livers will protest. It is also not good to focus on only one thing. If we drink red wine and never water or milk, we lose out on the benefits of those things. The same is true of a focus on the customer. Some would argue that we can never focus too much on the customer. Generally, I agree. However, it can become a bad thing when we are so focused on the customer that we forget the other parts of our quality value proposition.

I have a favorite book about focusing on the customer. It's on my recommended list on my LinkedIn profile. It's an older book, but I find it to be a timeless reference on flooding the company with the Voice of the Customer. The book is The Customer Focused Company by Richard Whiteley of the Forum Corporation. Mr. Whiteley makes several good points throughout this book. Two of my favorite ideas from it are that customer complaints are like gold nuggets and that quality is defined by the customer. Quality is indeed defined by the customer, whether we like it or not. So wait a minute, you might be saying; it sounds like I'm arguing both sides of the argument. The difference in The Customer Focused Company is that Mr. Whiteley understands that while we need to improve our service quality and go to school on the needs of the customer, he never forgets what underlies all of this: the products and services that the customer receives as our main offering. These must meet the customer's needs as well. So not only must we improve our service to the customer and concern for the customer, but the products must work too.

This work, in addition to the experiences I have had examining customer loyalty, led me to the recipe for loyal customers and the House of Customer Loyalty. Just so we are clear on definitions, Customer Loyalty and Customer Satisfaction are not the same thing. A satisfied customer may never buy again, or recommend us to their business associates. A loyal customer, on the other hand, is our cheerleader. Loyal customers advocate for the company and are more than a little hesitant to switch brands. Satisfaction is about the experience and feelings in the past; loyalty is about behavior in the future. We want loyal customers.

The House of Customer Loyalty above indicates the relationship of all the components of our quality management system needed to achieve satisfied and loyal customers. The house is built from many components, starting with the foundation and ending with Loyal Customers on the roof. We call it a house because, like a real house, it is made up of many components; and like a real house, you don't put the roof on first. You build the house from the foundation up, putting the roof on last. The foundation of the house of loyalty is the systems that support our ability to deliver consistent products, resolve issues quickly and effectively, and develop new products that customers want. Only when we've tackled the basics will we be able to advance to the roof section of our model. The bottom plank of the roof is Continuous Improvement. Without continuous improvement of all aspects of our systems and products, competitors will eventually overtake us. Robust engagement with customers gives us the critical Voice of the Customer (VOC) that we need to inform all of our efforts. Finally we reach the roof, where we must engage the right players at the customer to achieve satisfaction. Interesting plank, that one. Customers are not monolithic: talk to the purchasing agent and they will give one perspective, the engineer a completely different perspective, and the quality person yet another. Experience says that the key customer within the customer is the user of your product. Win them over and all the others will fall in line.

The problem with too much focus on the customer is that we risk forgetting that our quality proposition must be built from the ground up, starting with a solid foundation. Problems occur when we start to think that we can build the roof first, stop there, and somehow everything else will take care of itself; or when we shift our focus from the foundation to the pillars and then the roof, but along the way fail to maintain the foundation. The problem with this approach is that the recipe for loyalty is complex. It's great to focus on the relationship part of the equation, but if product variation is too much for customers to bear, loyalty suffers. If our product offering does not meet the needs of the customer, we won't get the business, and there won't be any chance to build loyalty. Without systems to document the processes that must be followed to make good product, product quality suffers. Without continuous improvement, competition will overtake us and no amount of loyalty will save us. One other point to emphasize here: there is no end to the journey. Competition is always nipping at our heels, and customer expectations are always increasing. We have to keep our eye on the ball for the long term.

Customer satisfaction and loyalty are the ultimate aims of our quality efforts. We have to remember that, like a house, we have to build it from the ground up. Rest assured that if we try to build our house of customer loyalty with some of the blocks of the foundation missing or if we remove them later, the house will eventually fall in on itself and no amount of satisfaction or loyalty will rescue us.

Monday, April 12, 2010

Six Sigma Tool of the Month-Attribute Agreement

How many times has this happened to you? You're leading a six sigma project on a transactional process of some kind, something not directly tied to manufacturing or measurement of product quality. You get to the Gage R&R step in Measure and struggle to figure out how to satisfy the requirement for a Gage R&R statistic to interpret. If that's ever happened to you, I have a solution for you.

First, let’s discuss briefly the “spirit” of the Gage R&R requirement. The reason we want to do a Gage Study boils down to confidence and good decision making. In Measure, we do a Gage Study of the data used to generate the Project Y or Critical To Quality (CTQ) measurement. Why? So we can be confident that, as we carry that data forward to capability analysis and Root Cause Analysis in the Analyze phase, we can trust the conclusions that we will draw and the results we will see. That’s it, confidence and good decision making.

How do we gain confidence when the data is based on a person judging the quality of a characteristic rather than an objective instrument? One approach that works very well is called Attribute Agreement. Attribute agreement assesses the results of decision making by people and produces some key statistics that tell us whether the results are due to random chance or whether our judgment appears to be better (or worse) than random chance. Attribute agreement produces a result that can be interpreted in a similar fashion to a traditional Gage R&R, and the detailed statistical analysis can give insight into areas in need of improvement, just as a traditional Gage R&R can.

Attribute agreement analysis is an effective method for delivering a statistical interpretation of a subjective judgment made by people, allowing fact-based improvements to be identified, implemented, and measured. It allows those leading projects without continuous data to measure the quality of that data and boost confidence in the capability of the system and in the decisions made to improve it. For a more in-depth discussion of Attribute Agreement, including a case study where it was used effectively and details on how to interpret the results, go to this article that I wrote.

Monday, April 5, 2010

When is a Complaint a Complaint?

Complaints are one of the more controversial topics to deal with in Quality. In many organizations that I have been a part of, much effort and energy was spent not only answering complaints, but fretting over what a complaint is and is not, when a complaint should be logged, who should log it, how it will make us look, etc. Suffice it to say that complaints cause angst, and understandably so. Complaints are bad news: they cause urgent work, attract a lot of attention, and in many cases are not easy to solve.

One of my favorite business books from several years ago introduced me to a new way of thinking about complaints. The Customer Driven Company by Richard Whiteley introduced me to the idea that complaints are like little nuggets of gold.

While the book is a bit dated today, I still find it relevant in my work. Many organizations struggle with complaints and service levels in addition to the quality of their product, and that was Mr. Whiteley's message: quality of service must improve as well as quality of product. One of the best ways to understand how customers feel about both is to analyze complaints. This is difficult to do if the organization's culture reinforces negative behaviors around complaints. Many organizations do this simply by setting a goal to reduce the number of complaints. Of course, the intent is to reduce complaints by solving issues. What can happen instead is underreporting, out of fear of what happens to the metric and of "airing dirty laundry." This is not the intended outcome, but it happens nonetheless. One way it happens is by having the people responsible for the metric be the ones entering complaints into the business system. The obvious answer is that those who interface with the customer should be trained, enabled, and empowered to enter complaints based on their interactions with customers, rather than having to appeal to someone else's judgment on what to enter and what to leave out. The easiest way to do this is to establish some simple criteria for judging when something is a complaint versus some other kind of customer transaction. The simplest criterion: is the customer experiencing something unexpected about our product or service? If they are experiencing a breakdown in the product or service and have called to get it fixed, that's a complaint. We failed to deliver to the customer's expectations. Enter the complaint, then solve the problem and spread the learning around.

Monday, March 29, 2010

Lean Tool of the Month-SMED

SMED, or Single Minute Exchange of Dies, is a manufacturing-based term for the lean idea of minimizing the down time of a process. In a traditional manufacturing process, a die is a part of the machinery that helps to mold a part. Changing a die requires the machine to stop producing parts, some amount of time and resources devoted to changing the die, then a restart period where the process is dialed in to make the next good parts. The idea is to minimize this down time because it's non-value-added.

Just because the description I just gave is manufacturing-based does not mean that those of you in an office should stop reading here. Read on, because SMED applies to almost every process, regardless of the "product" that is produced. It does not matter whether that process is transactional, front office, or back office; SMED applies to many of them.

Lets start with a visual example of SMED. Click this link for a video of SMED in a non-manufacturing environment.

So, from what we've already discussed and seen, the generalized goal of SMED is to reduce the Non-Value-Added (NVA) time associated with changing over from one value-added activity to the next. If we consider this general goal, we can find many opportunities to SMED our office and transactional processes.

An example might help illustrate. Suppose we have a process for completing a sales forecast. The value-add of the process is the end result, the forecast, and all of the steps that actually move that forecast toward completion. Some of those steps might be: 1. the sales force enters known demand changes from their customers; 2. sales force changes are consolidated into a global forecast; 3. business leaders adjust the forecast for general market conditions. There are many more steps than these, but these are good examples of VA steps that move the forecast toward completion. Typical NVA steps in this process might be: 1. an administrator corrects file errors before releasing files for revision; 2. review meetings for updates and adjustments; 3. publication of the forecast to ERP to start demand planning.

Looking at our NVA examples, we could apply SMED to a couple of these to reduce them or move them "off" the system. Correcting errors is definitely an NVA activity, but if errors are present it is necessary to ensure a "good" forecast results. Maybe we could establish an "offline" file review that goes on outside of the forecasting process and ensures that when the process kicks off each month, the files are error-free and can proceed immediately to the sales input step. This first step of SMED is called converting internal setups to external setups. Using this strategy, we take process steps that were conducted during the execution of the process and move them outside of the process. The second step of SMED is to reduce the internal setups that remain to a minimum. That idea might drive us to reduce the review meeting activity to the minimum required to move the consolidated forecast toward completion.
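The two SMED moves just described can be roughed out in a few lines of code. This is a toy sketch: the step names, times, and the internal/external classification are all invented for illustration, not taken from a real forecasting process.

```python
# First convert internal setups to external ones, then reduce the
# internal setups that remain. Minutes below are hypothetical.

changeover = [
    # (step, minutes, can_be_made_external?)
    ("correct file errors",  45, True),   # reviewable offline before kickoff
    ("distribute templates", 15, True),
    ("review meeting",       90, False),
    ("consolidate forecast", 60, False),
]

before = sum(m for _, m, _ in changeover)                    # everything done inline
after_step1 = sum(m for _, m, ext in changeover if not ext)  # externals moved off-process
after_step2 = after_step1 - 60  # e.g. trim the review meeting from 90 to 30 minutes

print(f"inline changeover time: {before} -> {after_step1} -> {after_step2} minutes")
```

The point of quantifying it this way is that each SMED step produces a measurable drop in the time the "machine" (here, the forecasting process) sits idle.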

While the example used above may not speak to you in your particular office or transactional situation, take it as an illustration of how SMED can be applied outside of the manufacturing machinery paradigm to reduce or eliminate NVA time from any process.

Monday, March 22, 2010

Six Sigma Tool of the Month-Defect vs Defective

What is a defect? What is a defective? What's the difference? Why do I care? Last month I discussed capability analysis as the Six Sigma Tool of the Month. Get a refresher on that here. One of the ways that we can measure capability is a statistic called DPMO, or Defects Per Million Opportunities. DPMO measures how many defects your process would produce given a million opportunities to create one. But that measurement begs the question: what is a defect? So that brings us to today's post.

A defect is a characteristic of the "product" that does not meet the customer's requirements (specifications). In a DPMO calculation, each one counts as one. Does a defect make a unit of product defective? Let's say we are producing a complex product with multiple quality characteristics, say a new automobile or a monthly sales forecast for 20 products. Does one defect make the car or the sales forecast defective? In other words, does a defect affect our decision about the purchase or retention of the product as a whole? If there is a blemish in the paint on the car, should the car go to the junkyard? If one product in our sales forecast is misrepresented by 10%, does that mean the whole sales forecast is no good and should be scrapped? Probably not.

That's the difference between a defect and a defective. A defective is a unit of product that cannot be considered good due to a preponderance of failing quality characteristics (defects), OR a failing quality characteristic so vital to the functional purpose of the product that its failure inhibits the product's primary function. In our new car and sales forecast examples, if the car had a paint blemish, a burnt-out headlight, a wobbly wheel, a staticky radio, a cracked windshield, and a tear in the seat, we'd return that car and demand a new one, or not buy it in the first place. In our sales forecast, if one product was off by 10%, another by 12%, another by 23%, and so on, eventually we would not believe the forecast at all. It's defective. Again, using our two examples, if the car was missing the engine, or the forecast for a major product was off by 50% or more, that alone would make the unit defective.
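The distinction above can be captured in a few lines. This is a toy sketch, with the function name, the critical-characteristic set, and the threshold all chosen for illustration: a unit is defective when a vital characteristic fails, or when minor defects pile up past what the customer would tolerate.

```python
def is_defective(defects, critical, max_tolerated=3):
    """defects: list of failing characteristic names for one unit.
    critical: set of characteristics vital to the product's function."""
    if any(d in critical for d in defects):
        return True                      # one vital failure alone is enough
    return len(defects) > max_tolerated  # or a preponderance of minor defects

critical = {"engine"}
print(is_defective(["paint blemish"], critical))                          # a defect, not defective
print(is_defective(["engine"], critical))                                 # vital function inhibited
print(is_defective(["paint", "headlight", "wheel", "radio"], critical))   # too many minor defects
```

Note that every failing characteristic still counts as a defect for DPMO purposes; the defective judgment is a separate, unit-level decision.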

So why care about the difference between a defect and a defective? It's all about the pain the customer is feeling. Remember, we measure process capability in terms of the deliverable(s) the customer cares about (specifications). Our reaction to measuring a missing engine in our automobile as just another defect might be different than if we take the customer's perspective that the product is unusable without the engine (defective).

Monday, March 15, 2010

Quality Methods Take Business by Storm; Whats Next?

Quality Circles, Total Quality Management, ISO 9001, QS9000, Baldrige, Six Sigma, Lean. All of these methods have at one time had lots of attention in the press and in the Quality profession. Looking back on these methods and their moment in the sun, I have one (maybe more than one) observation. All of these methods are valuable and have had major impact through disciplined application in a number of different companies, but the one thing that strikes me is that all of them have been touted as a savior for all of our quality woes.
Quality Circles were the thing in the 70's and 80's as America struggled to catch up with quality advances made in Japan under the tutelage of Dr. Deming. ISO 9001 took off in Europe and Asia first but steadily gained momentum in the US in the 90's as more and more international companies started to make it a condition of business. The 80's and 90's saw the excitement about possible benefits of achieving the Malcolm Baldrige National Quality Award. Much was made of stock indexes based on the Baldrige winners beating the S&P 500 by significant margins. Many of us lived through the PR assault for Six Sigma created by Jack Welch at GE and others. Dramatic claims were made about savings achieved using Six Sigma, attracting lots of media attention. Lean has been slowly but surely gaining adherents over the years as Toyota continues to perfect the methods and tools that have led to the elimination of millions of dollars in waste around the world.
I think this is an interesting parallel to what we have seen here in the US in the last 30 years when we think about the stock market. Many of these companies have been doing things well for years, and they just happen to do it using Lean, or Six Sigma, or ISO 9001, or Baldrige, or Quality Circles. People start to hear good things about this company or that and what they are doing with Baldrige, and how their stock price is up compared to everyone else, and suddenly everyone starts to think that the secret to success is the quality method du jour. Then of course everyone wants to learn about the method and pretty soon, just like preppies in the 80's, everybody is doing that sexy new quality method and looking for the big payoff. Some find it, others don't. Which is just additional proof that it's really not the method as much as the culture of the organization attempting to employ it.

Back to that interesting parallel I mentioned before. I have been a working adult through the last two recessions (2008-present and 2001-2002). While I was not a working adult during the recessions of 1979-81 or 1987, I do remember them well, and I think there is an interesting difference between these eras that relates to our attitude about quality. The recessions of 1979-81 and 1987 were what I would call "normal" recessions: normal in that they were caused by ordinary economic factors such as inflation, failure to compensate for changes in economic conditions, oil shortages, conflict, and so on. The recessions of 2001-02 and the present one are different in that they were primarily caused by irrational speculation coupled with poor business discipline. Not since the Great Depression has speculation caused so much upset in the world. Seems the old saying is true: those that don't learn from history are doomed to repeat it.

OK, so what does all this have to do with Baldrige, Lean and Six Sigma? They are parallels for our experience in the markets: sexy methods that were going to come in like white knights and save us boatloads of money. Baldrige winners were supposed to outperform the S&P 500 by significant percentages, ISO 9001 was going to keep American companies in business, and of course, Six Sigma saved the day at GE, Allied Signal and others in the 90's. None of this is false information; all of these things happened. So why is it that every company is not going for Baldrige, or applying Six Sigma or Lean? Why are none of these the silver bullet once claimed? Basically, it comes down to what really matters and what we should be focused on: Quality Fundamentals. Quality Fundamentals are those things that help a company delight customers, drive continuous improvement, and enable excellence. Whether you call it ISO 9001, Baldrige, Six Sigma, or Lean, those are the things that matter most.

Monday, March 8, 2010

ISO Stuff: Customer Focus

In this series I will talk about sections of the ISO 9001 standard that I have seen organizations struggle with.

This week's topic. Customer Focus.

Customer Focus is one of those "Mom & Apple Pie" requirements in the ISO 9001 standard. The requirement states "Top management shall ensure that customer requirements are determined and are met with the aim of enhancing customer satisfaction." It was added in the 2000 revision of the standard to increase the emphasis placed on meeting customer needs and improving customer satisfaction. While these have always been interpreted as requirements, they were not previously explicitly stated. There's little action mandated directly by this requirement, but Customer Focus is connected directly to two other clauses of the standard that do require specific action: Customer Related Processes, which describes our obligations around understanding customer product and service requirements, whether explicitly stated, implied, or assumed; and Monitoring & Measurement of Customer Satisfaction, which, as the title gives away, requires that we determine a method of measuring customer satisfaction with our products and/or services. Take these three requirements together and what we have is a system that says we must figure out what customers want (whether they tell us directly or not), meet those needs, measure how satisfied they are with our efforts, and that top management needs to care about all of that and use the information to spread the voice of the customer throughout the company.

OK, so how do we show that we meet this requirement? This set of requirements means that we need to get closer to our customers, ask the right questions, and listen intently to what they tell us, and what they don't tell us. Bring back that information and spread it around. Communicate it often and widely. The Voice of the Customer (VOC) needs to inform how we produce products and deliver services, and how we design new products and services. We need to establish measurements that tell us how we are doing and how satisfied our customers are with our efforts. Probably the easiest and most straightforward way is some measurement of customer complaints, although extrapolating inferences from customer complaints to all customers is fraught with difficulty. First, most customers who experience an issue don't complain. Those who do complain fit a demographic group that you may not want to assume all customers are in. Measuring complaints is a good first step, but a better way is to measure customer satisfaction directly. There are a number of ways to do this, from home-made surveys to passive collection of supplier scorecards that customers may provide. The easiest way to show that we are addressing this requirement is to show improving satisfaction, and evidence that information derived from customer feedback is incorporated into products or services.

Monday, March 1, 2010

Measure What Matters

I was recently reading an update on the 2009 Baldrige National Quality Award recipients, and in the right margin an article title caught my eye. Get the article here. The article was about a North Carolina school district that received the award in 2008, but the subject was really about developing meaningful measurements. The author discusses how this is a key concept in the Baldrige criteria, and I pondered that for a minute... Really, that's a key concept of business. Every business, regardless of its intentions towards national quality awards, should measure what matters most to its business. In last week's post I discussed a business situation in which the measurement did not drive action. Go here for a refresher. This is a key component of measuring what matters: does it draw in resources? Perhaps a better question might be: should it draw in resources? Should a measurement cause a leader to decide to dedicate some resources to its improvement? If it matters, the answer would be yes. So obviously the decisions about what to measure are very important. The first question should be: does this matter to our customers? If the answer is yes, then it matters to you too. It should be measured, and improved if not meeting needs. The second question should be: if this measurement does not meet the need, are we committed to improving it? If the answer to the first question is yes, then this one should be an easy yes. Sometimes it's not, though; hence my story from last week.

So how do we determine what to measure? What matters most? The short answer is: Get to know your customer. What matters most to them about the product or service you provide? Go ask them. Conduct surveys, solicit feedback, analyze complaints, talk to ex-customers about why they are ex-customers, and potential customers about their needs and expectations.

Let's be honest. Sometimes it's hard to achieve a balanced perspective on what's important. Every business is concerned about costs, quality, customer satisfaction, and employee satisfaction to some degree. But how do you balance these sometimes competing concerns to make sure that cost does not outweigh quality, or that customer satisfaction does not drain the company coffers? Enter the Balanced Scorecard.

The balanced scorecard approach suggests that we view the organization from four different perspectives and establish objectives, measures, targets, and initiatives for each perspective. The four perspectives are: Financial, Customer, Learning & Growth, and Internal Business Processes. These four areas surround the central theme of the business: the vision and strategy. As you can see from the model above, each area interacts with the areas adjacent to it, so improvement in learning and growth has a positive impact on internal business process performance and customer satisfaction. The better we are at our internal business processes, the better we are able to meet our customers' needs and improve our financial situation.
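The structure described above is easy to make concrete. Here is a minimal sketch of a scorecard as plain data; every objective, measure, target, and initiative in it is an invented placeholder, not a recommendation.

```python
# Four perspectives, each carrying an objective, a measure, a target,
# and an initiative, per the balanced scorecard approach.
scorecard = {
    "Financial": {
        "objective": "grow operating margin",
        "measure": "operating margin %", "target": 12.0,
        "initiative": "reduce scrap cost",
    },
    "Customer": {
        "objective": "improve satisfaction",
        "measure": "complaints per 1k orders", "target": 2.0,
        "initiative": "faster complaint response",
    },
    "Internal Business Processes": {
        "objective": "cut changeover time",
        "measure": "avg changeover minutes", "target": 30,
        "initiative": "SMED workshops",
    },
    "Learning & Growth": {
        "objective": "build problem-solving skill",
        "measure": "% staff trained in CI", "target": 80,
        "initiative": "improvement-methods training",
    },
}

for perspective, card in scorecard.items():
    print(f"{perspective}: measure={card['measure']!r}, target={card['target']}")
```

Even a table this simple forces the balancing act: a target in one perspective can be sanity-checked against the measures in the others before it drains the coffers.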

Of course, the balanced scorecard is only one approach to measuring what matters. There really is no right or wrong way to do it; just do it. Determining what matters most to the success of the business, developing measures, monitoring performance, and committing resources to improve when necessary are really the keys to success, no matter what format you choose.

Monday, February 22, 2010

Requirement One for Continuous Improvement Culture-Significant Emotional Event

The Significant Emotional Event (SEE or Epiphany) is what is required to start down the road to developing a continuous improvement culture. CI culture change is driven by the survival instinct.

So what is the SEE? It is the realization that creeps into the leadership of a company that if something doesn't change, things are gonna get a lot worse around here. Maybe the SEE comes from competition taking away market share, or customers firing you, or significant drops in sales and revenue. If the SEE comes from those kinds of things, chances are the ship is already lost, or at least the battle to turn things around will be a lot tougher.

The trick is to sensitize the organization to EARLY signs that help deliver the SEE, while there's still time to right the ship. So what are the early signs? Before going into that, one thing must be said: the organization must saturate the company with the Voice of the Customer (VOC). I've talked about one of my favorite books on this, The Customer Driven Company by Richard Whiteley. In The Customer Driven Company, Mr. Whiteley makes the point that quality is defined from the customer's viewpoint. This is a familiar idea to lean practitioners, where value is defined by the customer; same idea. The bottom line is that the organization has to start by accepting what the customer says as the truth and understand that we are all in the business of satisfying customer needs, not producing goods. Once the customer's version of the truth is endorsed, metrics dashboards need to be built and monitored to give those early warning signs of trouble. That's not enough, however. You have to act.

A recent example illustrates the point. The Voice of the Customer for Company A stated, among other things, that responsiveness on quality issues and effectiveness of the response were key factors for improving customer attitudes towards the company. Metrics were established for measuring time to key milestones for response on customer complaints. Performance against these metrics was poor. In many cases Company A did not meet any milestones for any complaints in a given month, dates slipped, excuses started flying.

What didn't happen next is what was important. When the monthly metrics reviews showed continued poor performance and a continuous improvement project was suggested, no action was taken. Why? Who knows; pick your excuse. Too much firefighting, not sure if it's really important, can't be fixed. We heard 'em all. The bottom line is, the excuses worked. Management was lulled into indecisiveness because they decided it was easier to dismiss the metric as unimportant than to act on trying to improve it.

If the Voice of the Customer was truly present in this situation, the metric would drive a decision to improve the process based on the poor measured performance and the understanding that it is an important characteristic of quality in the customer's eyes. The hard lessons of the Significant Emotional Event would send a message throughout the organization that says "Believe what the customer tells you, it's important to our survival."

To quote a recent political expediency, "Never let a good crisis go to waste; it's an opportunity to do important things you would otherwise avoid." Quality Leaders should recognize the early signs of the crisis and push the message for all it's worth.

Thursday, February 18, 2010

Lean Tool of the Month-Poka Yoke

Poka Yoke (pronounced poka yokay) is, quite literally, mistake proofing. A mistake-proofing device is anything that prevents an error from occurring. One of the seven lean wastes is Defects, or Poor Quality. Poor quality is a lean waste because it causes additional product to be manufactured, or additional repetitions of the work process to be performed, to achieve an acceptable result. This is waste. In an ideal lean world, we would do it right the first time, with no wasted effort or resources. So if poor quality is a significant issue in the lean waste stream, Poka Yoke is an effective ideal to help address it. There is no standard formulaic way to apply Poka Yoke; it's simply the concept of something that prevents the mistake from occurring in the first place. Without even knowing it, we all interact with at least a dozen Poka Yoke on pretty much a daily basis. Here are some examples that you may not have noticed.

Think of polarized and keyed electrical plugs: all of these are Poka Yoke because they prevent us from inserting the plug incorrectly. A Poka Yoke does not have to be a physical means of preventing a mistake. Ever order anything from the internet? Do your banking electronically? Refill your prescriptions over the phone? There are Poka Yokes in those systems that prevent the order from being placed without your authorization (that 3 or 4 digit number they always ask for from the back of the credit card), or without the account number for our savings account, or the refill number for our prescriptions. All of these systems have a Poka Yoke to prevent the vendor or the client (us) from making a mistake.

So the next time you're grappling with a tough quality problem in your process, don't dismiss a poka yoke just because you're working on a transactional or back office process where there's no physical control or prevention that can be easily done. Plenty of electronic business systems give the user some ability to require certain information in order to process a transaction to the next step. That's a poka yoke.
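A transactional poka yoke of the kind just described can be as simple as refusing to process a record until its required fields are present, blocking the mistake at the source. This is a minimal sketch; the field names and function are invented for illustration.

```python
# Required fields the hypothetical order system will not proceed without.
REQUIRED = ("account_number", "card_security_code", "shipping_address")

def submit_order(order: dict):
    """Reject the transaction before it enters the system, not after."""
    missing = [f for f in REQUIRED if not order.get(f)]
    if missing:
        raise ValueError(f"Cannot submit order, missing: {', '.join(missing)}")
    return "order accepted"

print(submit_order({"account_number": "12345",
                    "card_security_code": "321",
                    "shipping_address": "1 Main St"}))
```

The design choice matters: the check runs before the transaction is accepted, so a bad order can never reach downstream steps where it would be far more expensive to catch.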

Monday, February 15, 2010

Tug of War: Customer Satisfaction vs Cost Reduction

This is the classic struggle of Quality. High quality and high customer satisfaction demand that suspect product be thoroughly scrutinized and a high standard set for its release to a customer. Customers expect this, and quality staff strive to achieve it. The other side of the tug of war is cost. Business leaders are looking for every advantage they can get in reducing their costs, and scrap product is one of the biggest targets out there. Does it have to be this way? Can Quality and Business be on the same side? In short, the answer is yes, they can. What's required is a change in attitude and approach about quality. The two models below help to illustrate the different attitudes about quality. In the Economic Conformance Model illustrated below, the classic Quality vs Cost struggle is portrayed. The underlying philosophy is that high quality costs money and that there exists a point of diminishing returns. That point is called the Economic Conformance Level (ECL). The ECL is the point above which higher quality is more costly to achieve than it's "worth". This viewpoint pits those with "Quality" in their job title against those without in a dance to determine where the ECL exists for the business, and promotes a "good enough" rationalization mindset that eventually permeates every product quality decision.

For a different perspective, we turn to the work of Phillip Crosby. Mr. Crosby developed a four part quality philosophy through years of proven experience. The four components of Mr. Crosby's philosophy are:
1. The definition of quality is conformance to requirements

2. The system of quality is prevention

3. The performance standard is zero defects

4. The measurement of quality is the price of nonconformance

Mr. Crosby's approach is illustrated below in the "Quality is Free" perspective on quality costs.

A key component of Mr. Crosby's philosophy is the definition of quality. "Quality is defined as a conformity to certain specifications set forth by management and not some vague concept of 'goodness.' These specifications are not arbitrary either; they must be set according to customer needs and wants." This statement is very important for what it really means in practice. It means that everyone from top management on down is engaged in the work of quality, not just those folks with "Quality" in their job title. It means that management must grapple with interpreting what the customer wants and needs, set policy to achieve those wants and needs, accept the consequences of not achieving those results, and relieve the pressure on the organization to rationalize quality and cost.

I participated in a discussion recently which was frightening but illustrates the point. In this discussion, trend data were being reviewed and a poor trend was displayed for a product that is limping along. A meeting participant then announced that the product would continue to be made for the next couple of years in response to customer demand. Since this was beyond what we had all come to understand was the end of life of this product, the questions started to fly. Will we now engage suppliers in process improvement? Will we perform process improvement in-house? Will we upgrade the raw materials? In short, will we invest in helping make and keep the product "good"? The answer to all of these questions was no. Now here's the frightening part: tacked onto that no was a statement that the problems with the product arose because we had accepted the notion that variation should decrease over time and that continuous improvement was driven by customer demands. "We should just say no" to tighter control limits requested by the customer was the viewpoint expressed. Clearly, this is not a customer-centric approach to quality, and it shows that there is still work to be done to change hearts and minds.

So how do people with Quality in their job title fit into this system? The unpleasant answer is that, in my opinion, Quality folks should do everything in their power to work themselves out of a job by working with top management to change the view of quality from a job responsibility to an organizational value. When everyone values quality from the customer perspective, and acts accordingly, "Quality" folks in the traditional sense will no longer be needed because there will be no need to police for quality. Quality becomes a consultative role to management: interpreting customer needs and wants and giving guidance on what that means in practice.

Thursday, February 11, 2010

Six Sigma Tool of the Month-Capability Analysis

Capability analysis comes up twice in the six sigma process. Capability can be measured a number of different ways. I'll cover that in a minute.

First, what is capability analysis and why do we measure it? Capability analysis is measuring the variation and bias of our process against the requirements of the customer (otherwise known as specifications). The number reported, in whatever form, is a statistic that indicates to us how well the process meets the needs of the customer. The diagram below shows a process that has wide variation and is not centered between the specifications. Both of these issues result in defects, costing the organization money and reducing customer satisfaction.

So, how can capability be measured? At least four ways come to mind immediately: Cpk, Z score, DPMO, and Yield. The key thing to remember about all of these measurements is that they are just different ways of expressing the same idea: how good is my process at meeting the customer's needs? So, what's the difference? The short answer is, not much. Cpk and Z scores are directly convertible from one to the other just by multiplying or dividing by 3, depending on which statistic you prefer. DPMO is Defects Per Million Opportunities and expresses the idea that if you were to perform the work process a million times, this is how many results would be defects. Yield expresses the same thing as a percentage: the percentage of the output of the work process that is "good". I could share the formulas for calculating these different capability measurements, but I don't want you to leave, and those formulas are readily available from any number of sources, including myself.
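To make the relationships concrete, here is a short sketch that computes all four statistics for one hypothetical process (the spec limits, mean, and standard deviation are invented for illustration), assuming normally distributed output.

```python
from statistics import NormalDist

# Hypothetical process: spec limits and observed mean / standard deviation.
LSL, USL = 9.0, 11.0      # lower / upper specification limits
mean, sd = 10.2, 0.25     # observed process mean and standard deviation

# Cpk: distance from the mean to the NEAREST spec limit, in units of 3 sigma.
cpk = min(USL - mean, mean - LSL) / (3 * sd)

# Z score (sigma level): just 3 x Cpk, the same distance in units of 1 sigma.
z = 3 * cpk

# DPMO: expected defects per million opportunities, from the normal tails
# that fall outside the spec limits.
dist = NormalDist(mean, sd)
p_defect = dist.cdf(LSL) + (1 - dist.cdf(USL))
dpmo = p_defect * 1_000_000

# Yield: the percentage of output that is "good".
yield_pct = (1 - p_defect) * 100

print(f"Cpk = {cpk:.2f}, Z = {z:.2f}, DPMO = {dpmo:.0f}, Yield = {yield_pct:.2f}%")
```

Notice how the four numbers move together: shrink the standard deviation or re-center the mean between the limits and Cpk, Z, and Yield all rise while DPMO falls, because they are all views of the same tail area.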

What I'd rather discuss is why we do this in the context of Six Sigma, or really any continuous improvement project. There are two reasons: 1. baseline the process performance, and 2. measure the improvement. In step 6 of DMAIC we measure baseline capability. Capability is a measure of how well the process performs its intended function for the customer. Before we go and start tinkering with things in an effort to improve performance, we need to understand how the process performs today: that's a baseline. We come back later in DMAIC at step 14, after we have implemented the improvement plan and allowed the process to run for some time, and we measure performance again. This time, we compare the new process performance to the baseline to determine how much improvement we have made through our efforts.

Monday, February 8, 2010

Six Sigma-The Project Review

I'm reading a new book that everyone should go out and buy. It's called "The Brain Advantage: Become a More Effective Business Leader Using the Latest Brain Research". A friend of mine is a co-author, so I got a copy from him to read through. I've made several connections to my experiences in leadership.

I'm going to share one of them with you here, then you go right out and buy the book.

All of us involved in Six Sigma or Quality Management have participated in or led a management review meeting, and all of us have experienced some that were good, active discussions and others that were flat, routine, automatic. In Chapter 2 of "The Brain Advantage" the authors talk about scripting, and how brain research has shown that when mastery of a subject is achieved, the brain goes on autopilot, working less to accomplish the same task than before mastery was attained. This makes sense in a very real way. The old adage "practice makes perfect" applies here: once well practiced, an automatic script slowly takes over, and things become "second nature". However, practice makes perfect also makes people overlook when changes to the script are needed. A review meeting is one example where we should be "off-script", but typically review meetings follow a routine agenda, which reinforces the script in the mind, inhibiting questioning behavior, or "thinking outside the box" if you're keeping buzzword score. Six Sigma reviews are particularly susceptible to this scripting because of the step-by-step nature of the Six Sigma process, which reinforces the script. I have observed some Master Black Belts and Black Belts who are so scripted that when they see an innovative approach used to address a step in the Six Sigma process, they struggle to accept it because it does not follow the script of what they expect to see. A review meeting is the best place for people to be "on" and actively thinking, rather than playing a script. When reviewers are "on" rather than on autopilot, new and interesting approaches to problems can be discovered, and project leaders can be challenged to deliver better results rather than just marching through a bunch of pages to place a check mark on a script.
One way I plan to practice this new learning to improve the quality of my reviews is to ask the project leader for key points about what was learned that was surprising, insightful, or counterintuitive, or to deliver those points if I am the one being reviewed. Sort of like asking "Why should I care about this information?" or "What can I do with this information?" instead of looking for something to fill a space where I expect to see a space filled.

Get a copy of The Brain Advantage here

Thursday, February 4, 2010

ISO Stuff-Internal Audit

In this series I will talk about sections of the ISO 9001 standard that I have seen organizations struggle with. This week is Internal Audit.

Internal Audit is part of those requirements that deal with the need for us to monitor the performance of our Quality Management System. As such, you'll find the requirements for Internal Audit under section 8 of the ISO 9001 standard, where monitoring & measurement requirements are laid out.

The purpose of Internal Audit is to ensure that the QMS conforms to planned requirements and is effectively implemented.

Internal audit has two key interactions with other parts of the ISO 9001 standard. One is Corrective and Preventive Action: internal audits result in the identification of issues that require corrective action, and management for the audited area is responsible for taking timely action on the results of audits. The other key interaction is with Management Review: internal audit results are a key input to management review activities. Think of Internal Audit as the "Eyes and Ears" of management with regard to the integrity of the Quality Management System; it's intended to tell management how things are going and where things need to be improved.

Let's cover the requirements for internal audit in detail. The requirements are few, but there are some critically important points. Here they are:

Internal Audits should be:

1. Process Based
2. Scheduled and conducted according to the status and importance of the process to the overall Quality Management System (QMS).
3. Conducted by personnel not responsible for the work of the process
4. Acted on by management for the area.

Process based. This is a key requirement because it reflects a sweeping change in audit philosophy that came with the 2000 revision to ISO 9001: from a clause-based audit approach to a process-based one. The intent is that internal (and external) audit activities should focus on the process(es) being audited and let the auditor determine which clauses of the ISO 9001 standard are in play. This approach makes much more sense from the viewpoint of the way the business operates.

Scheduled & conducted according to the status and importance of the process. This requirement has broad, meaningful implications for audit programs everywhere. No longer are Quality Managers required to audit everything in a yearly cycle; now the Quality Manager can assess the status and importance of a process relative to other processes and decide how often to look at it. Many logical strategies can be employed to make this assessment, from a Failure Modes & Effects Analysis (FMEA) type of approach to a simple review of business results through dashboard metrics, to decide what to audit and how often.
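One way to make the status-and-importance assessment concrete is a simple risk-scoring model. Here is a minimal sketch; the process names, ratings, and frequency thresholds are all hypothetical, just to show the idea:

```python
# Hypothetical risk-based audit scheduler: rate each process on
# importance to the QMS and on current status (recent findings,
# metric trends), then map the combined score to an audit frequency.

def audits_per_year(importance, status):
    """importance and status are rated 1 (low risk) to 5 (high risk)."""
    score = importance * status  # 1..25
    if score >= 15:
        return 4   # quarterly
    elif score >= 8:
        return 2   # twice a year
    else:
        return 1   # annually

# Example ratings (illustrative only)
processes = {
    "Order Entry":      (5, 4),  # critical process, recent findings
    "Purchasing":       (3, 2),
    "Document Control": (2, 1),
}

for name, (imp, stat) in processes.items():
    print(f"{name}: audit {audits_per_year(imp, stat)}x per year")
```

The exact weights matter less than having a documented rationale you can show an external auditor for why one process gets quarterly attention and another gets an annual look.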

Conducted by personnel not responsible for the work of the process being audited. This is simple: auditors cannot audit their own work. It ensures an unbiased assessment of the process.

Acted on by management for the area. We've covered this already above. Internal audits find things that need to be corrected; management responsible for the process being audited must act on the findings to correct the issues discovered.

As you can see, the requirements for internal audit are not prescriptive. There is no detailed "how to" in the requirements. This generic approach leaves the Quality Manager a lot of room to decide how to address the requirements in a way that is effective for their QMS.

One final point about internal audit. Internal audit is NOT intended to be a "gotcha" process, where we "trick" people into revealing the skeletons in the closet. It is intended to be an unbiased assessment of compliance with the planned activities of the QMS. Auditors should be looking for best practices and improved processes just as much as for non-compliance with planned requirements.

Monday, February 1, 2010

Lean-It's Not Just for Manufacturing Anymore

Lean is not just for manufacturing anymore! Really, lean has always been about removing waste from processes, any processes; it doesn't matter if they are manufacturing processes or office processes. So why the disconnect? Why do so many look at Lean as something that applies to the manufacturing floor but not to their office space? I think the difference is in how you think about Lean. Consider 5S, for instance. It is very easy to see how and where 5S is applied on the shop floor. It involves the physical arrangement of the space and the tools within it. Easy to see and wrap your brain around. Move into the office and things get a little harder, but you can still see removing clutter and establishing organization and homes for everything. Move into the digital office, though, and you've got problems. To see an opportunity for 5S you have to change your thinking, because the mess and lack of organization is not right out in front of you; it might be in your computer. The same 5S concepts apply to your e-document and data storage and retrieval as they would if you printed it all and put it on your desk.
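To make the digital 5S idea concrete, here is a minimal "Sort" pass over a shared folder, the e-document equivalent of red-tagging unused tools. The folder path and one-year threshold are illustrative assumptions, not a prescription:

```python
# Digital 5S "Sort" sketch: flag files that haven't been modified in
# a year as candidates for archiving or deletion.
import os
import time

def stale_files(folder, max_age_days=365):
    """Return paths of files under folder older than max_age_days."""
    cutoff = time.time() - max_age_days * 86400
    stale = []
    for root, _dirs, files in os.walk(folder):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getmtime(path) < cutoff:
                stale.append(path)
    return stale

# e.g. stale_files("shared_drive/quality_docs") returns a red-tag
# list to review, just as you would walk the shop floor with tags.
```

The point is not the script itself but the habit: a periodic, criteria-based sweep of digital storage is the same discipline as a 5S audit of a workbench.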

What is the focus of the Lean office then? Productivity, Efficiency, and Customer Satisfaction. Let's consider Productivity and Efficiency first. A 2008 report on global productivity by Proudfoot Consulting showed that unproductive time (defined as time spent doing things that were unproductive for the company) rose to a new high of 34.3%, or roughly 1.7 days of wasted time per employee per week. The story by industry sector is not much better. See the chart below for the sector breakout.

The number one cause cited for low productivity was staffing shortages and labor issues. This is great news if you're thinking about how to implement a Lean Office, because it means that by streamlining your processes and removing wasteful delays, reviews, and approvals, you can bridge the staffing gap and do more with the same staffing levels.

Be sure of one thing though. If you're thinking about how to become more efficient, your competition is too. The chart below shows the gap that exists by geography between potential and realized gains in productivity. This data shows Europe and North America losing ground to the Asia-Pacific and BRIC regions in terms of realizing productivity gains. The really bad news is that the staffing shortage issues don't exist in those regions, meaning that even as they work to gain efficiency and productivity, they have no shortage of labor to bridge the gap.

Here's an example of a ripe opportunity for waste reduction. How many of us receive reports that we don't get much from? Or maybe don't even read at all? According to the 2008 Global Productivity Report, managers said they needed on average 6.6 reports to do their job, but received 10 per month, meaning that 34% of the reports created every month are not needed. How much staff time is spent preparing those reports? The chart below shows the data broken out by best and worst. In the west, we are swimming in reports! This is a waste of your time, and also of the time of the people who have to put these reports together each month.
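The arithmetic behind that 34% figure, using the survey numbers cited above, is simply the share of received reports that exceed what managers said they needed:

```python
# Reports received vs. reports actually needed
# (2008 Global Productivity Report figures cited above).
received = 10    # reports received per month
needed = 6.6     # reports managers said they needed

unneeded_pct = (received - needed) / received * 100
print(f"{unneeded_pct:.0f}% of reports created each month are not needed")
```

Run the same calculation on your own inbox before your next report-rationalization exercise; the waste is usually easy to quantify.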

That covers Productivity and Efficiency, but what about Customer Satisfaction? I mentioned earlier that to think about Lean in the digital age, we have to change our paradigm. In the lean digital office environment, we first must understand that everyone has a customer, internal or external, and that all customers have legitimate needs that we can fulfill. To improve customer satisfaction in the Lean Office, just as in the Lean Manufacturing setting, you must understand your value proposition. What is it that your customers value from your organization? What do they need, really need, in their terms, not yours? If they were paying you, what would they want to pay for? I'll give an example to consider: an internal training development team. They might say that the customer needs good quality training content, jazzy graphics, and slick animations to keep people interested. If we were to ask the customer, they would probably say that what they need is to have their people's skills enhanced so they can be more effective at their jobs. If you start from what the customer really values, you can imagine that the outcome might be very different.

Lean is not just for the shop floor anymore. Lean is universal; it applies to any process. Not every lean tool will apply in every situation, but the overall lean principles apply universally to all business situations: reducing non-value added activity, enhancing value added process steps, doing only those things that enhance value for the customer, making only what is needed by the customer, and continually striving to improve.

Monday, January 25, 2010

Create a Learning Culture

Want to unlock the potential of your employees? Want to harness the power of their ideas? Need to gain their cooperation and active participation in making the company better?

Create a Learning Organization.

What is a learning organization? A learning organization is one that places a high value on the knowledge and experience of individuals, but understands the vulnerability of not transferring that knowledge into a system through which others can share and benefit from it. A learning organization seeks out the best practices of individuals, recognizes them, and integrates them into the process for all to use, thereby spreading the benefit of those best practices across the company. A learning company recognizes that its people are its greatest asset and the source of its wealth and success.

There are several key steps that need to be taken to facilitate the transition to a learning organization. Upfront planning and commitment is key. Of primary concern early after the decision to create a learning culture is whether, once we take steps to create interest in learning, we will be properly prepared to support that interest. If we are not ready, apathy will result. So before we can really go forward with grandiose communication plans and goals and objectives for learning, we need to spend some time setting up the infrastructure for learning. One approach is to visualize and graphically represent the various career paths through the organization. This can be done through a career mapping initiative. The goal of this initiative is to thoroughly understand the key skill needs within the organization by position, streamline and standardize the position criteria, and map all of the potential career paths through the organization. A sample of how this might look is included here:

In this example, you can see that there are four main tracks of progression through the organization, but there are cross-connections along the way, allowing people to move between tracks into other areas of interest.
Let's take the Six Sigma Black Belt position as an example. For someone to be successful in that position, they need several critical skills and character traits. First they need training in the essentials of what a Black Belt does. They would need a firm understanding of the DMAIC process, statistics, change psychology, team building, data collection, project management, and influencing skills, plus a desire to excel. Some of those are easily boiled down to quantifiable, measurable things; others are character traits, which for the most part you either have or you don't. This information, along with the roadmap, gives employees the essential information needed to judge whether they are cut out for a role as a Black Belt. It also gives them the path to success if they decide to pursue the position. They would know that they need to get some training in statistics, learn how to lead projects and people, learn how to tactfully challenge the status quo, and learn how to handle team dynamics.

As you can see from the example, quite a bit of detail goes into this analysis. The benefit to the organization is finally being able to charge individuals with some responsibility for their own careers and give them a roadmap to success. If the philosophy is that career development is an individual responsibility, then you have to give people the tools to manage their own careers. This is the first piece of the puzzle. Once this is done, spend some time evaluating training programs and educational offerings in the marketplace and establishing, through purchase or internal development, the catalog of available training. One key factor in this effort is to ensure that the educational offerings we elect to support are in alignment with our organizational values and goals. For instance, if we use a DMAIC process for problem solving, we should not be sponsoring training activities around other problem solving methods. That would contradict our aim. These are the two key elements needed to start the first phase of our transition to a learning organization. Once they are completed and ready for communication, we can roll out the career management program to everyone and begin encouraging managers to sit with their people to discuss career planning, create career plans, and act on those plans. This activity will in turn enable progress in another major focus area: promotion from within.

Promotion from within is an important part of creating a learning culture, but it has to be managed properly. Talking up a promote-from-within value should be done in the context of promoting people with the right skill set for the job. Promotion from within is not an exercise in saluting the flag; we shouldn't do it for its own sake, but rather make it a key component of the larger aim: raising the skill level of the workforce. Promotion opportunity is a tremendous incentive for growth and learning. Poorly managed, promotion from within will only result in a frustrated workforce submitting resumes for jobs they are not remotely qualified for, then complaining when they are not selected.

The third major area of focus is knowledge sharing. We have to create systems through which knowledge can be pulled by the individual who has a need; a major vehicle for this could be the company intranet. Some recent experiences of mine indicate that as much as 25% of our resources are wasted repeating things we already learned once before, and that many of the tough problems causing quality issues have already been solved, just not shared. This is a tremendous waste of company resources. A knowledge management system can allow knowledge to be collected, catalogued, and collaborated on, so that it is learned once and applied many times, to the greater benefit of the company. Many times a process improvement is achieved in only one location, through six sigma or other means, but could be shared with other locations that have the same process to multiply the benefit of what was learned. Worse yet are examples where that same process improvement was "discovered" by one of those other locations months or years later, wasting valuable time when we could have been collecting the savings. The idea of a knowledge management system is to accelerate learning by avoiding repeating the same experiments time and again, and instead building on previous knowledge to achieve new, higher levels of knowledge.

These actions only address part of the issue however. One of the key factors to our success is how well we create a desire to improve within the workforce. Everything we have discussed so far goes largely towards creating desire to learn for extrinsic needs (promotion, leadership, salary, position, etc.). We need to drill down further in order to create intrinsic motivation to improve our own work everyday.

Many of the ideas and tools of lean can and will help with that. First and foremost is making things visible. Creating visual management systems that allow the worker to see how they are doing is a basic first requirement. I start here because I believe that if we make things visible, that will create some of the initial drive to improve, which will force some of the other changes needed. Those changes are: giving control to the worker, enabling good decision making through well planned instructions, training on the tools of improvement, allowing workers to control the quality of their own documentation and procedures, educating them on the technology of our product and/or the process and rewarding the right behaviors. I believe making things visible is the key to creating motivation. Everything else will follow. The final action here is to determine how to effectively marry up the motivation and results of improving our work everyday with the global knowledge sharing piece so that we can quickly and easily share with each other best practices that come from our improvements.

With these steps taken, we will be far down the road to building a learning organization and fully leveraging the most valuable asset we have, our people.

Thursday, January 21, 2010

Lean Tool of the Month-Value Stream Mapping

Value Stream Mapping is an exercise to identify the value added steps of a process and, more importantly, the non-value added steps and delays. The reason to do this is to identify where waste, rework, and bureaucracy occur in a process, and to seek ways to streamline by removing unnecessary steps, loops, and approvals so that only the value is left. By the way, a word on value. Value is defined by the customer. The customer is not always the paying customer at the end of a process waiting for the product or service they ordered; it can be the next process owner down the line towards fulfilling the paying customer's needs. Value, generally speaking, consists of the steps that move the product or service towards the customer. Anything that does not contribute to that value chain is waste. Some might say that product inspection is valuable because it ensures that the customer gets what they want. While that is true from one perspective, from the value-chain perspective inspection is not something that moves the product towards the customer. In an ideal world, where high quality is present, inspection is not necessary. Inspection is, therefore, a non-value added activity. Since we don't live in an ideal world, some non-value added activities are required. View them as necessary evils and seek to reduce them whenever possible.

The first premise of value stream mapping is that it starts with the customer. Since mapping in reverse is a mentally tough exercise, we typically map starting with the supplier, through all of the work process steps and data exchanges, to the customer. Once the map is built, however, we must look critically at the steps of the process through the eyes of the customer and determine what is valued and what is not. During the mapping exercise, capture any wait times, delays, rework loops, or approvals that occur along the way. Capture data exchanges and manual or electronic forms that get filled out, all the way back to the start of the process, including the supplier. As you complete this exercise, the non-value added activity will jump off the page, and it will be obvious where the improvement opportunities lie.
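Once the map is built, the numbers tell the story. A minimal sketch, with hypothetical step names and times, shows how the value-added ratio falls out of the mapped data:

```python
# A value stream map boiled down to numbers: sum value-added (VA)
# time vs. total lead time to get the VA ratio.
# (name, hours, value-added?) -- step names and times are made up.
steps = [
    ("Receive order",  0.5, True),
    ("Wait in queue", 48.0, False),
    ("Enter order",    1.0, True),
    ("Approval loop", 24.0, False),
    ("Pick & pack",    2.0, True),
    ("Dock wait",     12.0, False),
]

total = sum(hours for _, hours, _ in steps)
va = sum(hours for _, hours, is_va in steps if is_va)
print(f"Lead time: {total} h, value-added: {va} h ({va / total:.1%} VA ratio)")
```

A single-digit VA ratio like this one is typical of an unimproved office process; the waits, loops, and approvals dominate, which is exactly where the streamlining effort should go.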

Value stream mapping is an effective tool. I have used it to remove significant chunks of time from processes that I was improving, and so can you.

Below is an example of a value stream map of the "As Found" process

and here is the improved process

This improvement saved 26 days of cycle time.

Monday, January 18, 2010

Worker Satisfaction at an All Time Low

A recent news story caught my attention and I thought it was worth commenting on here. Here's the story. The gist of the story is that worker satisfaction is at its lowest since data has been collected on the subject. The major idea of the article was that this low satisfaction has long term implications for innovation and excellence. Of course this is true; the issue is risk. When people do not feel safe, they don't take risks. Risk taking is what produces innovation, excitement, and a feeling of satisfaction with the job. Accomplishing something hard is very rewarding. Three things that workers said in the survey were:

1. Fewer workers consider their jobs to be interesting.

2. Incomes have not kept up with inflation.

3. The soaring cost of health insurance has eaten into workers' take-home pay.

The second and third items are certainly not trivial, but they are a sign of the times given the width and depth of the recession we are in. The first item is what interests me, however, because it fits the way the workplace has changed over the past two years. The idea is survival. If you have a job, thank your lucky stars and do whatever you have to do to keep it as long as you can. The problem with this is that we can't keep playing whack-a-mole with our workforce and expect them to stick their necks out and take a risk for the business. Risk taking is necessary for businesses to thrive and survive. The role of the leader is to create an environment where risk taking is allowed. Let me be clear on what this means. Risk is a two-sided coin. Doing something hard is, and should be, very rewarding, but taking a risk and failing can be rewarding too, as long as failure does not put the risk taker out on the street. A risk taken but failed is a learning experience. Thomas Edison failed hundreds of times in his quest for the incandescent light bulb. After his success, someone asked him about his many failures, to which he replied that they were not failures at all; in fact, he had learned many hundreds of ways NOT to make a light bulb. As you read this post, probably under a light of some kind, think about where we would be had Edison laid himself off after failing to invent the light bulb in 5 or 10 attempts. Is there an Edison at your workplace, keeping his or her head down, playing the survival game?

Thursday, January 14, 2010

Six Sigma Tool of the Month-Gage R&R

In this series of posts, I review the purpose, use, interpretation, and limitations of various six sigma tools.

This post is about Gage R&R, a critical tool in the Measure and Control phases of DMAIC. There are many aspects of Gage Repeatability and Reproducibility; I'll attempt to cover the major points here, starting with the purpose(s). When a decision is made to charter a six sigma project, one of the major concerns early in the project is the reliability of the data used to determine root causes. It is very important that the data analyzed about the project problem be accurate and consistent, so that the analysis is sound and good conclusions can be drawn from the data. This ensures that improvement plans are effective at addressing the root causes and can deliver real improvement. Gage R&R comes up again in the Control phase of DMAIC to help ensure that the significant parts of the improvement plan can be accurately measured. There is another purpose for Gage R&R as well; in fact, Gage R&R is primarily a tool for determining whether the measurement systems used to evaluate the quality aspects of a product will produce reliable results. In either situation, the intent of Gage R&R is to give an indication of the proportion of the variation present in our system that comes from the measurement system itself. At its most basic level, a measurement system must be able to distinguish good product from bad product. Understanding the ability of the measurement system to do that is the purpose of a Gage R&R study.

There are several aspects to a Gage R&R study. Among the most important things to consider, each covered below, are:
-Reproducibility
-Repeatability
-Accuracy & Precision
-Bias
-Linearity
-Sample Selection

Let's start with Reproducibility. Reproducibility is the portion of the variation in the measurement system that comes from differences between people. Most measurement systems have two primary components of variation: people-induced variation and instrument-induced variation. Reproducibility tells us about differences in the way people perform the steps of a measurement method, and how much those differences matter.

Repeatability is the portion of the measurement system variation that comes from the instrument itself. Repeatability is a measure of the ability of the measurement device to deliver a consistent result over several measurements.
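To show where these two components come from, here is a deliberately simplified sketch with made-up readings from two operators measuring two parts. A real study would use the ANOVA or average-and-range method from a statistics package; this only illustrates the decomposition:

```python
# Simplified Gage R&R sketch: repeatability is estimated from the
# spread of repeat readings within each operator/part cell, and
# reproducibility from the spread between operator averages.
from statistics import mean, pvariance

# measurements[operator][part] = repeat readings (made-up data)
measurements = {
    "op_A": {"part1": [10.1, 10.2, 10.1], "part2": [12.0, 11.9, 12.1]},
    "op_B": {"part1": [10.4, 10.5, 10.4], "part2": [12.3, 12.2, 12.4]},
}

# Repeatability: average within-cell variance (equipment variation)
cells = [r for parts in measurements.values() for r in parts.values()]
repeatability_var = mean(pvariance(c) for c in cells)

# Reproducibility: variance between operator grand means
op_means = [mean(sum(parts.values(), [])) for parts in measurements.values()]
reproducibility_var = pvariance(op_means)

print(f"repeatability variance:   {repeatability_var:.5f}")
print(f"reproducibility variance: {reproducibility_var:.5f}")
```

In this made-up data the operator-to-operator spread dwarfs the instrument spread, which would point you at training and method standardization rather than at the gage itself.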

Accuracy and Precision can be taken together. Accuracy is how close the measured result is to the true result. Remember that our measurement system is intended to give us high confidence in the data we use to decide on product quality or to determine root causes and improvement plans for six sigma. Precision is the amount of variation seen across repeated results. Think of these two like a bullseye target. See below for a visual example showing the relationship between Accuracy and Precision.

Bias is the difference between the measured result for a sample and the actual result for that same sample; bias is the error that exists in the measurement system. In the example below we are looking at a car speedometer. If we compare the measured result of the speedometer at three speeds (30, 50, and 70 mph), and we know the actual speed the car is going, we can determine the bias, or error, across the range of interest of the measurement system. The red arrow indicates the measured speed on the speedometer, and the yellow arrow is the actual speed as measured by some other device (a GPS, for example). We see that at an indicated 30 mph, we are actually traveling at 25 mph; there is a negative bias of 5 mph. At 50 mph, we are actually traveling at 50 mph, so there is no bias at this speed. At 70 mph, however, we are actually traveling at 85 mph! This is a positive bias of 15 mph. Our local police officer would be very interested in this result. In an ideal world, bias would not exist, but since we don't live in an ideal world, we know it does, and we would like the bias to be predictable. That leads us to the next measurement characteristic: Linearity.
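The same example in numbers, following the sign convention used above (actual speed minus indicated speed):

```python
# The speedometer bias example: (indicated mph, actual mph) pairs
# taken from the discussion above.
readings = [(30, 25), (50, 50), (70, 85)]

# Bias per the post's convention: actual minus indicated
biases = {indicated: actual - indicated for indicated, actual in readings}

for indicated, actual in readings:
    print(f"indicated {indicated} mph -> actual {actual} mph, "
          f"bias {biases[indicated]:+d} mph")
```

Note that some references define bias the other way around, as measured minus reference; whichever convention you pick, state it and stick to it across the study.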

Linearity is the measure of the bias over the range of interest in the measured samples. In our example, we see that at 30 mph there is a negative bias of 5 mph, and that as we proceed up the scale of measurement, the bias increases to plus 15 mph at 70 mph. This is NOT a linear response. If you look at the chart below the speedometers, you will see that the actual speed does not follow a straight line; it is more quadratic (curved) than straight. This is useful information for us. First, it tells us that we cannot apply a simple correction factor for bias across the range of measurement. If we were to apply a correction factor based on either end of the measurement range, results at the opposite end of the range would not be accurate. Second, the linearity tells us that the error grows as speed increases, so measurements at the high speed end of the range are more suspect and risky than measurements at the low speed end.
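A quick way to see the non-linearity without any plotting is to compare the bias change over equal speed increments. If the response were linear, the increments would be equal:

```python
# Linearity check for the speedometer bias from the example above.
speeds = [30, 50, 70]
bias = [-5, 0, 15]   # actual minus indicated, per the convention above

# Bias change over each equal 20 mph step
increments = [bias[i + 1] - bias[i] for i in range(len(bias) - 1)]
print("bias change per 20 mph step:", increments)
```

The unequal steps (+5 then +15) confirm that a single correction factor cannot fix the whole range; the response curves upward rather than following a straight line.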

Finally, let's talk a bit about sample selection. Hopefully through this discussion you have seen that one of the most important aspects of setting up a Gage R&R study is the choice of samples to measure. Remember the purpose of our study from earlier: to determine whether our measurement system can produce reliable results that can be used in decision making, either for our six sigma project or in the actual measurement of quality. In order to know about things like bias and linearity of response, we must measure samples that span the range of interest of our measurement. What does this mean? Let's say, using our speedometer example from above, that we have an upper specification of 65 mph and a lower specification of 30 mph. If we were to choose to measure only at 50 mph, because that result is in the middle of the range of interest, we would get a very different picture of our capability to measure speed than if we measured at either end of, and outside, the range of specification. If we only measured at 50 mph, we would incorrectly conclude that our speedometer is accurate and precise, with no bias. We would not be able to comment on linearity, and this would result in our surprise at getting a speeding ticket for going about 12 mph over the limit at an indicated 65 mph. If we measure across the range of interest, we can add linearity to our understanding and know that we should not be confident in results near the upper specification of 65 mph. This tells us that we should set our upper specification for the speedometer somewhere in the area of 58 mph (measured) to always be under 65 mph (actual).

Gage R&R is a very useful tool in your six sigma tool box. It is also vital to ensuring that customers receive good product that meets their needs. Gage R&R studies are constructed to tell us how much confidence we can have in the measurement system, and they can tell us where we need to improve it. Through analysis of the statistics that come along with the study, we can determine whether person-to-person variation is causing issues or whether the device itself is the source of variation. In any case, the gage study is a versatile tool to help identify improvement needs and improve quality.