TQM & SIX SIGMA etc.

Lean manufacturing

Lean manufacturing or lean production, often simply, "Lean," is a production practice that considers the expenditure of resources for any goal other than the creation of value for the end customer to be wasteful, and thus a target for elimination. Working from the perspective of the customer who consumes a product or service, "value" is defined as any action or process that a customer would be willing to pay for. Basically, lean is centered on preserving value with less work. Lean manufacturing is a generic process management philosophy derived mostly from the Toyota Production System (TPS) (hence the term Toyotism is also prevalent) and identified as "Lean" only in the 1990s.[1][2] It is renowned for its focus on reduction of the original Toyota seven wastes to improve overall customer value, but there are varying perspectives on how this is best achieved. The steady growth of Toyota, from a small company to the world's largest automaker,[3] has focused attention on how it has achieved this.

Lean manufacturing is a variation on the theme of efficiency based on optimizing flow; it is a present-day instance of the recurring theme in human history toward increasing efficiency, decreasing waste, and using empirical methods to decide what matters, rather than uncritically accepting pre-existing ideas. As such, it is a chapter in the larger narrative that also includes such ideas as the folk wisdom of thrift, time and motion study, Taylorism, the Efficiency Movement, and Fordism. Lean manufacturing is often seen as a more refined version of earlier efficiency efforts, building upon the work of earlier leaders such as Taylor or Ford, and learning from their mistakes.

Contents

1 Overview
o 1.1 Origins
2 A brief history of waste reduction thinking
o 2.1 Pre-20th century
o 2.2 20th century
o 2.3 Ford starts the ball rolling
o 2.4 Toyota develops TPS
3 Types of waste
4 Lean implementation develops from TPS
o 4.1 An example program
o 4.2 Lean leadership
o 4.3 Differences from TPS
5 Lean services
6 Lean Goals and Strategy
7 Steps to achieve lean systems
o 7.1 Design a simple manufacturing system
o 7.2 There is always room for improvement
o 7.3 Continuously improve
o 7.4 Measure
8 See also
o 8.1 Closely related methodologies
o 8.2 Predictive validation techniques
o 8.3 Terminology
o 8.4 Related engineering disciplines
o 8.5 Areas of implementation outside production
o 8.6 Other
9 References
10 External links

Overview

Lean principles come from the Japanese manufacturing industry. The term was first coined by John Krafcik in a Fall 1988 article, "Triumph of the Lean Production System," published in the Sloan Management Review and based on his master's thesis at the MIT Sloan School of Management.[4] Krafcik had been a quality engineer in the Toyota-GM NUMMI joint venture in California before coming to MIT for MBA studies. Krafcik's research was continued by the International Motor Vehicle Program (IMVP) at MIT, which produced the international best-seller book co-authored by Jim Womack, Daniel Jones, and Daniel Roos called The Machine That Changed the World.[1] A complete historical account of the IMVP and how the term "lean" was coined is given by Holweg (2007).[2]

For many, Lean is the set of "tools" that assist in the identification and steady elimination of waste (muda). As waste is eliminated quality improves while production time and cost are reduced. Examples of such "tools" are Value Stream Mapping, Five S, Kanban (pull systems), and poka-yoke (error-proofing).

There is a second approach to Lean Manufacturing, which is promoted by Toyota, in which the focus is upon improving the "flow" or smoothness of work, thereby steadily eliminating mura ("unevenness") through the system and not upon 'waste reduction' per se. Techniques to improve flow include production leveling, "pull" production (by means of kanban) and the Heijunka box. This is a fundamentally different approach from most improvement methodologies, which may partially account for its lack of popularity.

The difference between these two approaches is not the goal itself, but rather the prime approach to achieving it. The implementation of smooth flow exposes quality problems that already existed, and thus waste reduction naturally happens as a consequence. The advantage claimed for this approach is that it naturally takes a system-wide perspective, whereas a waste focus sometimes wrongly assumes this perspective.

Both Lean and TPS can be seen as a loosely connected set of potentially competing principles whose goal is cost reduction by the elimination of waste.[5] These principles include: pull processing, perfect first-time quality, waste minimization, continuous improvement, flexibility, building and maintaining long-term relationships with suppliers, autonomation, load leveling, production flow, and visual control. The disconnected nature of some of these principles perhaps springs from the fact that the TPS has grown pragmatically since 1948 as it responded to the problems it saw within its own production facilities. Thus what one sees today is the result of need-driven learning to improve, in which each step has built on previous ideas, rather than something based upon a theoretical framework.

Toyota's view is that the main method of Lean is not the tools, but the reduction of three types of waste: muda ("non-value-adding work"), muri ("overburden"), and mura ("unevenness"), to expose problems systematically and to use the tools where the ideal cannot be achieved. From this perspective, the tools are workarounds adapted to different situations, which explains any apparent incoherence of the principles above.

Origins

Also known as flexible mass production, the TPS has two pillar concepts: Just-in-time (JIT) or "flow", and "autonomation" (smart automation).[6] Adherents of the Toyota approach would say that the smooth flowing delivery of value achieves all the other improvements as side-effects. If production flows perfectly then there is no inventory; if customer-valued features are the only ones produced, then product design is simplified and effort is only expended on features the customer values. The other of the two TPS pillars is the very human aspect of autonomation, whereby automation is achieved with a human touch.[7] The "human touch" here means automating so that machines and systems are designed to aid humans in focusing on what humans do best. This aims, for example, to give the machines enough intelligence to recognize when they are working abnormally and flag this for human attention. Thus, in this case, humans would not have to monitor normal production and only have to focus on abnormal, or fault, conditions.

Lean implementation is therefore focused on getting the right things to the right place at the right time in the right quantity to achieve perfect work flow, while minimizing waste and being flexible and able to change. These concepts of flexibility and change are principally required to allow production leveling, using tools like SMED, but have their analogues in other processes such as research and development (R&D). The flexibility and ability to change are within bounds and not open-ended, and therefore often not expensive capability requirements. More importantly, all of these concepts have to be understood, appreciated, and embraced by the actual employees who build the products and therefore own the processes that deliver the value. The cultural and managerial aspects of Lean are possibly more important than the actual tools or methodologies of production itself. There are many examples of Lean tool implementation without sustained benefit, and these are often blamed on weak understanding of Lean throughout the whole organization.

Lean aims to make the work simple enough to understand, do and manage. To achieve these three goals at once, some believe that Toyota's mentoring process (loosely called Senpai and Kohai) is one of the best ways to foster Lean Thinking up and down the organizational structure. This is the process undertaken by Toyota as it helps its suppliers improve their own production. The closest equivalent to Toyota's mentoring process is the concept of "Lean Sensei," which encourages companies, organizations, and teams to seek outside, third-party experts who can provide unbiased advice and coaching (see Womack et al., Lean Thinking, 1998).

There have been recent attempts to link Lean to Service Management, perhaps one of the most recent and spectacular of which was London Heathrow Airport's Terminal 5. This particular case provides a graphic example of how care should be taken in translating successful practices from one context (production) to another (services) while expecting the same results. In this case the public perception is more of a spectacular failure than a spectacular success, potentially resulting in an unfair tainting of the lean manufacturing philosophies.[8]

A brief history of waste reduction thinking

The avoidance, and later the removal, of waste has a long history, and this history forms much of the basis of the philosophy now known as "Lean". In fact many of the concepts now seen as key to Lean have been discovered and rediscovered over the years by others in their search to reduce waste.

Pre-20th century

The printer Benjamin Franklin contributed greatly to waste reduction thinking

Most of the basic goals of lean manufacturing are common sense, and documented examples can be seen as early as Benjamin Franklin. Poor Richard's Almanac says of wasted time, "He that idly loses 5s. worth of time, loses 5s., and might as prudently throw 5s. into the river." He added that avoiding unnecessary costs could be more profitable than increasing sales: "A penny saved is two pence clear. A pin a-day is a groat a-year. Save and have."

Again Franklin's The Way to Wealth says the following about carrying unnecessary inventory. "You call them goods; but, if you do not take care, they will prove evils to some of you. You expect they will be sold cheap, and, perhaps, they may [be bought] for less than they cost; but, if you have no occasion for them, they must be dear to you. Remember what Poor Richard says, 'Buy what thou hast no need of, and ere long thou shalt sell thy necessaries.' In another place he says, 'Many have been ruined by buying good penny worths'." Henry Ford cited Franklin as a major influence on his own business practices, which included Just-in-time manufacturing.

The concept of waste being built into jobs and then taken for granted was noticed by motion efficiency expert Frank Gilbreth, who saw that masons bent over to pick up bricks from the ground. The bricklayer was therefore lowering and raising his entire upper body to pick up a 2.3 kg (5 lb.) brick, and this inefficiency had been built into the job through long practice. Introduction of a non-stooping scaffold, which delivered the bricks at waist level, allowed masons to work about three times as quickly, and with less effort.

20th century

Frederick Winslow Taylor, the father of scientific management, introduced what are now called standardization and best practice deployment. In his Principles of Scientific Management, (1911), Taylor said: "And whenever a workman proposes an improvement, it should be the policy of the management to make a careful analysis of the new method, and if necessary conduct a series of experiments to determine accurately the relative merit of the new suggestion and of the old standard. And whenever the new method is found to be markedly superior to the old, it should be adopted as the standard for the whole establishment."

Taylor also warned explicitly against cutting piece rates (or, by implication, cutting wages or discharging workers) when efficiency improvements reduce the need for raw labor: "…after a workman has had the price per piece of the work he is doing lowered two or three times as a result of his having worked harder and increased his output, he is likely entirely to lose sight of his employer's side of the case and become imbued with a grim determination to have no more cuts if soldiering [marking time, just doing what he is told] can prevent it."

Shigeo Shingo, the best-known exponent of single minute exchange of die (SMED) and error-proofing or poka-yoke, cites Principles of Scientific Management as his inspiration.[9]

American industrialists recognized the threat of cheap offshore labor to American workers during the 1910s, and explicitly stated the goal of what is now called lean manufacturing as a countermeasure. Henry Towne, past President of the American Society of Mechanical Engineers, wrote in the Foreword to Frederick Winslow Taylor's Shop Management (1911), "We are justly proud of the high wage rates which prevail throughout our country, and jealous of any interference with them by the products of the cheaper labor of other countries. To maintain this condition, to strengthen our control of home markets, and, above all, to broaden our opportunities in foreign markets where we must compete with the products of other industrial nations, we should welcome and encourage every influence tending to increase the efficiency of our productive processes."

Ford starts the ball rolling

Henry Ford continued this focus on waste while developing his mass assembly manufacturing system. Charles Buxton Going wrote in 1915:

Ford's success has startled the country, almost the world, financially, industrially, mechanically. It exhibits in higher degree than most persons would have thought possible the seemingly contradictory requirements of true efficiency, which are: constant increase of quality, great increase of pay to the workers, repeated reduction in cost to the consumer. And with these appears, as at once cause and effect, an absolutely incredible enlargement of output reaching something like one hundredfold in less than ten years, and an enormous profit to the manufacturer.[10]

Ford, in My Life and Work (1922),[11] provided a single-paragraph description that encompasses the entire concept of waste:

I believe that the average farmer puts to a really useful purpose only about 5% of the energy he expends.... Not only is everything done by hand, but seldom is a thought given to a logical arrangement. A farmer doing his chores will walk up and down a rickety ladder a dozen times. He will carry water for years instead of putting in a few lengths of pipe. His whole idea, when there is extra work to do, is to hire extra men. He thinks of putting money into improvements as an expense.... It is waste motion— waste effort— that makes farm prices high and profits low.

Poor arrangement of the workplace (a major focus of the modern kaizen) and doing a job inefficiently out of habit are major forms of waste even in modern workplaces.

Ford also pointed out how easy it was to overlook material waste. A former employee, Harry Bennett, wrote:

One day when Mr. Ford and I were together he spotted some rust in the slag that ballasted the right of way of the D. T. & I [railroad]. This slag had been dumped there from our own furnaces. 'You know,' Mr. Ford said to me, 'there's iron in that slag. You make the crane crews who put it out there sort it over, and take it back to the plant.'[12]

In other words, Ford saw the rust and realized that the steel plant was not recovering all of the iron.

Ford's early success, however, was not sustainable. As James Womack and Daniel Jones pointed out in "Lean Thinking", what Ford accomplished represented the "special case" rather than a robust lean solution.[13] The major challenge that Ford faced was that his methods were built for a steady-state environment, rather than for the dynamic conditions firms increasingly face today.[14] Although his rigid, top-down controls made it possible to hold variation in work activities down to very low levels, his approach did not respond well to uncertain, dynamic business conditions; it responded particularly badly to the need for new product innovation. This was made clear by Ford's precipitous decline when the company was forced to finally introduce a follow-on to the Model T (see Lean Dynamics).

Design for Manufacture (DFM) is also a Ford concept. Ford said in My Life and Work (the same reference describes just-in-time manufacturing very explicitly):

...entirely useless parts [may be]—a shoe, a dress, a house, a piece of machinery, a railroad, a steamship, an airplane. As we cut out useless parts and simplify necessary ones, we also cut down the cost of making. ... But also it is to be remembered that all the parts are designed so that they can be most easily made.

This standardization of parts was central to Ford's concept of mass production, and the manufacturing "tolerances", or upper and lower dimensional limits that ensured interchangeability of parts, became widely applied across manufacturing. Decades later, the renowned Japanese quality guru, Genichi Taguchi, demonstrated that this "goal post" method of measuring was inadequate. He showed that "loss" in capabilities did not begin only after exceeding these tolerances, but increased, as described by the Taguchi Loss Function, at any condition departing from the nominal condition. This became an important part of W. Edwards Deming's quality movement of the 1980s, later helping to develop improved understanding of key areas of focus such as cycle time variation in improving manufacturing quality and efficiencies in aerospace and other industries.
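As a brief sketch of the point (the notation is illustrative, not taken from the original text), Taguchi's quadratic loss function is commonly written as

L(y) = k (y - m)^2

where y is the measured characteristic, m is its nominal (target) value, and k is a cost constant. The loss is zero only at the target and grows with any deviation from it, even one that still lies inside the "goal post" tolerances, which is the argument against treating specification limits as a simple pass/fail test.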

While Ford is renowned for his production line, it is often not recognized how much effort he put into removing the fitters' work to make the production line possible. Until Ford, a car's components always had to be fitted or reshaped by a skilled engineer at the point of use, so that they would connect properly. By enforcing very strict specification and quality criteria on component manufacture, he eliminated this work almost entirely, reducing manufacturing effort by between 60% and 90%.[15] However, Ford's mass production system failed to incorporate the notion of "pull production" and thus often suffered from over-production.

Toyota develops TPS

Toyota's development of ideas that later became Lean may have started at the turn of the 20th century with Sakichi Toyoda, in a textile factory with looms that stopped themselves when a thread broke; this became the seed of autonomation and Jidoka. Toyota's journey with JIT may have started back in 1934 when it moved from textiles to produce its first car. Kiichiro Toyoda, founder of Toyota, directed the engine casting work and discovered many problems in their manufacture. He decided he must stop the repairing of poor quality by intense study of each stage of the process. In 1936, when Toyota won its first truck contract with the Japanese government, his processes hit new problems and he developed the "Kaizen" improvement teams.

Levels of demand in the post-war economy of Japan were low, and the focus of mass production on lowest cost per item via economies of scale therefore had little application. Having visited and seen supermarkets in the USA, Taiichi Ohno recognised that the scheduling of work should not be driven by sales or production targets but by actual sales. Given the financial situation during this period, over-production had to be avoided, and thus the notion of Pull (build to order rather than target-driven Push) came to underpin production scheduling.

It was with Taiichi Ohno at Toyota that these themes came together. He built on the already existing internal schools of thought and spread their breadth and use into what has now become the Toyota Production System (TPS). It is principally from the TPS, but now including many other sources, that Lean production is developing. Norman Bodek wrote the following in his foreword to a reprint of Ford's Today and Tomorrow:

I was first introduced to the concepts of just-in-time (JIT) and the Toyota production system in 1980. Subsequently I had the opportunity to witness its actual application at Toyota on one of our numerous Japanese study missions. There I met Mr. Taiichi Ohno, the system's creator. When bombarded with questions from our group on what inspired his thinking, he just laughed and said he learned it all from Henry Ford's book.

The scale, rigor and continuous learning aspects of TPS have made it a core concept of Lean.

Types of waste

While the elimination of waste may seem like a simple and clear subject, it is noticeable that waste is often very conservatively identified. This then hugely reduces the potential of such an aim. The elimination of waste is the goal of Lean, and Toyota defined three broad types of waste: muda, muri and mura; for many Lean implementations this list shrinks to the first waste type only, with a corresponding decrease in benefits. To illustrate the state of this thinking, Shigeo Shingo observed that only the last turn of a bolt tightens it—the rest is just movement. This ever finer clarification of waste is key to establishing distinctions between value-adding activity, waste and non-value-adding work.[16] Non-value-adding work is waste that must be done under the present work conditions. One key is to measure, or estimate, the size of these wastes, to demonstrate the effect of the changes achieved and therefore the movement toward the goal.

The "flow" (or smoothness) based approach aims to achieve JIT, by removing the variation caused by work scheduling and thereby provide a driver, rationale or target and priorities for implementation, using a variety of techniques. The effort to achieve JIT exposes many quality problems that are hidden by buffer stocks; by forcing smooth flow

8

Page 9: TQM & SIX SIGMA etc.

of only value-adding steps, these problems become visible and must be dealt with explicitly.

Muri is all the unreasonable work that management imposes on workers and machines because of poor organization, such as carrying heavy weights, moving things around, dangerous tasks, even working significantly faster than usual. It is pushing a person or a machine beyond its natural limits. This may simply be asking a greater level of performance from a process than it can handle without taking shortcuts and informally modifying decision criteria. Unreasonable work is almost always a cause of multiple variations.

Linking these three concepts is simple in TPS and thus in Lean. Firstly, muri focuses on the preparation and planning of the process, or what work can be avoided proactively by design. Next, mura focuses on how the work design is implemented and the elimination of fluctuation at the scheduling or operations level, such as quality and volume. Muda is then discovered after the process is in place and is dealt with reactively; it is seen through variation in output. It is the role of management to examine the muda in the processes and eliminate the deeper causes by considering the connections to the muri and mura of the system. The muda and mura inconsistencies must be fed back to the muri, or planning, stage for the next project.

A typical example of the interplay of these wastes is the corporate behaviour of "making the numbers" as the end of a reporting period approaches. Demand is raised to "make plan," increasing unevenness (mura) when the "numbers" are low, which causes production to try to squeeze extra capacity from the process, which in turn causes routines and standards to be modified or stretched. This stretch and improvisation leads to muri-style waste, which leads to downtime, mistakes, back flows, and waiting: the muda of waiting, correction and movement.

The original seven muda are:

Transportation (moving products that are not actually required to perform the processing)
Inventory (all components, work in process and finished product not being processed)
Motion (people or equipment moving or walking more than is required to perform the processing)
Waiting (waiting for the next production step)
Overproduction (production ahead of demand)
Over Processing (resulting from poor tool or product design creating activity)
Defects (the effort involved in inspecting for and fixing defects)[17]

Later an eighth waste was defined by Womack et al. (2003); it was described as manufacturing goods or services that do not meet customer demand or specifications. Many others have added the "waste of unused human talent" to the original seven wastes. These wastes were not originally a part of the seven deadly wastes defined by Taiichi Ohno in TPS, but were found to be useful additions in practice. For a complete listing of the "old" and "new" wastes see Bicheno and Holweg (2009).[18]

Some of these definitions may seem rather idealistic, but such tough definitions are seen as important, and they drove the success of TPS. The clear identification of non-value-adding work, as distinct from wasted work, is critical to identifying the assumptions behind the current work process and to challenging them in due course.[19] Breakthroughs in SMED and other process-changing techniques rely upon clear identification of where untapped opportunities may lie if the processing assumptions are challenged.

Lean implementation develops from TPS

The disciplines that Lean seems to require are so often counter-cultural that they have made successful implementation of Lean a major challenge. Some[20] would say that it was a major challenge in its manufacturing 'heartland' as well. Implementations under the Lean label are numerous, and whether they are Lean, and whether any success or failure can be laid at Lean's door, is often debatable. Individual examples of success and failure exist in almost all spheres of business and activity and therefore cannot be taken as indications of whether Lean is particularly applicable to a specific sector of activity. It seems clear from the "successes" that no sector is immune from beneficial possibility.[citation needed]

Lean is about more than just cutting costs in the factory.[21] One crucial insight is that most costs are assigned when a product is designed (see Genichi Taguchi). Often an engineer will specify familiar, safe materials and processes rather than inexpensive, efficient ones. This reduces project risk, that is, the cost to the engineer, while increasing financial risks and decreasing profits. Good organizations develop and use checklists to review product designs.

Companies must often look beyond the shop floor to find opportunities for improving overall company cost and performance. At the system engineering level, requirements are reviewed with marketing and customer representatives to eliminate those requirements that are costly. Shared modules may be developed, such as multipurpose power supplies or shared mechanical components or fasteners. Requirements are assigned to the cheapest discipline. For example, adjustments may be moved into software, and measurements moved away from a mechanical solution to an electronic solution. Another approach is to choose connection or power-transport methods that are cheap or that use standardized components that become available in a competitive market.

An example program

In summary, an example of a lean implementation program could be:

With a tools-based approach:

Senior management to agree and discuss their lean vision
Management brainstorm to identify project leader and set objectives
Communicate plan and vision to the workforce
Ask for volunteers to form the Lean Implementation team (5-7 works best, all from different departments)
Appoint members of the Lean Manufacturing Implementation Team
Train the Implementation Team in the various lean tools - make a point of trying to visit other non-competing businesses that have implemented lean
Select a Pilot Project to implement – 5S is a good place to start
Run the pilot for 2–3 months - evaluate, review and learn from your mistakes
Roll out the pilot to other factory areas
Evaluate results, encourage feedback
Stabilize the positive results by teaching supervisors how to train the new standards you've developed with TWI methodology (Training Within Industry)
Once you are satisfied that you have a habitual program, consider introducing the next lean tool. Select the one that gives you the biggest return for your business.

With a muri or flow-based approach (as used in the TPS with suppliers[22]):

Sort out as many of the visible quality problems as you can, as well as downtime and other instability problems, and get the internal scrap acknowledged and its management started
Make the flow of parts through the system or process as continuous as possible using workcells and market locations where necessary and avoiding variations in the operators' work cycle
Introduce standard work and stabilise the work pace through the system
Start pulling work through the system, look at the production scheduling and move toward daily orders with kanban cards
Even out the production flow by reducing batch sizes, increase delivery frequency internally and if possible externally, level internal demand
Improve exposed quality issues using the tools
Remove some people (or increase quotas) and go through this work again (the "Oh No!!" moment)

Lean leadership

The role of the leaders within the organization is the fundamental element of sustaining the progress of lean thinking. Experienced kaizen members at Toyota, for example, often bring up the concepts of Senpai, Kohai, and Sensei, because they strongly feel that transferring of Toyota culture down and across Toyota can only happen when more experienced Toyota Sensei continuously coach and guide the less experienced lean champions. Unfortunately, most lean practitioners in North America focus on the tools and methodologies of lean, versus the philosophy and culture of lean. Some exceptions include Shingijitsu Consulting out of Japan, which is made up of ex-Toyota managers, and Lean Sensei International based in North America, which coaches lean through Toyota-style cultural experience.

One of the dislocative effects of Lean is in the area of key performance indicators (KPIs). The KPIs by which a plant or facility is judged will often be driving behaviour, because the KPIs themselves assume a particular approach to the work being done. This can be an issue where, for example, a truly Lean, Fixed Repeating Schedule (FRS) and JIT approach is adopted, because these KPIs will no longer reflect performance, as the assumptions on which they are based become invalid. It is a key leadership challenge to manage the impact of this KPI chaos within the organization.

Similarly, commonly used accounting systems developed to support mass production are no longer appropriate for companies pursuing Lean. Lean Accounting provides truly Lean approaches to business management and financial reporting.

After formulating the guiding principles of its lean manufacturing approach in the Toyota Production System (TPS), Toyota formalized in 2001 the basis of its lean management: the key managerial values and attitudes needed to sustain continuous improvement in the long run. These core management principles are articulated around the twin pillars of Continuous Improvement (relentless elimination of waste) and Respect for People (engagement in long-term relationships based on continuous improvement and mutual trust).

This formalization stems from problem solving. As Toyota has expanded beyond its home base over the past 20 years, it has hit the same problems in getting TPS properly applied that other western companies have had in copying TPS. Like any other problem, it has been working through a series of countermeasures to solve this particular concern. These countermeasures have focused on culture: how people behave, which is the most difficult challenge of all. Without the proper behavioral principles and values, TPS can be totally misapplied and fail to deliver results. As one sensei said, one can create a Buddha image and forget to inject soul in it. As with TPS, the values had originally been passed down in a master-disciple manner, from boss to subordinate, without any written statement along the way. And just as with TPS, it was internally argued that formalizing the values would stifle them and lead to further misunderstanding. But as Toyota veterans eventually wrote down the basic principles of TPS, Toyota set out to put the Toyota Way into writing to educate new joiners.[23]

Continuous Improvement breaks down into three basic principles:

1. Challenge: Having a long-term vision of the challenges one needs to face to realize one's ambition (what we need to learn rather than what we want to do, and then having the spirit to face that challenge). To do so, we have to challenge ourselves every day to see if we are achieving our goals.

2. Kaizen: Good enough never is; no process can ever be thought perfect, so operations must be improved continuously, striving for innovation and evolution.

3. Genchi Genbutsu: Going to the source to see the facts for oneself, make the right decisions, create consensus, and make sure goals are attained at the best possible speed.

Respect For People is less known outside of Toyota, and essentially involves two defining principles:

1. Respect: Taking every stakeholder's problems seriously, and making every effort to build mutual trust. Taking responsibility for other people reaching their objectives is a thought-provoking idea: as a manager, one must take responsibility for one's subordinates reaching the targets set for them.

2. Teamwork: This is about developing individuals through team problem-solving. The idea is to develop and engage people through their contribution to team performance: shop floor teams, the whole site as a team, and team Toyota at the outset.

Differences from TPS

Whilst Lean is seen by many as a generalization of the Toyota Production System into other industries and contexts, there are some acknowledged differences that seem to have developed in implementation.

1. Seeking profit is a relentless focus for Toyota, exemplified by the profit maximization principle (Price – Cost = Profit) and the need, therefore, to practice systematic cost reduction (through TPS or otherwise) to realize benefit. Lean implementations can tend to de-emphasise this key measure and thus become fixated with the implementation of improvement concepts of "flow" or "pull". However, the emergence of the "value curve analysis" promises to directly tie lean improvements to bottom-line performance measurements.[20]

2. Tool orientation is a tendency in many programs to elevate mere tools (standardized work, value stream mapping, visual control, etc.) to an unhealthy status beyond their pragmatic intent. The tools are just different ways to work around certain types of problems, but they do not solve them for you or always highlight the underlying cause of many types of problems. The tools employed at Toyota are often used to expose particular problems that are then dealt with, as each tool's limitations or blind spots are perhaps better understood. So, for example, Value Stream Mapping focuses upon material and information flow problems (a title built into the Toyota title for this activity) but is not strong on Metrics, Man or Method. Internally, Toyota well knows the limits of the tool and understands that it was never intended as the best way to see and analyze every waste or every problem related to quality, downtime, personnel development, cross-training related issues, capacity bottlenecks, or anything to do with profits, safety, metrics or morale, etc. No one tool can do all of that. For surfacing these issues other tools are much more widely and effectively used.

3. Management technique rather than change agents has been a principle in Toyota from the early 1950s when they started emphasizing the development of the production manager's and supervisors' skill set in guiding natural work teams and did not rely upon staff-level change agents to drive improvements. This can manifest itself as a "Push" implementation of Lean rather than "Pull" by the team itself. This area of skills development is not that of the change agent specialist, but that of the natural operations work team leader. Although less prestigious than the TPS specialists, development of work team supervisors in Toyota is considered an equally, if not more important, topic simply because there are tens of thousands of these individuals. Specifically, it is these manufacturing leaders that are the main focus of training efforts in Toyota since they lead the daily work areas, and they directly and dramatically affect quality, cost, productivity, safety, and morale of the team environment. In many companies implementing Lean the reverse set of priorities is true. Emphasis is put on developing the specialist, while the supervisor skill level is expected to somehow develop over time on its own.

Lean services

Lean, as a concept or brand, has captured the imagination of many in different spheres of activity. Examples of these from many sectors are listed below.

Lean principles have been successfully applied to call center services to improve live agent call handling. By combining Agent-assisted Voice solutions and Lean's waste reduction practices, a company reduced handle time, reduced between-agent variability, reduced accent barriers, and attained near-perfect process adherence.[24]

Lean principles have also found application in software application development and maintenance and other areas of information technology (IT).[25] More generally, the use of Lean in IT has become known as Lean IT.

A study conducted on behalf of the Scottish Executive, by Warwick University, in 2005/06 found that Lean methods were applicable to the public sector, but that most results had been achieved using a much more restricted range of techniques than Lean provides.[26]

The challenge in moving Lean to services is the lack of widely available reference implementations to allow people to see how directly applying lean manufacturing tools and practices can work and the impact it does have. This makes it more difficult to build the level of belief seen as necessary for strong implementation. However, some research does relate widely recognized examples of success in retail and even airlines to the underlying principles of lean.[14] Despite this, it remains the case that the direct manufacturing examples of 'techniques' or 'tools' need to be better 'translated' into a service context to support the more prominent approaches of implementation, which has not yet received the level of work or publicity that would give starting points for implementors. The upshot of this is that each implementation often 'feels its way' along, as the early industrial engineers of Toyota once had to. This places huge importance upon sponsorship to encourage and protect these experimental developments.

Lean Goals and Strategy

The espoused goals of Lean manufacturing systems differ between various authors. While some maintain an internal focus, e.g. to increase profit for the organization,[27] others claim that improvements should be done for the sake of the customer.[28]

Some commonly mentioned goals are:

Improve quality: To stay competitive in today’s marketplace, a company must understand its customers' wants and needs and design processes to meet their expectations and requirements.

Eliminate waste: Waste is any activity that consumes time, resources, or space but does not add any value to the product or service. See Types of waste, above.

Taking the first letter of each waste, the acronym "TIM WOOD" is formed. This is a common way to remember the wastes.

Reduce time: Reducing the time it takes to finish an activity from start to finish is one of the most effective ways to eliminate waste and lower costs.

Reduce total costs: To minimize cost, a company must produce only to customer demand. Overproduction increases a company’s inventory costs because of storage needs.

The strategic elements of Lean can be quite complex, and comprise multiple elements. Four different notions of Lean have been identified:[29]

1. Lean as a fixed state or goal (Being Lean)
2. Lean as a continuous change process (Becoming Lean)
3. Lean as a set of tools or methods (Doing Lean/Toolbox Lean)
4. Lean as a philosophy (Lean thinking)

Steps to achieve lean systems

The following steps should be implemented to create the ideal lean manufacturing system:[2]

1. Design a simple manufacturing system
2. Recognize that there is always room for improvement
3. Continuously improve the lean manufacturing system design

Design a simple manufacturing system

A fundamental principle of lean manufacturing is demand-based flow manufacturing. In this type of production setting, inventory is only pulled through each production center when it is needed to meet a customer’s order. The benefits of this goal include:[3]

decreased cycle time
less inventory
increased productivity
increased capital equipment utilization

There is always room for improvement

The core of lean is founded on the concept of continuous product and process improvement and the elimination of non-value-added activities. “The Value adding activities are simply only those things the customer is willing to pay for, everything else is waste, and should be eliminated, simplified, reduced, or integrated” (Rizzardo, 2003). Improving the flow of material through new ideal system layouts at the customer's required rate would reduce waste in material movement and inventory.[4]

Continuously improve

A continuous improvement mindset is essential to reach a company's goals. The term "continuous improvement" means incremental improvement of products, processes, or services over time, with the goal of reducing waste to improve workplace functionality, customer service, or product performance (Suzaki, 1987).

Stephen Shortell (Professor of Health Services Management and Organisational Behaviour at the University of California, Berkeley) states:

“For improvement to flourish it must be carefully cultivated in a rich soil bed (a receptive organisation), given constant attention (sustained leadership), assured the right amounts of light (training and support) and water (measurement and data) and protected from damaging."

Measure

Overall equipment effectiveness (OEE) is a set of performance metrics that fit well in a Lean environment.
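OEE itself is a simple multiplicative metric. The short Python sketch below shows one common way it is computed; the function name, parameter names, and sample shift figures are assumptions for illustration rather than values from this text.

def oee(planned_time, run_time, ideal_cycle_time, total_count, good_count):
    """Return (availability, performance, quality, oee) as fractions."""
    availability = run_time / planned_time                      # uptime vs. planned production time
    performance = (ideal_cycle_time * total_count) / run_time   # actual speed vs. ideal speed
    quality = good_count / total_count                          # good parts vs. total parts produced
    return availability, performance, quality, availability * performance * quality

a, p, q, score = oee(planned_time=480, run_time=420,   # minutes in one shift
                     ideal_cycle_time=1.0,             # minutes per part at ideal rate
                     total_count=380, good_count=361)
print(f"Availability {a:.1%}, Performance {p:.1%}, Quality {q:.1%}, OEE {score:.1%}")

Multiplying the three factors is the design choice that makes OEE useful in a Lean environment: losses in availability, speed, and quality all pull down the same single headline figure.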

See also

Closely related methodologies

Toyota Production System
Lean accounting
Value Network
Demand Flow Technology
Theory of Constraints
Variation Management
SCOR
Six Sigma
Business process management
Statistical process control
Lean Dynamics

Predictive validation techniques

Discrete event simulation

Terminology

Training Within Industry
Lean accounting
Just In Time or JIT
Fixed Repeating Schedule or FRS
Kaizen
SMED
Poka-Yoke
Autonomation and Jidoka
5S
Production levelling
Cycle Time Variation
muda, mura, muri
workcell
Takt time
Andon
Genchi Genbutsu
Gemba
5 Whys

Related engineering disciplines

Industrial engineering
Industrial technology

Areas of implementation outside production

Computer-Aided Lean Management
Lean construction
Lean consumption
Lean Integration
Lean laboratory
Lean Services
Lean IT
Lean software development
Lean accounting
Lean Government
Lean Office
Lean Maintenance Repair and Overhaul (MRO)
Lean logistics

Other

Overall Equipment Effectiveness
Cellular manufacturing
Agile manufacturing
Manufacturing
Preorder Economy
Process Reengineering
Training Within Industry
Value Stream Mapping
3D's: Dirty, Dangerous and Difficult
Systems thinking
Oscillatory baffled reactor
Lean accounting
Value curve analysis
Inventory management software

Statistical process control

Statistical process control (SPC) is the application of statistical methods to the monitoring and control of a process to ensure that it operates at its full potential to produce conforming product. Under SPC, a process behaves predictably to produce as much conforming product as possible with the least possible waste. While SPC has been applied most frequently to controlling manufacturing lines, it applies equally well to any process with a measurable output. Key tools in SPC are control charts, a focus on continuous improvement, and designed experiments.

Much of the power of SPC lies in the ability to examine a process and the sources of variation in that process using tools that give weight to objective analysis over subjective opinions and that allow the strength of each source to be determined numerically. Variations in the process that may affect the quality of the end product or service can be detected and corrected, thus reducing waste as well as the likelihood that problems will be passed on to the customer. With its emphasis on early detection and prevention of problems, SPC has a distinct advantage over other quality methods, such as inspection, that apply resources to detecting and correcting problems after they have occurred.

In addition to reducing waste, SPC can lead to a reduction in the time required to produce the product or service from end to end. This is partially due to a diminished likelihood that the final product will have to be reworked, but it may also result from using SPC data to identify bottlenecks, wait times, and other sources of delays within the process. Process cycle time reductions coupled with improvements in yield have made SPC a valuable tool from both a cost reduction and a customer satisfaction standpoint.

Contents

1 History
2 General
3 How to Use SPC
4 See also
5 References
6 Bibliography
7 External links

History

Statistical process control was pioneered by Walter A. Shewhart in the early 1920s. W. Edwards Deming later applied SPC methods in the United States during World War II, thereby successfully improving quality in the manufacture of munitions and other strategically important products. Deming was also instrumental in introducing SPC methods to Japanese industry after the war had ended.[1][2]

Shewhart created the basis for the control chart and the concept of a state of statistical control by carefully designed experiments. While Dr. Shewhart drew from pure mathematical statistical theories, he understood that data from physical processes seldom produces a "normal distribution curve" (a Gaussian distribution, also commonly referred to as a "bell curve"). He discovered that observed variation in manufacturing data did not always behave the same way as data in nature (for example, Brownian motion of particles). Dr. Shewhart concluded that while every process displays variation, some processes display controlled variation that is natural to the process (common causes of variation), while others display uncontrolled variation that is not present in the process causal system at all times (special causes of variation).[3]

In 1989, the Software Engineering Institute introduced the notion that SPC can be usefully applied to non-manufacturing processes, such as software engineering processes, in the Capability Maturity Model (CMM). This idea exists today within the Level 4 and Level 5 practices of the Capability Maturity Model Integration (CMMI). This notion that SPC is a useful tool when applied to non-repetitive, knowledge-intensive processes such as engineering processes has encountered much skepticism, and remains controversial today.[4][5][6]

General

The following description relates to manufacturing rather than to the service industry, although the principles of SPC can be successfully applied to either. For a description and example of how SPC applies to a service environment, refer to Roberts (2005).[7] SPC has also been successfully applied to detecting changes in organizational behavior with Social Network Change Detection, introduced by McCulloh (2007). Selden describes how to use SPC in the fields of sales, marketing, and customer service, using Deming's famous Red Bead Experiment as an easy-to-follow demonstration.[8]

In mass-manufacturing, the quality of the finished article was traditionally achieved through post-manufacturing inspection of the product; accepting or rejecting each article (or samples from a production lot) based on how well it met its design specifications. In contrast, Statistical Process Control uses statistical tools to observe the performance of the production process in order to predict significant deviations that may later result in rejected product.

Two kinds of variation occur in all manufacturing processes: both these types of process variation cause subsequent variation in the final product. The first is known as natural or common cause variation and consists of the variation inherent in the process as it is designed. Common cause variation may include variations in temperature, properties of raw materials, strength of an electrical current, etc. The second kind of variation is known as special cause variation, or assignable-cause variation, and happens less frequently than the first. With sufficient investigation, a specific cause, such as abnormal raw material or incorrect set-up parameters, can be found for special cause variations.

For example, a breakfast cereal packaging line may be designed to fill each cereal box with 500 grams of product, but some boxes will have slightly more than 500 grams, and some will have slightly less, in accordance with a distribution of net weights. If the production process, its inputs, or its environment changes (for example, the machines doing the manufacture begin to wear) this distribution can change. For example, as its cams and pulleys wear out, the cereal filling machine may start putting more cereal into each box than specified. If this change is allowed to continue unchecked, more and more product will be produced that falls outside the tolerances of the manufacturer or consumer, resulting in waste. While in this case the waste is in the form of "free" product for the consumer, typically waste consists of rework or scrap.

By observing at the right time what happened in the process that led to a change, the quality engineer or any member of the team responsible for the production line can troubleshoot the root cause of the variation that has crept in to the process and correct the problem.

SPC indicates when an action should be taken in a process, but it also indicates when NO action should be taken. An example is a person who would like to maintain a constant body weight and takes weight measurements weekly. A person who does not understand SPC concepts might start dieting every time his or her weight increased, or eat more every time his or her weight decreased. This type of action could be harmful and possibly generate even more variation in body weight. SPC would account for normal weight variation and better indicate when the person is in fact gaining or losing weight.
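To make this concrete, the sketch below computes the limits of a simple individuals (I-MR) control chart for the weekly-weight example; readings that stay between the limits are treated as common-cause variation and left alone. The data values and helper name are invented for illustration, and an individuals chart is only one of several chart types used in SPC.

def imr_limits(values):
    """Return (mean, lcl, ucl) for an individuals chart using moving ranges."""
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    mean = sum(values) / len(values)
    # 2.66 = 3 / d2, with d2 = 1.128 for moving ranges of size two
    return mean, mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

weekly_weight_kg = [81.2, 80.8, 81.5, 81.0, 80.7, 81.3, 81.1, 80.9]
mean, lcl, ucl = imr_limits(weekly_weight_kg)
signals = [w for w in weekly_weight_kg if not (lcl <= w <= ucl)]
print(f"mean={mean:.2f}  limits=({lcl:.2f}, {ucl:.2f})  signals={signals}")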

How to Use SPC

Statistical Process Control may be broadly broken down into three sets of activities: understanding the process; understanding the causes of variation; and elimination of the sources of special cause variation.

In understanding a process, the process is typically mapped out and the process is monitored using control charts. Control charts are used to identify variation that may be due to special causes, and to free the user from concern over variation due to common causes. This is a continuous, ongoing activity. When a process is stable and does not trigger any of the detection rules for a control chart, a process capability analysis may also be performed to predict the ability of the current process to produce conforming (i.e. within specification) product in the future.
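A process capability analysis can likewise be sketched in a few lines. The example below, with invented fill-weight data and specification limits echoing the cereal-box illustration above, computes the usual Cp and Cpk indices; it assumes the process is already stable and approximately normal, which is why capability analysis only follows a well-behaved control chart.

import statistics

def capability(values, lsl, usl):
    """Return (Cp, Cpk) estimated from sample mean and standard deviation."""
    mean = statistics.mean(values)
    sigma = statistics.stdev(values)                   # sample estimate of process sigma
    cp = (usl - lsl) / (6 * sigma)                     # potential capability, ignores centering
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)    # penalizes an off-centre process
    return cp, cpk

fill_weights_g = [500.4, 499.8, 500.1, 500.6, 499.5, 500.2, 499.9, 500.3]
cp, cpk = capability(fill_weights_g, lsl=498.0, usl=502.0)
print(f"Cp={cp:.2f}  Cpk={cpk:.2f}")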

When excessive variation is identified by the control chart detection rules, or the process capability is found lacking, additional effort is exerted to determine causes of that variance. The tools used include Ishikawa diagrams, designed experiments and Pareto charts. Designed experiments are critical to this phase of SPC, as they are the only means of objectively quantifying the relative importance of the many potential causes of variation.

Once the causes of variation have been quantified, effort is spent in eliminating those causes that are both statistically and practically significant (i.e. a cause that has only a small but statistically significant effect may not be considered cost-effective to fix; however, a cause that is not statistically significant can never be considered practically significant). Generally, this includes development of standard work, error-proofing and training. Additional process changes may be required to reduce variation or align the process with the desired target, especially if there is a problem with process capability.

Six Sigma


Six Sigma is a business management strategy originally developed by Motorola, USA in 1981.[1] As of 2010, it enjoys widespread application in many sectors of industry, although its application is not without controversy.

Six Sigma seeks to improve the quality of process outputs by identifying and removing the causes of defects (errors) and minimizing variability in manufacturing and business processes.[2] It uses a set of quality management methods, including statistical methods, and creates a special infrastructure of people within the organization ("Black Belts", "Green Belts", etc.) who are experts in these methods.[2] Each Six Sigma project carried out within an organization follows a defined sequence of steps and has quantified financial targets (cost reduction or profit increase).[2]

The term six sigma originated from terminology associated with manufacturing, specifically terms associated with statistical modelling of manufacturing processes. The maturity of a manufacturing process can be described by a sigma rating indicating its yield, or the percentage of defect-free products it creates. A six-sigma process is one in which 99.99966% of the products manufactured are statistically expected to be free of defects (3.4 defects per million). Motorola set a goal of "six sigmas" for all of its manufacturing operations, and this goal became a byword for the management and engineering practices used to achieve it.
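As an illustrative sketch of where that figure comes from (the function and the one-sided normal tail are our assumptions; the 1.5-sigma long-term shift is the convention referred to in this article's contents under "Role of the 1.5 sigma shift"), the 3.4 defects-per-million value can be reproduced from the normal distribution:

from math import erfc, sqrt

def dpmo_for_sigma_level(sigma_level, long_term_shift=1.5):
    """Defects per million opportunities for a given short-term sigma level."""
    z = sigma_level - long_term_shift      # effective distance from mean to the nearer spec limit
    tail = 0.5 * erfc(z / sqrt(2))         # P(Z > z) for a standard normal variable
    return tail * 1_000_000

for level in (3, 4, 5, 6):
    print(level, round(dpmo_for_sigma_level(level), 1))   # 6 -> about 3.4

Under these assumptions a three-sigma process leaves roughly 66,800 defects per million, while a six-sigma process leaves about 3.4, which is the sense in which a sigma rating describes process maturity.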

Contents

1 Historical overview
2 Methods
  2.1 DMAIC
  2.2 DMADV
  2.3 Quality management tools and methods used in Six Sigma
3 Implementation roles
  3.1 Certification
4 Origin and meaning of the term "six sigma process"
  4.1 Role of the 1.5 sigma shift
  4.2 Sigma levels
5 Software used for Six Sigma
6 List of Six Sigma companies
7 Criticism
  7.1 Lack of originality
  7.2 Role of consultants
  7.3 Potential negative effects
  7.4 Based on arbitrary standards
  7.5 Criticism of the 1.5 sigma shift
8 See also
9 References
10 Further reading

Historical overview

Six Sigma originated as a set of practices designed to improve manufacturing processes and eliminate defects, but its application was subsequently extended to other types of business processes as well.[3] In Six Sigma, a defect is defined as any process output that does not meet customer specifications, or that could lead to creating an output that does not meet customer specifications.[2]

Bill Smith first formulated the particulars of the methodology at Motorola in 1986.[4] Six Sigma was heavily inspired by six preceding decades of quality improvement methodologies such as quality control, TQM, and Zero Defects,[5][6] based on the work of pioneers such as Shewhart, Deming, Juran, Ishikawa, Taguchi and others.

Like its predecessors, Six Sigma doctrine asserts that:

Continuous efforts to achieve stable and predictable process results (i.e., reduce process variation) are of vital importance to business success.

Manufacturing and business processes have characteristics that can be measured, analyzed, improved and controlled.

Achieving sustained quality improvement requires commitment from the entire organization, particularly from top-level management.

Features that set Six Sigma apart from previous quality improvement initiatives include:

A clear focus on achieving measurable and quantifiable financial returns from any Six Sigma project.[2]

An increased emphasis on strong and passionate management leadership and support.[2]

A special infrastructure of "Champions," "Master Black Belts," "Black Belts," "Green Belts", etc. to lead and implement the Six Sigma approach.[2]

A clear commitment to making decisions on the basis of verifiable data, rather than assumptions and guesswork.[2]

The term "Six Sigma" comes from a field of statistics known as process capability studies. Originally, it referred to the ability of manufacturing processes to produce a very high proportion of output within specification. Processes that operate with "six sigma quality" over the short term are assumed to produce long-term defect levels below 3.4 defects per million opportunities (DPMO).[7][8] Six Sigma's implicit goal is to improve all processes to that level of quality or better.


Six Sigma is a registered service mark and trademark of Motorola Inc.[9] As of 2006 Motorola reported over US$17 billion in savings[10] from Six Sigma.

Other early adopters of Six Sigma who achieved well-publicized success include Honeywell (previously known as AlliedSignal) and General Electric, where Jack Welch introduced the method.[11] By the late 1990s, about two-thirds of the Fortune 500 organizations had begun Six Sigma initiatives with the aim of reducing costs and improving quality.[12]

In recent years, some practitioners have combined Six Sigma ideas with lean manufacturing to yield a methodology named Lean Six Sigma.

Methods

Six Sigma projects follow two project methodologies inspired by Deming's Plan-Do-Check-Act Cycle. These methodologies, composed of five phases each, bear the acronyms DMAIC and DMADV.[12]

DMAIC is used for projects aimed at improving an existing business process.[12] DMAIC is pronounced as "duh-may-ick".

DMADV is used for projects aimed at creating new product or process designs.[12]

DMADV is pronounced as "duh-mad-vee".

DMAIC

The DMAIC project methodology has five phases:

Define the problem, the voice of the customer, and the project goals, specifically.

Measure key aspects of the current process and collect relevant data.

Analyze the data to investigate and verify cause-and-effect relationships. Determine what the relationships are, and attempt to ensure that all factors have been considered. Seek out root cause of the defect under investigation.

Improve or optimize the current process based upon data analysis using techniques such as design of experiments, poka yoke or mistake proofing, and standard work to create a new, future state process. Set up pilot runs to establish process capability.

Control the future state process to ensure that any deviations from target are corrected before they result in defects. Implement control systems such as statistical process control, production boards, and visual workplaces, and continuously monitor the process.

DMADV

The DMADV project methodology, also known as DFSS ("Design For Six Sigma"),[12] features five phases:


Define design goals that are consistent with customer demands and the enterprise strategy.

Measure and identify CTQs (characteristics that are Critical To Quality), product capabilities, production process capability, and risks.

Analyze to develop and design alternatives, create a high-level design and evaluate design capability to select the best design.

Design details, optimize the design, and plan for design verification. This phase may require simulations.

Verify the design, set up pilot runs, implement the production process and hand it over to the process owner(s).

Quality management tools and methods used in Six Sigma

Within the individual phases of a DMAIC or DMADV project, Six Sigma utilizes many established quality-management tools that are also used outside of Six Sigma. The following list gives an overview of the main methods used; a minimal Pareto-analysis sketch in code follows the list.

5 Whys
Analysis of variance
ANOVA Gauge R&R
Axiomatic design
Business Process Mapping
Catapult exercise on variability
Cause & effects diagram (also known as fishbone or Ishikawa diagram)
Chi-square test of independence and fits
Control chart
Correlation
Cost-benefit analysis
CTQ tree
Design of experiments
Failure mode and effects analysis (FMEA)
General linear model
Histograms
Homoscedasticity
Quality Function Deployment (QFD)
Pareto chart
Pick chart
Process capability
Quantitative marketing research through use of Enterprise Feedback Management (EFM) systems
Regression analysis
Root cause analysis
Run charts
SIPOC analysis (Suppliers, Inputs, Process, Outputs, Customers)
Stratification
Taguchi methods
Taguchi Loss Function
TRIZ
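As promised above, here is a minimal Pareto-analysis sketch in Python for one of the listed tools; the defect categories and counts are hypothetical.

    # Minimal Pareto-analysis sketch (assumed defect counts): rank defect categories
    # by frequency and report cumulative percentages, the basis of a Pareto chart.
    from collections import Counter

    defect_counts = Counter({          # hypothetical inspection results
        "scratch": 48, "misalignment": 27, "wrong colour": 11,
        "missing screw": 8, "dent": 4, "other": 2,
    })

    total = sum(defect_counts.values())
    cumulative = 0
    for category, count in defect_counts.most_common():
        cumulative += count
        print(f"{category:15s} {count:4d}  {100 * cumulative / total:5.1f}% cumulative")
    # Typically a few categories account for most defects ("80/20"), so improvement
    # effort is focused on the largest bars first.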

Implementation roles

One key innovation of Six Sigma involves the "professionalizing" of quality management functions. Prior to Six Sigma, quality management in practice was largely relegated to the production floor and to statisticians in a separate quality department. Formal Six Sigma programs borrow martial arts ranking terminology to define a hierarchy (and career path) that cuts across all business functions.


Six Sigma identifies several key roles for its successful implementation.[13]

Executive Leadership includes the CEO and other members of top management. They are responsible for setting up a vision for Six Sigma implementation. They also empower the other role holders with the freedom and resources to explore new ideas for breakthrough improvements.

Champions take responsibility for Six Sigma implementation across the organization in an integrated manner. The Executive Leadership draws them from upper management. Champions also act as mentors to Black Belts.

Master Black Belts, identified by champions, act as in-house coaches on Six Sigma. They devote 100% of their time to Six Sigma. They assist champions and guide Black Belts and Green Belts. Apart from statistical tasks, they spend their time on ensuring consistent application of Six Sigma across various functions and departments.

Black Belts operate under Master Black Belts to apply Six Sigma methodology to specific projects. They devote 100% of their time to Six Sigma. They primarily focus on Six Sigma project execution, whereas Champions and Master Black Belts focus on identifying projects/functions for Six Sigma.

Green Belts are the employees who take up Six Sigma implementation along with their other job responsibilities, operating under the guidance of Black Belts.

Some organizations use additional belt colours, such as Yellow Belts, for employees that have basic training in Six Sigma tools.

Certification

In the United States, Six Sigma certification for both Green and Black Belts is offered by the Institute of Industrial Engineers [14] and by the American Society for Quality.[15]

In addition to these examples, there are many other organizations and companies that offer certification. There currently is no central certification body, either in the United States or anywhere else in the world.

Origin and meaning of the term "six sigma process"


Graph of the normal distribution, which underlies the statistical assumptions of the Six Sigma model. The Greek letter σ (sigma) marks the distance on the horizontal axis between the mean, µ, and the curve's inflection point. The greater this distance, the greater is the spread of values encountered. For the curve shown above, µ = 0 and σ = 1. The upper and lower specification limits (USL, LSL) are at a distance of 6σ from the mean. Because of the properties of the normal distribution, values lying that far away from the mean are extremely unlikely. Even if the mean were to move right or left by 1.5σ at some point in the future (1.5 sigma shift), there is still a good safety cushion. This is why Six Sigma aims to have processes where the mean is at least 6σ away from the nearest specification limit.

The term "six sigma process" comes from the notion that if one has six standard deviations between the process mean and the nearest specification limit, as shown in the graph, practically no items will fail to meet specifications.[8] This is based on the calculation method employed in process capability studies.

Capability studies measure the number of standard deviations between the process mean and the nearest specification limit in sigma units. As process standard deviation goes up, or the mean of the process moves away from the center of the tolerance, fewer standard deviations will fit between the mean and the nearest specification limit, decreasing the sigma number and increasing the likelihood of items outside specification.[8]
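A minimal Python sketch of this calculation, using assumed measurements and specification limits, shows how the sigma number falls as the spread grows or the mean drifts toward a limit.

    # Sketch (assumed data): the "sigma number" of a capability study is the distance,
    # in standard deviations, from the process mean to the nearest specification limit.
    import statistics

    data = [2.48, 2.51, 2.53, 2.49, 2.50, 2.52, 2.47, 2.51]   # hypothetical measurements
    LSL, USL = 2.40, 2.60                                      # assumed specification limits

    mu = statistics.mean(data)
    sd = statistics.stdev(data)

    sigma_level = min(USL - mu, mu - LSL) / sd   # sigmas to the nearest limit (short term)
    print(f"mean = {mu:.3f}, sd = {sd:.4f}, sigma level = {sigma_level:.1f}")
    # A larger process spread (sd) or an off-centre mean lowers this number and
    # raises the likelihood of out-of-specification items.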

Role of the 1.5 sigma shift

Experience has shown that processes usually do not perform as well in the long term as they do in the short term.[8] As a result, the number of sigmas that will fit between the process mean and the nearest specification limit may well drop over time, compared to an initial short-term study.[8] To account for this real-life increase in process variation over time, an empirically-based 1.5 sigma shift is introduced into the calculation.[8][16] According to this idea, a process that fits six sigmas between the process mean and the nearest specification limit in a short-term study will in the long term only fit 4.5 sigmas – either because the process mean will move over time, or because the long-term standard deviation of the process will be greater than that observed in the short term, or both.[8]


Hence the widely accepted definition of a six sigma process as one that produces 3.4 defective parts per million opportunities (DPMO). This is based on the fact that a process that is normally distributed will have 3.4 parts per million beyond a point that is 4.5 standard deviations above or below the mean (one-sided capability study).[8] So the 3.4 DPMO of a "Six Sigma" process in fact corresponds to 4.5 sigmas, namely 6 sigmas minus the 1.5 sigma shift introduced to account for long-term variation.[8] This takes account of special causes that may cause a deterioration in process performance over time and is designed to prevent underestimation of the defect levels likely to be encountered in real-life operation.[8]
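The 3.4 DPMO figure can be checked numerically; the following short Python snippet (assuming SciPy is available) evaluates the one-sided normal tail beyond 6 - 1.5 = 4.5 sigma.

    # Numerical check of the 3.4 DPMO figure: the long-term defect rate of a
    # "six sigma" process is the one-sided normal tail beyond 6 - 1.5 = 4.5 sigma.
    from scipy.stats import norm

    short_term_sigma = 6.0
    shift = 1.5
    long_term_dpmo = norm.sf(short_term_sigma - shift) * 1_000_000

    print(f"{long_term_dpmo:.1f} DPMO")   # about 3.4 defects per million opportunities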

Sigma levels

A control chart depicting a process that experienced a 1.5 sigma drift in the process mean toward the upper specification limit starting at midnight. Control charts are used to maintain 6 sigma quality by signaling when quality professionals should investigate a process to find and eliminate special-cause variation.

See also: Three sigma rule

The table[17][18] below gives long-term DPMO values corresponding to various short-term sigma levels.

Note that these figures assume that the process mean will shift by 1.5 sigma toward the side with the critical specification limit. In other words, they assume that after the initial study determining the short-term sigma level, the long-term Cpk value will turn out to be 0.5 less than the short-term Cpk value. So, for example, the DPMO figure given for 1 sigma assumes that the long-term process mean will be 0.5 sigma beyond the specification limit (Cpk = –0.17), rather than 1 sigma within it, as it was in the short-term study (Cpk = 0.33). Note that the defect percentages only indicate defects exceeding the specification limit to which the process mean is nearest. Defects beyond the far specification limit are not included in the percentages. (A short computation sketch reproducing these figures follows the table.)

Sigma level   DPMO       Percent defective   Percentage yield   Short-term Cpk   Long-term Cpk
1             691,462    69%                 31%                0.33             –0.17
2             308,538    31%                 69%                0.67             0.17
3             66,807     6.7%                93.3%              1.00             0.5
4             6,210      0.62%               99.38%             1.33             0.83
5             233        0.023%              99.977%            1.67             1.17
6             3.4        0.00034%            99.99966%          2.00             1.5
7             0.019      0.0000019%          99.9999981%        2.33             1.83
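The sketch below (assuming SciPy is available) reproduces the table's DPMO, yield and Cpk columns from the stated conventions: long-term DPMO is the one-sided tail beyond (sigma level - 1.5), short-term Cpk is the sigma level divided by 3, and long-term Cpk is 0.5 lower.

    # Sketch reproducing the table above: DPMO from the one-sided normal tail beyond
    # (sigma level - 1.5), short-term Cpk = sigma/3, long-term Cpk = short-term Cpk - 0.5.
    from scipy.stats import norm

    print(f"{'Sigma':>5} {'DPMO':>12} {'Yield %':>12} {'Cpk (ST)':>9} {'Cpk (LT)':>9}")
    for sigma_level in range(1, 8):
        dpmo = norm.sf(sigma_level - 1.5) * 1_000_000
        yield_pct = 100 - dpmo / 10_000
        cpk_short = sigma_level / 3
        cpk_long = cpk_short - 0.5
        print(f"{sigma_level:>5} {dpmo:>12.3f} {yield_pct:>12.7f} {cpk_short:>9.2f} {cpk_long:>9.2f}")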

Criticism

Lack of originality

Noted quality expert Joseph M. Juran has described Six Sigma as "a basic version of quality improvement", stating that "[t]here is nothing new there. It includes what we used to call facilitators. They've adopted more flamboyant terms, like belts with different colors. I think that concept has merit to set apart, to create specialists who can be very helpful. Again, that's not a new idea. The American Society for Quality long ago established certificates, such as for reliability engineers."[19]

Role of consultants

The use of "Black Belts" as itinerant change agents has (controversially) fostered an industry of training and certification. Critics argue there is overselling of Six Sigma by too great a number of consulting firms, many of which claim expertise in Six Sigma when they only have a rudimentary understanding of the tools and techniques involved.[2]

Potential negative effects

A Fortune article stated that "of 58 large companies that have announced Six Sigma programs, 91 percent have trailed the S&P 500 since". The statement is attributed to "an analysis by Charles Holland of consulting firm Qualpro (which espouses a competing quality-improvement process)."[20] The summary of the article is that Six Sigma is effective at what it is intended to do, but that it is "narrowly designed to fix an existing process" and does not help in "coming up with new products or disruptive technologies." Advocates of Six Sigma have argued that many of these claims are in error or ill-informed.[21][22]

A BusinessWeek article says that James McNerney's introduction of Six Sigma at 3M may have had the effect of stifling creativity. It cites two Wharton School professors who say that Six Sigma leads to incremental innovation at the expense of blue-sky work.[23] This phenomenon is further explored in the book Going Lean, which describes a related approach known as lean dynamics and provides data to show that Ford's "6 Sigma" program did little to change its fortunes.[24]

Based on arbitrary standards

While 3.4 defects per million opportunities might work well for certain products/processes, it might not operate optimally or cost effectively for others. A pacemaker process might need higher standards, for example, whereas a direct mail advertising campaign might need lower standards. The basis and justification for choosing 6 (as opposed to 5 or 7, for example) as the number of standard deviations is not clearly explained. In addition, the Six Sigma model assumes that the process data always conform to the normal distribution. The calculation of defect rates for situations where the normal distribution model does not apply is not properly addressed in the current Six Sigma literature.[2]

Criticism of the 1.5 sigma shift

The statistician Donald J. Wheeler has dismissed the 1.5 sigma shift as "goofy" because of its arbitrary nature.[25] Its universal applicability is seen as doubtful.[2]

The 1.5 sigma shift has also become contentious because it results in stated "sigma levels" that reflect short-term rather than long-term performance: a process that has long-term defect levels corresponding to 4.5 sigma performance is, by Six Sigma convention, described as a "6 sigma process."[8][26] The accepted Six Sigma scoring system thus cannot be equated to actual normal distribution probabilities for the stated number of standard deviations, and this has been a key bone of contention about how Six Sigma measures are defined.[26] The fact that it is rarely explained that a "6 sigma" process will have long-term defect rates corresponding to 4.5 sigma performance rather than actual 6 sigma performance has led several commentators to express the opinion that Six Sigma is a confidence trick.[8]

Total quality management

Total Quality Management (or TQM) is a management concept coined by W. Edwards Deming. The basis of TQM is to reduce the errors produced during the manufacturing or service process, increase customer satisfaction, streamline supply chain management, aim for modernization of equipment and ensure workers have the highest level of training. One of the principal aims of TQM is to limit errors to 1 per 1 million units produced. Total Quality Management is often associated with the development, deployment, and maintenance of organizational systems that are required for various business processes.

TQM and Six Sigma


The main difference between TQM and Six Sigma (a newer concept) is the approach. TQM tries to improve quality by ensuring conformance to internal requirements, while Six Sigma focuses on improving quality by reducing the number of defects.[1]

Quality management

Quality management can be considered to have three main components: quality control, quality assurance and quality improvement. Quality management is focused not only on product/service quality, but also the means to achieve it. Quality management therefore uses quality assurance and control of processes as well as products to achieve more consistent quality.

Contents

1 Quality management evolution
2 Principles
3 Quality improvement
4 Quality standards
5 Quality software
6 Quality terms
7 Academic resources
8 See also
9 References
10 Bibliography

Quality management evolution

Quality management is a recent phenomenon. Advanced civilizations that supported the arts and crafts allowed clients to choose goods meeting higher quality standards than normal goods. In societies where arts and crafts were the responsibility of master craftsmen or artists, those masters would lead their studios and train and supervise others. The importance of craftsmen diminished, however, as mass production and repetitive work practices were instituted, the aim being to produce large numbers of the same goods. The first proponent in the US for this approach was Eli Whitney, who proposed (interchangeable) parts manufacture for muskets, hence producing identical components and creating a musket assembly line. The next step forward was promoted by several people, including Frederick Winslow Taylor, a mechanical engineer who sought to improve industrial efficiency. He is sometimes called "the father of scientific management." He was one of the intellectual leaders of the Efficiency Movement, and part of his approach laid a further foundation for quality management, including aspects like standardization and adopting improved practices. Henry Ford was also important in bringing process and quality management practices into operation in his assembly lines. In Germany, Karl Friedrich Benz, often called the inventor of the motor car, was pursuing similar assembly and production practices, although real mass production was properly initiated in Volkswagen after World War II. From this period onwards, North American companies focused predominantly upon production at lower cost with increased efficiency.

Walter A. Shewhart made a major step in the evolution towards quality management by creating a method for quality control for production, using statistical methods, first proposed in 1924. This became the foundation for his ongoing work on statistical quality control. W. Edwards Deming later applied statistical process control methods in the United States during World War II, thereby successfully improving quality in the manufacture of munitions and other strategically important products.

Quality leadership from a national perspective has changed over the past five to six decades. After the second world war, Japan decided to make quality improvement a national imperative as part of rebuilding their economy, and sought the help of Shewhart, Deming and Juran, amongst others. W. Edwards Deming championed Shewhart's ideas in Japan from 1950 onwards. He is probably best known for his management philosophy establishing quality, productivity, and competitive position. He has formulated 14 points of attention for managers, which are a high level abstraction of many of his deep insights. They should be interpreted by learning and understanding the deeper insights and include:

Break down barriers between departments
Management should learn their responsibilities, and take on leadership
Improve constantly
Institute a programme of education and self-improvement

In the 1950s and 1960s, Japanese goods were synonymous with cheapness and low quality, but over time their quality initiatives began to be successful, with Japan achieving very high levels of quality in products from the 1970s onward. For example, Japanese cars regularly top the J.D. Power customer satisfaction ratings. In the 1980s Deming was asked by Ford Motor Company to start a quality initiative after they realized that they were falling behind Japanese manufacturers. A number of highly successful quality initiatives have been invented by the Japanese (see, for example, on this page: Taguchi methods, QFD, the Toyota Production System). Many of the methods not only provide techniques but also have an associated quality culture (i.e. people factors). These methods are now adopted by the same western countries that decades earlier derided Japanese methods.

Customers recognize that quality is an important attribute in products and services. Suppliers recognize that quality can be an important differentiator between their own offerings and those of competitors (quality differentiation is also called the quality gap). In the past two decades this quality gap has been greatly reduced between competitive products and services. This is partly due to the contracting (also called outsourcing) of manufacture to countries like India and China, as well as the internationalization of trade and competition. These countries, amongst many others, have raised their own standards of quality in order to meet international standards and customer demands. The ISO 9000 series of standards are probably the best known international standards for quality management.

There are a huge number of books available on quality. In recent times some themes have become more significant including quality culture, the importance of knowledge management, and the role of leadership in promoting and achieving high quality. Disciplines like systems thinking are bringing more holistic approaches to quality so that people, process and products are considered together rather than independent factors in quality management.

The influence of quality thinking has spread to non-traditional applications outside the walls of manufacturing, extending into service sectors and into areas such as sales, marketing and customer service.[1]

Principles

Quality management adopts a number of management principles[2] that can be used by upper management to guide their organisations towards improved performance. The principles cover:

Customer focus
Leadership
Involvement of people
Process approach
System approach to management
Continual improvement
Factual approach to decision making
Mutually beneficial supplier relationships

Quality improvement

There are many methods for quality improvement. These cover product improvement, process improvement and people based improvement. In the following list are methods of quality management and techniques that incorporate and drive quality improvement:

1. ISO 9004:2008 — guidelines for performance improvement.
2. ISO 15504-4:2005 — information technology — process assessment — Part 4: Guidance on use for process improvement and process capability determination.
3. QFD — quality function deployment, also known as the house of quality approach.
4. Kaizen — 改善, Japanese for change for the better; the common English term is continuous improvement.
5. Zero Defect Program — created by NEC Corporation of Japan, based upon statistical process control and one of the inputs for the inventors of Six Sigma.
6. Six Sigma — 6σ, Six Sigma combines established methods such as statistical process control, design of experiments and FMEA in an overall framework.
7. PDCA — plan, do, check, act cycle for quality control purposes. (Six Sigma's DMAIC method (define, measure, analyze, improve, control) may be viewed as a particular implementation of this.)
8. Quality circle — a group (people oriented) approach to improvement.
9. Taguchi methods — statistically oriented methods including quality robustness, quality loss function, and target specifications.
10. The Toyota Production System — reworked in the west into lean manufacturing.
11. Kansei Engineering — an approach that focuses on capturing customer emotional feedback about products to drive improvement.
12. TQM — total quality management is a management strategy aimed at embedding awareness of quality in all organizational processes. First promoted in Japan with the Deming prize, which was adopted and adapted in the USA as the Malcolm Baldrige National Quality Award and in Europe as the European Foundation for Quality Management award (each with their own variations).
13. TRIZ — meaning "theory of inventive problem solving"
14. BPR — business process reengineering, a management approach aiming at 'clean slate' improvements (that is, ignoring existing practices).
15. OQM — Object-oriented Quality Management, a model for quality management.[3]

Proponents of each approach have sought to improve them as well as apply them for small, medium and large gains. A simple one is the Process Approach, which forms the basis of the ISO 9001:2008 Quality Management System standard and is duly driven from the 'Eight principles of Quality management', the process approach being one of them. Thareja[4] writes about the mechanism and benefits: "The process (proficiency) may be limited in words, but not in its applicability. While it fulfills the criteria of all-round gains: in terms of the competencies augmented by the participants; the organisation seeks newer directions to the business success, the individual brand image of both the people and the organisation, in turn, goes up. The competencies which were hitherto rated as being smaller, are better recognized and now acclaimed to be more potent and fruitful".[5] The more complex quality improvement tools are tailored for enterprise types not originally targeted. For example, Six Sigma was designed for manufacturing but has spread to service enterprises. Each of these approaches and methods has met with success but also with failures.

Some of the common differentiators between success and failure include commitment, knowledge and expertise to guide improvement, scope of change/improvement desired (Big Bang type changes tend to fail more often compared to smaller changes) and adaption to enterprise cultures. For example, quality circles do not work well in every enterprise (and are even discouraged by some managers), and relatively few TQM-participating enterprises have won the national quality awards.

There have been well publicized failures of BPR, as well as Six Sigma. Enterprises therefore need to consider carefully which quality improvement methods to adopt, and certainly should not adopt all those listed here.

It is important not to underestimate the people factors, such as culture, in selecting a quality improvement approach. Any improvement (change) takes time to implement, gain acceptance and stabilize as accepted practice. Improvement must allow pauses between implementing new changes so that the change is stabilized and assessed as a real improvement, before the next improvement is made (hence continual improvement, not continuous improvement).

Improvements that change the culture take longer as they have to overcome greater resistance to change. It is easier and often more effective to work within the existing cultural boundaries and make small improvements (that is Kaizen) than to make major transformational changes. Use of Kaizen in Japan was a major reason for the creation of Japanese industrial and economic strength.

On the other hand, transformational change works best when an enterprise faces a crisis and needs to make major changes in order to survive. In Japan, the land of Kaizen, Carlos Ghosn led a transformational change at Nissan Motor Company which was in a financial and operational crisis. Well organized quality improvement programs take all these factors into account when selecting the quality improvement methods.

Quality standards

The International Organization for Standardization (ISO) created the Quality Management System (QMS) standards in 1987. They were the ISO 9000:1987 series of standards comprising ISO 9001:1987, ISO 9002:1987 and ISO 9003:1987; which were applicable in different types of industries, based on the type of activity or process: designing, production or service delivery.

The standards are reviewed every few years by the International Organization for Standardization. The version in 1994 was called the ISO 9000:1994 series; consisting of the ISO 9001:1994, 9002:1994 and 9003:1994 versions.

The last major revision was in the year 2000, and the series was called the ISO 9000:2000 series. The ISO 9002 and 9003 standards were integrated into one single certifiable standard: ISO 9001:2000. After December 2003, organizations holding ISO 9002 or 9003 standards had to complete a transition to the new standard.

ISO released a minor revision, ISO 9001:2008, on 14 October 2008. It contains no new requirements. Many of the changes were to improve consistency in grammar, facilitating translation of the standard into other languages for use by over 950,000 certified organisations in the 175 countries (as at Dec 2007) that use the standard.

The ISO 9004:2000 document gives guidelines for performance improvement over and above the basic standard (ISO 9001:2000). This standard provides a measurement framework for improved quality management, similar to and based upon the measurement framework for process assessment.

The Quality Management System standards created by ISO are meant to certify the processes and the system of an organization, not the product or service itself. ISO 9000 standards do not certify the quality of the product or service.

In 2005 the International Organization for Standardization released a standard, ISO 22000, meant for the food industry. This standard covers the values and principles of ISO 9000 and the HACCP standards. It gives one single integrated standard for the food industry and is expected to become more popular in the coming years in that industry.

ISO has also released standards for other industries. For example Technical Standard TS 16949 defines requirements in addition to those in ISO 9001:2008 specifically for the automotive industry.

ISO has a number of standards that support quality management. One group describes processes (including ISO 12207 and ISO 15288) and another describes process assessment and improvement (ISO 15504).

The Software Engineering Institute has its own process assessment and improvement methods, called CMMi (Capability Maturity Model — integrated) and IDEAL respectively.

Quality software

Quality management software is used to track the three main components of quality management through the use of databases and/or charting applications.

Quality terms

Quality Improvement can be distinguished from Quality Control in that Quality Improvement is the purposeful change of a process to improve the reliability of achieving an outcome.

Quality Control is the ongoing effort to maintain the integrity of a process to maintain the reliability of achieving an outcome.

Quality Assurance is the planned or systematic actions necessary to provide enough confidence that a product or service will satisfy the given requirements.

Academic resources


International Journal of Productivity and Quality Management , ISSN 1746-6474, Inderscience

International Journal of Quality & Reliability Management, ISSN: 0265-671X, Emerald Publishing Group

See also

Quality audit
Quality infrastructure
Quality management system
Sales process engineering
Systems thinking - Applications
Hoshin Kanri
Health care
Expediting
Test management


Quality management system

A quality management system (QMS) can be expressed as the organizational structure, procedures, processes and resources needed to implement quality management.

Contents

1 Elements of a Quality Management System
2 Concept of quality - historical background
3 Quality system for medical devices
4 Quality management organizations and awards
5 See also
6 References
7 External links

Elements of a Quality Management System

1. Organizational Structure
2. Responsibilities
3. Methods
4. Processes
5. Resources
6. Customer Satisfaction
7. Continuous Improvement

Concept of quality - historical background


The concept of quality as we think of it now first emerged out of the Industrial Revolution. Previously goods had been made from start to finish by the same person or team of people, with handcrafting and tweaking the product to meet 'quality criteria'. Mass production brought huge teams of people together to work on specific stages of production where one person would not necessarily complete a product from start to finish. In the late 1800s pioneers such as Frederick Winslow Taylor and Henry Ford recognized the limitations of the methods being used in mass production at the time and the subsequent varying quality of output. Birland established Quality Departments to oversee the quality of production and the rectification of errors, and Ford emphasized standardization of design and component standards to ensure a standard product was produced. Management of quality was the responsibility of the Quality department and was implemented by inspection of product output to 'catch' defects.

Application of statistical control came later as a result of World War production methods. Quality management systems are the outgrowth of work done by W. Edwards Deming, a statistician, after whom the Deming Prize for quality is named.

Quality, as a profession and the managerial process associated with the quality function, was introduced during the second-half of the 20th century, and has evolved since then. Over this period, few other disciplines have seen as many changes as the quality profession.

The quality profession grew from simple control, to engineering, to systems engineering. Quality control activities were predominant in the 1940s, 1950s, and 1960s. The 1970s were an era of quality engineering and the 1990s saw quality systems as an emerging field. Like medicine, accounting, and engineering, quality has achieved status as a recognized profession.[citation needed]

Quality system for medical devices

Quality System requirements for medical devices have been internationally recognized as a way to assure product safety and efficacy and customer satisfaction since at least 1983, and were instituted as requirements in a final rule published on October 7, 1996. The U.S. Food and Drug Administration (FDA) had documented design defects in medical devices that contributed to recalls from 1983 to 1989 that would have been prevented if Quality Systems had been in place. The rule is promulgated at 21 CFR 820.

According to current Good Manufacturing Practice (GMP), medical device manufacturers have the responsibility to use good judgment when developing their quality system and apply those sections of the FDA Quality System (QS) Regulation that are applicable to their specific products and operations, in Part 820 of the QS regulation. As with GMP, operating within this flexibility, it is the responsibility of each manufacturer to establish requirements for each type or family of devices that will result in devices that are safe and effective, and to establish methods and procedures to design, produce, and distribute devices that meet the quality system requirements.


The FDA has identified in the QS regulation the essential elements that a quality system shall embody for design, production and distribution, without prescribing specific ways to establish these elements. These elements include:

Quality System personnel training and qualification
controlling the product design
controlling documentation
controlling purchasing
product identification and traceability at all stages of production
controlling and defining production and process
defining and controlling inspection, measuring and test equipment
validating processes
product acceptance
controlling nonconforming product
instituting corrective and preventive action when errors occur
labeling and packaging controls
handling, storage, distribution and installation
records
servicing
statistical techniques

all overseen by Management Responsibility and Quality Audits.

Because the QS regulation covers a broad spectrum of devices and production processes, it allows some leeway in the details of quality system elements. It is left to manufacturers to determine the necessity for, or extent of, some quality elements and to develop and implement procedures tailored to their particular processes and devices. For example, if it is impossible to mix up labels at a manufacturer because there is only one label to each product, then there is no necessity for the manufacturer to comply with all of the GMP requirements under device labeling.


Drug manufacturers are regulated under a different section of the Code of Federal Regulations: 21 CFR 211. However, the FDA has instituted new policies requiring QS for pharmaceuticals.

Quality management organizations and awards

The International Organization for Standardization's ISO 9001:2008 series describes standards for a QMS addressing the principles and processes surrounding the design, development and delivery of a general product or service. Organizations can participate in a continuing certification process to ISO 9001:2008 to demonstrate their compliance with the standard, which includes a requirement for continual (i.e. planned) improvement of the QMS.

(ISO 9000:2005 provides information on the fundamentals and vocabulary used in quality management systems. ISO 9004:2009 provides guidance on a quality management approach for the sustained success of an organization. Neither of these standards can be used for certification purposes as they provide guidance, not requirements).

The Malcolm Baldrige National Quality Award is a competition to identify and recognize top-quality U.S. companies. This model addresses a broadly based range of quality criteria, including commercial success and corporate leadership. Once an organization has won the award it has to wait several years before being eligible to apply again.

The European Foundation for Quality Management's EFQM Excellence Model supports an award scheme similar to the Malcolm Baldrige Award for European companies.

In Canada, the National Quality Institute presents the 'Canada Awards for Excellence' on an annual basis to organisations that have displayed outstanding performance in the areas of Quality and Workplace Wellness, and have met the Institute's criteria with documented overall achievements and results.

The Alliance for Performance Excellence is a network of state, local, and international organizations that use the Malcolm Baldrige National Quality Award criteria and model at the grassroots level to improve the performance of local organizations and economies. NetworkforExcellence.org is the Alliance web site; browsers can find Alliance members in their state and get the latest news and events from the Baldrige community.

Seven Basic Tools of Quality

The Seven Basic Tools of Quality is a designation given to a fixed set of graphical techniques identified as being most helpful in troubleshooting issues related to quality.[1] They are called basic because they are suitable for people with little formal training in statistics and because they can be used to solve the vast majority of quality-related issues.[2]


The tools are:[3]

The cause-and-effect or Ishikawa diagram
The check sheet
The control chart
The histogram
The Pareto chart
The scatter diagram
Stratification (alternately flow chart or run chart)

The designation arose in postwar Japan, inspired by the seven famous weapons of Benkei.[4] At that time, companies that had set about training their workforces in statistical quality control found that the complexity of the subject intimidated the vast majority of their workers and scaled back training to focus primarily on simpler methods which suffice for most quality-related issues anyway.[5]

The Seven Basic Tools stand in contrast with more advanced statistical methods such as survey sampling, acceptance sampling, statistical hypothesis testing, design of experiments, multivariate analysis, and various methods developed in the field of operations research.[6]

Ishikawa diagram

One of the Seven Basic Tools of Quality. First described by Kaoru Ishikawa. Purpose: to break down (in successive layers of detail) root causes that potentially contribute to a particular effect.

Ishikawa diagrams (also called fishbone diagrams or cause-and-effect diagrams) are diagrams that show the causes of a certain event. Common uses of the Ishikawa diagram are product design and quality defect prevention, to identify potential factors causing an overall effect. Each cause or reason for imperfection is a source of variation. Causes are usually grouped into major categories to identify these sources of variation. The categories typically include the following (a minimal data-structure sketch follows the list):

People: Anyone involved with the process
Methods: How the process is performed and the specific requirements for doing it, such as policies, procedures, rules, regulations and laws
Machines: Any equipment, computers, tools etc. required to accomplish the job
Materials: Raw materials, parts, pens, paper, etc. used to produce the final product
Measurements: Data generated from the process that are used to evaluate its quality
Environment: The conditions, such as location, time, temperature, and culture in which the process operates
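For readers who keep such breakdowns in software, a minimal Python sketch follows; the effect, categories and causes are purely hypothetical and simply show one way to hold a fishbone breakdown as a nested data structure.

    # Minimal sketch (hypothetical content): an Ishikawa-style cause breakdown held
    # as a nested mapping from category to causes to sub-causes.
    fishbone = {
        "effect": "coffee tastes bitter",
        "causes": {
            "People":       {"barista undertrained": ["no tasting routine"]},
            "Methods":      {"over-extraction": ["grind too fine", "brew time too long"]},
            "Machines":     {"machine not descaled": []},
            "Materials":    {"beans roasted too dark": []},
            "Measurements": {"water temperature not checked": []},
            "Environment":  {"rush-hour time pressure": []},
        },
    }

    for category, causes in fishbone["causes"].items():
        for cause, sub_causes in causes.items():
            detail = f" <- {', '.join(sub_causes)}" if sub_causes else ""
            print(f"{category:12s}: {cause}{detail}")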

Contents

1 Overview
2 Causes
  2.1 The 6 Ms (used in manufacturing)
  2.2 The 8 Ps (used in service industry)
  2.3 The 4 Ss (used in service industry)
  2.4 More Ms
3 References
  3.1 Further reading
4 External links

Overview


Ishikawa diagram, in fishbone shape, showing factors of Equipment, Process, People, Materials, Environment and Management, all affecting the overall problem. Smaller arrows connect the sub-causes to major causes.

Ishikawa diagrams were proposed in the 1960s by Kaoru Ishikawa,[1] who pioneered quality management processes in the Kawasaki shipyards and, in the process, became one of the founding fathers of modern management.

It was first used in the 1960s, and is considered one of the seven basic tools of quality control.[2] It is known as a fishbone diagram because of its shape, similar to the side view of a fish skeleton.

Mazda Motors famously used an Ishikawa diagram in the development of the Miata sports car, where the required result was "Jinba Ittai" or "Horse and Rider as One". The main causes included such aspects as "touch" and "braking" with the lesser causes including highly granular factors such as "50/50 weight distribution" and "able to rest elbow on top of driver's door". Every factor identified in the diagram was included in the final design.

Causes

Causes in the diagram are often categorized, such as to the 6 Ms, described below. Cause-and-effect diagrams can reveal key relationships among various variables, and the possible causes provide additional insight into process behavior.

Causes can be derived from brainstorming sessions. These groups can then be labeled as categories of the fishbone. They will typically be one of the traditional categories mentioned above but may be something unique to the application in a specific case. Causes can be traced back to root causes with the 5 Whys technique.

Typical categories are:

The 6 Ms (used in manufacturing)

Machine (technology)
Method (process/inspection)
Material (raw, consumables etc.)
Man Power (physical work)/Mind Power (brain work): Kaizens, Suggestions
Money
Milieu (External Environment or surroundings)

The 8 Ps (used in service industry)

Product=Service
Price
Place
Promotion
People
Process
Physical Evidence
Productivity & Quality

The 4 Ss (used in service industry)

Surroundings
Suppliers
Systems
Skills

More Ms

Mother Nature (Environment)
Measurement (Inspection)
Maintenance
Management

Check sheet

One of the Seven Basic Tools of Quality. Purpose: to provide a structured way to collect quality-related data as a rough means for assessing a process or as an input to other analyses.

The check sheet is a simple document that is used for collecting data in real-time and at the location where the data is generated. The document is typically a blank form that is designed for the quick, easy, and efficient recording of the desired information, which can be either quantitative or qualitative. When the information is quantitative, the checksheet is sometimes called a tally sheet.

A defining characteristic of a checksheet is that data is recorded by making marks ("checks") on it. A typical checksheet is divided into regions, and marks made in different regions have different significance. Data is read by observing the location and number of marks on the sheet. There are five basic types of check sheet (a minimal tally-sheet sketch follows this list):

Classification: A trait such as a defect or failure mode must be classified into a category.
Location: The physical location of a trait is indicated on a picture of a part or item being evaluated.
Frequency: The presence or absence of a trait or combination of traits is indicated. The number of occurrences of a trait on a part can also be indicated.
Measurement Scale: A measurement scale is divided into intervals, and measurements are indicated by checking an appropriate interval.
Check List: The items to be performed for a task are listed so that, as each is accomplished, it can be indicated as having been completed.
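As a small illustration of the frequency (tally) type, the following Python sketch counts marks per defect category; the observations are hypothetical.

    # Minimal tally-sheet sketch (hypothetical observations): each mark recorded on the
    # shop floor becomes one entry, and reading the sheet is just counting the marks.
    from collections import Counter

    observations = [  # defect category noted for each nonconforming item, as it occurs
        "scratch", "dent", "scratch", "wrong label", "scratch",
        "dent", "scratch", "wrong label", "scratch",
    ]

    tally = Counter(observations)
    for category, marks in tally.most_common():
        print(f"{category:12s} {'|' * marks}  ({marks})")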


An example of a simple quality control checksheet

The check sheet is one of the seven basic tools of quality control.[1]

Control chart

One of the Seven Basic Tools of Quality. First described by Walter A. Shewhart. Purpose: to determine whether a process should undergo a formal examination for quality-related problems.

Control charts, also known as Shewhart charts or process-behaviour charts, in statistical process control are tools used to determine whether or not a manufacturing or business process is in a state of statistical control.

Contents

1 Overview
2 History
3 Chart details
  3.1 Chart usage
  3.2 Choice of limits
  3.3 Calculation of standard deviation
4 Rules for detecting signals
5 Alternative bases
6 Performance of control charts
7 Criticisms
8 Types of charts
9 See also
10 Notes
11 Bibliography
12 External links

Overview

If analysis of the control chart indicates that the process is currently under control (i.e. is stable, with variation only coming from sources common to the process) then data from the process can be used to predict the future performance of the process. If the chart indicates that the process being monitored is not in control, analysis of the chart can help determine the sources of variation, which can then be eliminated to bring the process back into control. A control chart is a specific kind of run chart that allows significant change to be differentiated from the natural variability of the process.

The control chart can be seen as part of an objective and disciplined approach that enables correct decisions regarding control of the process, including whether or not to change process control parameters. Process parameters should never be adjusted for a process that is in control, as this will result in degraded process performance.[1]

The control chart is one of the seven basic tools of quality control.[2]

History


The control chart was invented by Walter A. Shewhart while working for Bell Labs in the 1920s. The company's engineers had been seeking to improve the reliability of their telephony transmission systems. Because amplifiers and other equipment had to be buried underground, there was a business need to reduce the frequency of failures and repairs. By 1920 the engineers had already realized the importance of reducing variation in a manufacturing process. Moreover, they had realized that continual process-adjustment in reaction to non-conformance actually increased variation and degraded quality. Shewhart framed the problem in terms of common and special causes of variation and, on May 16, 1924, wrote an internal memo introducing the control chart as a tool for distinguishing between the two. Dr. Shewhart's boss, George Edwards, recalled: "Dr. Shewhart prepared a little memorandum only about a page in length. About a third of that page was given over to a simple diagram which we would all recognize today as a schematic control chart. That diagram, and the short text which preceded and followed it, set forth all of the essential principles and considerations which are involved in what we know today as process quality control."[3] Shewhart stressed that bringing a production process into a state of statistical control, where there is only common-cause variation, and keeping it in control, is necessary to predict future output and to manage a process economically.

Dr. Shewhart created the basis for the control chart and the concept of a state of statistical control by carefully designed experiments. While Dr. Shewhart drew from pure mathematical statistical theories, he understood data from physical processes typically produce a "normal distribution curve" (a Gaussian distribution, also commonly referred to as a "bell curve"). He discovered that observed variation in manufacturing data did not always behave the same way as data in nature (Brownian motion of particles). Dr. Shewhart concluded that while every process displays variation, some processes display controlled variation that is natural to the process, while others display uncontrolled variation that is not present in the process causal system at all times.[4]

In 1924 or 1925, Shewhart's innovation came to the attention of W. Edwards Deming, then working at the Hawthorne facility. Deming later worked at the United States Department of Agriculture and then became the mathematical advisor to the United States Census Bureau. Over the next half a century, Deming became the foremost champion and proponent of Shewhart's work. After the defeat of Japan at the close of World War II, Deming served as statistical consultant to the Supreme Commander of the Allied Powers. His ensuing involvement in Japanese life, and long career as an industrial consultant there, spread Shewhart's thinking, and the use of the control chart, widely in Japanese manufacturing industry throughout the 1950s and 1960s.

Chart details

A control chart consists of the following elements (a minimal construction sketch in code follows these lists):

Points representing a statistic (e.g., a mean, range, proportion) of measurements of a quality characteristic in samples taken from the process at different times [the data]
The mean of this statistic using all the samples is calculated (e.g., the mean of the means, mean of the ranges, mean of the proportions)
A center line is drawn at the value of the mean of the statistic
The standard error (e.g., standard deviation/sqrt(n) for the mean) of the statistic is also calculated using all the samples
Upper and lower control limits (sometimes called "natural process limits") that indicate the threshold at which the process output is considered statistically 'unlikely' are drawn typically at 3 standard errors from the center line

The chart may have other optional features, including:

Upper and lower warning limits, drawn as separate lines, typically two standard errors above and below the center line

Division into zones, with the addition of rules governing frequencies of observations in each zone

Annotation with events of interest, as determined by the Quality Engineer in charge of the process's quality
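The construction sketch referred to above follows; it is a simplified X-bar example in Python with made-up subgroups, and it uses a plain average of the subgroup standard deviations rather than the bias-corrected constants (c4, A2, etc.) of a textbook chart.

    # Minimal construction sketch (assumed data): centre line and 3-standard-error
    # control limits for an X-bar chart built from small samples taken over time.
    import statistics
    from math import sqrt

    samples = [                      # hypothetical subgroups of n = 4 measurements
        [5.02, 4.98, 5.01, 4.99], [5.00, 5.03, 4.97, 5.01],
        [4.99, 5.02, 5.00, 4.98], [5.04, 5.01, 4.99, 5.02],
    ]
    n = len(samples[0])

    sample_means = [statistics.mean(s) for s in samples]
    centre_line = statistics.mean(sample_means)

    # Simple spread estimate: average within-sample standard deviation (uncorrected)
    within_sd = statistics.mean(statistics.stdev(s) for s in samples)
    standard_error = within_sd / sqrt(n)

    ucl = centre_line + 3 * standard_error
    lcl = centre_line - 3 * standard_error
    print(f"CL = {centre_line:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")

    out_of_control = [m for m in sample_means if not (lcl <= m <= ucl)]
    print("points outside limits:", out_of_control or "none")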

Chart usage

If the process is in control, all points will plot within the control limits. Any observations outside the limits, or systematic patterns within, suggest the introduction of a new (and likely unanticipated) source of variation, known as a special-cause variation. Since increased variation means increased quality costs, a control chart "signaling" the presence of a special-cause requires immediate investigation.

This makes the control limits very important decision aids. The control limits tell you about process behavior and have no intrinsic relationship to any specification targets or engineering tolerance. In practice, the process mean (and hence the center line) may not coincide with the specified value (or target) of the quality characteristic because the process' design simply can't deliver the process characteristic at the desired level.


Specification limits or targets are generally not marked on control charts, because of the tendency of those involved with the process (e.g., machine operators) to focus on performing to specification when in fact the least-cost course of action is to keep process variation as low as possible. Attempting to make a process whose natural center is not the same as the target perform to target specification increases process variability, raises costs significantly, and is the cause of much inefficiency in operations. Process capability studies do examine the relationship between the natural process limits (the control limits) and specifications, however.

The purpose of control charts is to allow simple detection of events that are indicative of actual process change. This simple decision can be difficult where the process characteristic is continuously varying; the control chart provides statistically objective criteria of change. When a change is detected and considered good, its cause should be identified and possibly made the new way of working; when the change is bad, its cause should be identified and eliminated.

The purpose in adding warning limits or subdividing the control chart into zones is to provide early notification if something is amiss. Instead of immediately launching a process improvement effort to determine whether special causes are present, the Quality Engineer may temporarily increase the rate at which samples are taken from the process output until it's clear that the process is truly in control. Note that with three sigma limits, one expects to be signaled approximately once out of every 370 points on average, just due to common-causes.

Choice of limits

Shewhart set 3-sigma (3-standard error) limits on the following basis.

The coarse result of Chebyshev's inequality that, for any probability distribution, the probability of an outcome greater than k standard deviations from the mean is at most 1/k².

The finer result of the Vysochanskii-Petunin inequality that, for any unimodal probability distribution, the probability of an outcome greater than k standard deviations from the mean is at most 4/(9k²).

The empirical investigation of sundry probability distributions reveals that at least 99% of observations occurred within three standard deviations of the mean.
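
As a quick numerical check of the three bounds above at k = 3, a short Python snippet (using scipy for the normal tail probability):

from scipy.stats import norm

k = 3
chebyshev = 1 / k**2                     # any distribution: at most ~11.1%
vysochanskii_petunin = 4 / (9 * k**2)    # any unimodal distribution: at most ~4.9%
normal_tail = 2 * norm.sf(k)             # normal distribution: about 0.27%
print(chebyshev, vysochanskii_petunin, normal_tail)

Chebyshev's bound allows up to about 11% of points beyond 3 sigma, the Vysochanskii-Petunin bound about 4.9%, while the normal model gives roughly 0.27%, consistent with the empirical observation that almost all points fall within three standard deviations.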

Shewhart summarized the conclusions by saying:

... the fact that the criterion which we happen to use has a fine ancestry in highbrow statistical theorems does not justify its use. Such justification must come from empirical evidence that it works. As the practical engineer might say, the proof of the pudding is in the eating.

Though he initially experimented with limits based on probability distributions, Shewhart ultimately wrote:


Some of the earliest attempts to characterize a state of statistical control were inspired by the belief that there existed a special form of frequency function f and it was early argued that the normal law characterized such a state. When the normal law was found to be inadequate, then generalized functional forms were tried. Today, however, all hopes of finding a unique functional form f are blasted.

The control chart is intended as a heuristic. Deming insisted that it is not a hypothesis test and is not motivated by the Neyman-Pearson lemma. He contended that the disjoint nature of population and sampling frame in most industrial situations compromised the use of conventional statistical techniques. Deming's intention was to seek insights into the cause system of a process ...under a wide range of unknowable circumstances, future and past .... He claimed that, under such conditions, 3-sigma limits provided ... a rational and economic guide to minimum economic loss... from the two errors:

1. Ascribe a variation or a mistake to a special cause when in fact the cause belongs to the system (common cause). (Also known as a Type I error)

2. Ascribe a variation or a mistake to the system (common causes) when in fact the cause was special. (Also known as a Type II error)

Calculation of standard deviation

As for the calculation of control limits, the standard deviation (error) required is that of the common-cause variation in the process. Hence, the usual estimator, in terms of sample variance, is not used as this estimates the total squared-error loss from both common- and special-causes of variation.

An alternative method is to use the relationship between the range of a sample and its standard deviation derived by Leonard H. C. Tippett, an estimator which tends to be less influenced by the extreme observations which typify special-causes.
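
A minimal sketch of this idea for an individuals (XmR) chart, assuming the conventional two-point moving range and the bias-correction constant d2 ≈ 1.128 for samples of size two:

import numpy as np

def sigma_from_moving_range(x):
    """Estimate common-cause sigma from the average two-point moving range,
    divided by the conventional constant d2 = 1.128 for n = 2."""
    x = np.asarray(x, dtype=float)
    moving_ranges = np.abs(np.diff(x))
    return moving_ranges.mean() / 1.128

# Hypothetical data; 14.0 is a special-cause outlier.
data = [10.2, 10.4, 9.9, 10.1, 14.0, 10.0, 10.3]
print(sigma_from_moving_range(data))   # about 1.34, versus about 1.47 for np.std(data, ddof=1)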

Rules for detecting signals

The most common sets are:

The Western Electric rules
The Wheeler rules (equivalent to the Western Electric zone tests[5])
The Nelson rules

There has been particular controversy as to how long a run of observations, all on the same side of the centre line, should count as a signal, with 6, 7, 8 and 9 all being advocated by various writers.

The most important principle for choosing a set of rules is that the choice be made before the data is inspected. Choosing rules once the data have been seen tends to increase the Type I error rate owing to testing effects suggested by the data.
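
As an illustration, one commonly advocated signal is a run of eight consecutive points on the same side of the center line; a simple detector might look like the sketch below (the run length, and the handling of points exactly on the line, are choices that vary between rule sets):

def long_run_signal(points, center, run_length=8):
    """Return True if `run_length` consecutive points fall on the same
    side of the center line (a point exactly on the line resets the run)."""
    run, side = 0, 0
    for p in points:
        s = (p > center) - (p < center)   # +1 above, -1 below, 0 on the line
        if s != 0 and s == side:
            run += 1
        else:
            side, run = s, 1 if s != 0 else 0
        if run >= run_length:
            return True
    return False

print(long_run_signal([5.1, 5.2, 5.3, 5.1, 5.4, 5.2, 5.1, 5.3], center=5.0))  # True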


Alternative bases

In 1935, the British Standards Institution, under the influence of Egon Pearson and against Shewhart's spirit, adopted control charts, replacing 3-sigma limits with limits based on percentiles of the normal distribution. This move continues to be represented by John Oakland and others but has been widely deprecated by writers in the Shewhart-Deming tradition.

Performance of control charts

When a point falls outside of the limits established for a given control chart, those responsible for the underlying process are expected to determine whether a special cause has occurred. If one has, then that cause should be eliminated if possible. It is known that even when a process is in control (that is, no special causes are present in the system), there is approximately a 0.27% probability of a point exceeding 3-sigma control limits. Since the control limits are evaluated each time a point is added to the chart, it readily follows that every control chart will eventually signal the possible presence of a special cause, even though one may not have actually occurred. For a Shewhart control chart using 3-sigma limits, this false alarm occurs on average once every 1/0.0027 or 370.4 observations. Therefore, the in-control average run length (or in-control ARL) of a Shewhart chart is 370.4.

Meanwhile, if a special cause does occur, it may not be of sufficient magnitude for the chart to produce an immediate alarm condition. If a special cause occurs, one can describe that cause by measuring the change in the mean and/or variance of the process in question. When those changes are quantified, it is possible to determine the out-of-control ARL for the chart.

It turns out that Shewhart charts are quite good at detecting large changes in the process mean or variance, as their out-of-control ARLs are fairly short in these cases. However, for smaller changes (such as a 1- or 2-sigma change in the mean), the Shewhart chart does not detect these changes efficiently. Other types of control charts have been developed, such as the EWMA chart and the CUSUM chart, which detect smaller changes more efficiently by making use of information from observations collected prior to the most recent data point.
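
A minimal sketch of these ARL calculations for an individuals chart with 3-sigma limits, assuming normally distributed observations and a mean shift expressed in multiples of sigma:

from scipy.stats import norm

def arl_individuals(shift_sigma=0.0, k=3.0):
    """Average run length of an individuals chart with k-sigma limits
    when the process mean has shifted by `shift_sigma` standard deviations."""
    p_signal = norm.sf(k - shift_sigma) + norm.cdf(-k - shift_sigma)
    return 1.0 / p_signal

print(arl_individuals(0.0))   # in-control ARL, about 370
print(arl_individuals(1.0))   # about 44: a 1-sigma shift is detected slowly
print(arl_individuals(3.0))   # about 2: a large shift is caught quickly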

Criticisms

Several authors have criticised the control chart on the grounds that it violates the likelihood principle.[citation needed] However, the principle is itself controversial and supporters of control charts further argue that, in general, it is impossible to specify a likelihood function for a process not in statistical control, especially where knowledge about the cause system of the process is weak.[citation needed]


Some authors have criticised the use of average run lengths (ARLs) for comparing control chart performance, because that average usually follows a geometric distribution, which has high variability and presents difficulties of interpretation.[citation needed]

Types of charts

Chart | Process observation | Process observations relationships | Process observations type | Size of shift to detect
x̄ and R chart | Quality characteristic measurement within one subgroup | Independent | Variables | Large (≥ 1.5σ)
x̄ and s chart | Quality characteristic measurement within one subgroup | Independent | Variables | Large (≥ 1.5σ)
Shewhart individuals control chart (ImR chart or XmR chart) | Quality characteristic measurement for one observation | Independent | Variables† | Large (≥ 1.5σ)
Three-way chart | Quality characteristic measurement within one subgroup | Independent | Variables | Large (≥ 1.5σ)
p-chart | Fraction nonconforming within one subgroup | Independent | Attributes† | Large (≥ 1.5σ)
np-chart | Number nonconforming within one subgroup | Independent | Attributes† | Large (≥ 1.5σ)
c-chart | Number of nonconformances within one subgroup | Independent | Attributes† | Large (≥ 1.5σ)
u-chart | Nonconformances per unit within one subgroup | Independent | Attributes† | Large (≥ 1.5σ)
EWMA chart | Exponentially weighted moving average of quality characteristic measurement within one subgroup | Independent | Attributes or variables | Small (< 1.5σ)
CUSUM chart | Cumulative sum of quality characteristic measurement within one subgroup | Independent | Attributes or variables | Small (< 1.5σ)
Time series model | Quality characteristic measurement within one subgroup | Autocorrelated | Attributes or variables | N/A
Regression control chart | Quality characteristic measurement within one subgroup | Dependent of process control variables | Variables | Large (≥ 1.5σ)


†Some practitioners also recommend the use of Individuals charts for attribute data, particularly when the assumptions of either binomially-distributed data (p- and np-charts) or Poisson-distributed data (u- and c-charts) are violated.[6] Two primary justifications are given for this practice. First, normality is not necessary for statistical control, so the Individuals chart may be used with non-normal data.[7] Second, attribute charts derive the measure of dispersion directly from the mean proportion (by assuming a probability distribution), while Individuals charts derive the measure of dispersion from the data, independent of the mean, making Individuals charts more robust than attributes charts to violations of the assumptions about the distribution of the underlying population.[8] It is sometimes noted that the substitution of the Individuals chart works best for large counts, when the binomial and Poisson distributions approximate a normal distribution. i.e. when the number of trials n > 1000 for p- and np-charts or λ > 500 for u- and c-charts.

Critics of this approach argue that control charts should not be used when their underlying assumptions are violated, such as when process data are neither normally distributed nor binomially (or Poisson) distributed. Such processes are not in control and should be improved before the application of control charts. Additionally, application of the charts in the presence of such deviations increases the type I and type II error rates of the control charts, and may make the chart of little practical use.[citation needed]


Histogram


For the histograms used in digital image processing, see Image histogram and Color histogram.

One of the Seven Basic Tools of Quality
First described by: Karl Pearson
Purpose: To roughly assess the probability distribution of a given variable by depicting the frequencies of observations occurring in certain ranges of values

In statistics, a histogram is a technique to estimate the probability distribution of a variable by counting the frequencies of data in discrete bins and then plotting the number of members in each bin versus the bin number.[1] This is usually, but not necessarily, displayed as a bar chart in which each bar is erected over an interval, with an area equal to the frequency of the observations in the interval. The height of a rectangle is also equal to the frequency density of the interval, i.e., the frequency divided by the width of the interval. The total area of the histogram is equal to the number of data points. A histogram may also be normalized to display relative frequencies; it then shows the proportion of cases that fall into each of several categories, with the total area equaling 1. The categories are usually specified as consecutive, non-overlapping intervals of a variable. The categories (intervals) must be adjacent and are often chosen to be of the same size.[2]

Histograms are used to plot the density of data, and often for density estimation: estimating the probability density function of the underlying variable. The total area of a histogram used for probability density is always normalized to 1. If the lengths of the intervals on the x-axis are all 1, then a histogram is identical to a relative frequency plot.

An alternative to the histogram is kernel density estimation, which uses a kernel to smooth samples. This will construct a smooth probability density function, which will in general more accurately reflect the underlying variable.

The histogram is one of the seven basic tools of quality control.[3]

Contents

1 Etymology
2 Examples
3 Activities and demonstrations
4 Mathematical definition
  4.1 Cumulative histogram
  4.2 Number of bins and width
5 See also
6 References
7 Further reading
8 External links

Etymology


An example histogram of the heights of 31 Black Cherry trees.

The etymology of the word histogram is uncertain. Sometimes it is said to be derived from the Greek histos, 'anything set upright' (as the masts of a ship, the bar of a loom, or the vertical bars of a histogram), and gramma, 'drawing, record, writing'. It is also said that Karl Pearson, who introduced the term in 1895, derived the name from "historical diagram".[4]

Examples

As an example we consider data collected by the U.S. Census Bureau on time to travel to work (2000 census, [1], Table 2). The census found that there were 124 million people who work outside of their homes. An interesting feature of this graph is that the number recorded for "at least 15 but less than 20 minutes" is higher than for the bands on either side. This is likely to have arisen from people rounding their reported journey time.[original research?] This rounding is a common phenomenon when collecting data from people.

Histogram of travel time, US 2000 census. Area under the curve equals the total number of cases. This diagram uses Q/width from the table.

Data by absolute numbers

Interval | Width | Quantity | Quantity/width
0  | 5  | 4180  | 836
5  | 5  | 13687 | 2737
10 | 5  | 18618 | 3723
15 | 5  | 19634 | 3926
20 | 5  | 17981 | 3596
25 | 5  | 7190  | 1438
30 | 5  | 16369 | 3273
35 | 5  | 3212  | 642
40 | 5  | 4122  | 824
45 | 15 | 9200  | 613
60 | 30 | 6461  | 215
90 | 60 | 3435  | 57

This histogram shows the number of cases per unit interval as the height of each bar, so that the area of each bar is proportional to the number of people in the survey who fall into that category. The area under the curve represents the total number of cases (124 million). This type of histogram shows absolute numbers.

Histogram of travel time, US 2000 census. Area under the curve equals 1. This diagram uses Q/total/width from the table.

Data by proportion

Interval | Width | Quantity (Q) | Q/total/width
0  | 5  | 4180  | 0.0067
5  | 5  | 13687 | 0.0221
10 | 5  | 18618 | 0.0300
15 | 5  | 19634 | 0.0316
20 | 5  | 17981 | 0.0290
25 | 5  | 7190  | 0.0116
30 | 5  | 16369 | 0.0264
35 | 5  | 3212  | 0.0052
40 | 5  | 4122  | 0.0066
45 | 15 | 9200  | 0.0049
60 | 30 | 6461  | 0.0017
90 | 60 | 3435  | 0.0005

This histogram differs from the first only in the vertical scale. The height of each bar is the decimal percentage of the total that each category represents, and the total area of all the bars is equal to 1, the decimal equivalent of 100%. The curve displayed is a simple density estimate. This version shows proportions, and is also known as a unit area histogram.
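
The Q/total/width column can be reproduced directly from the absolute counts; the short Python sketch below assumes the quantities are in thousands of people, consistent with the stated 124 million total:

intervals = [(0, 5, 4180), (5, 5, 13687), (10, 5, 18618), (15, 5, 19634),
             (20, 5, 17981), (25, 5, 7190), (30, 5, 16369), (35, 5, 3212),
             (40, 5, 4122), (45, 15, 9200), (60, 30, 6461), (90, 60, 3435)]

total = sum(q for _, _, q in intervals)      # about 124,089 (thousands of people)
for start, width, q in intervals:
    density = q / total / width              # the Q/total/width column
    print(f"{start:>3} min: {density:.4f}")  # first row gives about 0.0067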


In other words, a histogram represents a frequency distribution by means of rectangles whose widths represent class intervals and whose areas are proportional to the corresponding frequencies. The intervals are placed together in order to show that the data represented by the histogram, while exclusive, is also continuous. (E.g., in a histogram it is possible to have two connecting intervals of 10.5-20.5 and 20.5-33.5, but not two connecting intervals of 10.5-20.5 and 22.5-32.5. Empty intervals are represented as empty and not skipped.)[5]

Activities and demonstrations

The SOCR resource pages contain a number of hands-on interactive activities demonstrating the concept of a histogram, histogram construction and manipulation using Java applets and charts.

Mathematical definition

An ordinary and a cumulative histogram of the same data. The data shown is a random sample of 10,000 points from a normal distribution with a mean of 0 and a standard deviation of 1.

In a more general mathematical sense, a histogram is a function m_i that counts the number of observations that fall into each of the disjoint categories (known as bins), whereas the graph of a histogram is merely one way to represent a histogram. Thus, if we let n be the total number of observations and k be the total number of bins, the histogram m_i meets the following condition:

n = \sum_{i=1}^{k} m_i

Cumulative histogram

A cumulative histogram is a mapping that counts the cumulative number of observations in all of the bins up to the specified bin. That is, the cumulative histogram M_i of a histogram m_j is defined as:

M_i = \sum_{j=1}^{i} m_j
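
In code, the histogram and its cumulative version are just ordinary and cumulative bin counts; a small sketch using numpy with invented data:

import numpy as np

data = np.random.default_rng(0).normal(0.0, 1.0, size=10_000)

m, edges = np.histogram(data, bins=20)   # m[i]: count of observations in bin i
assert m.sum() == data.size              # n = sum_i m_i
M = np.cumsum(m)                         # cumulative histogram M_i = sum_{j<=i} m_j

# Normalized (unit-area) version: heights are densities, total area = 1.
density, _ = np.histogram(data, bins=20, density=True)
widths = np.diff(edges)
assert np.isclose((density * widths).sum(), 1.0)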


Number of bins and width

There is no "best" number of bins, and different bin sizes can reveal different features of the data. Some theoreticians have attempted to determine an optimal number of bins, but these methods generally make strong assumptions about the shape of the distribution. You should always experiment with bin widths before choosing one (or more) that illustrate the salient features in your data. A good discussion of rules for choice of bin widths is in Modern Applied Statistics with S, § 5.6: Density Estimation.[6]

The number of bins k can be assigned directly or can be calculated from a suggested bin width h as:

k = \left\lceil \frac{\max x - \min x}{h} \right\rceil

The braces indicate the ceiling function.

Sturges' formula[7]

k = \lceil \log_2 n \rceil + 1

which implicitly bases the bin sizes on the range of the data, and can perform poorly if n < 30.

Scott's choice[8]

h = \frac{3.5\,\hat{\sigma}}{n^{1/3}}

where \hat{\sigma} is the sample standard deviation.

Square-root choice

k = \sqrt{n}

which takes the square root of the number of data points in the sample (used by Excel histograms and many others).

Freedman–Diaconis' choice[9]

h = \frac{2\,\operatorname{IQR}(x)}{n^{1/3}}

which is based on the interquartile range.


Choice based on minimization of an estimated L2 risk function[10]

\underset{h}{\operatorname{arg\,min}} \; \frac{2\bar{m} - v}{h^2}

where \bar{m} and v are the mean and biased variance of a histogram with bin width h: \bar{m} = \frac{1}{k}\sum_{i=1}^{k} m_i and v = \frac{1}{k}\sum_{i=1}^{k} (m_i - \bar{m})^2.
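
The simpler rules above are easy to compare on a given sample; a sketch with an invented, roughly normal data set:

import numpy as np

x = np.random.default_rng(2).normal(size=1_000)
n = x.size
data_range = x.max() - x.min()

k_sturges = int(np.ceil(np.log2(n))) + 1              # Sturges' formula
k_sqrt = int(np.ceil(np.sqrt(n)))                     # square-root choice
h_scott = 3.5 * x.std(ddof=1) / n ** (1 / 3)          # Scott's bin width
q75, q25 = np.percentile(x, [75, 25])
h_fd = 2 * (q75 - q25) / n ** (1 / 3)                 # Freedman-Diaconis bin width

k_scott = int(np.ceil(data_range / h_scott))          # convert widths to bin counts
k_fd = int(np.ceil(data_range / h_fd))
print(k_sturges, k_sqrt, k_scott, k_fd)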

Pareto chart

One of the Seven Basic Tools of Quality
First described by: Joseph M. Juran
Purpose: To assess the most frequently occurring defects by category†

A Pareto chart, named after Vilfredo Pareto, is a type of chart that contains both bars and a line graph, where individual values are represented in descending order by bars, and the cumulative total is represented by the line.


Simple example of a Pareto chart using hypothetical data showing the relative frequency of reasons for arriving late at work

The left vertical axis is the frequency of occurrence, but it can alternatively represent cost or another important unit of measure. The right vertical axis is the cumulative percentage of the total number of occurrences, total cost, or total of the particular unit of measure. Because the reasons are listed in decreasing order, the cumulative function is concave. To take the example above, reducing late arrivals by 80% requires solving only the first three issues.

The purpose of the Pareto chart is to highlight the most important among a (typically large) set of factors. In quality control, it often represents the most common sources of defects, the highest occurring type of defect, or the most frequent reasons for customer complaints, and so on.

These charts can be generated by simple spreadsheet programs such as OpenOffice.org Calc and Microsoft Excel, by specialized statistical software tools, and by online quality chart generators.
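
A spreadsheet is not required for the underlying arithmetic; for example, sorting categories and accumulating percentages takes only a few lines of Python (the defect counts below are invented):

defects = {"scratches": 52, "dents": 31, "misalignment": 12, "discoloration": 5}

# Sort categories in descending order of frequency (the bars).
ordered = sorted(defects.items(), key=lambda kv: kv[1], reverse=True)
total = sum(defects.values())

cumulative = 0
for category, count in ordered:
    cumulative += count
    # bar height and cumulative-percentage line value for each category
    print(f"{category:15s} {count:4d} {100 * cumulative / total:6.1f}%")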

The Pareto chart is one of the seven basic tools of quality control.[1]

Scatter plot


One of the Seven Basic Tools of Quality
First described by: Francis Galton
Purpose: To identify the type of relationship (if any) between two variables

Waiting time between eruptions and the duration of the eruption for the Old Faithful Geyser in Yellowstone National Park, Wyoming, USA. This chart suggests there are generally two "types" of eruptions: short-wait-short-duration, and long-wait-long-duration.


A 3D scatter plot allows for the visualization of multivariate data of up to four dimensions. The scatter plot takes multiple scalar variables and uses them for different axes in phase space. The different variables are combined to form coordinates in the phase space, and they are displayed using glyphs and colored using another scalar variable.[1]

A scatter plot or scattergraph is a type of mathematical diagram using Cartesian coordinates to display values for two variables for a set of data.

The data is displayed as a collection of points, each having the value of one variable determining the position on the horizontal axis and the value of the other variable determining the position on the vertical axis.[2] This kind of plot is also called a scatter chart, scatter diagram and scatter graph.

Contents

1 Overview
2 Example
3 See also
4 References
5 External links

Overview

A scatter plot is used when a variable exists that is under the control of the experimenter. If a parameter exists that is systematically incremented and/or decremented by the other, it is called the control parameter or independent variable and is customarily plotted along the horizontal axis. The measured or dependent variable is customarily plotted along the vertical axis. If no dependent variable exists, either type of variable can be plotted on either axis, and a scatter plot will illustrate only the degree of correlation (not causation) between two variables.

A scatter plot can suggest various kinds of correlations between variables with a certain confidence interval. Correlations may be positive (rising), negative (falling), or null (uncorrelated). If the pattern of dots slopes from lower left to upper right, it suggests a positive correlation between the variables being studied. If the pattern of dots slopes from upper left to lower right, it suggests a negative correlation. A line of best fit (alternatively called 'trendline') can be drawn in order to study the correlation between the variables. An equation for the correlation between the variables can be determined by established best-fit procedures. For a linear correlation, the best-fit procedure is known as linear regression and is guaranteed to generate a correct solution in a finite time. Unfortunately, no universal best-fit procedure is guaranteed to generate a correct solution for arbitrary relationships.
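
For the linear case, a least-squares trendline and the correlation coefficient can be computed directly; a minimal sketch using numpy with invented data:

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.2])

slope, intercept = np.polyfit(x, y, 1)   # ordinary least-squares, degree-1 fit
r = np.corrcoef(x, y)[0, 1]              # Pearson correlation coefficient
print(slope, intercept, r)               # positive slope, r close to +1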

One of the most powerful aspects of a scatter plot, however, is its ability to show nonlinear relationships between variables. Furthermore, if the data is represented by a mixture model of simple relationships, these relationships will be visually evident as superimposed patterns.

The scatter diagram is one of the basic tools of quality control.[3]

Example

For example, to study the relationship between lung capacity and how long a person can hold his or her breath, a researcher would choose a group of people to study, then measure each one's lung capacity (first variable) and how long that person could hold his or her breath (second variable).

A person with a lung capacity of 400 ml who held his breath for 21.7 seconds would be represented by a single dot on the scatter plot at the point (400, 21.7) in the Cartesian coordinates. The scatter plot of all the people in the study would enable the researcher to obtain a visual comparison of the two variables in the data set, and will help to determine what kind of relationship there might be between the two variables.


Stratified sampling

In statistics, stratified sampling is a method of sampling from a population.

When sub-populations vary considerably, it is advantageous to sample each subpopulation (stratum) independently. Stratification is the process of grouping members of the population into relatively homogeneous subgroups before sampling. The strata should be mutually exclusive: every element in the population must be assigned to only one stratum. The strata should also be collectively exhaustive: no population element can be excluded. Then random or systematic sampling is applied within each stratum. This often improves the representativeness of the sample by reducing sampling error. It can produce a weighted mean that has less variability than the arithmetic mean of a simple random sample of the population.

Contents

1 Stratified sampling strategies
2 Disadvantages
3 Practical example
4 References
5 See also

Stratified sampling strategies

1. Proportionate allocation uses a sampling fraction in each of the strata that is proportional to that of the total population. If the population consists of 60% in the male stratum and 40% in the female stratum, then the relative size of the two samples (three males, two females) should reflect this proportion.

2. Optimum allocation (or disproportionate allocation) - The sampling fraction of each stratum is proportionate to the standard deviation of the distribution of the variable within that stratum. Larger samples are taken in the strata with the greatest variability to generate the least possible sampling variance.
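
A sketch of optimum (Neyman) allocation under these definitions, with invented stratum sizes and standard deviations (the helper name is hypothetical):

def neyman_allocation(total_n, stratum_sizes, stratum_sds):
    """Optimum (Neyman) allocation: sample size in each stratum is
    proportional to the stratum size times its standard deviation."""
    weights = [n_h * s_h for n_h, s_h in zip(stratum_sizes, stratum_sds)]
    total_weight = sum(weights)
    return [round(total_n * w / total_weight) for w in weights]

# Two made-up strata: the second is smaller but far more variable,
# so it receives the larger share of the sample.
print(neyman_allocation(100, [600, 400], [2.0, 8.0]))   # [27, 73]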

A real-world example of using stratified sampling would be for a political survey. If the respondents needed to reflect the diversity of the population, the researcher would specifically seek to include participants of various minority groups such as race or religion, based on their proportionality to the total population as mentioned above. A stratified survey could thus claim to be more representative of the population than a survey of simple random sampling or systematic sampling.

Similarly, if population density varies greatly within a region, stratified sampling will ensure that estimates can be made with equal accuracy in different parts of the region, and that comparisons of sub-regions can be made with equal statistical power. For example, in Ontario a survey taken throughout the province might use a larger sampling fraction in the less populated north, since the disparity in population between north and south is so great that a sampling fraction based on the provincial sample as a whole might result in the collection of only a handful of data from the north.

Randomized stratification can also be used to improve population representativeness in a study.

Disadvantages

Stratified sampling is not useful when there are no homogeneous subgroups. It also cannot be used appropriately when the amounts of data in the subgroups are unequal but the subgroups are of equal importance, as it gives more importance to subgroups with more data.

Practical example

In general the size of the sample in each stratum is taken in proportion to the size of the stratum. This is called proportional allocation. Suppose that in a company there are the following staff:

male, full time: 90
male, part time: 18
female, full time: 9
female, part time: 63
Total: 180

and we are asked to take a sample of 40 staff, stratified according to the above categories.

The first step is to find the total number of staff (180) and calculate the percentage in each group.

% male, full time = (90 / 180) x 100 = 50
% male, part time = (18 / 180) x 100 = 10
% female, full time = (9 / 180) x 100 = 5
% female, part time = (63 / 180) x 100 = 35

This tells us that of our sample of 40:

50% should be male, full time.
10% should be male, part time.
5% should be female, full time.
35% should be female, part time.

50% of 40 is 20.
10% of 40 is 4.
5% of 40 is 2.
35% of 40 is 14.

Another easy way without having to calculate the percentage is to multiply the group number by the sample size and divide by the total amount:

male, full time = (90 x 40) / 180 = 20
male, part time = (18 x 40) / 180 = 4
female, full time = (9 x 40) / 180 = 2
female, part time = (63 x 40) / 180 = 14
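
The same arithmetic expressed as a short Python sketch (the strata and sample size are taken from the example above):

staff = {"male, full time": 90, "male, part time": 18,
         "female, full time": 9, "female, part time": 63}
sample_size = 40
total = sum(staff.values())                   # 180

for group, count in staff.items():
    allocated = count * sample_size // total  # (group size x sample size) / total
    print(f"{group:18s} {allocated}")         # 20, 4, 2, 14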
