Training Materials Samples
Transcript of Training Materials Samples
Important information about these Samples
This document contains random samples of Open Source Six Sigma's copyrighted intellectual property. They are intended to be used exclusively for your own personal evaluation of the training materials content. You are strictly prohibited from using these samples for any other reason. The sample modules provided include partial sections of modules from within the Open Source Six Sigma training materials. Since we offer two versions of the training materials content - one featuring Minitab and one featuring SigmaXL - modules from both versions are included in this sample. When evaluating these samples, notice that the on-slide content is accompanied by additional explanation per slide, where applicable, in the notes section.
Define Phase Six Sigma Fundamentals
Now we will continue in the Define Phase with the Six Sigma Fundamentals. The output of the Define Phase is a well-developed and articulated project. It has been correctly stated that 50% of the success of a project is dependent on how well the effort has been defined. There's that Y = f(X) thinking again.
Six Sigma Fundamentals
The roadmap for this phase covers: Understanding Six Sigma, Selecting Projects, Elements of Waste, Six Sigma Fundamentals, Voice of the Customer, Cost of Poor Quality, Process Maps, Process Metrics, and Wrap Up & Action Items. The fundamentals of this phase are Process Maps, Voice of the Customer, Cost of Poor Quality and Process Metrics.
Why have a process focus?
What is a Process? Why have a process focus? So we can understand how and why work gets done; to characterize customer and supplier relationships; to manage for maximum customer satisfaction while utilizing minimum resources; and to see the process from start to finish as it is currently being performed. Defects: blame the process, not the people.
process (proses) n. A repetitive and systematic series of steps or activities where inputs are modified to achieve a value-added output.
Many people do or conduct a process every day, but do they really think of it as a process? Our definition of a process is a repetitive and systematic series of steps or activities where inputs are modified and/or combined to achieve a value-added output. Usually a successful process needs to be well defined and developed.
Examples of Processes
Injection molding, decanting solutions, filling vials/bottles, crushing ore, refining oil, turning screws, building custom homes, paving roads, changing a tire, recruiting staff, processing invoices, conducting research, opening accounts, reconciling accounts, filling out a timesheet, distributing mail, backing up files, issuing purchase orders. We go through processes every day; these are examples of processes and series of processes. Can you think of other processes within your daily environment? Is your process on the list?
Process Maps
The purpose of a Process Map is to: identify the complexity of the process, and communicate the focus of problem solving. Process Maps are living documents and must be changed as the process is changed. They represent what is currently happening, not what you think is happening, and they should be created by the people who are closest to the process.
[Slide graphic: a simple Process Map with Start, Steps A through D, an Inspect decision and Finish.]
Process Mapping, also called
flowcharting, is a technique to visualize the tasks, activities and
steps necessary to produce a product or a service. The preferred
method for describing a process is to identify it with a generic
name, show the workflow with a Process Map and describe its purpose
with an operational description. Remember a process is a blending
of inputs to produce some desired output. The intent of each task,
activity and step is to add value, as perceived by the customer, to
the product or service we are producing. You cannot discover if
this is the case until you have adequately mapped the process.
There are many reasons for creating a Process Map:
- It helps all process members understand their part in the process and how their process fits into the bigger picture.
- It describes how activities are performed and how the work effort flows; it is a visual way of standing above the process and watching how work is done. In fact, Process Maps can be easily uploaded into modeling and simulation software, allowing you to simulate the process and visually see how it works.
- It can be used as an aid in training new people.
- It will show you where you can take measurements that will help you to run the process better.
- It will help you understand where problems occur and what some of the causes may be.
- It leverages other analytical tools by providing a source of data and inputs into these tools.
- It identifies many important characteristics you will need as you strive to make improvements.
The individual
processes are linked together to see the total effort and flow for
meeting business and customer needs. In order to improve or to
correctly manage a process, you must be able to describe it in a
way that can be easily understood. Process Mapping is the most
important and powerful tool you will use to improve the
effectiveness and efficiency of a process.
Process Map Symbols
Standard symbols for Process Mapping:
(available in Microsoft Office, Visio, iGrafx, SigmaFlow and other products)
- A RECTANGLE indicates an activity. Statements within the rectangle should begin with a verb.
- A DIAMOND signifies a decision point. Only two paths emerge from a decision point: No and Yes.
- An ELLIPSE shows the start and end of the process.
- A PARALLELOGRAM shows that there are data.
- An ARROW shows the connection and direction of flow.
- A CIRCLE WITH A LETTER OR NUMBER INSIDE symbolizes the continuation of a flowchart to another page.
There may be several interpretations of some of the Process Mapping symbols; however, just about everyone uses these primary symbols to document processes. As you become more practiced you will find additional symbols useful, e.g. reports, data storage, etc. For now we will start with just these symbols.
High Level Process Map
One of the
deliverables from the Define Phase is a high level Process Map which at a minimum must include: start and stop points, all process steps, all decision points, directional flow, and value categories as defined here:
- Value Added: physically transforms the thing going through the process; must be done right the first time; meaningful from the customer's perspective (is the customer willing to pay for it?).
- Value Enabling: satisfies requirements of non-paying external stakeholders (government regulations).
- Non-Value Added: everything else.
At a minimum a high level Process Map must include start and stop points, all process steps, all decision points and directional flow. Also be sure to include Value Categories such as Value Added (customer focus) and Value Enabling (external stakeholder focus).
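The value categories above can also be thought of as simple tags on each process step. Here is a hypothetical Python sketch (the step names and tags are invented for illustration and are not part of the training materials) that tallies steps by category:

```python
# Hypothetical sketch: tagging high level Process Map steps with the value
# categories defined above. Step names and tags are invented examples.
steps = [
    ("Machine part to spec",      "VA"),   # Value Added: transforms the product
    ("File regulatory paperwork", "VE"),   # Value Enabling: non-paying stakeholder
    ("Rework failed parts",       "NVA"),  # Non-Value Added: everything else
    ("Assemble components",       "VA"),
]

def count_by_category(steps):
    """Tally how many steps fall in each value category."""
    counts = {"VA": 0, "VE": 0, "NVA": 0}
    for _name, tag in steps:
        counts[tag] += 1
    return counts
```

A high count of Non-Value Added steps relative to Value Added steps is a signal that the process carries waste worth attacking.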
Process Map Example
[Slide graphic: a detailed Process Map of Call Center activities, from START and logon through call/walk-in routing, case tool lookups, hold and transfer decisions, case creation, research and case closure.]
Here is an example of a detailed Process Map. This is a
Process Map of Call Center activities. Cross Functional Process
Map
When multiple departments or functional groups are involved in a complex process it is often useful to use Cross Functional Process Maps. Draw in either vertical or horizontal Swim Lanes and label the functional groups, then draw the Process Map.
[Slide graphic: a Cross Functional Process Map for sending wire transfers, with Swim Lanes for Vendor, Department, General Accounting, Financial and Bank, covering ACH (Automated Clearing House) enrollment, transfer processing, journal entry and reconciliation.]
There are other types of Process Maps such as this Cross Functional Process Map. These are best used in transactional processes or where the process involves several departments. The lines drawn horizontally across the map represent different departments in the company and are usually referred to as Swim Lanes. By mapping in this manner one can see how the various departments are interdependent in this process. Create a high level Process Map, use enough detail to make it useful.
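A Cross Functional Process Map can also be captured as plain data, with each step tagged by the Swim Lane that owns it. The sketch below is hypothetical; the steps and lanes loosely echo the wire-transfer example and are illustrative only:

```python
# Hypothetical sketch: a Cross Functional Process Map as data, each step
# tagged with the Swim Lane (department) that owns it. Steps and lanes are
# invented, loosely following the wire-transfer example above.
process = [
    ("Request transfer",                       "Department"),
    ("Fill out ACH enrollment form",           "Vendor"),
    ("Input info into web interface",          "General Accounting"),
    ("Accept transactions and transfer money", "Bank"),
    ("Match against bank batch",               "Financial"),
]

def steps_in_lane(process, lane):
    """Return the steps owned by one Swim Lane, in process order."""
    return [step for step, owner in process if owner == lane]
```

Listing the steps lane by lane makes the hand-offs between departments, where delays and defects often occur, easy to see.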
Process Map Exercise
Exercise objective: Using your favorite Process Mapping tool, create a Process Map of your project or functional area. Create a high level Process Map; use enough detail to make it useful. It is helpful to use rectangular post-its for process steps and square ones turned to a diamond for decision points. Color code the value added (green) and non-value added (red) steps. Be prepared to discuss this with your mentor. Read the Exercise Objective and conduct the exercise.
Measure Phase Process Discovery
Now we will continue in the Measure Phase with Process Discovery.
Process Discovery
The roadmap for this phase covers: Welcome to Measure, Process Discovery, Detailed Process Mapping, Cause and Effect Diagrams, FMEA, Six Sigma Statistics, Measurement System Analysis, Process Capability, and Wrap Up & Action Items. The purpose of this module is highlighted above. We will review tools to help facilitate Process Discovery.
This will be a lengthy step as it requires a full characterization
of your selected process. There are four key deliverables from the
Measure Phase: (1) A robust description of the process and its
workflow (2) A quantitative assessment of how well the process is
actually working (3) An assessment of any measurement systems used
to gather data for making decisions or to describe the performance
of the process (4) A short list of the potential causes of our problem; these are the Xs that are most likely related to the problem. On the next lesson page we will help you develop a visual and mental model that will give you leverage in finding the causes of any problem.
Overview of Brainstorming Techniques
We utilize Brainstorming techniques to populate a Cause and Effect
Diagram seeking ALL possible causes for our issue of concern.
[Slide graphic: a Cause and Effect Diagram with the Problem or Condition (the Y) at the head and category legs (Material, Measurement, Environment, People, Machine, Method) holding the Xs, the causes.]
You will need to use brainstorming techniques to
identify all possible problems and their causes. Brainstorming
techniques work because the knowledge and ideas of two or more persons are always greater than those of any one individual.
Brainstorming will generate a large number of ideas or
possibilities in a relatively short time. Brainstorming tools are
meant for teams, but can be used at the individual level also.
Brainstorming will be a primary input for other improvement and
analytical tools that you will use. You will learn two excellent brainstorming techniques: Cause and Effect Diagrams and Affinity Diagrams. Cause and Effect Diagrams are also called Fishbone Diagrams because of their appearance, and are sometimes called Ishikawa Diagrams after their inventor. In a brainstorming session, ideas are expressed by the individuals in the session and written down without debate or challenge. The general steps of a brainstorming session are:
- Agree on the category or condition to be considered.
- Encourage each team member to contribute.
- Discourage debates or criticism; the intent is to generate ideas, not to qualify them - that will come later.
- Contribute in rotation (take turns) or free flow; ensure every member has an equal opportunity.
- Listen to and respect the ideas of others.
- Record all ideas generated about the subject.
- Continue until no more ideas are offered.
- Edit the list for clarity and duplicates.
Cause and Effect Diagram
[Slide graphic: two category templates for the legs of the diagram - Products (Measurement, People, Method, Materials, Equipment, Environment) and Transactional (Policy, Procedure, Place).]
Categories for the legs of the diagram can use templates for product or transactional symptoms, or you can select the categories by process step or whatever you deem appropriate for the situation.
A Cause and Effect Diagram is a composition of lines and
words representing a meaningful relationship between an effect, or
condition, and its causes. To focus the effort and facilitate
thought, the legs of the diagram are given categorical headings.
Two common templates for the headings are for product related and
transactional related efforts. Transactional is meant for processes
where there is no traditional or physical product; rather it is
more like an administrative process. Transactional processes are
characterized as processes dealing with forms, ideas, people,
decisions and services. You would most likely use the product
template for determining the cause of burnt pizza and use the
transactional template if you were trying to reduce order defects
from the order taking process. A third approach is to identify all
categories as you best perceive them. When constructing a Cause and
Effect Diagram, keep drilling down, always asking why, until you
find the Root Causes of the problem. Start with one category and
stay with it until you have exhausted all possible inputs and then
move to the next category. The next step is to rank each potential
cause by its likelihood of being the Root Cause. Rank it by the
most likely as a 1, second most likely as a 2 and so on. This may take some time; you may even have to create sub-sections like 2a, 2b, 2c, etc. Then come back to reorder the sub-sections into the larger ranking. This is your first attempt at really finding the
Y=f(X); remember the funnel? The top Xs have the potential to be
the critical Xs, those Xs which exert the most influence on the
output Y. Finally you will need to determine if each cause is a control or a noise factor. This, as you know, is a requirement for the characterization of the process. Next we will explain the meaning
and methods of using some of the common categories. Cause and
Effect Diagram
The Measurement category groups Root Causes related to the measurement and measuring of a process activity or output. Examples of questions to ask: Is there a metric issue? Is there a valid measurement system? Is the data good enough? Is data readily available?
The People category groups Root Causes related to people, staffing and organizational structure. Examples of questions to ask: Are people trained, do they have the right skills? Is there person-to-person variation? Are people over-worked or under-worked?
Please read the slide.
Cause and Effect Diagram
The Materials category groups Root Causes related to parts, supplies, forms or information needed to execute a process. Examples of questions to ask: Are bills of material current? Are parts or supplies obsolete? Are there defects in the materials?
The Method category groups Root Causes related to how the work is done, the way the process is actually conducted. Examples of questions to ask: How is this performed? Are procedures correct? What might be unusual?
Please read the slide.
Cause and Effect Diagram
The Equipment category groups Root Causes related to tools used in the process. Examples of questions to ask: Have machines been serviced recently; what is the uptime? Have tools been properly maintained? Is there variation?
The Environment (a.k.a. Mother Nature) category groups Root Causes related to our work environment, market conditions and regulatory issues. Examples of questions to ask: Is the workplace safe and comfortable? Are outside regulations impacting the business? Does the company culture aid the process?
Please read the slide.
WHICH Xs CAUSE DEFECTS?
Classifying the Xs
The Cause & Effect Diagram is a tool to generate opinions about possible causes for defects. For each of the Xs identified in the diagram, classify them as follows:
- Controllable: C (Knowledge)
- Procedural: P (People, Systems)
- Noise: N (External or Uncontrollable)
Think of procedural as a subset of controllable. Unfortunately many procedures within a company are not well controlled and can cause the defect level to increase. The classification methodology is used to separate the Xs so they can be used in the X-Y Matrix and the FMEA taught later in this module. The Cause and Effect Diagram is an organized way to approach brainstorming. This approach allows us to further organize ourselves by classifying the Xs into controllable, procedural or noise types.
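As a hypothetical illustration of the classification methodology, the Xs can be tagged C, P or N and then filtered when building the X-Y Matrix or FMEA. The X names below are invented for illustration:

```python
# Hypothetical sketch: classifying brainstormed Xs as Controllable (C),
# Procedural (P) or Noise (N) so they can be filtered for the X-Y Matrix
# and FMEA later. The X names and classes are invented examples.
xs = {
    "Raw material purity":        "C",
    "Operator adherence to SOP":  "P",
    "Ambient humidity":           "N",
    "Oven temperature setpoint":  "C",
}

def xs_of_class(xs, cls):
    """Return the Xs tagged with the given class, alphabetically."""
    return sorted(name for name, c in xs.items() if c == cls)
```

Filtering this way keeps the controllable Xs in front of the team while the noise Xs are handled through robustness rather than adjustment.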
Chemical Purity Example
[Slide graphic: a Cause and Effect Diagram for a Chemical Purity problem, with each branch input classified as Controllable (C), Procedural (P) or Noise (N) - e.g. Incoming QC (P), Measurement Method (P), Capability (C), Skill Level (P), Raw Materials (C), Multiple Vendors (C), Room Humidity (N), RM Supply in Market (N), Column Capability (C), Temp controller (C).]
This example of the Cause and Effect Diagram addresses a chemical purity issue. Notice how the input variables for each branch are classified as Controllable, Procedural or Noise.
Analyze Phase Inferential Statistics (SigmaXL Version)
Now we will continue in the Analyze Phase with Inferential
Statistics.
Inferential Statistics
The roadmap for this phase covers: Welcome to Analyze, X Sifting, Inferential Statistics, Intro to Hypothesis Testing, Hypothesis Testing ND P1, Hypothesis Testing ND P2, Hypothesis Testing NND P1, Hypothesis Testing NND P2, and Wrap Up & Action Items. The fundamentals of this phase are Inferential Statistics, Nature of Sampling and the Central Limit Theorem. Putting the pieces of the puzzle together.
Nature of Inference
inference (n.) The act or process of deriving logical conclusions from premises known or assumed to be true. The act of reasoning from factual knowledge or evidence. (1. Dictionary.com)
Inferential Statistics: To draw inferences about the process or population being studied by modeling patterns of data in a way that accounts for randomness and uncertainty in the observations. (2. Wikipedia.com)
Inference is the act or process of deriving logical conclusions from premises known or assumed to be true, the act of reasoning from factual knowledge or evidence. Inferential Statistics is used to draw inferences about the process or population being studied by modeling patterns of data in a way that accounts for randomness and uncertainty in the observations. One objective of Six Sigma is to move from only describing the nature of the data, or Descriptive Statistics, to the ability to infer meaning from data as to what will happen in the future, or Inferential Statistics.
5 Step Approach to Inferential Statistics
So many questions!
1. What do you want to know?
2. What tool will give you that information?
3. What kind of data does that tool require?
4. How will you collect the data?
5. How confident are you with your data summaries?
As with most things you have learned associated with Six Sigma, there are defined steps to be taken.
Types of Error
1. Error in sampling: Error due to differences among samples drawn at random from the population (luck of the draw). This is the only source of error that statistics can accommodate.
2. Bias in sampling: Error due to lack of independence among random samples or due to systematic sampling procedures (height of horse jockeys only).
3. Error in measurement: Error in the measurement of the samples (MSA/GR&R).
4. Lack of measurement validity: Error in which the measurement does not actually measure what it is intended to measure (placing a probe in the wrong slot; measuring temperature with a thermometer that is just next to a furnace).
These types of error contribute to uncertainty when trying to infer with data. The four types of error are explained above.
Population, Sample, Observation
Population: EVERY data point that has ever been or ever will be generated from a given characteristic.
Sample: A portion (or subset) of the population, either at one time or over time.
Observation: An individual measurement.
Let's just review a few definitions. A population is EVERY data point that has ever been or ever will be generated from a given characteristic. A sample is a portion (or subset) of the population, either at one time or over time. An observation is an individual measurement.
Significance is all about differences
Practical difference and significance is: The amount of difference,
change or improvement that will be of practical, economic or
technical value to you. The amount of improvement required to pay
for the cost of making the improvement. Statistical difference and
significance is: The magnitude of difference or change required to
distinguish between a true difference, change or improvement and
one that could have occurred by chance. Twins: sure there are differences, but do they matter? In general, larger differences (or deltas) are considered to be more significant. As you see here we can experience both a practical difference and a statistical difference. Six Sigma decisions will ultimately have a return on resource investment (RORI) element associated with them. So the key question of interest for our decisions is: is the benefit of making a change worth the cost and risk of making it?
Mean Shift and Variation Reduction
[Slide graphic: distributions between LSL and USL illustrating the mission - a Mean Shift, a Variation Reduction, and Shift & Reduce combined.]
Your mission, which you have chosen to accept, is to reduce cycle time, reduce the error rate, reduce costs, reduce investment, improve service level, improve throughput, reduce lead time, increase productivity - in short, change the output metric of some process. In statistical terms this translates to the need to move the process Mean and/or reduce the process Standard Deviation. You will be making decisions about how to adjust key process input variables based on sample data, not population data - this means you are taking some risks. How will you know your key process output variable really changed and is not just an unlikely sample? The Central Limit Theorem helps us understand the risk we are taking and is the basis for using sampling to estimate population parameters. Remember the law of conservation of mass: matter can never be created nor destroyed. Similarly, the area under the curve of each of these distributions always remains the same (equal to 1), so when the variation is reduced the peak height proportionately increases as the total area (or matter) remains constant.
A Distribution of Sample Means
Imagine you have some population. The individual values of this
population form some distribution. Take a sample of some of the
individual values to calculate the sample Mean. Keep taking samples
and calculating sample Means. Plot a new distribution of these
sample Means. The Central Limit Theorem says as the sample size
becomes large this new distribution (the sample Mean distribution)
will form a Normal Distribution no matter what the shape of the
population distribution of individuals. Please read the slide.
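The Central Limit Theorem described above can be demonstrated with a short simulation. This is an illustrative Python sketch (not part of the course materials) using only the standard library:

```python
# Illustrative simulation of the Central Limit Theorem: sample Means drawn
# from a strongly skewed population still cluster around the population
# Mean, and their spread shrinks as the sample size grows.
import random
import statistics

random.seed(1)  # arbitrary seed for a reproducible illustration
population = [random.expovariate(1.0) for _ in range(10000)]  # skewed shape

def sample_means(pop, n, reps=2000):
    """Means of `reps` random samples of size n from the population."""
    return [statistics.mean(random.sample(pop, n)) for _ in range(reps)]

means_n5 = sample_means(population, 5)
means_n50 = sample_means(population, 50)

spread_n5 = statistics.stdev(means_n5)
spread_n50 = statistics.stdev(means_n50)  # much tighter than spread_n5
```

Note the spread of the sample Means shrinks roughly as one over the square root of the sample size, which is exactly the leverage sampling gives us when estimating population parameters.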
Sampling Distributions - The Foundation of Statistics
[Slide graphic: a population of individual values with three samples drawn from it, each containing five observations, and a Mean computed for each sample.]
In this example we have taken three samples out of the population, each with
five observations in it. We computed a Mean for each sample. Note
the Means are not the same! Why not? What would happen if we kept
taking more samples? Every statistic derives from a sampling distribution. For instance, if you were to keep taking samples from the population over and over, a distribution could be formed for calculating Means, Medians, Modes, Standard Deviations, etc. As you will see, the above sample distributions each have a different statistic. The goal here is to successfully make inferences regarding the statistical data.
Constructing Sampling Distributions
Roll 'em! To demonstrate how sampling distributions work we will use some random data for die rolls. Create a sample of 1,000 individual rolls of a die that we will store in a variable named Population.
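The same die-roll exercise can be sketched in Python for readers without SigmaXL; the seed and the number of repeated draws below are arbitrary choices for illustration:

```python
# Illustrative sketch of the die-roll exercise: 1,000 individual rolls
# stored as the Population, then repeated random samples (the worksheet
# later uses sizes 5, 10 and 30) whose Means form sampling distributions.
import random
import statistics

random.seed(7)  # arbitrary seed for a reproducible illustration
population = [random.randint(1, 6) for _ in range(1000)]  # 1,000 die rolls

def sampling_spread(pop, size, draws=1000):
    """Standard Deviation of the Means of `draws` random samples."""
    means = [statistics.mean(random.sample(pop, size)) for _ in range(draws)]
    return statistics.stdev(means)

# Larger samples give a tighter sampling distribution of the Mean.
spread = {n: sampling_spread(population, n) for n in (5, 10, 30)}
```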
From the population we will draw five random samples.
Sampling Distributions
SigmaXL allows us to take a random sample of the population. The data is stored in a newly created sheet. Use Recall Last SigmaXL Dialog with copy and paste to create multiple columns. Select the Die Example worksheet. This sheet has been created using a sample size of 5, 10 and 30 from the Population column.
Improve Phase Advanced Process Modeling Multiple Linear Regression (Minitab Version)
Now we will continue with the Improve Phase Advanced Process Modeling MLR.
Advanced Modeling & Regression
The roadmap for this phase covers: Welcome to Improve, Process Modeling: Regression, Advanced Process Modeling: MLR, Review Corr./Regression, Non-Linear Regression, Transforming Process Data, Multiple Regression, Designing Experiments, Experimental Methods, Full Factorial Experiments, Fractional Factorial Experiments, and Wrap Up & Action Items. Within this module we are going to learn about Multiple Linear Regression.
Correlation
and Linear Regression Review
Correlation and Linear Regression are used:
- With historical process data. It is NOT a form of experimentation.
- To determine if two variables are related in a linear fashion.
- To understand the strength of the relationship.
- To understand what happens to the value of Y when the value of X is increased by one unit.
- To establish a Prediction Equation enabling us to predict Y for any
regression donot imply a causal relationship. Designed experiments
allow for true cause and effect relationships to be identified.
Correlations: StirRate, Impurity Pearson correlation of StirRate
and Impurity = 0.959 P-value = 0.000 Recall the Simple Linear
Regression and Correlation covered in the previous module.The
essential tools presented here define the relationship between two
variables.An independent or input factor and typically an output
response.Causation is NOT always proven; however the tools do
present a guaranteed relationship. Use the worksheet named RB STATS
CORRELATION.MTW Correlation Review Correlation is used to measure
the linear relationship between two Continuous Variables (bi-variate data). The Pearson Correlation Coefficient, r, will always fall between -1 and +1. A Correlation of -1 indicates a strong negative relationship: as one factor increases, the other decreases. A Correlation of +1 indicates a strong positive relationship: as one factor increases, so does the other.
Decision points: P-value > 0.05, Ho: no relationship; P-value < 0.05, Ha: a relationship exists.
The Pearson Correlation Coefficient, represented here as r, shows the strength of a relationship in Correlation: r near -1.0 or +1.0 means strong Correlation, r near zero means no Correlation. The P-value gives the statistical confidence in our conclusion that a relationship exists.
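For reference, the Pearson Correlation Coefficient can be computed from scratch. The StirRate and Impurity values below are invented to mimic the worksheet and are not taken from it:

```python
# A from-scratch Pearson Correlation Coefficient to make the decision
# points concrete: r near +1 or -1 means a strong linear relationship,
# r near 0 means none. The data values are illustrative only.
import math

def pearson_r(xs, ys):
    """Pearson Correlation Coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

stir_rate = [10, 20, 30, 40, 50]
impurity = [2.1, 4.3, 5.8, 8.2, 9.9]   # rises with stir rate

r = pearson_r(stir_rate, impurity)     # strongly positive, near +1
```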
Simultaneously, the Pearson Correlation Coefficient shows the strength of the relationship. For example, with the P-value standard set at .05, a smaller P-value means better than 95% confidence in a relationship between the two factors tested.
Linear Regression Review
Linear Regression is used to model the relationship between a
Continuous response variable (Y) and one or more Continuous
independent variables (X). The independent predictor variables are
most often Continuous but can be ordinal. Example of ordinal: Shift 1, 2, 3, etc.
Decision points: P-value > 0.05, Ho: Regression Equation is not significant; P-value < 0.05, Ha: Regression Equation is significant.
The slope of the line is the change in Impurity for every one unit change in StirRate. Presented here, StirRate is directly related to the Impurity of the process; a one unit change in StirRate causes an Impurity increase. With StirRate locked at 30, Impurity is calculated by multiplying 30 by the slope and subtracting, giving a 13.3 Impurity. Granted, we have error in our model; the red points do not lie exactly on the blue line. The dependent response variable is Impurity and StirRate is the independent predictor, as both variables in this example are Continuous.
Regression Analysis Review
Correlation tells us the strength of a linear relationship, not the numerical relationship. The last step to proper analysis of Continuous Data is to determine the Regression Equation. The Regression Equation can mathematically predict Y for any given X. The Regression Equation from MINITAB™ is the best fit for the plotted data.
Prediction Equations:
Y = a + bx (Linear or 1st order model)
Y = a + bx + cx^2 (Quadratic or 2nd order model)
Y = a + bx + cx^2 + dx^3 (Cubic or 3rd order model)
Y = a(b^x) (Exponential)
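The first-order (linear) Prediction Equation above can be fit by least squares without any statistics package. This is an illustrative sketch with invented data:

```python
# A least-squares fit of the first-order Prediction Equation Y = a + bX,
# using the closed-form slope and intercept formulas. The data points are
# invented for illustration (they lie close to Y = 1 + 2X).
def linear_fit(xs, ys):
    """Return (a, b) for the best-fit line Y = a + bX."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

x = [1, 2, 3, 4, 5]
y = [3.1, 5.0, 6.9, 9.1, 11.0]

a, b = linear_fit(x, y)

def predict(x_new):
    """Prediction Equation: Y = a + bX."""
    return a + b * x_new
```

The fitted slope b is the modeled change in Y for every one unit change in X, exactly the quantity the narration below emphasizes.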
The numerical relationship is left out when speaking of Correlation. Correlation shows the potency of a linear relationship; the mathematical relationship is shown through the Prediction Equation of Regression. As shown, these Correlations or Regressions are not proven causal relationships. We are attempting to PROVE statistical commonality. Exponential, quadratic and simple linear relationships, and even predictable outputs (Y), concern REGRESSION equations. More complex relationships are approaching.
Simple versus Multiple Regression Review
Simple Regression: one X, one Y. Analyze in MINITAB™ using Stat>Regression>Fitted Line Plot or Stat>Regression>Regression.
Multiple Regression: two or more Xs, one Y. Analyze in MINITAB™ using Stat>Regression>Best Subsets.
In Simple Regression there is one X, referenced as the regressor or predictor. When multiple Xs give reason to the output or response variable, this is Multiple Regression analysis. In both cases the R-sq value estimates the amount of variation explained by the model.
Regression Step Review
The basic steps to follow in Regression are as follows:
1. Create Scatter Plot (Graph>Scatterplot).
2. Determine Correlation (Stat>Basic Statistics>Correlation; P-value less than 0.05).
3. Run Fitted Line Plot choosing the linear option (Stat>Regression>Fitted Line Plot).
4. Run Regression (Stat>Regression>Regression) (Unusual Observations?).
5. Evaluate R2, adjusted R2 and P-values.
6. Run Non-linear Regression if necessary (Stat>Regression>Fitted Line Plot).
7. Analyze residuals to validate assumptions (Stat>Regression>Fitted Line Plot>Graphs): Normally distributed, equal variance, independence. Confirm one or two points do not overly influence the model.
One step at a time. How to run a Regression is defined here. Create a Scatter
Plot, and understanding the variation between the Xs and Ys,
activate a Correlation analysis allowing a potential linear
relationship indication.The third step is to find existing linear
mathematical relationships using a Prediction Equation then,
fourth, to find the potency or strength of the linear relationship
if one exists. Linear Regression accompanied by the variation of
the input gives a variety of output results. Then completion of the
fifth step yields the percentage a given output has. It also
includes the answer to strength of statistical confidence within
our linear regression. To conclude a Linear Regression exists a 95%
statistical confidence or above has to be obtained.If unsatisfied
conclusions are drawn, as a point of contingency, step six becomes
essential.In step six we contemplate the potential Non-linear
Regression. However this is only necessary if we cannot find a
Regression Equation (statistical and practical) variation of output
by way of scoping the input or by analyzing the model error for
correctness.Step seven, depicted in subsequent slides, validates
residuals for a proper model. Simple Regression Example
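The seven steps above can be sketched numerically in Python. This is a hedged illustration only; the course itself performs these steps in Minitab, and the data below are invented rather than taken from Concentrator.MTW.

```python
# A minimal sketch of the regression steps (invented data, not the
# Concentrator.MTW values the course uses).
import numpy as np

rpm  = np.array([10, 12, 14, 16, 18, 20, 22, 24], dtype=float)  # input X
conc = np.array([51, 55, 60, 62, 69, 70, 75, 80], dtype=float)  # output Y

# Step 2: correlation (Minitab also reports a p-value for this)
r = np.corrcoef(rpm, conc)[0, 1]

# Steps 3-5: prediction equation Y = b0 + b1*X and R-squared
b1, b0 = np.polyfit(rpm, conc, deg=1)
predicted = b0 + b1 * rpm
ss_res = np.sum((conc - predicted) ** 2)
ss_tot = np.sum((conc - conc.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"r = {r:.3f}; Y = {b0:.2f} + {b1:.2f}*X; R-sq = {r_squared:.1%}")

# Step 7: residuals for checking normality, equal variance, independence
residuals = conc - predicted
```

Minitab additionally reports the correlation P-value and flags unusual observations; here only the correlation coefficient, prediction equation, R-squared and residuals are computed.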
This data set is from the mining industry. It is an evaluation of
ore concentrators. Graph > Scatterplot Recalling tools learned
in the Analyze Phase, presented here is a Simple Regression example
examining a piece of equipment used by a mining company. Following
the Regression steps, the diagram plots output against input: the
PGM concentrate produced against the agitation of the equipment.
Opening the MINITABTM file named Concentrator.MTW will show the
output is always plotted on the Y axis (dependent) and the input on
the X axis (independent). Correlation
Example Identifying an existing linear relationship is the second
step. With the Pearson Correlation Coefficient at .847 and a
P-value less than .05, we see with very strong statistical
confidence that a linear relationship exists. If no Correlation
existed the coefficient would be closer to zero, remember?
Regression Line
Example
Stat > Regression > Fitted Line Plot Now we find the
Prediction Equation of the linear relationship, which involves two
factors: the output response and the input variable. Grams per ton
of PGM concentrate is the output and the RPM of the agitator is the
input. A positive slope, indicated by the greater-than-zero
Correlation Coefficient, shows the PGM concentrate increases as the
agitator's RPM increases. The slope of the Linear Regression equals
the coefficient shown in the fitted line equation. Did you recall
the Pearson Correlation Coefficient exceeded zero? Linear
Regression Example
Notice the unusual observation may indicate a Non-linear analysis
could explain more of the variation in the data. The P-value is
less than 0.05, therefore the Regression is significant. Shown here
is a Linear Regression explaining roughly 70% of the process
variation. Considering step five, MINITABTM flags data point 12 for
a large residual. R squared, adjusted R squared and the unusual
observations listing all pertain to our full Regression analysis.
Given these concerns, refer to the MINITABTM Session Window (if
necessary); a Non-linear Regression might be worth considering.
Regression Line Example
Stat>Regression>Fitted Line Plot Notice how the new line is a
more appropriate representation of our data, since the curvature
better fits the plotted points. This is the essence of choosing a
Non-linear Regression, in this case a quadratic Regression. This
model option is selected by simply clicking the word Quadratic in
the MINITABTM window. Linear and Non-Linear Regression Example
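The linear-versus-quadratic comparison on this slide can be sketched as follows. This is an illustration under assumptions: the data are invented, curved values, not the concentrator data, and the fits are done with numpy rather than Minitab.

```python
# Compare a linear and a quadratic fit on convex, curved data.
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([2.1, 3.9, 7.2, 11.8, 18.1, 26.0, 35.9, 47.8])  # curved

def r_squared(y, fitted):
    ss_res = np.sum((y - fitted) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

def s(y, fitted, n_params):
    # s is the estimated standard deviation of the model error
    return np.sqrt(np.sum((y - fitted) ** 2) / (len(y) - n_params))

lin  = np.polyval(np.polyfit(x, y, 1), x)   # linear model
quad = np.polyval(np.polyfit(x, y, 2), x)   # quadratic model

print(f"linear:    R-sq = {r_squared(y, lin):.3f}, s = {s(y, lin, 2):.3f}")
print(f"quadratic: R-sq = {r_squared(y, quad):.3f}, s = {s(y, quad, 3):.3f}")
```

As on the slide, the quadratic model shows a higher R-squared and a lower s than the linear model when the data are curved.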
Linear Model, Non-Linear Model. More variation is explained using
the Non-linear model since the R-squared is higher and the S
statistic, which is the estimated Standard Deviation of the error
in the model, is lower. We have here both Regression models. The
Non-linear model's higher R squared means it explains more of the
process variation than the Linear model does, and its S, the
estimated Standard Deviation of the errors, is a lower value. Let's
now consider the model error. You need not be perplexed; model
error has many sources. Dependency of the output on other input
variables, and measurement system errors in the output and inputs,
can be causes. The MINITABTM Session Window displays these very
Regression analyses, so feel free to use that functionality.
Residual Analysis
Example
The recommendation here would be to use standardized residuals and
the Four in one option for plotting. Graphs, in the upper left of
the window, needs to be clicked to yield the appropriate plots for
modeling and analyzing the residuals, concluding the seventh step.
Control Phase Statistical Process Control
We will now continue in the Control Phase with Statistical Process
Control, or SPC. Statistical Process Control
Six Sigma Control Plans Defect Controls Lean Controls Advanced
Capability Advanced Experiments Welcome to Control Statistical
Process Control (SPC) Wrap Up & Action Items Methodology
Elements and Purpose Special Cause Tests Examples Statistical
techniques can be used to monitor and manage process performance.
Process performance, as we have learned, is determined by the
behavior of the inputs acting upon it in the form of Y = f(X). As a
result it must be well understood we can monitor only the
performance of a process output. Many people have applied
Statistical Process Control (SPC) to only the process outputs.
Because they were using SPC their expectations were high regarding
a new potential level of performance and control over their
processes. However, because they only applied SPC to the outputs
they were soon disappointed. When you apply SPC techniques to
outputs it is appropriately called Statistical Process Monitoring
or SPM. You of course know you can only control an output by
controlling the inputs exerting an influence on the output. This is
not to say applying SPC techniques to an output is bad, there are
valid reasons for doing this. Six Sigma has helped us all to better
understand where to apply such control techniques. In addition to
controlling inputs and monitoring outputs control charts are used
to determine the baseline performance of a process, evaluate
measurement systems, compare multiple processes, compare processes
before and after a change, etc. Control Charts can be used in many
situations that relate to process characterization, analysis and
performance. To better understand the role of SPC techniques in Six
Sigma we will first investigate some of the factors that influence
processes then review how simple probability makes SPC work and
finally look at various approaches to monitoring and controlling a
process. SPC Overview: Collecting Data
Population: An entire group of objects that have been made or will
be made containing a characteristic of interest Sample: A sample is
a subset of the population of interest The group of objects
actually measured in a statistical study Samples are used to
estimate the true population parameters Population Sample Control
Charts are usually derived from samples taken from the population.
Sampling must be collected in such a way it does not bias or
distort the interpretation of the population. The process must be
allowed to operate normally when taking a sample. If there is any
special treatment or bias given to the process over the period the
data is collected the Control Chart interpretation could be
invalid. The frequency of sampling depends on the volume of
activity and the ability to detect trends and patterns in the data.
At the onset you should err on the side of taking extra samples;
then, if the process demonstrates its ability to stay in control,
you can reduce the sampling rate. Using rational subgroups is a
common way to assure you collect representative data. A rational
subgroup is a sample of a process characteristic in which all the
items in the sample were produced under very similar conditions in
a relatively short time period. Rational subgroups are usually small
in size, typically consisting of 3 to 5 units to make up the
sample. It is important that rational subgroups consist of units
produced as closely as possible to each other especially if you
want to detect patterns, shifts and drifts. If a machine is
drilling 30 holes a minute and you wanted to collect a sample of
hole sizes a good rational subgroup would consist of 4
consecutively drilled holes. The selection of rational subgroups
enables you to accurately distinguish Special Cause variation from
Common Cause variation. Make sure your samples are not biased in
any way, meaning they are randomly selected. For example, do not
plot only the first shift's data if you are running multiple
shifts. Do not look at only one vendor's material if you want to
know how the overall process is really running. Finally, do not
concentrate on a specific time to collect your samples, like just
before the lunch break. If your process consists of multiple
machines, operators or
other process activities producing streams of the same output
characteristic you want to control it would be best to use separate
Control Charts for each of the output streams. If the process is
stable and in control the sample observations will be randomly
distributed around the average. Observations will not show any
trends or shifts and will not have any significant Outliers from
the random distribution around the average. This type of behavior
is to be expected from a normally operating process and that is why
it is called Common Cause variation. Unless you are intentionally
trying to optimize the performance of a process to reduce variation
or change the average, as in a typical Six Sigma project, you
should not make any adjustments or alterations to the process if it
is demonstrating only Common Cause variation. That can be a big
time saver since it prevents wild goose chases. If Special Cause
variation occurs you must investigate what created it and find a
way to prevent it from happening again. Some form of action is
always required to make a correction and to prevent future
occurrences. You may have noticed there has been no mention of the
specification limits for the characteristic being controlled.
Specification limits are not evaluated when using a Control Chart.
A process in control does not necessarily mean it is capable of
meeting the requirements. It only states it is stable, consistent
and predictable. The ability to meet requirements is called Process
Capability, as previously discussed. SPC Overview: I-MR Chart
An I-MR Chart combines a Control Chart of the average moving range
with the Individuals Chart. You can use Individuals Charts to track
the process level and to detect the presence of Special Causes when
the sample size is one. Seeing these charts together allows
you to track both the process level and process variation at the
same time providing greater sensitivity to help detect the presence
of Special Causes. Using the Orders worksheet, column Avg. Orders
Per Month. Individual Values (I) and Moving Range (MR) Charts are
used when each measurement represents one batch. The subgroup size
is equal to one when I-MR charts are used. These charts are very
simple to prepare and use. The graphic shows the Individuals Chart
where the individual measurement values are plotted with the Center
Line being the average of the individual measurements. The Moving
Range Chart shows the range between two subsequent measurements.
There are certain situations when opportunities to collect data are
limited or when grouping the data into subgroups simply does not
make practical sense. Perhaps the most obvious of these cases is
when each individual measurement is already a rational subgroup.
This might happen when each measurement represents one batch, when
the measurements are widely spaced in time or when only one
measurement is available in evaluating the process. Such situations
include destructive testing, inventory turns, monthly revenue
figures and chemical tests of a characteristic in a large container
of material. All these situations indicate a subgroup size of one.
Because this chart is dealing with individual measurements it is
not as sensitive as the X-Bar Chart in detecting process changes.
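The I-MR limit calculations described above can be sketched in Python. The 2.66 and 3.267 factors are the standard Shewhart constants for a moving range of two; the data values are invented, not the Orders worksheet.

```python
# Sketch of I-MR control limits (2.66 = 3/d2 with d2 = 1.128 for n = 2;
# 3.267 is the D4 constant for n = 2). Data invented for illustration.
import numpy as np

x = np.array([52, 48, 55, 50, 47, 53, 51, 49, 54, 50], dtype=float)

mr = np.abs(np.diff(x))            # moving range between consecutive points
x_bar, mr_bar = x.mean(), mr.mean()

ucl_i = x_bar + 2.66 * mr_bar      # Individuals chart limits
lcl_i = x_bar - 2.66 * mr_bar
ucl_mr = 3.267 * mr_bar            # MR chart upper limit; its LCL is 0

out_of_control = x[(x > ucl_i) | (x < lcl_i)]   # points beyond the limits
print(f"I chart:  CL={x_bar:.1f}, UCL={ucl_i:.1f}, LCL={lcl_i:.1f}")
print(f"MR chart: CL={mr_bar:.1f}, UCL={ucl_mr:.1f}; signals: {out_of_control}")
```

With this stable invented data no point falls outside the limits, which is the Common Cause behavior described earlier.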
SPC Overview: Xbar-R Chart
If each of your observations consists of a subgroup of data rather
than just individual measurements, an Xbar-R Chart provides greater
sensitivity. Failure to form rational subgroups correctly will make
your Xbar-R Charts dangerously wrong. Use the Catapult X-Bar &
R worksheet An Xbar-R is used primarily to monitor the stability of
the average value. The Xbar Chart plots the average values of each
of a number of small sampled subgroups. The averages of the process
subgroups are collected in sequential, or chronological, order from
the process. The Xbar Chart, together with the Rbar Chart shown, is
a sensitive method to identify assignable causes of product and
process variation and gives great insight into short-term
variations. These charts are most effective when they are used as a
matched pair. Each chart individually shows only a portion of the
information concerning the process characteristic. The upper chart
shows how the process average (central tendency) changes. The lower
chart shows how the variation of the process has changed. It is
important to track both the process average and the variation
separately because different corrective or improvement actions are
usually required to effect a change in each of these two
parameters. The Rbar Chart must be in control in order to interpret
the averages chart because the Control Limits are calculated
considering both process variation and Center. When the Rbar Chart
is not in control, the Control Limits on the averages chart will
be inaccurate and may falsely indicate an out of control condition.
In this case, the lack of control will be due to unstable variation
rather than actual changes in the averages. Xbar and Rbar Charts
are often more sensitive than I-MR but are frequently done
incorrectly. The most common error is failure to perform rational
sub-grouping correctly. A rational subgroup is simply a group of
items made under conditions that are as nearly identical as
possible. Five consecutive items made on the same machine with the
same setup, the same raw materials and the same operator are a
rational subgroup. Five items made at the same time on different
machines are not a rational subgroup. Failure to form rational
subgroups correctly will make your Xbar-Rbar Charts dangerously
wrong. C Charts and U Charts are for tracking defects.
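Before moving to attribute charts, the Xbar-R limits just described can be sketched in Python. The A2, D3 and D4 values are the standard control chart constants for subgroups of four; the subgroup data are invented, not the Catapult worksheet.

```python
# Sketch of Xbar-R control limits for rational subgroups of size 4.
import numpy as np

A2, D3, D4 = 0.729, 0.0, 2.282          # standard constants for n = 4
subgroups = np.array([                   # six invented rational subgroups
    [10.2, 10.4, 10.1, 10.3],
    [10.0, 10.5, 10.2, 10.1],
    [10.3, 10.2, 10.4, 10.0],
    [10.1, 10.3, 10.2, 10.5],
    [10.4, 10.1, 10.0, 10.2],
    [10.2, 10.3, 10.1, 10.4],
])

xbars  = subgroups.mean(axis=1)
ranges = subgroups.max(axis=1) - subgroups.min(axis=1)
xbarbar, rbar = xbars.mean(), ranges.mean()

# Interpret the R chart first; if it is out of control, the Xbar
# limits below are not trustworthy.
ucl_r, lcl_r = D4 * rbar, D3 * rbar
ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar
print(f"R:    CL={rbar:.3f}, UCL={ucl_r:.3f}")
print(f"Xbar: CL={xbarbar:.3f}, limits [{lcl_x:.3f}, {ucl_x:.3f}]")
```

Each row is a rational subgroup: items made as nearly identically as possible, matching the five-consecutive-items example above.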
SPC Overview: U Chart C Charts and U Charts are for tracking
defects. A U Chart can do everything a C Chart can so we will just
learn how to do a U Chart. This chart counts flaws or errors
(defects). One search area can have more than one flaw or error.
The search area (unit) can be practically anything we wish to define.
We can look for typographical errors per page, the number of paint
blemishes on a truck door or the number of bricks a mason drops in
a workday. You supply the number of defects on each unit inspected.
Use the worksheet C and U Charts, column numexperr. The U Chart
plots defects per unit data collected from subgroups of equal or
unequal sizes. The U in U Charts stands for defects per Unit. U
Charts plot the rate at which defects occur. The U Chart and the C
Chart are very similar. They both look at defects, but the U Chart
does not need a constant sample size as the C Chart does. The
Control Limits on the U Chart vary with the
sample size and therefore they are not uniform; similar to the P
Chart which we will describe next. Counting defects on forms is a
common use for the U Chart. For example, defects on insurance claim
forms are a problem for hospitals. Every claim form has to be
checked and corrected before going to the insurance company. When
completing a claim form, a particular hospital must fill in 13
fields to indicate the patient's name, social security number, DRG
codes and other pertinent data. A blank or incorrect field is a
defect. A hospital measured their invoicing performance by
calculating the number of defects per unit for each day's
processing of claims forms. The graph demonstrates their
performance on a U
Chart. The general procedure for U Charts is as follows:
1.Determine purpose of the chart 2.Select data collection point
3.Establish basis for sub-grouping 4.Establish sampling interval
and determine sample size 5.Set up forms for recording and charting
data and write specific instructions on use of the chart 6.Collect
and record data. 7.Count the number of nonconformities for each of
the subgroups 8.Input into Excel or other statistical software.
9.Interpret chart together with other pertinent sources of
information on the process and take corrective action if necessary
NP Charts and P Charts are for tracking defectives.
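The U Chart limits described in the preceding section can be sketched in Python. The daily counts are invented, not the numexperr column; note how the limits vary with each day's sample size.

```python
# Sketch of U Chart limits with unequal sample sizes (invented data).
import numpy as np

defects = np.array([12, 15, 9, 20, 11])    # defects found each day
units   = np.array([40, 50, 30, 60, 45])   # claim forms checked each day

u    = defects / units                      # defects per unit, the plotted value
ubar = defects.sum() / units.sum()          # center line

# Limits change with each day's sample size, so they look uneven
ucl = ubar + 3 * np.sqrt(ubar / units)
lcl = np.maximum(ubar - 3 * np.sqrt(ubar / units), 0)

for day, (ui, lo, hi) in enumerate(zip(u, lcl, ucl), start=1):
    flag = "OUT" if (ui > hi or ui < lo) else "ok"
    print(f"day {day}: u={ui:.3f}  limits=[{lo:.3f}, {hi:.3f}]  {flag}")
```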
SPC Overview: P Chart NP Charts and P Charts are for tracking
defectives. A P Chart can do everything an NP Chart can so we will
just learn how to do a P Chart! Used for tracking defectives: the
item is either good or bad, pass or fail, accept or reject. The
Center Line is the proportion of rejects and is also your Process
Capability. Input to the P Chart is a series of integers: the
number bad, the number rejected. In addition you must supply the
sample size. Use the P Chart worksheet, column Late Reports,
subgroup size: 100.
The P Chart plots the proportion of nonconforming units collected
from subgroups of equal or unequal size (percent defective). The
proportion of defective units observed is obtained by dividing the
number of defective units observed in the sample by the number of
units sampled. The P Chart's name comes from plotting the
Proportion of defectives. When using samples of different sizes the
upper and lower control limits will not remain the same; they will
look uneven, as exhibited in the graphic. These varying Control Chart
limits are effectively managed by Control Charting software. A
common application of a P Chart is when the data is in the form of
a percentage and the sample size for the percentage has the chance
to be different from one sample to the next. An example would be
the number of patients arriving late each day for their dental
appointments. Another example is the number of forms processed
daily requiring rework due to defects. In both of these examples
the quantity would vary from day to day. The general procedure for
P Charts is as follows: 1.Determine purpose of the chart 2.Select
data collection point 3.Establish basis for sub-grouping
4.Establish sampling interval and determine sample size 5.Set up
forms for recording and charting data and write specific
instructions on use of the chart 6.Collect and record data. It is
recommended that at least 20 samples be used to calculate the
Control Limits 7.Compute P, the proportion nonconforming for each
of the subgroups 8.Load data into Excel or other statistical
software. 9.Interpret chart together with other pertinent sources
of information on the process and take corrective action if
necessary SPC Overview: Control Methods/Effectiveness
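The P Chart limits from the preceding section can be sketched in Python. The counts are invented, not the Late Reports column.

```python
# Sketch of P Chart limits with varying sample sizes (invented data).
import numpy as np

defectives = np.array([8, 12, 6, 10, 9])       # reports that were late
sampled    = np.array([100, 120, 80, 110, 95]) # reports checked each day

p    = defectives / sampled                     # proportion defective, plotted
pbar = defectives.sum() / sampled.sum()         # center line = capability

sigma = np.sqrt(pbar * (1 - pbar) / sampled)    # varies with sample size
ucl   = pbar + 3 * sigma
lcl   = np.maximum(pbar - 3 * sigma, 0)

print(f"center line (proportion defective) = {pbar:.3f}")
print(np.column_stack([p.round(3), lcl.round(3), ucl.round(3)]))
```

As with the U Chart, the varying sample size makes the limits uneven from point to point.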
Type 1 Corrective Action = Countermeasure: an improvement made to
the process which will eliminate the error condition from
occurring. The defect will never be created. This is also referred
to as a long-term corrective action in the form of Mistake Proofing
or design changes. Type 2 Corrective Action = Flag: an improvement
made to the process which will detect when the error condition has
occurred. This flag will shut down the equipment so the defect will
not move forward. SPC on Xs or Ys with fully trained operators and
staff who respect the rules. Once a chart signals a problem,
everyone understands the rules of SPC and agrees to shut down for
Special Cause identification (Cpk > certain level). Type 3
Corrective Action = Inspection: implementation of a short-term
containment which is likely to detect the defect caused by the
error condition. Containments are typically audits or 100%
inspection. SPC on Xs or Ys with fully trained operators. The
operators have been trained and understand the rules of SPC, but
management will not empower them to stop for investigation. An
S.O.P. is implemented to attempt to detect the defects. This action
is not sustainable short-term or long-term. SPC on Xs or Ys without
proper usage = WALL PAPER. (These options range from worst to
best.) The most effective form of control is called
a type 1 corrective action. This is a control applied to the
process which will eliminate the error condition from occurring.The
defect can never happen. This is the prevention application of the
Poka-Yoke method. The second most effective control is called a
type 2 corrective action. This is a control applied to the process
which will detect when an error condition has occurred and will
stop the process or shut down the equipment so that the defect will
not move forward. This is the detection application of the
Poka-Yoke method. The third most effective form of control is to
use SPC on the Xs with appropriate monitoring on the Ys. To be
effective employees must be fully trained, they must respect the
rules and management must empower the employees to take action.
Once a chart signals a problem, everyone understands the rules of
SPC and agrees to take emergency action for Special Cause
identification and elimination. The fourth most effective
corrective action is the implementation of a short-term containment
which is likely to detect the defect caused by the error condition.
Containments are typically audits or 100% inspection. Finally you
can prepare and implement an S.O.P. (standard operating procedure)
to attempt to manage the process activities and to detect process
defects. This action is not sustainable, either short-term or
long-term. Do not
do SPC for the sake of just saying that you do SPC. It will quickly
deteriorate to a waste of time and a very valuable process tool
will be rejected from future use by anyone who was associated with
the improper use of SPC. Using the correct level of control for an
improvement to a process will increase the acceptance of
changes/solutions you may wish to make and it will sustain your
improvement for the long-term. Purpose of Statistical Process
Control
Not this special cause!! Every process has Causes of Variation
known as: Common Cause: natural variability. Special Cause:
unnatural variability. Assignable: a reason for detected
variability. Pattern Change: the presence of a trend or unusual
pattern. SPC is a basic tool to
monitor variation in a process. SPC is used to detect Special Cause
variation telling us the process is out of control but does NOT
tell us why. SPC gives a glimpse of ongoing process capability AND
is a visual management tool. SPC has its uses because every process
has variation, known as Special Cause and Common Cause variation.
Special Cause variation is unnatural variability due to assignable
causes or pattern changes. SPC is a powerful tool to monitor the
variation of a process, and it is often an aspect of Visual
Factories. If a supervisor, operator or staff member can quickly
see how their process is operating by looking at its key inputs or
outputs, this would exemplify a Visual Factory. SPC is used to
detect Special Causes in order to have those operating the process
find and remove the Special Cause. When a Special Cause has been
detected, the process is considered to be out of control. SPC gives
an ongoing look at the process capability. It is not a capability
measurement, but it is a visual indication of the continued Process
Capability of your process. Elements of Control Charts
Process Center (usually the Mean) Special Cause Variation Detected
Control Limits Graphical and visual plot of changes in the data
over time. This is necessary for visual management of your process.
Control Charts were designed as a methodology for indicating change
in performance, either in variation or in the Mean/Median. Charts
have a Central Line and Control Limits to detect Special Cause
variation.
Use the Orders worksheet, column Avg. Orders Per Month 2. Control
Charts were first developed by Dr. Shewhart in the early 20th
century in the U.S. Control Charts are a graphical and visual plot
of a process over time, like a Time Series Chart. From a visual
management aspect, a Time Plot is more powerful than knowledge of
the latest measurement alone. These charts are meant to indicate
change in a process. All SPC charts have a Central Line and Control
Limits to aid in detecting Special Cause variation. Notice again we
never discussed showing or considering specifications. We advise
you never to place specification limits on a Control Chart because
of the confusion they often generate. Remember, we want to control
and maintain the process improvements made during the project.
These Control Charts and their limits are the Voice of the
Process. These charts give us a running view of the output of our
process relative to established limits. Understanding the Power of
SPC
Control Charts indicate when a process is out of control or
exhibiting Special Cause variation but NOT why! SPC Charts
incorporate upper and lower Control Limits. The limits are
typically +/- 3 standard deviations from the Center Line. These
limits represent 99.73% of natural variability for Normal
Distributions. SPC Charts allow workers and supervision to maintain
improved process performance from Lean Six Sigma projects. Use of
SPC Charts can be applied to all processes: services, manufacturing
and retail are just a few industries with SPC applications. Caution
must be taken when using SPC for Non-normal processes. Control
Limits describe the process variability and are unrelated to
customer specifications (Voice of the Process instead of Voice of
the Customer). An undesirable situation is having Control Limits
wider than customer specification limits; this will exist for
poorly performing processes with a Cp less than 1.0. Many SPC
Charts exist and selection must be appropriate for effectiveness.
Please read the
slide. The Control Chart Cookbook
General Steps for Constructing Control Charts ~ 1.Select
characteristic (Critical X or CTQ) to be charted. 2.Determine the
purpose of the chart. 3.Select data-collection points. 4.Establish
the basis for sub-grouping (only for Ys). 5.Select the type of
Control Chart. 6.Determine the measurement method/criteria.
7.Establish the sampling interval/frequency. 8.Determine the sample
size. 9.Establish the basis of calculating the Control Limits.
10.Set up the forms or software for charting data. 11.Set up the
forms or software for collecting data. 12.Prepare written
instructions for all phases. 13.Conduct the necessary training.
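Step 5 of the cookbook, selecting the type of Control Chart, can be sketched as a toy decision helper following the rules in this module. This is a simplification under assumptions: the function is hypothetical, not part of any SPC library, and it ignores advanced charts such as EWMA or CUSUM.

```python
# Toy helper for cookbook step 5: select a chart type using this
# module's rules. Hypothetical function for illustration only.
def select_chart(data_type, subgroup_size=1, counting=None):
    """data_type: 'continuous' or 'attribute';
    counting: 'defects' or 'defectives' for attribute data."""
    if data_type == "continuous":
        # Subgroup size of one -> Individuals/Moving Range chart
        return "I-MR" if subgroup_size == 1 else "Xbar-R"
    # A U Chart can do everything a C Chart can, and a P Chart
    # everything an NP Chart can, per the sections above.
    return "U" if counting == "defects" else "P"

print(select_chart("continuous"))                     # I-MR
print(select_chart("continuous", subgroup_size=4))    # Xbar-R
print(select_chart("attribute", counting="defects"))  # U
```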
Stirred or Shaken? Please read the slide. Training Materials Sample
End