MSA for Attribute or Categorical Data

MSA Example: Attribute or Categorical Data

A Measurement System Analysis (MSA) study for attribute or categorical data.

Transcript of MSA for Attribute or Categorical Data

  • MSA Example: Attribute or Categorical Data

    All Rights Reserved, Juran Institute, Inc.

    MSA Operational Definitions

    Accuracy: Overall agreement of the measured value with the true value (which may be an expert value). Bias plus precision.

    Attribute Data: Discrete qualitative data.

    Attribute Measurement System: Compares parts to a specific set of criteria and accepts the item if the criteria are satisfied.

    Bias: A systematic difference from the true value. Revealed in differences between the averages and the true value.

    Precision: Variation in the measurement process.

    R&R: Repeatability and Reproducibility, the two elements of precision.

    Repeatability: The variation observed when the same operator measures the same item repeatedly with the same device.

    Reproducibility: The variation observed when different operators measure the same parts using the same device; it can also be the same operator using different devices.

    The Fundamental MSA Question

    Bias

    Repeatability

    Reproducibility

    Attribute Measurement Systems Study

    • Discrete qualitative data
    • Go/no-go basis, or limited data categories
    • Compares parts to specific criteria for accept/not-accept, or to place them in a category
    • Must screen for effectiveness to discern good parts from bad
    • At least two appraisers and two trials each
    • If available, have a Quality Master rate the parts first
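    As a rough illustration of that design, here is a minimal Python sketch that generates a randomized presentation order for the minimum study layout of two appraisers and two trials each. All names and counts below are hypothetical, not from the case.

        import random

        # Illustrative study design: 2 appraisers, 2 trials, 20 parts
        # (the minimums suggested above; all names are hypothetical).
        appraisers = ["Appraiser A", "Appraiser B"]
        parts = list(range(1, 21))        # part IDs 1..20
        trials = [1, 2]

        random.seed(42)                   # fixed seed so the plan is reproducible

        run_order = []
        for appraiser in appraisers:
            for trial in trials:
                presentation = parts.copy()
                random.shuffle(presentation)   # new random order per appraiser/trial
                for part in presentation:
                    run_order.append((appraiser, trial, part))

        # Show the first few runs of the presentation plan
        for row in run_order[:5]:
            print(row)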

    Attribute MSA Study

    Challenges of Continuous Process MSA

    • An MSA study is an experiment
    • It requires two or more trials for calculating Repeatability
    • It needs a way to present the inspection units to the appraiser multiple times
    • This is not possible within the continuous process

    Case Example: Visual Inspection of Glass

    Case Example: Challenges to Overcome

    • Bias to the standard could be evaluated on-line.
    • Repeatability and Reproducibility (R&R) could not be evaluated on-line.
    • A method had to be devised to allow the inspectors to view the same pieces of glass repeatedly.
    • The solution was an off-line conveyor that simulated the on-line condition as closely as possible.

    Case Example: Attribute MSA Method Employed

    • 20 pieces of glass from the process, including both good and bad samples, were selected.
    • A team of people well versed in the quality standard classified each piece of glass as either pass or fail.
    • All regular inspectors independently evaluated each piece twice (in random order).
    • The inspectors used a log sheet to record the data. Minitab was used to analyze the data.
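    Log-sheet data like this is typically arranged in "long" form for analysis, one row per rating, alongside the expert (standard) answer per piece. A minimal sketch of that layout in Python; all values are hypothetical, not the case data.

        # One row per rating: (piece, appraiser, trial, rating), plus the expert
        # "standard" answer per piece. All values here are hypothetical.
        standard = {1: "pass", 2: "fail", 3: "pass"}

        log_rows = [
            (1, "Inspector 1", 1, "pass"),
            (1, "Inspector 1", 2, "pass"),
            (1, "Inspector 2", 1, "fail"),
            (1, "Inspector 2", 2, "pass"),
            (2, "Inspector 1", 1, "fail"),
            (2, "Inspector 1", 2, "fail"),
        ]

        # Fraction of individual ratings that match the expert answer.
        correct = sum(rating == standard[piece] for piece, _, _, rating in log_rows)
        print(f"ratings matching the standard: {correct}/{len(log_rows)}")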

    Case Example: Attribute MSA Study Data

    Case Example: Attribute MSA Study Results

    Case Example: Attribute MSA Study Results (continued)

    Case Example: Attribute MSA Study Results (continued)

    Case Example: Attribute MSA Study Results (continued)

    Case Example: Attribute MSA Study Conclusions

    • What could have caused the poor agreement?
    • What was done to improve consistency?

    The MSA Operational Definitions list above provides a quick reference for key terms used in Measurement System Analysis.

    Like all processes, the measurement process has CTQs. The slide lists some of the most common CTQs used for the measurement process. MSA quantifies the amount of variation for:

    • Accuracy
    • Repeatability
    • Reproducibility
    • Stability (typically covered in the Black Belt workshop)
    • Linearity (covered in Black Belt workshops)

    Bias is the difference between the observed average of measurements and the true average. Validating accuracy is the process of quantifying the amount of bias in the measurement process. Experience has shown that bias and linearity are typically not major sources of measurement error for continuous data, but they can be. In service and transaction applications, evaluating bias most often involves testing the judgment of the people carrying out the measurements.

    Example: A team wants to establish the accuracy of its process for measuring defects in invoices. First, they gather a standard group of invoices and have an expert panel establish the type and number of defects in the group. Next, they have the standard group of invoices measured by the normal measurement process. The difference between the average the measurement process produced and the known defect level from the expert panel represented the bias of the measurement process.
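    A minimal Python sketch of that bias calculation; the defect counts below are made up for illustration, only the arithmetic follows the example.

        # Hypothetical data: defects found per invoice by the expert panel (truth)
        # and by the normal measurement process, for the same standard group.
        expert_counts  = [3, 0, 2, 1, 4, 0, 2, 3, 1, 2]
        process_counts = [2, 0, 2, 1, 3, 1, 2, 2, 1, 2]

        true_avg     = sum(expert_counts) / len(expert_counts)
        observed_avg = sum(process_counts) / len(process_counts)

        bias = observed_avg - true_avg   # systematic difference from the true value
        print(f"true average: {true_avg:.2f}")
        print(f"observed average: {observed_avg:.2f}")
        print(f"bias: {bias:+.2f}")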

    Repeatability is the variation in measurements obtained when one operator uses the same measurement process for measuring the identical characteristics of the same parts or items.

    Repeatability is determined by taking one person, or one measurement device, and measuring the same units or items repeatedly. Differences between the repeated measurements represent the ability of the person or measurement device to be consistent.

    Possible causes of the lack of repeatability are listed on the slide.
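    For attribute data, repeatability shows up as within-appraiser agreement: the fraction of items an appraiser rates the same way in every trial. A minimal Python sketch, with made-up pass/fail ratings:

        # Hypothetical ratings by one appraiser over two trials of the same 10 items.
        trial_1 = ["pass", "fail", "pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail"]
        trial_2 = ["pass", "fail", "fail", "pass", "fail", "pass", "fail", "pass", "fail", "fail"]

        matches = sum(a == b for a, b in zip(trial_1, trial_2))
        within_agreement = matches / len(trial_1)
        print(f"within-appraiser agreement: {within_agreement:.0%}")  # 80% here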

    Reproducibility is very similar to repeatability. The only difference is that instead of looking at the consistency of one person, you are looking at the consistency between people.

    Reproducibility is the variation in the average of measurements made by different operators using the same measurement process when measuring identical characteristics of the same parts or items.

    Possible causes of poor reproducibility include: the measurement process is not clear, operators are not properly trained in using the measurement system, and operational definitions are not clear or well established.
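    In the attribute case, reproducibility shows up as between-appraiser agreement: the fraction of items on which all appraisers give the same rating. A minimal Python sketch with hypothetical data:

        # Hypothetical ratings by three appraisers for the same 8 items (one trial).
        ratings = {
            "Appraiser A": ["pass", "fail", "pass", "pass", "fail", "pass", "fail", "pass"],
            "Appraiser B": ["pass", "fail", "fail", "pass", "fail", "pass", "fail", "pass"],
            "Appraiser C": ["pass", "fail", "pass", "pass", "pass", "pass", "fail", "pass"],
        }

        n_items = len(next(iter(ratings.values())))
        all_agree = sum(
            len({person[i] for person in ratings.values()}) == 1
            for i in range(n_items)
        )
        print(f"between-appraiser agreement: {all_agree / n_items:.0%}")  # 75% here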

    Listed here are the key highlights of conducting an MSA for attribute or categorical data. The "parts" can be, for example, invoices, physical parts, or reason codes for customer returns.

    This shows the results of two rounds using two appraisers, assessing the same 20 items.

    When conducting an MSA for a continuously running process, parts should be taken off-line to conduct the MSA study.

    The example given here is that of visual inspection of glass.

    There were two outcomes in this inspection or measurement process: pass or fail. Twenty pieces, a team of inspectors, and two rounds (or trials) were used in the MSA.

    This slide shows the data in Minitab. The Standard column documents the correct or expert answer for each piece of glass.

    The graph on the left shows the agreement (or repeatability) of each appraiser between Trial 1 and Trial 2. The graph on the right shows the agreement of each appraiser with the Standard. On both graphs, the blue dots show the percent agreement and the red lines mark the 95% confidence intervals.
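    Minitab's attribute agreement analysis typically reports exact (Clopper-Pearson) confidence intervals for these agreement percentages. A minimal sketch of that interval computed directly in Python, assuming SciPy is available; the counts are illustrative.

        from scipy.stats import beta

        def clopper_pearson(successes: int, n: int, conf: float = 0.95):
            """Exact (Clopper-Pearson) binomial confidence interval for a proportion."""
            alpha = 1 - conf
            lower = beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
            upper = beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
            return lower, upper

        # e.g., an appraiser whose two trials agreed on 10 of 20 pieces
        lo, hi = clopper_pearson(10, 20)
        print(f"agreement 50%, 95% CI: ({lo:.1%}, {hi:.1%})")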

    This slide shows the Within Appraiser agreement. For example, Larry scored 100%: his Trial 1 and Trial 2 are in full agreement. On the other hand, Allen scored only 50%: his Trial 1 and Trial 2 measurements agree only half the time. In other words, he disagrees with himself half the time!

    This slide shows the agreement of each appraiser (across both trials) with the Standard. For example, Larry has 89% agreement with the Standard, but Allen has only 39% agreement with the Standard.

    This shows the level of agreement across all appraisers. In this case, only 5.56% agreement!
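    The across-all-appraisers figure is typically the strictest measure: an item counts only when every appraiser matches on every trial. A minimal Python sketch of that calculation, with hypothetical ratings:

        # Hypothetical ratings: appraiser -> list of (trial 1, trial 2) ratings per item.
        ratings = {
            "Appraiser A": [("pass", "pass"), ("fail", "fail"), ("pass", "fail")],
            "Appraiser B": [("pass", "pass"), ("fail", "pass"), ("pass", "pass")],
        }

        n_items = 3
        agree_all = 0
        for i in range(n_items):
            # Pool every rating of item i across all appraisers and trials.
            pooled = {r for person in ratings.values() for r in person[i]}
            if len(pooled) == 1:          # everyone agreed, in every trial
                agree_all += 1

        print(f"agreement across all appraisers: {agree_all / n_items:.1%}")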

    Given the results of the MSA study, what could have caused the poor agreement? And what should be done to improve the measurement system?

    The measurement system must be improved and tested again (with another MSA study) to reach at least 90% agreement before the data can be used for baselining process performance or further analysis.