sqm883837392.files.wordpress.com …  · Web viewfor Quality – Ishikawa’s basic tools – CASE...


UNIT III QUALITY CONTROL AND RELIABILITY

Quality Control and Reliability: Tools for Quality – Ishikawa's basic tools – CASE tools – Defect prevention and removal – Reliability models – Rayleigh model – Reliability growth models for quality assessment

Reliability and Quality Control

Although the terms reliability and quality are often used interchangeably, there is a difference between these two disciplines. While reliability is concerned with the performance of a product over its entire lifetime, quality control is concerned with the performance of a product at one point in time, usually during the manufacturing process. As stated in the definition, reliability assures that components, equipment and systems function without failure for desired periods during their whole design life, from conception (birth) to junking (death). Quality control is a single, albeit vital, link in the total reliability process. Quality control assures conformance to specifications. This reduces manufacturing variance, which can degrade reliability. Quality control also checks that incoming parts and components meet specifications, that products are inspected and tested correctly, and that shipped products have a quality level equal to or greater than that specified. The specified quality level should be one that is acceptable to the users, the consumers and the public. No product can perform reliably without the inputs of quality control, because quality parts and components must go into the product for its reliability to be assured.

The seven basic statistical tools for quality control promoted by Ishikawa are widely used in manufacturing production. They have become an integral part of the quality control literature and are known as Ishikawa's seven basic tools. The applications of Ishikawa's seven tools represent a set of basic operations. Ishikawa's seven basic tools for quality control are:

Checklist (or check sheet), Pareto diagram, histogram, scatter diagram, run chart, control chart, and cause-and-effect diagram.

The following figure shows a simple representation of the tools.


1. Check sheet: A structured, prepared form for collecting and analyzing data; a generic tool that can be adapted for a wide variety of purposes.

2. Pareto chart: Shows on a bar graph which factors are more significant.

3. Histogram: The most commonly used graph for showing frequency distributions, or how often each different value in a set of data occurs.

4. Run chart: A graph used to study how a process changes over time.

5. Scatter diagram: Graphs pairs of numerical data, one variable on each axis, to look for a relationship.

6. Control chart: A graph used to study how a process changes over time, with upper and lower limits and warning levels.

7. Cause-and-effect diagram (also called Ishikawa or fishbone chart): Identifies many possible causes for an effect or problem and sorts ideas into useful categories.

1. Check sheet

A check sheet is a paper form with printed items to be checked. Its main purposes are to facilitate gathering data and to arrange data while collecting it so the data can be easily used later. Another type of check sheet is the check-up confirmation sheet, which is concerned mainly with the quality characteristics of a process or a product. To distinguish this confirmation check sheet from the ordinary data-gathering check sheet, we use the term checklist. In most software development environments, the data-gathering aspect is automated electronically and goes far beyond the data-gathering check sheet approach used in manufacturing production.

The checklist plays a significant role in software development. Checklists that summarize the key points of the process are much more effective than lengthy process documents. The software development process consists of multiple phases, for example, requirements (RQ), system architecture (SD), high-level design (HLD), low-level design (LLD), code development (CODE), unit test (UT), integration and build (I/B), component test (CT), system test (ST), and early customer programs (EP). Each phase has a set of tasks to complete, and phases with a formal hand-off have entry and exit criteria. Checklists help developers and programmers ensure that all tasks are complete and that the important factors or quality characteristics of each task are covered. Examples of checklists are the design review checklist, code inspection checklist, moderator (for design review and code inspection) checklist, pre-code-integration (into the system library) checklist, entrance and exit criteria for system tests, and the product readiness checklist.

The use of checklists is pervasive. Checklists, used daily by the entire development community, are developed and revised based on accumulated experience. Checklists are often a part of the process documents, and their daily use also keeps the processes alive.
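The entry/exit-criteria role of a checklist can be sketched as a small data structure. The phase items below are illustrative, not drawn from any specific process document:

```python
def unmet_items(checklist):
    """Items not yet checked off; the phase may exit only when this is empty."""
    return [item for item, done in checklist.items() if not done]

# Hypothetical exit criteria for the CODE phase.
code_exit_checklist = {
    "all modules compiled cleanly": True,
    "code inspection held and rework closed": True,
    "unit tests executed and passed": False,
}

ready_to_exit = not unmet_items(code_exit_checklist)
```

A tool built on this idea would block the hand-off (here, `ready_to_exit` is false) until every criterion is satisfied.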

2. Pareto diagram

A Pareto diagram is a frequency chart of bars in descending order; the frequency bars are usually associated with types of problems. In software development, the X-axis of a Pareto diagram is usually the defect cause and the Y-axis the defect count. By arranging the causes by defect frequency, a Pareto diagram can identify the few causes that account for the majority of defects. It indicates which problems should be solved first in eliminating defects and improving the operation. Pareto analysis is commonly referred to as the 80–20 principle (20% of the causes account for 80% of the defects), although the cause-defect relationship is not always an 80–20 distribution.
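The "vital few" selection behind a Pareto diagram can be sketched in a few lines of Python; the cause codes and counts below are hypothetical:

```python
from collections import Counter

def pareto(defect_log, cutoff=0.8):
    """Order causes by frequency and pick the 'vital few' that together
    account for at least `cutoff` of all defects."""
    counts = Counter(defect_log).most_common()   # descending frequency
    total = sum(n for _, n in counts)
    vital, cumulative = [], 0
    for cause, n in counts:
        vital.append(cause)
        cumulative += n
        if cumulative / total >= cutoff:
            break
    return counts, vital

# Hypothetical defect log: each entry is the cause code assigned to one defect.
log = ["INTF"] * 45 + ["INIT"] * 30 + ["CPLX"] * 15 + ["NLS"] * 10
counts, vital = pareto(log)
```

Here three of the four causes cover 90% of the defects, so they would be attacked first; with real data the cutoff and the resulting "vital few" will of course differ.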

Pareto analysis helps by identifying the areas that cause most of the problems, which normally means you get the best return on investment when you fix them. It is especially applicable to software quality because software defects and defect density never follow a uniform distribution. Rather, almost as a rule of thumb, there are always patterns of clustering: defects cluster in a small number of modules or components, a few causes account for the majority of defects, a few tricky installation problems account for most of the customer complaints, and so forth. It is, therefore, not surprising to see Pareto charts in the software engineering literature.

Figure-2 shows an example of a Pareto analysis of the causes of defects for an IBM Rochester product. Interface problems (INTF) and data initialization problems (INIT) were found to be the dominant causes of defects in that product. By focusing on these two areas throughout the design, implementation, and test processes, and by conducting technical education by peer experts, significant improvement was observed. The other defect causes in the figure include complex logical problems (CPLX), translation-related national language problems (NLS), problems related to addresses (ADDR), and data definition problems (DEFN).

Figure-2: Pareto Analysis of Software Defects


3. Histogram

The histogram is a graphic representation of the frequency counts of a sample or a population. The X-axis lists the unit intervals of a parameter (e.g., severity level of software defects) ranked in ascending order from left to right, and the Y-axis contains the frequency counts. In a histogram, the frequency bars are shown in the order of the X variable, whereas in a Pareto diagram the frequency bars are shown in order of the frequency counts. The purpose of the histogram is to show the distribution characteristics of a parameter, such as overall shape, central tendency, dispersion, and skewness. It enhances understanding of the parameter of interest.
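The ordering difference between a histogram and a Pareto diagram is easy to see in code. A minimal sketch, with made-up severity data:

```python
from collections import Counter

def severity_histogram(severities):
    """Frequency counts ordered by the X variable (severity level),
    unlike a Pareto diagram, which orders by frequency."""
    counts = Counter(severities)
    return [(sev, counts[sev]) for sev in sorted(counts)]

# Hypothetical defect severities (1 = most severe, 4 = least).
hist = severity_histogram([4, 3, 4, 2, 3, 4, 1, 3, 4, 2])
for sev, n in hist:
    print(f"severity {sev}: {'#' * n}")   # crude text rendering of the bars
```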

Figure-3 shows two examples of histograms used for software project and quality management. Panel A shows the defect frequency of a product by severity level (from 1 to 4, with 1 being the most severe and 4 the least). Defects with different severity levels differ in their impact on customers. Less severe defects usually have circumventions available and, to customers, mean inconvenience. In contrast, high-severity defects may cause system downtime and affect customers' business. Therefore, given the same defect rate (or number of defects), the defect severity histogram tells a lot more about the quality of the software. Panel B shows the frequency of defects during formal machine testing by the number of days the defect reports have been open (1–7 days, 8–14, 15–21, 22–28, 29–35, and 36+). It reflects the response time in fixing defects during the formal testing phases; it is also a workload statement. Figure-4 shows the customer satisfaction profile of a software product in terms of very satisfied, satisfied, neutral, dissatisfied, and very dissatisfied. Although one can construct various metrics with regard to the categories of satisfaction level, a simple histogram conveys the complete information at a glance.


Figure-3: Two Histograms

Figure-4: Profile of Customer Satisfaction with a Software Product

Such charts are commonly referred to as bar charts. Both histograms and bar charts are frequently used in software development.

4. Scatter diagram

A scatter diagram vividly portrays the relationship of two interval variables. In a cause-effect relationship, the X-axis is for the independent variable and the Y-axis for the dependent variable. Each point in a scatter diagram represents an observation of both the dependent and independent variables. Scatter diagrams aid data-based decision making (e.g., if action is planned on the X variable and some effect is expected on the Y variable).

One should always look for a scatter diagram when the correlation coefficient of two variables is presented. The scatter diagram is often used with other techniques such as correlational analysis, regression, and statistical modeling.
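Since a scatter diagram is usually read alongside a correlation coefficient, it is worth showing how the coefficient is computed. A minimal Pearson-correlation sketch; the complexity and defect figures are made up for illustration:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient of two interval variables."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical module data: McCabe complexity index vs. defect count.
complexity = [5, 10, 15, 20, 25]
defects    = [1, 2, 4, 6, 7]
r = pearson_r(complexity, defects)   # close to +1: strong positive relationship
```

A high positive r like this is what justifies reading the scatter plot as "higher complexity, more defects" — though, as always, the diagram itself should be inspected for outliers and nonlinearity.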

Figure-5 is a scatter diagram that illustrates the relationship between McCabe's complexity index and defect level. Each data point represents a program module, with the X coordinate being its complexity index and the Y coordinate its defect level. Because program complexity can be measured as soon as the program is complete, whereas defects are discovered over a long time, the positive correlation between the two allows us to use program complexity to predict defect level. We can reduce program complexity when it is developed (as measured by McCabe's index), thereby reducing the chance for defects. Reducing complexity can also make programs easier to maintain. Low complexity indexes coupled with high defects are clear indications of modules that are poorly designed or implemented and should also be scrutinized.

Figure-5: Scatter Diagram of Program Complexity and Defect Level

5. Run chart

A run chart tracks the performance of the parameter of interest over time. The X-axis is time and the Y-axis is the value of the parameter. A run chart is best used for trend analysis, especially when historical data are available for comparison with the current trend. An example of a run chart in software is the weekly number of open problems in the backlog; it shows the development team's workload of software fixes.

Run charts are also frequently used for software project management; numerous real-life examples can be found in books and journals on software engineering. For example, the weekly arrival of defects and the defect backlog during the formal machine testing phases can be monitored via run charts. These charts serve as real-time statements of quality as well as workload. Often these run charts are compared to historical data or a projection model so that the interpretation can be placed in proper perspective. Another example is tracking the percentage of software fixes that exceed the fix response time criteria; the goal is to ensure timely delivery of fixes to customers.
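The delinquency percentage plotted on such a run chart is a simple computation. A sketch with made-up report ages and an assumed 14-day response-time criterion (actual criteria vary by organization):

```python
def delinquency_rate(open_report_ages, criteria_days=14):
    """Percentage of open defect reports older than the response-time
    criterion (criteria_days is an assumed limit, not a standard)."""
    if not open_report_ages:
        return 0.0
    late = sum(1 for age in open_report_ages if age > criteria_days)
    return 100.0 * late / len(open_report_ages)

# Hypothetical ages (in days) of the open reports in two consecutive weeks;
# the resulting series is what the run chart plots against time.
weekly = [
    delinquency_rate([3, 20, 8, 30, 15, 2]),
    delinquency_rate([5, 9, 16, 2]),
]
```

Plotting `weekly` against the week number, together with a horizontal target line, gives exactly the kind of chart described below.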

Figure-6 shows a run chart of the weekly percentage of delinquent open reports of field defects (defect reports not yet closed with fixes within the response time criteria) for a software product. The horizontal line (denoted by the letter T) is the target delinquency rate. The dashed vertical line denotes the time when special remedial actions were rolled out to combat the high delinquency rate. For each delinquent defect report, causal analysis was done and corresponding actions were implemented. A sample of the cause categories and the actions implemented is shown in Figure-6. As a result, the delinquent-defect report rate was brought down to the target in about one month. The rate fluctuated around the target for about four months and eventually was brought under control.


Figure-6: Run Chart of Percentage of Delinquent Fixes

6. Control chart

A control chart can be regarded as an advanced form of a run chart for situations where the process capability can be defined. It consists of a central line, a pair of control limits (and sometimes a pair of warning limits within the control limits), and values of the parameter of interest plotted on the chart, which represent the state of a process. The X-axis is real time. If all values of the parameter are within the control limits and show no particular tendency, the process is regarded as being in a controlled state. If they fall outside the control limits or indicate a trend, the process is considered out of control. Such cases call for causal analysis, and corrective actions are to be taken.

The control chart is a powerful tool for achieving statistical process control (SPC). Several metrics from the software development process can be control charted, for instance, inspection defects per thousand lines of source code (KLOC) or per function point, testing defects per KLOC or per function point, phase effectiveness, and the defect backlog management index.
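A common way to set the control limits is the central line plus or minus three standard deviations estimated from historical samples. A minimal sketch, with made-up defects/KLOC figures:

```python
def control_limits(samples, k=3.0):
    """Central line and +/- k-sigma control limits estimated from
    historical samples (population standard deviation)."""
    n = len(samples)
    mean = sum(samples) / n
    sigma = (sum((x - mean) ** 2 for x in samples) / n) ** 0.5
    return mean, mean - k * sigma, mean + k * sigma

def out_of_control(samples, lcl, ucl):
    """Indices of plotted points falling outside the control limits."""
    return [i for i, x in enumerate(samples) if not (lcl <= x <= ucl)]

# Hypothetical inspection defects/KLOC: history sets the limits,
# and the current series is judged against them.
cl, lcl, ucl = control_limits([4, 5, 6, 5, 4, 6, 5, 5])
flagged = out_of_control([5, 6, 9, 4], lcl, ucl)
```

The flagged point (the 9 defects/KLOC inspection) is the signal that calls for the causal analysis and corrective action described above; this sketch omits the run-rules (trends, runs above the center line) that full SPC practice also checks.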

Figure-7: Control Chart for Defects/KLOC

7. Cause-and-effect diagram

The cause-and-effect diagram, also known as the fishbone diagram, was first used to explain the factors that affect the production of steel. It shows the relationship between a quality characteristic and the factors that affect that characteristic. Its layout resembles a fishbone, with the quality characteristic of interest labeled at the fish head and the factors affecting the characteristic placed where the bones are located. While the scatter diagram describes a specific bipartite relationship in detail, the cause-and-effect diagram identifies all the causal factors of a quality characteristic in one chart.

The cause-and-effect diagram is one of the less frequently used tools in software development. Perhaps the best example among fishbone diagrams is the one given by Grady and Caswell (1986). With the help of a cause-and-effect diagram, they conducted brainstorming sessions on those problems. As Figure-8 shows, they found side effects of register usage and incorrect processor register usage to be the two primary causes. Ultimately, both were found to be caused by incomplete knowledge of the operation of the registers. With this finding, that HP division took aggressive steps to provide proper training and documentation regarding registers and processors prior to subsequent projects. Figure-9 shows a fishbone diagram relating the key factors to effective inspections; such a diagram was part of the process education material for a project at IBM Rochester.

Figure-8: Cause-and-Effect Diagram of Design Inspection

Figure-9: Cause-and-Effect Diagram Relating the Key Factors to Effective Inspections


CASE TOOLS

CASE tools are computerized software development tools that support the developer when performing one or more phases of the software life cycle and/or support software maintenance.

Computer-aided software engineering (CASE) ensures a check-pointed and disciplined approach and helps designers, developers, testers, managers and others see the project milestones during development.

It can serve as a repository for project-related documents like business plans, requirements and design specifications.

Delivery of the final product is more likely to meet real-world requirements as it ensures that customers remain part of the process.

An increasing variety of specialized computerized tools (actually software packages) have been offered to assist in the development and maintenance of software. The purpose of these tools is to make the work of development and maintenance teams more efficient and more effective. Collectively named CASE (computer-aided software engineering) tools, they offer:

■ Substantial savings in resources required for software development
■ Shorter time to market
■ Substantial savings in resources required for maintenance
■ Greater reuse due to increased standardization of the software systems
■ Reduced generation of defects coupled with increased "interactive" identification of defects during development

CASE tools are made up of a set of tools or a toolkit. It is customary to distinguish between upper CASE tools, which support the analysis and design phases; lower CASE tools, which support the coding phase (where "upper" and "lower" refer to the location of these phases in the waterfall model); and integrated CASE tools, which support the analysis, design and coding phases.

The main component of CASE tools is the repository, which stores all the information related to the project. The project information accumulates in the repository as development proceeds and is updated as changes are initiated during the development phases and the maintenance stage. The repository of the previous development phase serves as a basis for the next phase. The accumulated development information stored in the repository provides support for the maintenance stage, in which corrective, adaptive and functionality-improvement tasks are performed. The computerized management of the repository guarantees the information's consistency and its compliance with the project methodology, as well as its standardization according to style and structure procedures and work instructions. It follows that CASE tools are capable of producing full and updated project documentation at any time. Some lower CASE and integrated CASE tools can automatically generate code based entirely on the design information stored in the repository.
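Conceptually, the repository is a store of project information keyed by phase, updated as changes are initiated. A toy model only (the phase names and class shape are illustrative, nothing like a real CASE repository):

```python
class Repository:
    """Toy CASE repository: latest project information per phase."""

    def __init__(self):
        self.artifacts = {}            # phase -> current document/content

    def store(self, phase, content):
        self.artifacts[phase] = content   # updates overwrite older versions

    def documentation(self):
        """Full, current project documentation at any time."""
        return dict(self.artifacts)

repo = Repository()
repo.store("requirements", "RQ v1")
repo.store("design", "HLD v1")
repo.store("requirements", "RQ v2")   # a change initiated during development
```

The point of the sketch is the property claimed in the text: because every change flows through the one store, `documentation()` is always current and consistent.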

Reverse engineering (re-engineering) tools are also considered CASE tools. Based on the system's code, these tools are applied mainly for the recovery and replication of (now non-existent) design documents for currently used, well-established software systems ("legacy" software).


In other words, reverse engineering CASE tools operate in the opposite direction from "regular" CASE tools: instead of creating system code on the basis of design information, they automatically create a complete, updated repository and design documents on the basis of the system code. The support that CASE tools provide to the developer can be in one or more of the areas listed in the table.


CASE ARCHITECTURE

Advantages of CASE Tools:

■ Reduce cost, as they automate many repetitive manual tasks.
■ Reduce the development time of the project, as they support standardization, avoid repetition and encourage reuse.
■ Help develop better-quality complex projects, as they provide greater consistency and coordination.
■ Create good-quality documentation.
■ Create systems that are maintainable, because proper control of configuration items supports traceability of requirements.


Disadvantages of using CASE Tools:

■ Produce an initial system that is more expensive to build and maintain.
■ Require more extensive and accurate definitions of user needs and requirements.
■ May be difficult to customize.
■ Require training of maintenance staff.
■ May be difficult to use with an existing system.

DEFECT PREVENTION AND REMOVAL EFFECTIVENESS

The concept of defect removal effectiveness and its measurement are central to software development. Defect removal is one of the top expenses in any software project, and it greatly affects schedules. Effective defect removal can lead to reductions in development cycle time and good product quality. For improvements in quality, productivity and cost, as well as schedule, it is important to use better defect prevention and removal technologies to maximize the effectiveness of the project. It is important for all projects and development organizations to measure the effectiveness of their defect removal processes.

"Prevention is better than cure" applies to defects in the software development life cycle just as it does to illnesses in medical science. Defects, as defined by software developers, are variances from a desired attribute. These attributes include complete and correct requirements and specifications, as drawn from the desires of potential customers. Thus, defects cause software to fail to meet requirements and make customers unhappy. And when a defect gets through during the development process, the earlier it is diagnosed, the easier and cheaper the rectification of the defect. The end result of prevention or early detection is a product with zero or minimal defects. It is estimated that up to 60 percent of software developers are involved in fixing errors; this fact alone shows the value of preventing software defects.

Advantage of Early Defect Detection

Data to support the need for early fixes of software defects is supplied by several reports. The Systems Sciences Institute at IBM has reported that the cost to fix an error found after product release was four to five times as much as one uncovered during design, and up to 100 times more than one identified in the maintenance phase (Figure 1).


Figure 1: Relative Costs to Fix Software Defects (Source: IBM Systems Sciences Institute)

Defect prevention involves a structured problem-solving methodology to identify, analyze and prevent the occurrence of defects. Defect prevention is a framework and an ongoing process of collecting defect data, doing root cause analysis, determining and implementing corrective actions, and sharing the lessons learned to avoid future defects.

Principles of Defect Prevention

How does a defect prevention mechanism work? The answer is in the defect prevention cycle (Figure 2). The integral part of the defect prevention process begins with requirements analysis – translating the customer requirements into product specifications without introducing additional errors. The software architecture is then designed, code review and testing are done to find defects, followed by defect logging and documentation.

Figure 2: Defect Prevention Cycle (Source: 1998 IEEE Software Productivity Consortium)


The blocks and processes in the gray-colored block represent the handling of defects within the existing philosophy of most of the software industry – defect detection, tracking/documenting and analysis of defects for arriving at quick, short-term solutions. The processes that form the integral part of the defect prevention methodology are on the white background. The vital process of the defect prevention methodology is to analyze defects to get at their root causes and to determine a quick solution and preventive action. These preventive measures, after consent and commitment from team members, are embedded into the organization as a baseline for future projects. The methodology is aimed at providing the organization a long-term solution and the maturity to learn from its mistakes.

Most of the activities of the defect prevention methodology require a facilitator. The facilitator can be the software development project leader (wearing another hat of responsibility) or any member of the team. The designated defect prevention coordinator is actively involved in leading defect prevention efforts, facilitating meetings and communication among the team and management, and consolidating the defect prevention measures/guidelines.

THE FIVE GENERAL ACTIVITIES OF DEFECT PREVENTION ARE:

1. Software Requirements Analysis

Errors in software requirements and software design documents are more frequent than errors in the source code itself, according to Computer Finance Magazine. Defects introduced during the requirements and design phases are not only more probable but also more severe and more difficult to remove. Front-end errors in requirements and design cannot be found and removed via testing; instead, they need pre-test reviews and inspections. Table 1 shows the defects introduced during different phases of the software development life cycle. From the studies made by various software development communities, it is evident that most failures in software products are due to errors in the requirements and design phases – as high as 64 percent of total defect costs (Figure 3), according to Crosstalk, the Journal of Defense Software Engineering.

Table 1: Division of Defects Introduced into Software by Phase

Software Development Phase    Percent of Defects Introduced
Requirements                  20 percent
Design                        25 percent
Coding                        35 percent
User Manuals                  12 percent
Bad Fixes                      8 percent

Source: Computer Finance Magazine


Figure 3: Origin of Software Defects (Source: Crosstalk, the Journal of Defense Software Engineering)

Hence, it is important to have a proper process for analyzing the requirements in place to ensure that customer needs are correctly translated into product specifications. Two or three iterations of interactive sessions with the customer can be of great help in verifying the developer's understanding of the actual requirements.

2. Reviews: Self-Review and Peer Review

Self-review is one of the most effective activities for uncovering defects that might later be discovered by a testing team or directly by a customer. The majority of software organizations are now making this a part of "coding best practices" and are thereby improving their product quality. Often, a self-review of the code helps reduce defects related to algorithm implementation, incorrect logic or certain missing conditions. Once developers feel they are ready with the module code, glancing through the code and comparing what it does with what it is supposed to do completes the self-review. Peer review is similar to self-review in terms of objective – the only difference is that it is a peer (someone who understands the functionality of the code very well) who reviews the code. The advantage is that of a "fresh pair of eyes."

3. Defect Logging and Documentation

Effective defect tracking begins with a systematic process. A structured tracking process begins with initially logging the defects, investigating the defects, then providing the structure to resolve them. Defect analysis and reporting offer a powerful means to manage defects and defect depletion trends and, hence, costs. A defect logging tool should document certain vital information regarding the defect, such as:

Correct and complete description of the defect – so that everyone on the development team understands what it is and how to reproduce it.

Phase at which it is found – so that preventive measures can be taken and propagation of the defect to next phase (software build) is avoided.

Further description of the defect through screenshots.

Names of those who uncover defects – so everyone knows whom to contact for a better understanding of the defect.
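A defect log entry carrying the fields above can be sketched as a simple record type. The field names and values here are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class DefectRecord:
    """One entry in a defect log, with the vital fields discussed above."""
    defect_id: int
    description: str   # complete description, including steps to reproduce
    phase_found: str   # e.g. "design review", "unit test", "system test"
    reported_by: str   # who uncovered the defect, for follow-up questions
    screenshots: list = field(default_factory=list)  # paths to supporting images

# Hypothetical example entry
d = DefectRecord(101, "Login fails when the username contains '@'",
                 "system test", "tester_a")
print(d.phase_found)
```

Recording `phase_found` explicitly is what later enables the phase-level analyses (Pareto classification, defect removal effectiveness) discussed below.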

Page 16: sqm883837392.files.wordpress.com …  · Web viewfor Quality – Ishikawa’s basic tools – CASE tools Defect prevention and removal– Reliablity. models Rayleigh model – Reliability

4. Root Cause Analysis and Preventive Measures Determination

After defects are logged and documented, the next step is to analyze them. Generally, a designated defect prevention coordinator or the development project leader facilitates a meeting to explore root causes. The root cause analysis of a defect is driven by three key principles:

Reducing defects to improve quality: The analysis should lead to implementing changes in processes that help prevent defects and ensure their early detection.

Applying local expertise: The people who really understand what went wrong are those who were present when the defects were introduced: the members of the software engineering team. They can give the best suggestions on how to avoid such defects in the future.

Targeting systematic errors: There may be many errors or defects to handle in such an analysis forum; however, some mistakes tend to be repeated. These systematic errors account for a large portion of the defects found in a typical software project. Identifying and preventing them can have a big impact on quality for a relatively small investment.

With these guidelines, defects are analyzed to determine their origins; a collection of such causes supports the root cause analysis. The defects are classified by type, and a Pareto chart is prepared to show the defect category with the highest frequency of occurrence, which becomes the target. An example of defect classification in a Pareto chart is shown in Figure 4.
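The Pareto step above amounts to sorting defect categories by frequency and accumulating percentages. A minimal sketch, using made-up classification data:

```python
from collections import Counter

# Hypothetical defect classifications pulled from a logging tool.
classified = ["logic", "interface", "logic", "data", "logic",
              "interface", "documentation", "logic", "data"]

counts = Counter(classified)
total = sum(counts.values())

# Pareto ordering: categories by descending frequency, with the cumulative
# percentage that the bars-plus-line of a Pareto chart would show.
cumulative, pareto = 0, []
for category, n in counts.most_common():
    cumulative += n
    pareto.append((category, n, round(100 * cumulative / total)))

print(pareto[0])  # -> ('logic', 4, 44): the highest-frequency category is the target
```

The first entry of the ordered list is the category the team attacks first; the cumulative column shows how much of the total defect population the top few categories account for.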

Figure 4: Example of Defect Classification in a Pareto Chart

Root cause analysis is the process of finding and eliminating the cause, so as to prevent the problem from recurring. Finding the causes and eliminating them are equally important.

Each defect category and the causes making those defects happen can be represented using a cause-and-effect diagram, as shown in Figure 5.

Figure 5: Cause-and-Effect Diagram for a Defect

The cause-and-effect diagram, also known as a fishbone diagram, is a simple graphical technique for sorting and relating factors that contribute to a given situation. A team usually develops the cause-and-effect diagram in a facilitated brainstorming session. Once the root causes are documented, finding ways to eliminate them requires another round of brainstorming. The objective is to determine what changes should be incorporated into the processes so that recurrence of the defects is minimized.

5. Embedding Procedures into the Software Development Process

Implementation is the toughest of all defect prevention activities. It requires total commitment from the development team and management. A plan of action is made for deploying modifications to existing processes, or introducing new ones, with the consent of management and the team. Some of the other activities in this phase of defect prevention are:

The monthly team status should mention the severe defects and their analyses.

Fortnightly or monthly meetings (based on the project schedule) to make the team aware of the systematic errors/defects, their symptoms, and solutions.

Embedding the defect prevention measures in the software development life cycle processes.

Learning from the previous project's root cause analysis of defects should be used as the baseline for future projects.

Monitoring the defect prevention progress: Is the number of defects decreasing? Is the development team learning from past mistakes?

Finally, defect prevention is not an individual exercise but a team effort. The software development team should strive to improve its process by identifying defects early, minimizing resolution time, and thereby reducing project costs.

DEFECT REMOVAL EFFECTIVENESS

Defects are injected into the product, or into intermediate deliverables of the product (e.g., the design document), at various phases. It is wrong to assume that all defects of software are injected at the beginning of development. Table 1 shows an example of the activities in which defects can be injected or removed during a development process. For the development phases before testing, the development activities themselves are subject to defect injection, and the reviews or inspections at end-of-phase are the key vehicles for defect removal. For the testing phases, the testing itself is for defect removal. When the problems found by testing are fixed incorrectly, there is another chance to inject defects; in fact, even the inspection steps carry chances of bad fixes. Figure 6.3 describes the detailed mechanics of defect injection and removal at each step of the development process. From the figure, defect removal effectiveness for each development step can therefore be defined as the ratio of defects removed during the step to the defects latent in the product at that step.

Table 1: Activities Associated with Defect Injection and Removal

Development Phase    | Defect Injection                                   | Defect Removal
Requirements         | Requirements-gathering process and the development | Requirements analysis and review
                     | of programming functional specifications           |
High-level design    | Design work                                        | High-level design inspections
Low-level design     | Design work                                        | Low-level design inspections
Code implementation  | Coding                                             | Code inspections
Integration/build    | Integration and build process                      | Build verification testing
Unit test            | Bad fixes                                          | Testing itself
Component test       | Bad fixes                                          | Testing itself
System test          | Bad fixes                                          | Testing itself

Phase defect removal effectiveness and related metrics associated with effectiveness analyses (such as defect removal and defect injection rates) are useful for quality planning and quality management. These measurements clearly indicate which phase of the development process we should focus on for improvement. Effectiveness analyses can be done for the entire project as well as for local areas, such as at the component level or for specific departments in an organization, and the control chart technique can be used to enforce consistent improvement across the board.

The most important contributor to the quality of software-intensive systems is the quality of the software components, and the most important single metric for software quality is defect removal efficiency (DRE): the percentage of bugs or defects found and removed prior to delivery of the software. Serious software quality control involves measuring DRE. In principle the measurement is simple: keep records of all defects found during development; after a fixed period of 90 days, add customer-reported defects to the internal defects and calculate the efficiency of internal removal. If the development team found 90 defects and customers reported 10 defects, then DRE is of course 90%. In real life, DRE measures are tricky because of bad-fix injections, defects found internally after release, defects inherited from prior releases, invalid defects, and other complicating factors.

Defect Removal Effectiveness for a phase is calculated as the number of defects removed during the phase divided by the number of defects latent in the product at that phase, times 100%. Since the number of latent defects in a software product is unknown at any point in time, it is approximated by adding the number of defects removed during the phase to the number of defects found later (but that existed during that phase). For example, assume that the following table reflects the defects detected during the specified phases and the phases in which those defects were introduced:

Phase Found    | Requirements | Design | Coding   (phase where introduced)
Requirements   |      10      |   -    |   -
Design         |       3      |  18    |   -
Coding         |       0      |   4    |  26
Testing        |       2      |   5    |   8
Field          |       1      |   2    |   7

The Defect Removal Effectiveness for each of the phases would be as follows: Requirements DRE = 10 / (10+3+0+2+1) x 100% = 63%

Design DRE = (3+18) / (3+0+2+1+18+4+5+2) x 100% = 60%

Coding DRE = (0+4+26) / (0+2+1+4+5+2+26+8+7) x 100% = 55%

Testing DRE = (2+5+8) / (2+1+5+2+8+7) x 100% = 60%

Defect Removal Effectiveness can also be calculated for the entire development cycle to examine defect detection efforts before the product is released to the field.

Development DRE = (Pre-release Defects) / (Total Defects) x 100% = (10+3+2+18+4+5+26+8) / (10+3+2+1+18+4+5+2+26+8+7) x 100% = 88%
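These calculations can be reproduced mechanically from a found-in/introduced-in matrix. A minimal sketch, using the same figures as the worked example (shown to one decimal, where the example above rounds to whole percentages):

```python
# Defects found in each phase, broken down by the phase that introduced them
# (the same figures as the worked example above).
found = {
    "requirements": {"requirements": 10},
    "design":       {"requirements": 3, "design": 18},
    "coding":       {"requirements": 0, "design": 4, "coding": 26},
    "testing":      {"requirements": 2, "design": 5, "coding": 8},
    "field":        {"requirements": 1, "design": 2, "coding": 7},
}
phases = list(found)  # insertion order = chronological order

def dre(phase):
    """Defects removed in `phase` / defects latent at `phase`, as a percentage."""
    removed = sum(found[phase].values())
    later = phases[phases.index(phase) + 1:]
    # Escaped defects: ones that existed during `phase` (same origin phases)
    # but were only found in a later phase.
    escaped = sum(found[p].get(origin, 0)
                  for p in later for origin in found[phase])
    return round(100 * removed / (removed + escaped), 1)

pre_release = sum(sum(d.values()) for p, d in found.items() if p != "field")
total = sum(sum(d.values()) for d in found.values())
print(dre("design"), round(100 * pre_release / total, 1))  # -> 60.0 88.4
```

Approximating "defects latent at the phase" as removed-plus-escaped is exactly the approximation described in the text; the denominator is never the true latent count, which is unknowable at the time.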

The longer a defect exists in a product before it is detected, the more expensive it is to fix. Knowing the DRE for each phase can help an organization target its process improvement efforts to improve defect detection methods where they can be most effective. Future DRE measures can then be used to monitor the impact of those improvement efforts. Most forms of testing are less than 50% efficient in finding bugs or defects. However, formal design and code inspections are more than 65% efficient in finding bugs or defects and often top 85%. Static analysis is also high in efficiency against many kinds of coding defects. Therefore all leading projects in leading companies utilize formal inspections, static analysis, and formal testing. This combination is the only known way of achieving cumulative defect removal levels higher than 95% and approaching or exceeding 99%.

Software Reliability Models

A proliferation of software reliability models has emerged as people try to understand the characteristics of how and why software fails, and try to quantify software reliability. No model is complete or even representative: one model may work well for a certain set of software but be completely off track for other kinds of problems. Most software reliability models contain the following parts:

assumptions, factors, and a mathematical function that relates reliability to the factors. The mathematical function is usually a higher-order exponential or logarithmic function.

Software reliability modeling techniques can be divided into two subcategories: prediction modeling and estimation modeling.

Both kinds of modeling techniques are based on observing and accumulating failure data and analyzing it with statistical inference. Software reliability assessment is very important in developing a quality software product efficiently. Software reliability models are used to assess a software product's reliability or to estimate the number of latent defects when it becomes available to customers.

Such an estimate is important for two reasons: (1) as an objective statement of the quality of the product and (2) for resource planning for the software maintenance phase.

The criterion variable under study is the number of defects (or the defect rate normalized to lines of code or function points) in specified time intervals (weeks, months, etc.), or the time between failures.

Reliability models can be broadly classified into two categories: static models and dynamic models.

A static model uses other attributes of the project or program modules to estimate the number of defects in the software.

A dynamic model, usually based on statistical distributions, uses the current development defect patterns to estimate end-product reliability.

Static models are static in the sense that the estimated coefficients of their parameters are based on a number of previous projects; the product or project of interest is treated as an additional observation from the same population of previous projects. The parameters of dynamic models, by contrast, are estimated from multiple data points gathered to date from the product of interest, so the resulting model is specific to the product whose reliability is being projected.

Dynamic software reliability models can be further classified into two categories: those that model the entire development process and those that model the back-end testing phase. The former is represented by the Rayleigh model; the latter by the exponential model and other reliability growth models.

Rayleigh Model

The Rayleigh model, or phase-based defect model, provides a framework for quality management covering the entire development process. The most important principle in software engineering is "do it right the first time." This principle speaks to the importance of managing quality throughout the development process. The best scenario is to prevent errors from being injected into the development process at all. When errors are introduced, improve the front end of the development process to remove as many of them as early as possible. If the project is beyond the design and code phases, unit tests and any additional tests are the last chance to do it right the "first time."

The Rayleigh model has been found to be well suited to predicting the reliability of a software product. It predicts the expected value of defect density at different stages of the project's life cycle, once parameters such as the total number of defects (or total cumulative defect rate) and the location of the curve's peak in time have been determined. The Rayleigh model is a parametric model in the sense that it is based on a specific statistical distribution; when the parameters of the distribution are estimated from the data of a software project, projections about the project's defect rate can be made from the model.

The Rayleigh model is a good overall model for quality management. It articulates the points on defect prevention and early defect removal. Based on the model, if the error injection rate is reduced, the entire area under the Rayleigh curve becomes smaller, leading to a smaller projected field defect rate. Also, more defect removal at the front end of the development process leads to a lower defect rate at the later testing phases and during maintenance. Both scenarios aim to lower the defects in the later testing phases, which in turn leads to fewer defects in the field. Counterintuitively, the more defects found during formal testing, the more remain to be found later. The reason is that by the late stage of formal testing, the error injection of the development process (mainly during design and code implementation) is essentially fixed (except for bad fixes during testing). High testing defect rates indicate that error injection was high; if no extra effort is exerted, more defects will escape to the field.

Figure 1 shows a Rayleigh curve that models the defect removal pattern of a software product in relation to a six-step development process. Given the defect removal pattern up through system test (ST), the purpose is to estimate the defect rate when the product is shipped: the post-general-availability (GA) phase in the figure. The shape of the curve indicates the pattern of the defect removal rate over the life cycle of the project, and the area bounded by the x-axis and the curve is a measure of the total defects likely to be unearthed from the software being developed. In this example the x-axis is the development phase, which can be regarded as a logical equivalent of time. The phases other than ST and GA in the figure are: high-level design review (I0), low-level design review (I1), code inspection (I2), unit test (UT), and component test (CT).
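As a sketch of the underlying mathematics, the Rayleigh curve is commonly parameterized by K, the total expected defects (the area under the curve), and tm, the time at which the removal rate peaks; both would be estimated from project data, and the numbers below are purely illustrative:

```python
import math

def rayleigh_rate(t, K, tm):
    """Expected defect removal rate at time t (the curve peaks at t = tm)."""
    return K * (t / tm**2) * math.exp(-t**2 / (2 * tm**2))

def rayleigh_cum(t, K, tm):
    """Cumulative defects removed by time t (approaches K as t grows)."""
    return K * (1 - math.exp(-t**2 / (2 * tm**2)))

# Illustrative numbers: 100 total expected defects, removal rate peaking
# around phase 2 on the phase-as-time axis, product shipped at phase 6.
K, tm, t_ship = 100.0, 2.0, 6.0
latent_at_ga = K - rayleigh_cum(t_ship, K, tm)  # projected field defects
```

The area under `rayleigh_rate` between two phase boundaries gives the expected defects removed in that phase, which is how projections such as the post-GA defect rate are read off the fitted curve.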

Figure 1. Rayleigh Model

Using the Rayleigh curve to model software development quality involves two basic assumptions.

The first assumption is that the defect rate observed during the development process is positively correlated with the defect rate in the field, as illustrated in Figure 2. In other words, the higher the curve (the more area under it), the higher the field defect rate (the GA phase in the figure), and vice versa. This is related to the concept of error injection: assuming the defect removal effectiveness remains relatively unchanged, higher defect rates observed during the development process indicate higher error injection; therefore, it is likely that the field defect rate will also be higher.

Figure 2. Rayleigh Model Illustration I

The second assumption is that, given the same error injection rate, if more defects are discovered and removed earlier, fewer will remain in later stages, and as a result the field quality will be better. This relationship is illustrated in Figure 3, in which the areas under the curves are the same but the curves peak at different points. Curves that peak earlier have smaller areas at the tail, the GA phase.

Figure 3. Rayleigh Model Illustration II
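The second assumption can be checked numerically: holding the total defect count K fixed, a Rayleigh curve that peaks earlier leaves a smaller latent tail at GA. A sketch with assumed numbers:

```python
import math

K, t_ga = 100.0, 6.0  # assumed: 100 total defects, product ships at t = 6

def latent_at_ga(tm):
    """Defects still latent at GA for a Rayleigh curve peaking at time tm."""
    return K * math.exp(-t_ga**2 / (2 * tm**2))

early_peak = latent_at_ga(2.0)  # curve peaks earlier in development
late_peak = latent_at_ga(3.0)   # same total area, peaks later
print(round(early_peak, 1), round(late_peak, 1))  # -> 1.1 13.5
```

With the same 100 total defects, shifting the peak from phase 3 back to phase 2 cuts the projected field-defect tail by an order of magnitude, which is the quantitative content of Figure 3.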

Both assumptions are closely related to the "do it right the first time" principle. This principle means that if each step of the development process is executed properly with minimum errors, the end product's quality will be good. It also implies that if errors are injected, they should be removed as early as possible, preferably before the formal testing phases, when the costs of finding and fixing defects are much higher than at the front end.

Reliability Growth Models for Quality Assessment

Software reliability assessment is very important in developing a quality software product efficiently. A reliability growth model is a model of how the system reliability changes over time during the testing process. As system failures are discovered, the underlying faults causing those failures are repaired, so the reliability of the system should improve during system testing and debugging. To predict reliability, the conceptual reliability growth model must be translated into a mathematical model.

Reliability growth modeling involves comparing measured reliability at a number of points in time with known functions that show possible changes in reliability. For example, an equal-step function suggests that the reliability of a system increases linearly with each release. By matching observed reliability growth with one of these functions, it is possible to predict the reliability of the system at some future point in time; reliability growth models can therefore be used to support project planning. Various reliability growth models have been derived from reliability experiments in a number of different application domains. Most of these models are exponential: reliability increases quickly as defects are discovered and removed, then tails off and reaches a plateau as fewer and fewer defects are discovered and removed in the later stages of testing.

The simplest model that illustrates the concept of reliability growth is a step-function model: the reliability increases by a constant increment each time a fault (or a set of faults) is discovered and repaired and a new version of the software is created (Figure 1). This model assumes that software repairs are always correctly implemented, so that the number of software faults and associated failures decreases with each new version of the system. As repairs are made, the rate of occurrence of software failures (ROCOF) should therefore decrease, as shown in Figure 1. Note that the time periods on the horizontal axis reflect the time between releases of the system for testing, so they are normally of unequal length.
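The equal-step model is simple enough to sketch directly; the rates below are illustrative assumptions, not calibrated failure data:

```python
# Equal-step reliability growth: each repair lowers the rate of occurrence
# of failures (ROCOF) by the same fixed decrement.
initial_rocof = 0.50   # assumed failures per hour before any repairs
step = 0.05            # assumed constant improvement per repair
repairs = 6

# rocof[i] = failure rate of the version released after i repairs
rocof = [initial_rocof - i * step for i in range(repairs + 1)]

assert all(a > b for a, b in zip(rocof, rocof[1:]))  # monotone improvement
```

A random-step variant would replace `step` with a random draw per repair, possibly negative to represent a bad fix, which is exactly the refinement discussed in the following paragraphs.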

Figure 1. Equal-step function model of reliability growth

In practice, however, software faults are not always fixed during debugging, and when you change a program you sometimes introduce new faults. The probability of occurrence of these new faults may be higher than that of the fault that was repaired; therefore, the system reliability may sometimes worsen in a new release rather than improve.

The simple equal-step reliability growth model also assumes that all faults contribute equally to reliability and that each fault repair contributes the same amount of reliability growth. However, not all faults are equally probable. Repairing the most common faults contributes more to reliability growth than repairing faults that occur only occasionally. You are also likely to find these probable faults early in the testing process, so reliability may increase more at that stage than when later, less probable faults are discovered. Later models take these problems into account by introducing a random element into the reliability growth achieved by a software repair: each repair does not produce an equal amount of reliability improvement but varies with a random perturbation. Some models also allow for negative reliability growth, when a software repair introduces further errors, and model the fact that as faults are repaired the average improvement in reliability per repair decreases. The reason is that the most probable faults are likely to be discovered early in the testing process, and repairing these contributes most to reliability growth.

Figure 2. Random-step function model of reliability growth

The above models are discrete models that reflect incremental reliability growth. When a new version of the software with repaired faults is delivered for testing, it should have a lower rate of failure occurrence than the previous version. However, to predict the reliability that will be achieved after a given amount of testing, continuous mathematical models are needed. Many such models, derived from different application domains, have been proposed and compared.

Questions of Unit III

1. Discuss the differences between Quality Control and Reliability.
2. Enlist and explain in detail the seven basic statistical tools for quality control promoted by Ishikawa, using neat diagrams to illustrate each of the seven tools.
3. Compare and contrast the Histogram and the Pareto Diagram; illustrate with relevant diagrams.
4. Compare and contrast the Run chart and the Control chart; illustrate with relevant diagrams.
5. Describe CASE tools, giving their advantages and disadvantages.
6. Explain the architecture of CASE tools with a neat diagram.
7. Enlist and explain any 10 tools in the CASE toolkit.
8. Describe the benefits of early defect detection.
9. With a neat diagram, explain the principle of defect detection.
10. Explain in detail the five general activities of defect prevention.
11. Explain how the Pareto chart and Ishikawa's cause-and-effect diagram are used in root cause analysis.
12. Explain what defect removal effectiveness is and the importance of the defect removal effectiveness metric.
13. For the given table, calculate the DRE for the following phases: Requirements, Design, Coding, and the entire development cycle.
14. Explain reliability models, their types and uses.
15. Describe the Rayleigh Model, elaborating on the principle and assumptions on which it works.
16. Elaborate on the Reliability Growth Models for Quality Assessment.