The Rational Edge -- June 2002
http://www.therationaledge.com/

Editor's Notes:

Summer Rules!

Cover art: Stained glass, Untitled: 1980, by Margy Duffy

Isn't it great? The beach, the barbecue, the long afternoons that last until about 9 p.m. OK, even if you live in the southern hemisphere, where it's almost winter, not summer, certain rules still apply. Please read on. In this issue of The Rational Edge, we're ushering in the season with some specific advice you can apply to your software development projects over the coming months. Our lead story by Eric Lopes Cardozo is a smart adaptation of Stephen Covey's The Seven Habits of Highly Effective People. The article focuses on the specific ingredients for making iterative development projects more successful and also includes the top five reasons why such projects fail -- i.e., rules for what not to do.

Have you ever wanted an easy-to-understand description of the dramatic differences between building software and building just about anything else? Walker Royce's "The Case for Results-Based Software Management" lays out his top principles for successfully managing (and understanding) a project using the phases of the Rational Unified Process (RUP). Essentially, according to Walker, you:

Develop the business case and vision, and prototype the solution; elaborate these into a basic architecture; create usable, iterative releases; and then finalize into field-ready code.

So if you're trying to convince others in your organization that the "activity-based" approach to project management (e.g., the waterfall method) is inadequate, this brief article is a great starting point. Plus, it gets to the heart of what Rational Software stands for: improvements in our customers' products, processes, and ROI.

Elsewhere in this issue, OOAD guru Paul R. Reed, Jr. describes the potentially confusing transition from requirements to design in a software project. How do you know when you've adequately described your use cases? How do you connect the requirements workflow to the one for design? Consider this RUP-savvy guidance. Also, Rational partner RTTS offers strategic advice on end-to-end testing, which takes into account the enormous complexity of today's n-tiered IT systems.

This month's technical section focuses on the use case. Ellen Gottesdiener is back with ways to correct "abuse cases." Dr. Use Case tackles the vexing question, "Can a clock be an actor?" in use-case design. And John Morrison of Sears, Roebuck & Co. offers a very interesting technique for versioning requirements artifacts (use cases) with Rational® RequisitePro® and ClearCase®. Note: this article is a preview of his Rational User Conference (RUC) presentation at the Disney World Swan and Dolphin hotels in Lake Buena Vista, Florida, this coming August 18th. Be sure to register for the conference soon!

Happy iterations,

Mike Perrow
Editor-in-Chief


The Case for Results-Based Software Management

by Walker Royce

Vice President, Strategic Services

Rational Software

Editor's note: This article recently appeared on InformationWeek.com.

Consider the old way of thinking about project management: You develop a detailed plan of activities, track that plan's execution, and adjust for variances between planned and actual performance. Traditionally, you assess a plan's quality by scrutinizing its level of detail. In the construction industry, for example, there's little uncertainty about the steps or the eventual outcome: the laws of physics, the properties of materials, and mature building codes and practices make construction an established engineering discipline. Success simply depends on resource management.

But in the software-development industry, these traditional methods cause a huge percentage of software-development projects to founder. True, a sequential, activity-based construction approach may be better than nothing, but the success rate for software projects following this approach (typically called the waterfall model) is about one in 10.

It's tempting to compare software construction to the building of a house. After all, both projects involve requirements management, design (blueprints), scheduling, specialty teams (comparable to roofers, carpenters, plumbers, etc.), and inspections. But decades of software projects have shown us that traditional modes of construction are very different from the diverse ways in which software is designed and delivered to the customer. One reason for such a low success rate is that traditional project-management approaches do not account for the level of creativity often required to complete software projects that are initiated with significant levels of uncertainty regarding:


1. The problem: what the user really wants or needs

2. The solution: what architecture and technology mix is most appropriate

3. The planning: cost and time constraints, team composition, stakeholder communication, desirable phases, etc.

Why so much uncertainty? One of my colleagues points to the "soft" in "software." Software requirements (what the software is required to do) usually change over the course of development. For this reason, it's counterproductive to demand five-digit precision in the design when the development team has only one-digit precision in its understanding of the problem.

What we've learned is that software management is better described in terms of "software economics" than "software construction" or "software engineering." Day-to-day decisions in software management are about value judgments, cost tradeoffs, human factors, macroeconomic trends, technology trends, market strength, and timing. They are rarely concerned with mathematics, materials, physics, or established engineering tenets.

So what we need is a modern way of thinking about software-quality management that accommodates our industry's 30 years of lessons learned and patterns of successful projects. Today's modern software-management approaches steer software projects through the minefield of "uncertainties" rather than tracking against a precise long-term plan. Delivering innovation on schedule and on budget requires iterative life cycles, constant risk management, objective oversight, and a "steering" style of leadership that demands creativity throughout the team, however large or small.

The Waterfall Approach

The traditional "waterfall" approach to project management is a sequential, activity-based paradigm -- i.e., do requirements activities, then design activities, then coding activities, then unit testing, integration activities, and finally system acceptance. I've identified 10 classic principles of this approach:

1. Freeze requirements before design.

2. Avoid coding before detailed design review.

3. Use a higher-order programming language.

4. Complete unit testing before integration.

5. Maintain detailed traceability among all artifacts.

6. Document and maintain the design.

7. Assess quality with an independent team.

8. Inspect everything.

9. Plan everything early with high fidelity.

10. Control source code baselines rigorously.

To continue the building analogy, these principles align with traditional construction approaches: Don't draw any blueprints until every detail has been discussed and approved with the owner; don't build anything until the owner has fully approved the blueprints; pneumatic nail guns beat hammers for the bulk of construction; buy standard plumbing fixtures, toilets, appliances, and electrical fittings, and they will install and integrate without issue; require city building inspectors at each major milestone to ensure compatibility with established codes. Nothing wrong with that -- unless you're building software.

The Results-Based Approach

By contrast, iterative development techniques, industry best practices, and economic motivations drive software-development companies to take a more results-based approach: Develop the business case and vision, and prototype the solution; elaborate these into a basic architecture; create usable, iterative releases; and then finalize into field-ready code. Here are my top 10 principles of results-based software development:

● Use an architecture-first approach. An early focus on the architecture results in a solid foundation for the 20% of the stuff (requirements, components, user interactions, project risks, etc.) that drives the overall success of the project. Get the architecturally important things well understood and stable before worrying about the complete breadth and depth of all the artifacts, and you'll see far less scrap and rework over the course of the project.

● Confront risks early. Resolving the critical issues first results in predictable production, with fewer surprises to impact your budget and schedule.

● Use component-based development methods. The complexity of any software effort is mostly a function of the number of human-generated elements. Reduce this complexity by using existing architectural frameworks (like .Net and Java 2 Enterprise Edition) and their rich libraries of pre-built components.

● Establish a change-management environment. Along with the advantages of iterative development comes the need to carefully manage changes to artifacts over the course of the project.

● Use tools that support "round-trip engineering." Full automation from design to compiled code, as well as reverse engineering from code to design, is referred to as "round-trip engineering." This enables teams to spend more time designing and writing the software and less time on overhead tasks.

● Design software with rigorous, model-based notation. An engineering notation for design enables complexity control, objective assessment, and automated analyses.

● Use automated metrics for quality and progress assessment. Progress and quality indicators are derived directly from the evolving artifacts for more meaningful insight into trends and correlation with requirements.

● Maintain working versions of the software. Because the integration of software components occurs early, then continues throughout the project, it's important to maintain demonstrable working code. Intermediate results are objective and tangible, so integration issues emerge quickly and are more readily solved.

● Plan releases with evolving levels of detail. Each project increment and demonstration should reflect current levels of detail for both requirements and architecture, since these things evolve in balance. What's more, the level of precision in the software evolves along with the level of understanding of the project team.

● Establish a scalable, configurable process. No single process is suitable for all software-development projects. To be pragmatic, a process framework needs to be configurable for a broad spectrum of applications. This ensures economy of scale and best return on investment.

The Mark of Success

The most discriminating characteristic of a successful software-development process is a well-defined separation between "research-and-development" activities and "production" activities. When software projects do not succeed, the primary reason is usually a failure to crisply define and execute these two stages, with proper balance and appropriate emphasis. This is true for both traditional (waterfall) and iterative processes. Most unsuccessful projects exhibit one of these characteristics:

● An overemphasis on the R&D aspects. Teams perform too many analyses or paper studies, or procrastinate on the construction of engineering baselines.

● An overemphasis on the production aspects through rush-to-judgment designs, premature work by overeager coders, and continuous hacking.

By contrast, successful projects tend to have very well-defined project milestones in which there's a noticeable transition from a research attitude to a production attitude. Earlier phases focus on achieving functionality; later phases revolve around achieving a product that can be shipped to a customer.

Software management is hard work. Technical breakthroughs, process breakthroughs, and new tools will make it easier, but management discipline will continue to be the crux of software-project success. New technological advances will be accompanied by new opportunities for software applications, new complexities, new modes of automation, and new customers with different priorities. Accommodating these changes will perturb many of our ingrained software-management values and priorities. However, striking a balance among requirements, designs, and plans will remain the underlying objective of future software-management endeavors, just as it is today.


The Seven Habits of Effective Iterative Development

by Eric Lopes Cardozo
Director, Empulsys

In his book The Seven Habits of Highly Effective People,1 Stephen Covey describes seven related principles that can make a person more effective in his or her personal relations, interpersonal relations, and management efforts. At the time I read this book, I was mentoring the implementation of the Rational Unified Process® (RUP®) in an organization. After reading three or four chapters, it occurred to me that the "seven habits" could serve as a framework to highlight the essentials of managing iterative software projects.

Based on my experience, many project managers have problems with planning and controlling iterative projects, determining iteration content, selling the iterative approach to customers and sponsors, and establishing effective communication, both among project team members and with others in the project environment.

Because of its collaborative, problem-solving character,2 iterative software development is similar to a multidisciplinary project or parallel development, which places a high demand on communication between the project team and project stakeholders, and among team members themselves. Covey places great emphasis on communication, so his seven habits seem to fit iterative development perfectly.

In this article, we will first have a look at five common causes of iterative project failure that managers must avoid. Then, based on Covey's work, we will define seven habits project managers can use to help their iterative projects succeed.


Five Top Reasons for Iterative Project Failure

An iterative approach to software development goes a long way toward addressing some of the fundamental reasons why projects fail, but it is still up to the project manager to implement the approach effectively. We will discuss the five top reasons for failure below.

1. Lack of Effective Communication

This is one of the most common sources of failure on development projects. Considerable research has been done on the need for effective communication, and the conclusions are clearly captured by the following definition:

"Communication is a prerequisite for effective coordination, as it is the vehicle through which personnel from multiple functional areas share information critical to the successful implementation of projects."3

A well-performed project start-up can lay a foundation for effective communication. The goal of a project start-up is to establish a credible basis for the project that is acceptable to all stakeholders. However, project managers too often fail to seek answers to some fundamental questions:

● What business objectives/benefits is the project intended to achieve?

● What level of quality is expected for the end product(s)?

● What risks did the customer consider in deciding to set up this project?

Getting the answers to these questions requires effective communication with the customer; building a project around these answers requires effective communication among project team members.

2. Lack of Buy-In for an Iterative Development Approach

A common failure factor on iterative projects is lack of buy-in for the process from both the customer and the project team.

Customer buy-in on the development process, particularly end-user involvement, is crucial. Insufficient end-user involvement is the number one reason why projects fail. Unfortunately, some customers resist getting involved. "Why should I spend time thinking about what I want?" they complain. "I'm paying you a lot -- you go figure it out!" In fact, in such cases, the customer is delegating the definition of the problem rather than the task of solving it. That is why it is important to get their buy-in on the development process during project start-up.

The same is true for project team buy-in. In contrast to a waterfall approach, iterative software development is a team effort. Activities such as requirements capture, analysis and design, and testing are performed in parallel rather than sequentially, and this requires more coordination. Wars over methodology, languages, and tools will spell disaster for any iterative project.

3. Faulty Methodology

Of course, a major source of failure is the methodology itself. Everyone knows the big complaint about the waterfall approach: It doesn't provide a complete picture of project progress until near the end of the project, at the system integration phase, so you may not be able to detect serious problems until that point. This phenomenon, known as "late design breakage," can result in unnecessary rework and a lot of stress on people and budget.

But the waterfall methodology is not the only one that fails. Many projects using Rapid Application Development (RAD) or the Dynamic Systems Development Method (DSDM) have failed because they followed an incremental approach instead of an iterative one. The incremental approach looks like a sequence of small, but perfect, waterfall-like projects. It tends to deliver systems with a brittle or "stovepipe" software architecture, because the initial architecture is based on only a part of the problem space. The result is that the system's architecture never stabilizes, and the project team spends many hours doing unnecessary rework to the architecture instead of adding functionality with each iteration.

4. Poor Requirements Gathering and Documentation

Another failure factor lies in the way requirements are treated and documented. In some projects, requirements engineers seem to focus mainly on the customer, disregarding other stakeholders. They do not recognize that requirements must be unambiguous for a whole range of stakeholders, including developers and testers. In addition, traditional software development approaches typically document requirements as functions, which are not directly related to business value. This makes them hard to prioritize, thus compromising scope management and project steering. Another problem with functions is that they make it hard to develop test cases. Test designers are forced to guess at scenarios that will cover the real usage of the system from the user's point of view, when in fact these should have been captured by the requirements.

5. Lack of Unified Tool Use

Effective iterative development depends on the effective use of software development tools. However, unless project managers choose the right tools and train their teams to use them correctly, the tools may absorb more time and attention than the processes they are supposed to support. Many teams waste precious time and resources trying to use unsuitable tools that are forced upon them by organizational policies. Also, because iterative development depends on team effort, projects can get derailed if members are not using the same toolset or are not using it according to project guidelines. When a project member becomes ill and no one knows where his or her code resides on the network, or a new team member comes on board who is unfamiliar with the practices of other developers on the team, the project is at risk. Every developer brings his or her experience and habits to the project, and project managers should leverage this experience to improve project performance -- but only if it fits the project.

Seven Habits That Foster Success

Now that we have seen what often causes projects to fail, let's look at some habits project managers can adopt to help their projects succeed.

1. Be Proactive

In the context of a software development project, Covey's "Be Proactive" habit means "Actively attack risks and seek out the project's stakeholders if they don't come to you."

The main benefit of working iteratively is risk mitigation. Every project is confronted with risks that may include design flaws, ambiguous requirements, lack of experience, new technology, inadequate user involvement, an inadequate development environment, and so forth. It is almost impossible to recognize them all on day one. At project start-up and near project closeout, most risks are associated with the project's environment: Do we really have users? Are the users trained and ready for deployment? When the project team starts to analyze requirements, however, risks are increasingly associated with technical issues. The risk associated with a "big-bang" integration strategy is that the design may not reflect the (real) requirements. Late discovery of design defects can then cause budget and schedule over-runs, which may eventually kill the project.4

Develop the "Delivery Habit"

In iterative development, risks are mitigated by developing parts in sequence, so that the system evolves instead of being constructed and integrated all at once, near the end of a project. From a cultural perspective, this means that the project team must adopt a "delivery habit" that ensures progress (i.e., a demonstration-based approach). With each iteration, they will mitigate more risks and deliver more function to the user. Progress will be measured by the results of system tests and user feedback that indicate which requirements are now specified, designed, incorporated, tested, or deployed. If delivery stops, there will be no visible progress, and the project will be in danger. One important practice in iterative development is the delivery of an executable system at the end of each iteration (except for the first or second one). The motto is: Make sure you're making progress, even imperfect progress.

The "delivery habit" reflects a proactive attitude. When they develop iteratively, project members work cooperatively on more artifacts within a tighter timeframe; they do not sit around waiting for someone else to finish an activity before they get started. They realize that they can get a

Page 13: index.jspCopyright Rational Software 2002€¦ · Editor's Notes: So if you're trying to convince others in your organization that the "activity-Summer Rules! Cover art: Stained glass,

lot done, even if the other person is only halfway through.

Take Action in the Face of Risk

Adopting a proactive attitude also means that one must act when confronted with risks.5 For example:

● When the scope of the iteration turns out to be too large to deliver on schedule, reduce the scope and deliver a smaller solution.

● If the customer is not able to visit the development site, try to go to them.

● If the customer cannot seem to express his or her ideas about the user interface, then develop a prototype they can react to.

● When a project depends on services to be developed by another project, create stubs in case the other project does not deliver on schedule (a minimal stub sketch in Java follows this list).
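
To make that last tactic concrete, here is a minimal sketch in Java of stubbing an external dependency. The ExchangeRateService interface, its method, and the canned value are all hypothetical, invented for illustration; substitute the real contract the other project has agreed to deliver.

    // ExchangeRateStubDemo.java -- compiles as a single file.
    // This interface stands in for whatever contract the other
    // project has agreed to deliver; the names are hypothetical.
    interface ExchangeRateService {
        double rateFor(String fromCurrency, String toCurrency);
    }

    // A stub the team controls: it returns canned data so development
    // and integration can proceed even if the real service is late.
    class StubExchangeRateService implements ExchangeRateService {
        public double rateFor(String fromCurrency, String toCurrency) {
            return 1.0; // deliberately unrealistic, so stubbed output is obvious
        }
    }

    public class ExchangeRateStubDemo {
        public static void main(String[] args) {
            ExchangeRateService rates = new StubExchangeRateService();
            System.out.println("100 EUR = "
                + (100 * rates.rateFor("EUR", "USD")) + " USD");
        }
    }

When the real service ships, only the binding to StubExchangeRateService changes; code written against the interface is untouched.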

Build a Partnership with Stakeholders

A proactive attitude also helps in developing a partnership with project stakeholders and establishing effective communications. Demonstrate that you understand the business needs the project is designed to meet and that the project team is committed to building the right solution for the right problem. Also explain the development process to stakeholders and show how it supports building the right solution.

To be proactive, it is essential to determine who the stakeholders are: Who influences and who makes decisions? This activity is known as stakeholder analysis. Once you know who these people are, you can think of ways to get them on board (or deliberately not bring them in) and keep it that way. In general, there are four types of stakeholders:

● End users. People who will use or buy the product.

● System users. People who will keep the product "alive" during its post-deployment lifecycle (i.e., maintenance and support personnel, suppliers).

● Temporary users. People who develop the product or are involved with the product roll-out (e.g., the project team, engineers, marketing people, trainers).

● Other stakeholders. People who are not directly involved in the project but have the power to either make or break it (e.g., regulatory bodies, other projects, environmental movements).

2. Begin with the End in Mind

For our purposes, we can interpret Covey's "Begin with the End in Mind" as "Structure project iterations according to project lifecycle goals."

One of the pitfalls of iterative development is that you can get yourself into a never-ending sequence of iterations. Obviously, when this happens, your project will be late and over budget. In most cases the project team begins enthusiastically but then gets lost somewhere near the "end." One way to avoid this is to begin with the end in mind. Iterative projects should be structured around goals. In the Rational Unified Process (RUP), these goals take the form of phases and milestones.

Lifecycle Objectives Milestone

The Lifecycle Objectives milestone marks the end of the Inception phase. The most important goal of this phase is to achieve concurrence among stakeholders on the project's scope and vision. According to the RUP, the Vision artifact, along with the Use-Case Model, is used to document the project's scope. The Vision artifact captures the problem statement, business needs (problem space), and traceable high-level requirements or features. The Vision describes the "end" of the project, and project closeout is achieved when the Vision is successfully delivered to the stakeholders. After the Vision is stabilized, all subsequent iterations must deliver a part of that Vision.

Lifecycle Architecture Milestone

The Lifecycle Architecture milestone marks the end of the Elaboration phase. In this phase, the most important goal is to develop an architectural baseline that demonstrates that the software architecture is capable of delivering the Vision. Technical risks are mitigated by developing and testing the architecturally significant part of the Use-Case Model. Iteration goals could also incorporate development of non-architectural parts of the Use-Case Model in order to mitigate non-technical risks such as scope risks. Also, a system must satisfy non-functional requirements such as performance and availability. In order to provide the requested performance (the end), the project team must first determine whether the software architecture is capable of satisfying the requested performance requirements. Furthermore, the team must determine whether the requested performance requirements are both realistic and capable of satisfying business needs.

Initial Operational Capability Milestone

The Initial Operational Capability milestone marks the end of the Construction phase. The goal of this phase is to produce a first product release (beta), which will be used for product acceptance in the next and last phase. Once the software architecture is stabilized (Elaboration), the project team is capable of developing the remaining (big) part of the Vision safely (and with more people) in two to four iterations. In this stage, "speed and quality" is the motto.

Product Release Milestone

The Product Release milestone marks the end of the Transition phase. In this phase, the goals are to deploy the system in the user environment and to achieve project closeout.

Keep in mind that each phase mentioned in these milestone descriptions actually consists of one or more iterations. Phase goals are used to structure the sequence of iterations. Iteration goals are refinements of these high-level phase goals, and the instruction to "begin with the end in mind" applies to each phase and each iteration. For example, we noted that the goal of the Elaboration phase is to provide an architectural baseline capable of delivering the project's Vision. From an activity-driven point of view, this involves analysis, design, coding, integration, and test activities. From a goal-driven point of view, it involves addressing technical and scope risks. Each iteration should address one or more risks by developing related parts of the Use-Case Model.

From a project management point of view, there is a part of the project plan where being proactive intersects with beginning with the end in mind. Early in the project (e.g., during Inception) the project manager, together with the stakeholders, should develop a Product Acceptance Plan. This plan helps to clearly define the end of the project and will also confront stakeholders with conditions for project closeout early in the lifecycle. A Product Acceptance Plan describes who accepts what and when, against which criteria. In most cases it identifies new stakeholders (e.g., maintenance and support) and new requirements. It also provides important information about the contents of the Transition phase (e.g., how many iterations it will entail). Often the Transition phase is limited to one iteration because of cost constraints. However, the number of Transition iterations and the length of each iteration are defined by the stakeholders who are responsible for product acceptance.

3. Put First Things First

We can translate Covey's "Put First Things First" as "Organize and perform activities according to their priority."

Within the context of iterative development, this habit is where "Be Proactive" (be risk-driven) meets "Begin with the End in Mind" (be goal-driven). We have already discussed the importance of a demonstration-based approach, which means that intermediate artifacts (user interface prototypes, product baselines) are assessed in an executable environment rather than by reviewing reports. Transitioning intermediate artifacts into an executable demonstration of relevant scenarios stimulates earlier convergence on integration, a more tangible understanding of design trade-offs, and earlier elimination of architectural defects.6 The demonstration-based approach ensures that progress is measured against tangible results instead of placing blind faith in projections on paper.

What is to be demonstrated in which iteration depends very much on the project's state with respect to its lifecycle and on the risks inherent in the project. These risks vary over the project lifecycle as defined by the RUP.

Risks During Inception

During Inception, most risks fall within the project environment, which includes the organization, funding, people, tight schedules, and expected benefits of new technology. During this phase, developing a business case is a crucial step. A business case promotes understanding of the business problem and buy-in from the project sponsors. It also helps to explain the project's business drivers to other stakeholders. Furthermore, a business case is the most powerful weapon against feature creep. Developing one must be a joint effort between the project team -- which is responsible for determining development costs and schedule -- and the customer, who is responsible for defining the benefits.

Risks During Elaboration

During Elaboration, the focus shifts to technology. The goal of the Elaboration phase is to achieve an architectural baseline by mitigating (mostly) technical risks. Mitigation is achieved by developing and testing critical parts of the system (i.e., through an architecture-first approach). Therefore, be very careful with the functional load of Elaboration iterations. Developing and testing the tricky parts of the system is hard enough. Remember that each iteration must result in executable software. Although tight schedules are necessary to keep the project team going, unrealistic ones can wear out project teams. In addition, the customer will not be pleased if the goals of iterations are not met.

Which part of the system must be developed during Elaboration? The trick is to match technical risks with use cases or scenarios that:

● Represent crucial business functions (without these there is not much of a system).

● Interact with external systems.

● Incorporate critical non-functional requirements such as availability and performance.

● Represent parts of the system whose scope isn't clear (so that delivering a small solution might clear the fog).7

Another risk in the Elaboration phase is that the user will see an early version of the system for the first time, and typically only about 20 percent of it will work. To maintain buy-in from the user group, it's critical to manage expectations (i.e., to be proactive).

Risks During Construction

In the Construction phase, the focus is on completing the system, and, as I noted earlier, the motto is "speed and quality." This can only be achieved by adding functionality to a stable architecture and having an effective build/release process. Both should be established in the Elaboration phase. So what can go wrong? Well, first, when customers see the system in Elaboration, they might come up with new requirements. In addition, because more developers are brought in during Construction, communications can begin to break down within the project team. In this phase, whatever happens, the team must stay focused on adding use cases to the system. To overcome feature creep, they must negotiate any new requirements (by sticking to the business case) and ensure that an effective change process is in place.

At the end of the Construction phase, the project team hands over a preliminary product release to the customer for acceptance testing. A proactive attitude is necessary to ensure that the customer is ready for this. The team should be thinking through all concerns about hardware, software, manuals, test data, maintenance and support personnel, and end-users (or representatives). As mentioned earlier, developing a product acceptance plan during Inception is a start to mitigating any risks associated with the hand-over. The team should also start thinking about deployment during the Elaboration phase, when the Deployment View of the software architecture (mapping of software to hardware) stabilizes (an instance of "Begin with the end in mind").

Risks During Transition

In the Transition phase, the focus is on acceptance testing and repairing defects. The project team must facilitate the acceptance process. Therefore, the release and change process must be quick and reliable. The deployment of new releases or fixes must be rapid to keep acceptance testing going. When repairing defects, the team must ensure that the integrity of the system is not jeopardized and that old defects do not return. "First things first" applies to the order in which defects should be addressed. The chances of introducing new defects when fixing known ones are high, and in this phase, the focus must be on repairing must-fix defects -- those that impact primary use cases (i.e., that crash the system or make it unreliable or inaccurate). Think twice before attempting to add missing functionality, and don't even think about adding "nice-to-have" features and functions. Whatever you do, ensure you have a change control board in place that includes the customer. This board should classify all defects and authorize repair only of those they identify as must-fix.

4. Think Win/Win

For a software project, we can interpret Covey's "Think Win/Win" to mean: "Satisfy a maximum number of business needs with a minimum of effort."

In iterative development, win/win relates to scope management. The return on investment (ROI) for software projects can be improved by reducing the product size. This can mean reducing the amount of code required for the product to fulfill the needs of the business, and/or reducing the number of system features. In most systems, 20 percent of the features solve 80 percent of the business needs. Ever see a remote control for a stereo set with thirty buttons? Chances are that only six of those will ever get used.

One way to manage scope effectively is to follow the RUP's recommendation to adopt a use-case-driven approach to iterative development. This means that use cases are the basis for the entire development process.8 Traditional software development approaches use functions rather than use cases. The trouble with functions is that, unlike use cases, they are not directly related to business value. Use cases more or less tie system functions together. They describe what the system must do in the language of the customer, so they are understandable by a wide range of stakeholders. They also form the basis for estimating, planning, defining test cases and test procedures, and traceability.

Delivering software that is valuable to the business is as essential to the iterative approach as risk management, and with use cases, projects are driven by business value rather than by time or money. Customers do not like spending more money than was budgeted, nor do they like late delivery or poor-quality software. To help overcome these classic problems, the RUP offers the discipline of

● Prioritizing business needs and their use cases.

● Ensuring that use cases are traceable to business needs.

In combination with an architecture-first approach, this leads to a traceable development order based on risk and customer priority. Use cases should be developed in the following order:

1. High Risk, High Priority.

2. Low Risk, High Priority.

3. Low Risk, Low Priority.

4. High Risk, Low Priority.

Murray Cantor describes this way of ordering use cases in his book Object-Oriented Project Management with UML.9 High Risk, High Priority use cases should typically be developed in the Elaboration phase. High Risk, Low Priority use cases should be eliminated altogether because they add unnecessary risk to the project at a time when ROI is low. A better option is to be proactive and negotiate for less risky alternatives.
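
As a concrete illustration of this ordering, here is a minimal sketch in Java that ranks a backlog into the four buckets above and sorts it into development order. It is not from Cantor's book; the UseCase type and the sample entries are invented for illustration.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    public class UseCaseOrderingDemo {

        // A use case tagged with the two attributes the ordering needs.
        record UseCase(String name, boolean highRisk, boolean highPriority) {}

        // Rank per the order above: 1) high risk/high priority,
        // 2) low risk/high priority, 3) low risk/low priority,
        // 4) high risk/low priority (candidates for elimination).
        static int rank(UseCase u) {
            if (u.highRisk() && u.highPriority())   return 1;
            if (!u.highRisk() && u.highPriority())  return 2;
            if (!u.highRisk() && !u.highPriority()) return 3;
            return 4;
        }

        public static void main(String[] args) {
            List<UseCase> backlog = new ArrayList<>(List.of(
                new UseCase("Print monthly statement", false, false),
                new UseCase("Process payment", true, true),
                new UseCase("Search catalog", false, true),
                new UseCase("Export legacy archive", true, false)));
            backlog.sort(Comparator.comparingInt(UseCaseOrderingDemo::rank));
            backlog.forEach(u ->
                System.out.println(rank(u) + ". " + u.name()));
        }
    }

Bucket 4 items print last as a reminder that, per Cantor, they are better negotiated away than built.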

Thinking win/win gives the customer as well as the project team an edge when time-to-market is important. Ordering the use cases by technical risk and business priority and steering the project with use cases makes it very easy to reduce the scope. Removing low priority use cases should lead to fewer iterations during Construction and enable the team to tighten up the project schedule.

5. Seek First to Understand, Then to Be Understood

Covey urges us to "Seek First to Understand, Then to Be Understood." In a software development context, we can interpret this as "Understand the business objectives before thinking of solutions to realize them."

The initial iterations of a project must focus on what the system should do and why it should be built in the first place, instead of how to build it. "Seek first to understand, then to be understood" has to do with understanding the real needs of the business and stakeholders before plunging into prototyping, detailing requirements, and writing code. As one ancient saying goes, "Once we know the problem, we can solve it many ways."

With the possible exception of very technical projects, at least some effort must be put into business modeling, especially when building information systems. And when the customer is creating a new business process to implement along with the system, you are likely to fail unless you do business modeling, because chances are high that you will create a system that does not support the process. You may also increase development time and expense because end users and support and maintenance departments are not prepared to use the product. Again, it's critical to begin with the end in mind.

Another instance in which the Seek first... habit can serve you well is in dealing with organizational technology decisions. Most of the time business people have some idea of how the system should be built, but in some cases they have misguided notions about technology and insist upon using a certain brand of database or middleware for the wrong reasons. If this becomes a big source of friction, then you must be capable of explaining the negative impact of their decisions on the business. Therefore, it is necessary to understand the business needs first. This will also help you establish a strong partnership with the customer.

Finally, this habit will also help you set the focus for early iterations (put first things first). During the initial iterations (during Inception), the team should take great care not to get bogged down with developing user-interface prototypes and haggling with customers over details such as button sizes and colors. Establishing a user-interface style guide can be very important, but there will be enough time to think about these details after your team has a better grasp of the problem.

6. Synergize

Covey's "Synergize" habit translates to "The value of a well-functioning software team is greater than the sum of the value of all individuals

working on the team."

Synergize relates to the team aspect of software development. From one perspective, all team activities should contribute to writing code. In truth, however, writing code is usually the easiest part of a software project. As Rational's Grady Booch likes to say: "Software development is a team sport," and like all team sports, each person must have a clear role with clear responsibilities. Furthermore, team members must understand one another's strengths and weaknesses and be willing to compensate for them.

There are two basic prerequisites for a high-functioning team:

1. A well-chosen group of talented, highly skilled team players.

2. An environment within which these team players can operate synergistically.

Some people just can't get along with each other because their characters are either too similar or too different. According to team-building guru Dr. R. Meredith Belbin,10 each team member plays a double role. The first is a clear, functional role: A Java developer knows how to write Java code, for example. The second, or team role, characterizes a team member's behavior in relation to other members and to facilitating team progress. Belbin recognizes nine team roles, as shown in Table 1.

Table 1: Belbin's Nine Team Roles


Innovator11 Strength: People who fall into this role are creative, imaginative, unorthodox, and capable of solving difficult problems.

Weakness: These innovators tend to ignore incidentals and can be too preoccupied to communicate effectively.

Coordinator Strength: A coordinator is mature, confident, and a good chairperson. Also clarifies goals, promotes decision-making, and delegates well.

Weakness: Coordinators can be manipulative. They also tend to off-load personal work.

Monitor Strength: A monitor is sober, strategic, and discerning. Sees all options and judges accurately.

Weakness: Monitors lack drive and ability to inspire others.

Implementer Strength: An implementer is disciplined, reliable, conservative, efficient, and capable of turning ideas into practical actions.

Weakness: Implementers are somewhat inflexible and slow to respond to new possibilities.

Completer Strength: A completer is painstaking, conscientious, and anxious. Excels at searching out errors and omissions and delivers on time.

Weakness: Completers tend to worry unduly and are reluctant to delegate.

Resource Investigator Strength: A resource investigator is extroverted, enthusiastic, and communicative. Good at exploring opportunities and developing contacts.

Weakness: Resource investigators are overly optimistic and tend to lose interest once initial enthusiasm has passed.

Shaper Strength: A shaper is challenging and dynamic, thrives on pressure, and has the drive and courage to overcome obstacles.

Weakness: Shapers are prone to provocation and tend to offend people.

Teamworker Strength: A teamworker is co-operative, mild, perceptive, and diplomatic. Also listens, builds, and averts friction.

Weakness: Teamworkers are indecisive in crunch situations.

Specialist Strength: A specialist is single-minded, self-starting, and dedicated. Also provides knowledge and skills in rare supply.

Weakness: Specialists contribute only on a narrow front and may dwell on technicalities.

Ideal teams comprise all of these team roles. Belbin's descriptions can help project managers assemble a compatible team, and help team members identify each person's role. Understanding someone's habitual way of communicating or interacting often makes his or her annoying habits easier to tolerate. Of course, if resources are scarce, it can be hard to incorporate each of these roles into the project team, and team members may have to either play multiple roles or adopt a role that is not their first preference. Belbin calls this shift in preferred behavior a "team-role sacrifice."

Once the project manager has assembled a team, the next thing is to establish an environment or structure within which the team can operate. Effective teamwork depends on sufficient communication. Communication is often insufficient when teams are structured according to their specialties or phase deliverables. Ever heard people complaining about bureaucracy and a "throw it over the wall" mentality? On the other hand, lack of structure can lead to too much communication, because everyone has to talk to everyone else to figure out how to get the job done.

One way to ensure the right amount of communication is through an Integrated Product Team (IPT) approach. This involves forming teams that are responsible for the major activities of the software development process; most project members belong to more than one team. This approach optimizes communication because of the overlap in team membership. It also brings down the walls because the IPTs consist of stakeholders who have an interest in software process activities rather than in the ownership of deliverables. For example, the project manager and the software architect together define the content of an iteration (i.e., they think win/win). The project manager must ensure that the iteration adds business value, and the software architect must ensure that technical risks are mitigated and the integrity of the architecture is guaranteed. When the content of the iteration has been defined, the project manager, software architect, implementer, and tester together review the iteration plan and define the number of builds within it, as well as the content and delivery date for each build. They then document the results in an Integration Build plan. In this approach, planning is a team effort that involves members of several teams.
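
The article does not prescribe a format for the Integration Build plan, but a minimal Java sketch of the information it captures -- each build's number, content, and delivery date -- might look like this. All names and sample entries are invented for illustration.

    import java.time.LocalDate;
    import java.util.List;

    public class BuildPlanDemo {

        // One planned build within an iteration: what goes in and when.
        record Build(int number, LocalDate deliveryDate, List<String> content) {}

        public static void main(String[] args) {
            List<Build> iterationPlan = List.of(
                new Build(1, LocalDate.of(2002, 7, 5),
                          List.of("Order entry use case", "Persistence stub")),
                new Build(2, LocalDate.of(2002, 7, 19),
                          List.of("Payment use case", "Real persistence layer")));
            for (Build b : iterationPlan) {
                System.out.println("Build " + b.number() + " due "
                    + b.deliveryDate() + ": " + b.content());
            }
        }
    }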

7. Sharpen the Saw

We can translate Covey's "Sharpen the Saw" into "Learn and improve during the project."

According to Stephen Covey, "Improvement is the principle and the process on which one gains strength to grow in an upward spiral." Software projects offer many opportunities for team members to learn and improve their skills, but this habit also applies to activities and artifacts. Improvements can be made in plans, requirements, designs, and code as well as in the use of tools, processes, guidelines, standards, and so on.

Iterations also provide an excellent mechanism for learning and improvement that goes beyond the process of making mistakes and then resolving them. The key to "sharpening the saw" lies in artifact evolution. Traditional software development emphasizes the completion of artifacts. In other words, it is activity-driven. Iterative software development, in contrast, is goal-driven (as in begin with the end in mind). For example, a project does not need a fully specified requirements document to enter the Elaboration phase. In general, only about 20 percent of the requirements set is required; the remaining part can be developed during the Elaboration phase and perhaps the first iteration of the Construction phase. In other words, the requirements set evolves.

The concept of artifact evolution applies to all disciplines of a software project. During the Inception phase, one or more candidate software architectures are selected, and those architectures are refined during the Elaboration phase by designing, coding, integrating, and testing the critical part of the requirements set. The same concept applies to project management. Plans and estimates grow more and more detailed as the project progresses. The state of these artifacts should be in line with phase and iteration goals.

Even the project team evolves. It grows during the period from Inception through Construction and shrinks at the beginning of the Transition phase.

Another important mechanism for learning and improvement is the use of metrics. The number of elements to measure depends on the size, complexity, and risks of a project. However, metrics must serve a clear purpose. Tracking the state of a use case reflects progress over time. Measuring the number of subsystem defects over time can help detect specific risk areas in the software architecture. Measuring the baselined number of source lines of code can be used to evaluate estimates. Measuring the number of open and closed defects over time can be used to assess whether the software architecture is stabilizing and to ensure that the team isn't on overload.
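
As a small illustration of the last of these metrics, the following Java sketch tracks the open-defect backlog over time; the weekly counts are invented sample data. A backlog that keeps falling suggests a stabilizing architecture, while a rising one signals overload.

    public class DefectTrendDemo {
        public static void main(String[] args) {
            int[] opened = { 12, 15, 9, 6, 4, 2 }; // new defects logged per week
            int[] closed = { 5, 10, 11, 8, 6, 4 }; // defects resolved per week
            int backlog = 0;
            for (int week = 0; week < opened.length; week++) {
                backlog += opened[week] - closed[week];
                System.out.println("Week " + (week + 1)
                    + ": open backlog = " + backlog);
            }
        }
    }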

Above all, keep in mind that the project is finished only after the customer formally accepts ownership of the software product. Until that point, all team members should strive to "sharpen the saw": be proactive; begin each phase and iteration with the end in mind; put first things first; think win/win; seek first to understand, then to be understood; and synergize with other team members.

Acknowledgments

I would like to thank Mark Tolsma for convincing me to read The Seven Habits of Highly Effective People. I would also like to thank Jakob Moll and Alex Krouwel for their support. Finally, I would like to thank my associate DJ de Villiers for his support and encouragement.

References

Stephen R. Covey, The Seven Habits of Highly Effective People. Simon & Schuster, 1989.

Walker Royce, Software Project Management: A Unified Framework. Addison-Wesley, 1998.

Alistair Cockburn, Surviving Object-Oriented Projects. Addison-Wesley, 1998.

Murray Cantor, Object-Oriented Project Management with UML. Wiley Computer Publishing, 1998.

For more information on team roles, see Dr. Meredith R. Belbin's Web site: http://www.belbin.com

M.B. Pinto and J.K. Pinto, "Project Team Communication and Cross-Functional Cooperation in New Program Development." Journal of Product Innovation Management, No. 7, pp. 200-212, 1990.

Philippe Kruchten, The Rational Unified Process: An Introduction, 2nd Edition. Addison-Wesley, 2000.

Notes

1 Stephen Covey, The Seven Habits of Highly Effective People. Simon & Schuster, 1989.

2 Murray Cantor, Object-Oriented Project Management with UML. Wiley Computer Publishing, 1998.

3 M.B. Pinto and J.K. Pinto, "Project Team Communication and Cross-Functional Cooperation in New Program Development." Journal of Product Innovation Management, No. 7, pp. 200-212, 1990.

4 It is worth mentioning that project cancellation in itself is not necessarily a bad thing. Some projects are just not worth the effort because they will never lead to the expected benefits (business case). These projects should be cancelled as early as possible.

5 Alistair Cockburn describes a number of risk-reduction strategy patterns related to this subject in his book Surviving Object-Oriented Projects, Addison-Wesley, 1998:

● Prototype

● Clear the Fog

● Gold Rush

● Someone Always Makes Progress

● Team per Task

● Sacrifice one Person

6 Walker Royce, Software Project Management: A Unified Framework. Addison-Wesley, 1998.

7 Alistair Cockburn, op. cit.

8 Philippe Kruchten, The Rational Unified Process: An Introduction, 2nd Edition. Addison-Wesley, 2000.

9 Murray Cantor, Object-Oriented Project Management with UML. Wiley Computer Publishing, 1998.

10 See http://www.belbin.com

11 Editor's note: In Belbin's list, this role is actually called "plant" -- so named because some teams in Belbin's original experiments included individuals who were considered innovative brainchildren, and the organizers wanted to see what effect "implanting" these individuals on teams might have on team behavior and performance. For the purposes of this article, we've renamed it along the lines of the other role names.


Transitioning from Requirements to Design

by Paul Reed, President, Jackson-Reed, Inc.

One of the biggest challenges facing software projects is determining when and how to begin the transition from specifying requirements to working on a system design. Questions arise such as: When am I done with use cases? Do I detail all the use cases first and then jump into design? Which requirements should I tackle first in design?

Most team members seem to realize the benefits of making a correct decision as well as the downside of making an incorrect one. Leap too early, and the project runs the risk of formulating a design based on sketchy requirements. Leap too late, and you take on more risk by postponing high-risk architecture decisions until later in the project lifecycle. In addition, when preparing the artifacts to transition from requirements to design, you must take care not to lose the context in which they were originally captured (e.g., Who made the decisions? Were there unique relationships between certain requirements?).

This article lays the foundation for a smooth transition from requirements specification to design by focusing on those items that present the most trouble. First I will discuss just how far a team should go with use cases before beginning design. Second, I will review a framework for identifying architecturally significant requirements. And finally, I will explain how to utilize use-case realizations as the pivotal artifact to bridge the transition from requirements specification to design.

When Exposing Risk Is a "Good Thing"

We know, based on history, that the traditional waterfall approach to software development does not mitigate the greatest risks early in the project's lifecycle. This stems from the fact that high-risk decisions, such as the architectural direction of the project, are not tested for validity until coding begins. This can lead to disastrous results; many waterfall projects are either trimmed back or cancelled altogether after significant investments have been made.

The Rational Unified Process® (RUP®) stands in contrast to this approach because of two key characteristics: 1) RUP is risk-based; 2) RUP is architecture-centric. RUP challenges the project team to identify, early on, the requirements that will expose the greatest potential risks, and to put plans in place to mitigate those risks. These so-called architecturally significant requirements are typically uncovered during the Inception phase of RUP and are the target of further analysis and design in the first iteration of the Elaboration phase.

Exposing this risk is a good thing for the project, because it demands that the team formulate mitigation plans as soon as possible. The challenge in moving forward is answering the all-important question: How far does the team need to go with use cases before taking that initial leap into design?

What's Up with Actors and Use Cases?

Projects have struggled for years to conceive the best approach to elicit, document, and trace functional requirements. Approaches range from mini-specifications -- a text narration of the requirements in paragraph form -- to diagrams that show each requirement's flow of control.

Rational's Ivar Jacobson pioneered the notion of use cases while working on complex telecommunications projects at Ericsson. The use-case approach focuses first on identifying the Actors or users of the application. Actors typically take one of four forms: humans, other systems, hardware devices, or timers. They have a goal that needs to be satisfied by the system, and they rely on the use case to accomplish that.

Use cases represent major categories of functionality as perceived by the application's user; it is the use case that ultimately provides measurable value to the Actor. Use-case diagrams are complemented by a use-case template for each use case. Figure 1 shows a use case with Actors.

A Mile Wide and Five Inches Deep

Before the project can identify architecturally significant use cases (see sidebar), it must flesh out the requirements enough to make intelligent decisions about where the greatest risk lies. The key to how much detail to provide lies in the mantra, "a mile wide and five inches deep."

In the Inception phase of a project, my standard script when dealing with project teams is to have them perform the following tasks:

1. Identify the project stakeholders and key product features for the system under discussion (RUP's Vision artifact).

2. Brainstorm an event list by first identifying the actors (humans, other systems, hardware devices, and timers) and the events with which they stimulate the system.

3. Brainstorm which use cases satisfy those events.

4. For each use case, complete a use-case template that, at a minimum, identifies the key pathways. These pathways may be broken into categories: the most common (or "happy") path, variations on the happy path, and exception paths.

5. Time permitting, detail the steps of the happy path for each use case, or at a minimum the happy path for the project's central use cases.

6. Identify those requirements that represent the greatest amount of risk.

These tasks can be done in an iterative fashion, cycling through them -- for larger projects, as packages of related use cases -- if necessary.

Figure 1: A Diagram for the Place Order Use Case

Want more information on Actors and Use Cases? Try these sources:

Web

http://www.rational.com/uml/index.jsp
http://www.usecases.org/
http://www.therationaledge.com/admin/archives.jsp (search for Use Cases)

Books

Alistair Cockburn, Writing Effective Use Cases. Addison-Wesley, 2001.

Daryl Kulak and Eamonn Guiney, Use Cases: Requirements in Context. Addison-Wesley, 2000.

At this point in the project, the team has traveled a mile wide across the entire breadth of the project but only five inches into its depths. You cannot say at this point that all the functional requirements are known. However, you do know enough about the project to identify those requirements that will expose the greatest amount of architectural risk.

Architecturally Significant Requirements

When working with project teams, I focus on coverage areas that are architecturally significant (i.e., that carry a high risk; see Table 1).

Table 1: Architecturally Significant Coverage Areas

1. New technology or frameworks not presently employed in other projects within the organization
Risk factors: Identifies areas in which the organization is not yet adept at using a new technology.

2. New or revised business processes that the customer is introducing via new technology
Risk factors: Exposes expectations the customer may have about workflows that have not been tested on a new technology. For example, if a branch bank is switching its account data entry and retrieval processes to a thin-client, Web-based application, then that application might not be nearly as flexible as an earlier, client-centric solution when it comes to workflow and user interface possibilities.

3. Time-based processing
Risk factors: There are very few robust, off-the-shelf products that facilitate time-based event processing. Many applications require either a customized solution or a combination of pre-purchased components and custom code.

4. Batch processing
Risk factors: Don't believe the myth that batch-oriented tasks have disappeared with newer generation technology choices. This is just plain wrong; batch processing can be a delicate part of the architecture. Sharing business logic between batch and online processing can be a tricky proposition.

5. Multi-panel interactions requiring state management throughout the process (workflow management)
Risk factors: This is targeted primarily at Web-based solutions, given the stateless nature of the Web. The way in which software vendors manage state when dealing with multi-page interactions ranges in complexity and affects availability.

6. Security, logging, and archiving demands
Risk factors: Given most customers' desire for single sign-on (SSO) and integration of security and archiving capability with pre-purchased solutions, this area alone can consume tremendous amounts of effort to integrate into the overall solution.

7. Persistence management
Risk factors: If the solution will be based on object-oriented techniques, then care must be given to the impedance mismatch when mapping objects to relational stores.

8. Quality of service
Risk factors: Performance is always a consideration. Although actual volume testing may not be feasible in the early stages of Elaboration, simulation tools can be applied to provide a meaningful approximation of potential throughput.

Keep in mind that the use cases must be assessed for their ability to exercise one or more of the coverage areas noted in Table 1. All too often, a project's first misstep is to identify requirements that are easy to implement but have a relatively small impact on lessening architectural risk.

The use-case pathways, rather than entire use cases, should be the focus of the search for architecturally significant requirements, and these pathways should be drawn from the entire pool of use cases. In the first iteration of the Elaboration phase, I try to avoid an excessive number of CRUD (Create, Read, Update, Delete) paths. Although these may be good for tweaking the coverage areas, doing too many of them can be a way of avoiding other, more important areas.

The Most Important Iteration You Will Ever Undertake

Without a doubt, the first iteration in the Elaboration phase is the most important iteration the project will ever undertake (see Figure 2 and the RUP sidebar). The inputs for this iteration's deliverable, the Architectural Prototype, are the architecturally significant requirements selected at the end of the Inception phase.


Figure 2: Phases and Iterations of the RUP

It is also during the first iteration in Elaboration that the team explores architecturally significant use-case pathways in depth. All the business rules and dependencies are completely fleshed out. This not only adds more functional knowledge to the project but also further tests the soundness of the architecture. Once the Elaboration phase is complete, there should be absolutely no more hard architecture decisions to make. Remaining use-case pathways should be picked up and detailed in subsequent iterations during Elaboration, Construction, and even Transition.

The Rational Unified Process (RUP)

As Figure 2 depicts, RUP consists of four phases and nine disciplines, or workflows. An iteration is a vertical slice through the nine workflows, drawing from the available activities in each one. Each iteration usually incorporates elements from all nine disciplines, and the resulting set of tasks comprises what is known as the iteration plan. A given phase may have multiple iterations; the number is usually a function of the technical complexity and size of the project. Sample iteration plans for each of the four phases are provided with the RUP product.

The end of each phase is marked by the completion of a milestone. The milestones for the four phases of RUP are: Lifecycle Objective, Lifecycle Architecture, Initial Operational Capability, and Product Release.

Unlike waterfall-based process models, the RUP's iterative model acknowledges that activities from the broad spectrum of workflows (i.e., requirements, design) actually take place concurrently throughout the lifecycle of the project.

Get more information on RUP from these sources:

Web:

http://www.rational.com/products/rup/index.jsp
http://www.therationaledge.com/admin/archives.jsp (search for RUP)

Books:

Philippe Kruchten, The Rational Unified Process: An Introduction, 2nd Edition. Addison-Wesley, 2000.

Use-Case Realizations: The Bridge between Requirements and Design

A key part of smoothing the transition from requirements to design is having an artifact that directly ties the two workflows together. The project team uses the use-case realization as its transitional artifact. This Design activity takes place initially in the first iteration of the Elaboration phase.

A use-case realization is a Design View of a use case. The use case itself identifies only "what" the user wants; the use-case realization is the transitional element that specifies "how" the use case will be implemented. The use-case realization is actually a composite artifact, containing other design models that represent the actual realization. The models most commonly contained within the realization are UML sequence and/or collaboration diagrams.

The project can graphically represent the use-case realization using Rational Rose® (see Figure 3). A stereotyped dependency relationship is created between the use case from the Use-Case View and a use case created in the Logical View that is stereotyped as a realization.


Figure 3: Use-Case Realization in Rational Rose

The tie back to the actual use case isn't just for looks. Remember that a use case in the Use-Case View contains pathways. These pathways are articulated as a sequence of steps with a myriad of business rules that enforce the structure of the pathway, all stated in terms of the user. It is these steps that we now model in Design as a collection of objects messaging with one another to satisfy the goal of the actor(s). This messaging is modeled with one of two interaction diagrams: Sequence or Collaboration. (Note: I find in practice that projects prefer either the Sequence diagram or the Collaboration diagram and don't usually commingle the two.)

The Tree View in Figure 4 shows the explosion of Sequence diagrams within each use-case realization. An interaction diagram is created for every key pathway through the use cases. Initially, during the first iteration of Elaboration, this means only those pathways that are selected for their ability to flush out high-risk project areas.



Figure 4: Sequence Diagrams Within the Use-Case Realizations (As Seen in the Tree View)

Use-Case Realizations: When One Isn't Enough

On recent projects I have been involved with, there was a need to have more than one use-case realization per use case within the Use-Case View. This usually indicates that the use case requires a multi-technology solution. For example, for the Place Order use case, you might have an additional non-functional requirement that orders can be placed either online through the Web or via a wireless interface for PDAs and the like. In this case, it would be appropriate to have one use-case realization for the Web implementation and one for the wireless implementation (see Figure 5).



Figure 5: Two Realizations for the Same Use Case

The reason for having two realizations is that, technically, the solutions are quite different. In the case of the Web solution, assuming we are using Java, there will be Servlet classes plus any number of support classes that won't be used at all for the wireless solution. At the same time, however, we can leverage the messaging that is common to the two technical solutions. Let's face it: when all is said and done, the messaging that goes on between the entity classes (e.g., Order, Customer) to actually get the order into the system, and to enforce the business rules that govern that process, is identical for the two approaches.

In this case, the interaction diagrams in the Place Order-Web realization and the interaction diagrams in the Place Order-Wireless realization would both point to a common interaction diagram that deals with the common entity class messaging.
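To make the shared-messaging point concrete, here is a minimal Java sketch; the class and method names are invented for illustration and are not drawn from the article's model. Each realization keeps its own technology-specific boundary class, while both delegate to the same entity-level collaboration that the common interaction diagram would capture.

    // Entity classes shared by both realizations: the messaging between
    // Order and Customer is the same regardless of the front end.
    class Customer {
        boolean isCreditApproved() { return true; } // business-rule stub
    }

    class Order {
        private final Customer customer;
        Order(Customer customer) { this.customer = customer; }
        // The entity-level collaboration both sequence diagrams would show.
        boolean place() {
            if (!customer.isCreditApproved()) return false;
            // ... persist the order, adjust inventory, and so on.
            return true;
        }
    }

    // Place Order-Web realization: a Servlet-style boundary class.
    class PlaceOrderServlet {
        void doPost(Customer c) { new Order(c).place(); /* render HTML */ }
    }

    // Place Order-Wireless realization: a different boundary, same entities.
    class PlaceOrderWapHandler {
        void handle(Customer c) { new Order(c).place(); /* render WML */ }
    }

    public class RealizationDemo {
        public static void main(String[] args) {
            new PlaceOrderServlet().doPost(new Customer());
            new PlaceOrderWapHandler().handle(new Customer());
        }
    }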

It is the use-case realization that provides the context when transitioning from requirements to design. I had a seminar attendee once ask if it wasn't dangerous to have realizations tied directly to how the requirements were structured. My response was that there could be nothing more natural. Just as object-oriented thought brought us the concept of real-world entities that represent both structure and behavior, so do use-case realizations represent the natural, real-world transition to the Design View of the use case.

Birds of a Feather

In past lifecycle approaches, there was an aura of mystery surrounding the transition from requirements gathering activities to design activities. Requirements were usually described in paragraphs within a textual format, and the visual design documents were completely untraceable to those requirements. This process of recording static requirements and then translating them into design artifacts promoted loss of context and often obscured the user's original intentions for the system.

In contrast, an iterative process like RUP allows you to select, early on, requirements that expose the high risks any software development project entails. The use-case realization provides a real-world Design View of those cursory requirements captured during Inception. Targeting those use-case pathways that expose the project to the greatest risk during the first iteration of Elaboration greatly enhances the team's chances for successful future iterations.

Don't settle for artifacts that don't transition well across project phases. Every artifact should be traceable, both forward and backward through the life of the project. Transitioning from requirements to design shouldn't be a mysterious or a miraculous event. It should be a natural process that is easy to explain and easily understood by all project team members.


References

Philippe Kruchten, The Rational Unified Process: An Introduction, 2nd Edition. Addison-Wesley, 2000.

Paul R. Reed, Jr., "Object-Oriented Analysis and Design using the UML." Seminar Material, 1993-2002.

Paul R. Reed, Jr., Developing Applications with Java and UML. Addison-Wesley, 2002.

Notes

1 Ivar Jacobson, Object-Oriented Software Engineering, Addison-Wesley, 1992.


End-to-End Testing of IT Architecture and Applications

by Jeffrey Bocarsly, Division Manager, Automated Functional Testing, RTTS

Jonathan Harris, Division Manager, Scalability Testing, RTTS

Bill Hayduk, Director, Professional Services, RTTS

Not long ago, industry-standard testing practices, which had evolved in response to quality issues facing the client/server architecture, centered either on the client for front-end functional tests, or on the server for back-end scalability and performance tests. This "division of labor" derived largely from the fact that the classic client/server architecture, a two-tier structure, is relatively simple compared to current multi-tier and distributed environments. In the standard client/server arrangement, issues are either on the client side or on the database side.

Today, however, the typical computing environment is a complex, heterogeneous mix of legacy, homegrown, third party, and standardized components and code (see Figure 1). Since the advent of the Web, architectures have increased in complexity, often with a content tier placed between one or more back-end databases and the user-oriented presentation tier. The content tier might deliver content from multiple services that are brought together in the presentation tier, and might also contain business logic that previously would have been found in the front end of a client/server system.

This increase in complexity, overlaid with the problems of integrating legacy and cutting-edge development, can make the characterization, analysis, and localization of software and system issues (including functional and scalability/performance problems) major challenges in the development and delivery of software systems. Further, with the acceptance of SOAP/XML as a standard data transmission format, issues of XML data content have become increasingly crucial on both .NET and J2EE development platforms. Simply put, the complexity of current architectures and computing environments has rendered the original client/server-oriented testing scheme obsolete.

Figure 1: A Typical Multi-Tier Architecture Today

An Overall Quality Strategy

Clearly, new, aggressive quality enhancement strategies are necessary for successful software development and deployment. The most potent strategy combines testing the environment's individual components with testing the environment as a whole. In this strategy, testing at both the component and system levels must include functional tests to validate data integrity as well as scalability/performance tests to ensure acceptable response times under various system loads.

For assessment of performance and scalability, these parallel modes of analysis aid in determining the strengths and weaknesses of the architecture, and in pinpointing which components must be involved in resolving the performance- and scalability-related issues of the architecture. The analogous functional testing strategy, full data integrity validation, is becoming increasingly critical, because data may now derive from diverse sources. By assessing data integrity -- including any functional transformations of data that occur during processing -- both within components and across component boundaries, testers can localize each potential defect, making the tasks of system integration and defect isolation part of the standard development process.

End-to-End Architecture Testing refers to the concept of testing at all points of access in a computing environment, and combines functionality and performance testing at the component and system levels (see Figure 2).

In some ways, End-to-End Architecture Testing is essentially a "gray box" approach to testing -- a combination of the strengths of white box and black box testing. In white box testing, a tester has access to, and knowledge of, the underlying system components. Although white box testing can provide very detailed and valuable results, it falls short in detecting many integration and system performance issues. In contrast, black box testing assumes little or no knowledge of the internal workings of the system, but instead focuses on the end-user experience -- ensuring the user is getting the right results in a timely manner. Black box tests cannot typically pinpoint the cause of problems. Nor can they ensure that any particular piece of code has been executed, runs efficiently, and does not contain memory leaks or other similar problems. By merging white and black box testing techniques, End-To-End Architecture Testing eliminates the weaknesses inherent in each, while capitalizing on their respective advantages.

Figure 2: End-to-End Architecture Testing Includes Functional and Performance Testing at All Points of Access

For performance and scalability testing, points of access include hardware, operating systems, applications, databases, and the network. For functional testing, points of access include the front-end client, middle tier, content sources, and back-end databases. With this in mind, the term architecture refers to how all of the components in the environment interact with one another, and how users interact with them. Individual components' strengths and weaknesses are defined by the specific architecture that organizes them. It is the uncertainty of how an architecture will respond to the demands placed on it that creates the need for End-to-End Architecture Testing.

To implement End-to-End Architecture Testing effectively, RTTS has developed a successful, risk-based test automation methodology. The Test Automation Process (TAP) is based upon years of successful test implementations, utilizing best-of-breed automated test tools. It is an iterative approach to testing with five distinct phases:

● Project assessment

● Test plan creation or improvement

● Test case development

● Test automation, execution, and tracking

● Test results evaluation


The individual functional and performance tests required for End-to-End Architecture Testing are conducted in the "test automation, execution, and tracking" phase. As shown in Figure 3, this phase is repeated, and the associated tests are refined, with each iteration of the process.

Figure 3: The RTTS Test Automation Process (TAP) for End-To-End Architecture Testing

Component Level Testing

Clearly, individual components must be developed before they can be assembled into a functioning system. Because components are available for testing early on, End-to-End Architecture Testing in TAP begins with component testing. In component testing, appropriate tests are conducted against individual components as the environment is being built. Both functional and scalability testing at the component level are exceptionally valuable as diagnostics to help identify weak links before and during the assembly of the overall environment.

Functional Tests at the Component Level

Functional testing applied at this level validates the transactions that each component performs. This includes any data transformations the component or system is required to perform, as well as validations of business logic that apply to any transaction handled by the component. As application functionality is developed, infrastructure testing verifies and quantifies the flow of data through the environment's infrastructure, and as such, simultaneously tests functionality and performance. Data integrity must be verified as data begins to be passed between system components. For example, XML testing validates XML data content on a transaction-by-transaction basis and, where desirable, validates formal XML structure (metadata structure). For component tests, an automated and extensible testing tool like Rational® Robot can greatly reduce the amount of time and effort needed to drive GUI tests as well as functional tests on GUI-less components. Rational Robot's scripting language offers support for calling into external COM (Component Object Model) DLLs, making it an ideal tool for testing GUI-less objects either directly or via a COM test harness. Also, the new Web and Java testing functionality in Rational Suite® TestStudio® and Rational® TeamTest provides additional capabilities for testing J2EE architectures and writing or recording test scripts in Java.

Scalability and Performance Tests at the Component Level

In parallel to these functional tests, scalability testing at this level exercises each component within the environment to determine its transaction (or volume) limitations. Once enough application functionality exists to create business related transactions, transaction characterization testing is used to determine the footprint of business transactions -- including how much network bandwidth is consumed, as well as the CPU and memory utilization on back-end systems. Resource testing expands on this concept with multi-user tests conducted to determine the total resource usage of applications and subsystems or modules. Finally, configuration testing can identify what changes in hardware, operating system, software, network, database, or other configurations are needed to achieve optimal performance. As with functional testing, effective automated tools such as those found in Rational Suite TestStudio and Rational TeamTest can greatly simplify scalability and performance testing. In this case, the ability to create, schedule, and drive multi-user tests and monitor resource utilization for resource, transaction characterization, and configuration testing is essential to efficiently and successfully complete these tests.

System Level Testing

When the system has been fully assembled, testing of the environment as a whole can begin. Again, End-to-End Architecture Testing requires verification of both the functionality and performance/scalability of the entire environment.

Functional Tests at the System Level

One of the first issues that must be considered is that of integration. Integration testing addresses the broad issue of whether the system is integrated from a data perspective. That is, are the hardware and software components that should be talking with one another communicating properly? If so, is the data being transmitted between them correct? If possible, data may be accessed and verified at intermediary stages of transmission between system components. These points may occur, for example, when data is written to temporary database tables, or when data is accessible in message queues prior to being processed by target components. Access to data at these component boundaries can provide an important additional dimension to data integrity validation and characterization of data issues. For cases in which data corruption can be isolated between two data transmission points, the defective component is localized between those points.
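As a rough illustration of boundary checking, the following Java sketch compares records at two transmission points via JDBC; the connection URL, table names, and columns are all hypothetical. Any row the query returns localizes a defect to the component between those two points.

    import java.sql.*;

    public class BoundaryCheck {
        public static void main(String[] args) throws SQLException {
            // Hypothetical connection details.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:your-database-url", "user", "password")) {
                // Find orders that never arrived downstream, or arrived altered.
                String sql = "SELECT s.order_id FROM staging_orders s " +
                             "LEFT JOIN processed_orders p ON s.order_id = p.order_id " +
                             "WHERE p.order_id IS NULL OR s.total <> p.total";
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery(sql)) {
                    while (rs.next()) {
                        System.out.println("Mismatch at order " + rs.getString(1));
                    }
                }
            }
        }
    }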

Scalability and Performance Tests at the System Level

For every question that can be asked about how an environment scales or performs, a test can be created to answer that question.


● How many users can access the system simultaneously before it can no longer maintain acceptable response times?

● Will my high-availability architecture work as designed?

● What will happen if we add a new application or update the one I currently use?

● How should the environment be configured to support the number of users we expect at launch? In six months? In one year?

● We only have partial functionality -- is the design sound?

Answers to these questions are obtained through a wide range of testing techniques, including: scalability/load testing, performance testing, configuration testing, concurrency testing, stress and volume testing, reliability testing, and failover testing, among others.

In the area of system capacity, whole environment testing typically begins with scalability/load testing. This kind of test places an increasing load on the target environment, until either performance requirements such as maximum response times are exceeded or a particular resource is saturated. These tests are designed to determine the upper limits of transaction and user volume and are usually combined with other test types to optimize performance. Related to scalability/load testing, performance testing is used to determine whether the environment meets requirements at set loads and mixes of transactions by testing specific business scenarios (see Figure 4).
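The following is a minimal sketch of the load-ramp idea in Java; the target URL and the 2-second response-time requirement are hypothetical, and a real load tool adds coordinated ramp-up, think times, transaction mixes, and resource monitoring.

    import java.net.URI;
    import java.net.http.*;
    import java.util.*;
    import java.util.concurrent.*;

    public class LoadRamp {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest req = HttpRequest.newBuilder(
                    URI.create("http://www.example.com/")).build(); // hypothetical target
            for (int users = 10; users <= 100; users += 10) {
                ExecutorService pool = Executors.newFixedThreadPool(users);
                List<Future<Long>> times = new ArrayList<>();
                for (int i = 0; i < users; i++) {
                    times.add(pool.submit(() -> {
                        long start = System.nanoTime();
                        client.send(req, HttpResponse.BodyHandlers.discarding());
                        return (System.nanoTime() - start) / 1_000_000; // milliseconds
                    }));
                }
                long worst = 0;
                for (Future<Long> t : times) worst = Math.max(worst, t.get());
                pool.shutdown();
                System.out.printf("%d virtual users: worst response %d ms%n", users, worst);
                if (worst > 2000) break; // requirement exceeded; upper limit found
            }
        }
    }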

Paralleling configuration testing at the component level, configuration testing at the system level provides tradeoff information on specific hardware or software settings as well as metrics and other information needed to effectively allocate resources.

Figure 4: Performance Testing: Will the System Perform as Required with a Specific User Load?

Concurrency testing (Figure 5) profiles the effects of multiple users simultaneously accessing the same application code, module, or database records. It identifies and measures the levels of locking and deadlocking, and the use of single-threaded code and locking semaphores, in a system. Technically, concurrency testing could be categorized as a kind of functional testing. However, it is often grouped with scalability/load tests because it requires multiple users or virtual users to drive the system.
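The following minimal Java sketch shows the kind of defect a concurrency test hunts for: two simulated users update the same two records in opposite order -- the classic recipe for deadlock -- and a lock timeout turns a silent hang into a reportable event. The record names and timeout are invented for illustration.

    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.locks.ReentrantLock;

    public class ConcurrencyProbe {
        static final ReentrantLock recordA = new ReentrantLock();
        static final ReentrantLock recordB = new ReentrantLock();

        static Runnable user(ReentrantLock first, ReentrantLock second, String name) {
            return () -> {
                first.lock();
                try {
                    // tryLock with a timeout turns a deadlock into a report.
                    if (second.tryLock(1, TimeUnit.SECONDS)) {
                        try { /* update both records */ } finally { second.unlock(); }
                    } else {
                        System.out.println(name + ": possible deadlock detected");
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    first.unlock();
                }
            };
        }

        public static void main(String[] args) {
            new Thread(user(recordA, recordB, "user-1")).start();
            new Thread(user(recordB, recordA, "user-2")).start();
        }
    }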

Figure 5: Concurrency Testing Identifies Deadlocking and Other Concurrent Access Problems

Stress testing (Figure 6) exercises the target system or environment at the point of saturation (depletion of a resource such as CPU, memory, etc.) to determine if the behavior changes and possibly becomes detrimental to the system, application, or data. Volume testing is related to stress testing and scalability/load testing, and is conducted to determine the volume of transactions that a complete system can process. Stress and volume testing are performed to test the resiliency of an environment to withstand burst or sustained high-volume activity, respectively -- without failing due to defects such as memory leaks or queue overruns.

Figure 6: Stress Testing Determines the Effect of High-Volume Usage

Once the environment or application is working and optimized for performance, a long-duration reliability test exercises an environment at sustained 75 percent to 90 percent utilization to discover any issues associated with running the environment for extended periods of time.

In environments that employ redundancy and load balancing, failover testing (Figure 7) analyzes the theoretical failover procedure, and tests and measures the overall failover process and its effects on the end user. Essentially, failover testing answers the question, "Will users be able to continue accessing and processing with minimal interruption if a given component fails?"

Figure 7: Failover Testing: What Will Happen If Component X Fails?

And finally, if an environment employs third-party software or accepts feeds from outside sources or hosted vendors, then SLA (Service Level Agreement) testing can be conducted to ensure end-user response times and inbound and outbound data streams are within contract specifications. A typical agreement guarantees a specified volume of activity over a predetermined time period with a specified maximum response time.

Once external data or software sources are in place, monitoring of these sources on an ongoing basis is advisable, so that corrective action can be taken quickly if problems develop, minimizing the effect on end users.

As with scalability testing at the component level, Rational Suite TestStudio, Rational TeamTest, and similar tools offer advanced, multi-user testing capabilities and can be used to effectively drive many if not all of the above scalability and performance tests.

A Real-World Example

Perhaps the best way of illustrating End-to-End Architecture Testing is through an example. Consider the following scenario:

An eRetailer has built a public Web bookstore that uses four content-providing Web services in its content tier. One of the services provides the catalog, including book titles, blurbs, and authors. A second service provides the current inventory for all products. The third service is the price server, which provides pricing, shipping, and tax information, based on the purchaser's locale, and executes transactions. The final service holds user profiles and purchasing history.

The presentation tier transforms user requests entered through the UI into XML and submits requests to the proper content server. Response XMLs are then transformed to HTML by the presentation layer and served to the user's session. Each of the content-tier services updates the others as needed (see Figure 8). For example, the price service must update the profile service as the user's purchasing history changes.

Figure 8: Points of Access for a Typical eRetailer Application

An End-to-End Architecture Testing strategy for the system outlined above starts by applying both functional and scalability/load testing to each of the content-tier systems separately. XML requests are submitted to each of the content services, and the response XML documents are captured and evaluated for either data content or response time. As each of these services is integrated into the system, both functional and scalability/load testing are performed on the assembled system, by submitting transactions to the Web server. Transactions can be validated through the entire site infrastructure, both for functional testing (using SQL queries) and scalability/load testing.
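As a rough sketch of such a component test in Java (the endpoint, request body, and expected element are hypothetical), each content service can be exercised directly by submitting an XML request, timing the response, and checking its data content:

    import java.net.URI;
    import java.net.http.*;

    public class ContentServiceCheck {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            // Hypothetical catalog-service endpoint and XML request body.
            HttpRequest req = HttpRequest.newBuilder(
                    URI.create("http://catalog.example.com/query"))
                .header("Content-Type", "text/xml")
                .POST(HttpRequest.BodyPublishers.ofString(
                    "<catalogQuery><title>UML</title></catalogQuery>"))
                .build();
            long start = System.nanoTime();
            HttpResponse<String> resp =
                client.send(req, HttpResponse.BodyHandlers.ofString());
            long ms = (System.nanoTime() - start) / 1_000_000;
            // Functional check: is the expected data present in the response XML?
            boolean dataOk = resp.body().contains("<title>");
            System.out.printf("Response in %d ms; data %s%n", ms, dataOk ? "OK" : "BAD");
        }
    }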

As the system is developed, individual test harnesses, applied at all points of access, can be used to tune each service to function within the whole assembly, both in terms of data-content (i.e., functionality) and performance (i.e., scalability). When issues are discovered in the front end (i.e., via the browser), the test suites and test harnesses that were used to test individual components facilitate rapid pinpointing of the defect's location.

The Benefits of Network Modeling

When it is included as part of the design process -- either prior to the acquisition of hardware or during the initial test phase -- modeling different network architectures can amplify the benefits of End-to-End Architecture Testing by helping make network designs more efficient and less error-prone. Prior to deployment, network infrastructure modeling can help pinpoint performance bottlenecks and errors in routing tables and configurations. In addition, application transaction characterizations obtained during testing can be input into the model to identify and isolate application "chattiness"1 and potential issues within the infrastructure.

Conclusion


End-to-End Architecture Testing exercises and analyzes computing environments from a broad-based quality perspective. The scalability and functionality of every component is tested individually and collectively during development as well as in prerelease quality evaluation. This provides both diagnostic information that enhances development efficiency and a high degree of quality assurance upon release. End-to-End Architecture Testing provides a comprehensive, reliable solution for managing the complexity of today's architectures and distributed computing environments.

Of course, given the broad range of tests and analysis required, an end-to-end testing effort requires considerable expertise and experience to organize, manage, and implement. But from a business perspective, organizations that embrace an end-to-end testing philosophy will be able to guarantee higher levels of application and system performance and reliability. And ultimately, these organizations will reap the benefits of increased quality: better customer relationships, lower operating costs, and greater revenue growth.

For the past six years, RTTS, a Rational Partner, has developed and perfected its End-to-End Architecture Testing approach, working with hundreds of clients to ensure application functionality, reliability, scalability, and network performance. Visit the RTTS Web site at www.rttsweb.com.

Notes

1 An application is "chatty" if it requires numerous queries and responses to complete a transaction between components.


Book Review

Software Craftsmanship by Pete McBreen

Addison-Wesley, 2001

ISBN: 0-201-73386-2
Cover Price: US$29.99
192 Pages

My grandfather was a master craftsman, a shoemaker in a small town in Italy. I remember my father and other relatives talking with pride about how he made boots for the last king of Italy. This was quite an impressive accomplishment, and one that served him well when he came to the United States in the early part of the last century. This book by Pete McBreen brought my grandfather to mind. It supports the view that, just as craftsmen like Grandfather gave each pair of shoes and boots individual attention and took great pride in each of their creations, software developers should take the same approach to the craft of building great software.

McBreen admits that one of his purposes in writing the book was to be provocative. He succeeds marvelously. I found myself cheering some of his ideas while vigorously opposing others. In all cases, I had to think about how the values that I hold influence the way I approach software development.

The book begins by questioning the value of software engineering -- that is, software engineering as a defined, disciplined approach to the creation and maintenance of software. McBreen presents the IEEE definition of software engineering as the standard,1 but then quickly gets into what we commonly call systems engineering: systems with hardware and software components.

He traces all of the bad things about software projects -- poor requirements management, high costs and defect rates, and late deliveries, to name just a few -- back to software engineering as the root cause. I was waiting for him to blame global warming on software engineering. Then, when he started singing the praises of eXtreme Programming, I was about ready to close the book. I thought, "Here's another book that's hopping on the XP/agile bandwagon, and it's going to say all the same things I've read in dozens of other places." Lucky for me that I kept on reading.


Developers as Craftsmen

The chapter in which McBreen states that there is no "one size fits all" approach to software development caught my attention. I agree. Then he talks about how most projects are done by small teams. This is something Grady Booch said at the 2001 Rational User Conference, and again, I agree. So by now I was thinking, "Maybe this really is more than just another book about the latest fad."

Drawing an analogy to the men (and women) who belonged to craft guilds in past ages, McBreen makes his case for considering software developers as craftsmen. He identifies three types:

● Master craftsmen who have wide and deep knowledge of one or more domains. There are few of these, but they are the true experts who can craft an application that exactly meets the customer's expectations and is maintainable and defect free.

● Journeymen who are experienced developers with sound knowledge of many types of software development. They go from project to project, learning more about their trade, working hand-in-hand with the master craftsmen to produce excellent applications. They may or may not become master craftsmen themselves, but they are highly valued for their knowledge and work.

● Apprentices who are learning the trade. Typically they will apprentice under a specific master craftsman for a significant period of time, honing their skills before journeying out to take on more responsible roles on projects.

One key possession of these craftsmen, at every level, is their portfolio of applications. McBreen suggests that you evaluate potential craftsmen for your project by looking closely at their portfolio and the projects they have done. For example, if a person worked on the Chrysler C3 payroll project, you know that he or she is an experienced XP developer. When we think of the super-productive engineers we know, they almost always have a portfolio of projects they can point to. Regardless of the person's education and formal training, you would not hire someone to be the lead developer on a project to implement an online warehousing system if he or she had never worked on a similar project. Instead, you would try to find someone who had as much experience as possible with this type of system.

Cultural Barriers

McBreen eloquently makes his case for taking a craftsman-like approach to developing software, suggesting that businesses could lower their costs over time by approaching projects in this way. In the last part of the book, he provides guidelines on how to find, audition, and reward good craftsmen. Of course, there are cultural barriers that might prevent companies from implementing some of his suggestions. Few companies would consider any software developer to be as valuable as the CEO, although it is well known that some super programmers are worth ten times more than typical developers. That they do not get rewarded proportionately is one reason many great developers in corporate organizations leave the technical track and try to move up to management. They may be unhappy, but they will get greater financial rewards. Unfortunately, in many cases they also fail at management.

Often the views McBreen expresses are quite extreme. But they always made me consider whether applying his ideas would lead to a better way of developing software. In many cases, my conclusion was "Yes," but I know that a shift toward the software craftsmanship approach could occur only if it were accompanied by a significant shift in corporate attitudes.

This book is a good read for anyone who is a software developer or who has to manage software developers. If you are a developer, it will make you think about what you really want to do and what is possible. If you are a manager, it will make you think about what can be done and maybe give you some ideas about how to get there. For everyone who reads it, be prepared to question many of your current views and argue with McBreen. He has certainly succeeded in being provocative. Grandfather would like this book.

-Gary Pollice, RUP Curmudgeon

1 The IEEE definition quoted is: Software engineering is the application of a systematic, disciplined, quantifiable approach to development, operation, and maintenance of software; that is, the application of engineering to software. Taken from the IEEE Standard Computer Dictionary.


Book Review

Dreamweaver in a Nutshell: A Desktop Quick Reference by Heather Williamson and Bruce Epstein

O'Reilly and Associates, 2002

ISBN: 0-596-00239-4
Cover Price: US$29.95
414 Pages

Heather Williamson and Bruce Epstein have written a very thorough and comprehensive book that delves into intricate detail on the inner workings of Dreamweaver 4.0 and Dreamweaver UltraDev 4.0. Although this book deals mainly with the former, all of the material also applies to UltraDev, because both versions share the same core features.

Who Should Use This Book?

This book can be beneficial to people in a whole slew of Web-related roles. Although it is geared mainly toward the conscientious Web designer, it has value for people working in any role related to site development, including those involved with JavaScript, ColdFusion, ASP, and so on. Williamson and Epstein bring the reader step by step through the process of setting up and maintaining a site with Dreamweaver. They accomplish this with a combination of logical instructions supplemented by screenshots to paint a clear picture of the procedures. For Web programmers, the book introduces Dreamweaver's code library and validation features. It also shows you tips, tricks, and time-savers you can use to design or keep your content up to date, either page by page or on a site-wide scale.

The book does assume a minimum knowledge of HTML and does not cover HTML basics. Instead, it focuses on ways to use Dreamweaver's user interface as an alternative to hand coding.

Although it is classified as a Desktop Quick Reference, I think it helps to read the book at least once from beginning to end. However, there are mechanisms that help users flip to any section quickly to find what they are looking for. The Table of Contents, of course, is the primary starting place; it outlines subsections as well as main sections, providing page numbers for each one. In addition, each page has an embedded tab on the right side that tells users what section they are in. Using these tabs, I could flip to another section quickly without having to rely on the Table of Contents. And finally, the Index is another reference point to find exactly what you need.

This book held a lot of surprises for me. Before I read it, I considered myself an advanced user of Dreamweaver; once I started reading, however, I soon realized that I had rarely tapped into the full power of Dreamweaver. The book revealed many of the program's features to me for the first time. For instance, it discusses the O'Reilly Reference Panel in detail and credits it as a central source of information on HTML, JavaScript, and style sheet questions -- something I did not even know the program had. The section on site management uncovered a lot of things I didn't know about Dreamweaver's site administration functions. And as if this were not enough, there are little tips and tricks embedded throughout the pages in boxes distinguished by an owl icon. These catch the reader's eye and key in on valuable tips that Dreamweaver pros use. If you do go through this book page by page, you'll find useful information on nearly every one.

Organization

The book is organized into five main sections that are right in sync with a Web publisher's steps to creating a site: Interface, Site Management, Coding, Extensions, and Customizability. This order is not only very logical, but it also helps designers and developers find what they need. By figuring out what stage they are at in their site development, they can figure out where in the book to find relevant information.

● Dreamweaver Interface. This is a natural place to start; the book goes over the assortment of windows and menu options that are available to the user, describing the various functions of each one and how to access them. Categories include:

❍ Tables and layouts

❍ Frames and layers

● Managing a Website. This section goes over Dreamweaver's powerful tools for group collaboration on site development, including:

❍ Template management

❍ Code/assets management

❍ Site management

■ Broken link checks

■ Orphan files

● Behaviors/JavaScript/Timelines. This section covers the common libraries that Dreamweaver provides to make your Web pages fully dynamic and interactive.

● Third-Party Plug-Ins and Extensions to Dreamweaver.

❍ Outlines some key third-party vendors that create plugins for Dreamweaver.


❍ Provides a general overview of each of these product functions and how they can be incorporated to work in tandem with Dreamweaver.

● Preferences, Customizability, and Keyboard Shortcuts. This section describes ways to customize the Dreamweaver work environment and shortcuts that can speed up the work.

All in all, I think any user in the Web publishing/development field would get much use out of this book. The information is structured in a logical manner that pretty much mirrors how someone would start using the application: Interface/GUI, Page Design, Page & Site Management, Coding, Customizability, and Plug-Ins. Most chapters begin with an overview, and then drill down more deeply into the process of how to do a task. The combination of step-by-step instructions, graphs, screenshots, and tables helps make the material easy to understand. For me, the information on Dreamweaver site management and page design features has been invaluable in supporting my day-to-day work on Rational Web sites. This book is a must-have for any conscientious Web publisher.

-- Elvis Tam, Web Designer, Rational Software

Dear Dr. Use Case: Is the Clock an Actor?

by Anthony Crain, Software Engineering Specialist

Rational Software

Dear Dr. Use Case,

I have a question about using a nonhuman trigger as the Primary Actor of a use case. For example, I have seen System Clock used as a Primary Actor. Is this really the best way to model a use case that is triggered by time?

Signed, Wondering about Nonhuman Actors

Dear Wondering,

This is an excellent question, and one that I have been asked many times before. When I first started modeling use cases, I tried showing the clock as a primary actor to initiate use cases that are triggered automatically. Since then, though, my experience has shown me a better way.

Let me illustrate by borrowing an example use case called Run Payroll from our Rational University course Object Oriented Analysis and Design with the UML (OOAD). Here's the brief description given in the courseware: "This use case describes how the payroll will be run automatically every Friday and the last working day of the month." Originally, this use case had System Clock as its Primary Actor (a.k.a. the actor that wanted to run the payroll). See Figure 1.

Figure 1: System Clock as the Primary Actor in the Run Payroll Use Case

The first thing that bothered me about this was that System Clock was the Primary Actor. In truth, the system clock is a design decision, not really an actor. Fundamentally, it is just one way of capturing time, so it's more accurate to think of the Primary Actor as Time rather than System Clock (see Figure 2).

Figure 2: Time as the Primary Actor in the Run Payroll Use Case

Primary Actors Have System-Related Goals

It's also important to understand that a Primary Actor has goals relating to the system we are trying to build. If a student has a goal of "register for courses" or "view report card," for example, and our system supports these goals, then we can transform these goals into use cases, with the student as the Primary Actor (see Figure 3).

Figure 3: Transforming Goals into Use Cases

When I see an actor whose <<communicates>> association (the only kind allowed on a use-case model diagram) points at a use case, I interpret that to mean that the use case is that actor's goal.

So suppose that Time is really the Primary Actor for Run Payroll. Then we can assume that Time has the goal of running payroll. Does that ring true? Well, let's look at it another way for a moment. One could say that Time has many other goals: Wither the young, erode continents, heal all wounds, and so on. But since these goals do not intersect with what our system provides, none of them would translate into a use case for our system. Run Payroll, however, is a goal Time has that intersects with our system, so therefore it is a legitimate use case (see Figure 4).

Figure 4: Determining What Goals Intersect with the System

Should we accept this logic? If we do, then suddenly, any nonhuman trigger can be the Primary Actor for a use case: Humidity, Dew Point, Temperature, Light, and more! At first, this concept seemed so powerful to me that I felt it just couldn't be wrong.

A Problem for Black Box Use Cases

One thing about it plagued me, however. The technique I use for writing use-case scenarios and flows is to discuss only the visible behavior of a system. This is called a black-box use case. But to what (or whom) is that behavior visible? To any actor that touches the use case. So for Run Payroll, if Time is the only actor and merely starts the use case, then that's all the visible behavior there is! By extension, whenever a trigger is the sole actor on a use case, the use-case specification will, by definition, be trivial. When I realized this, I stopped using triggers as the Primary Actors for my use cases.

In some instances, however, a use case has one or more Secondary (or Supporting as they say in Hollywood) Actors, in which case it might have some meat to it.

Another approach is to create white-box use cases that model internal behavior. But I find that this makes use cases very hard for the customer, black-box functional tester, technical writer, user interface designer, and so on, to understand. Instead, I model the internal behavior as rules in a separate artifact that I call a Rules Dictionary. Also, white-box use cases are likely to incorporate too much design; black-box use cases with a supporting rule set tend to be better focused on requirements.

A Human Alternative

So what is the best alternative? In our example, who really has the goal of running payroll? Well, if we look back at the use-case model diagram in the OOAD course, we see that there is a Payroll Administrator, a person or team responsible for ensuring that employees are paid. Remember: A system automates a business process, and in this case, we are automating the business process of the Payroll Administrator. Aha! So then it is the Payroll Administrator who has the goal of running payroll.

Now, the question is: Why did the analyst use Time as the Primary Actor, instead of the Payroll Administrator? Mainly, to avoid implying that the Payroll Administrator must manually start the payroll processing. But keep in mind that, if we use the Payroll Administrator as the Primary Actor, then we can capture all the system features that surround the running of payroll. This includes features that allow the Payroll Administrator to set the timetables for running payroll and to handle discrepancies, manual intervention, and holidays. And since much of this would be visible behavior, it would fit nicely into the use case.

Handling the Time Dependency

Using the Payroll Administrator as the Primary Actor also gives us two ways to handle the time dependency in the use case:

1. Set Time as a Secondary Actor. Some people choose to portray time as a Secondary Actor (see Figure 5). This shows that time is a factor in accomplishing the use case. A nice side effect is that if you follow the Boundary, Entity, Control pattern for analysis suggested in OOAD and in the Rational Unified Process®, the analysis model will then contain a <<boundary>> class for Time. This method is best if it is important to show which use cases have a time dependency.

Figure 5: Payroll Administrator as the Primary Actor in the Run Payroll Use Case

2. Create a Mechanism to Capture Time. A second method is to create a mechanism to capture time. With this method, there is nothing in the use-case model diagram to indicate which use cases are time-triggered (see Figure 6), but we can still determine that by examining the Use-Case View of Architecture. This also puts finding a solution to the problem squarely in the Architect's court.

Figure 6: Abstracting Time and Placing the Problem in the Architect's Court
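To make the second option concrete, here is a minimal sketch (in Python, with hypothetical names throughout) of such a time-capture mechanism living entirely in design: an ordinary scheduler component fires the system operation behind Run Payroll, and no Time actor appears in the requirements at all. A real system would use cron, a job scheduler, or similar infrastructure instead of this toy loop.

import sched
import time
from datetime import date, timedelta

def run_payroll():
    """Stub for the system operation behind the Run Payroll use case."""
    print(f"Running payroll on {date.today()}")

def next_friday(today):
    """Date of the next Friday on or after 'today' (Monday == 0)."""
    return today + timedelta(days=(4 - today.weekday()) % 7)

# The Architect's time-capture mechanism: a plain scheduler that
# invokes the use case's entry point when the trigger time arrives.
# (Simplified: the delay is computed in whole days, ignoring time of day.)
scheduler = sched.scheduler(time.time, time.sleep)
delay_seconds = (next_friday(date.today()) - date.today()).days * 86400
scheduler.enter(delay_seconds, 1, run_payroll)
scheduler.run()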

Your Choice

So, in short, my personal answer to the question, "Can the system clock be the Primary Actor of a use case?" is "no." However, there is nothing in the Unified Modeling Language (UML) that says it can't; and if you typically write white-box use cases, then maybe your guidelines will point to "yes." My advice is to ask yourself: Who truly wants the functionality? Then designate that person as the Primary Actor, and leave design intricacies such as capturing time in the application to the Architect and his or her mechanisms.

Usefully yours, Dr. Use Case

Versioning and Parallel Development of Requirements Artifacts Using Rational RequisitePro and Rational ClearCase

by John A. Morrison

Solutions Consultant, Sears, Roebuck and Co.

If you're using document-based requirements in Rational® RequisitePro®, and you've successfully implemented the Rational Unified Process® and Rational tools in your company, then you've had first-hand experience with how a "team of teams" learns to work together under the RUP Disciplines. We were at this stage when several of our project managers came forward with a proposal to shorten the development timeline and make better use of their development resources through parallel, non-overlapping iterations. "Why should we have programmers twiddling their thumbs while analysts are writing use cases, and analysts twiddling their thumbs while programmers are coding?" they kept asking. That certainly sounded like a reasonable question. But there were other, even more compelling arguments for developing requirements in parallel.

First, what if multiple analysts had to update the same requirement artifacts, either for the same iteration and project, or for different iterations and projects? In our integrated system, this scenario was a possibility. In addition, we were past the stage of creating all-new use cases and had developed a growing collection of use cases and related artifacts in Rational RequisitePro. How were we going to handle changes to these existing artifacts for the next package release?

When we thought through the Change Management Discipline and how it would impact our existing business modeling and requirements artifact sets, we saw several conflicting needs:

● We already knew we needed to make changes to some of our existing use cases and related artifacts in RequisitePro and have them reviewed -- and ultimately approved -- by business managers, testers, and developers.

● Although we knew some of the changes we wanted to make, we also wanted to hold off on directly editing these existing use cases and related artifacts because of the requirements volatility that occurs during use-case workshops and reviews. Making changes to requirements impacts existing tagging and tracing, and we wanted to be sure our changes were stable before we began editing.

● Another reason we wanted to hold off was that the existing use cases and related artifacts represented the current production release, so we wanted everyone to be able to access them. We knew we might have to change them later anyway, if requirement defects were found in production.

Realizing that these needs were all interrelated, and that we would have to find ways to respond to them all, we took up the challenge. We decided to figure out not only how to version our requirement artifacts, but also how to develop requirements artifacts in parallel.

Why "requirements artifacts" and not "requirements"? you might ask. Doesn't Rational RequisitePro have the capability to indicate which version of a requirement you're looking at, by its attributes? Couldn't requirements exist at different levels within the same use case?

Well, yes to both questions. But frankly, I knew from the start that I did not want to be versioning at the requirements level. I have always thought of a use case as an intact requirements artifact, similar to a component. My instincts told me, instead, to look to the programmers, who have been developing code in parallel for years, and follow their example. What I discovered were the benefits development organizations can gain when requirements are managed with configuration management tools and techniques.

What Programmers Know: Serial and Parallel Development

If you are now or have ever been a programmer, then you are already familiar with the concepts of serial and parallel development. Serial development occurs when a single release of software is developed, and only one person is working on a given file at a time. Defects fixed during software maintenance are included in the next release. Parallel development occurs when individuals or teams, working on different releases, make changes to the same files at the same time. The changes made in parallel are later merged, along with fixed defects from maintenance. Software configuration management tools like Rational® ClearCase® manage the complexity of parallel development through support for branching and merging.

In simple language, a branch is a linear sequence of versions. A merge is a technique by which the different versions of a file are reconciled and combined to produce a new version of the file (see Figure 1). In ClearCase, there is a diff/merge tool that displays both files side by side, highlights the differences, and allows a developer to choose which changes will be incorporated into the new version of the file.

Figure 1: Rational ClearCase Version Tree with Merge
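To make the branch-and-merge cycle concrete, here is a minimal sketch that drives ClearCase from Python through the cleartool command line. The element name and branch name are hypothetical, and the exact flags your site needs will depend on your config spec and triggers; checkout, checkin, and merge, however, are the standard cleartool operations for exactly this cycle.

import subprocess

def run(cmd):
    """Echo and run a cleartool command, stopping on any error."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

ELEMENT = "CreateLead.UC"   # hypothetical versioned file element

# Develop on a branch: check out, edit, check in.
# (Assumes the view's config spec already selects the 'rel6_dev' branch.)
run(["cleartool", "checkout", "-nc", ELEMENT])
# ... edit the file here ...
run(["cleartool", "checkin", "-c", "Release 6 changes", ELEMENT])

# Later, merge the branch version into the version this view selects
# (typically /main/LATEST); cleartool opens its diff/merge tool so the
# developer can resolve any conflicting changes.
run(["cleartool", "checkout", "-nc", ELEMENT])
run(["cleartool", "merge", "-to", ELEMENT, "-version", "/main/rel6_dev/LATEST"])
run(["cleartool", "checkin", "-c", "Merged rel6_dev", ELEMENT])

The resulting version tree has the shape shown in Figure 1: a subbranch whose final version is merged back to the main line.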

A Serial Example

Now that we understand the concepts of branching and merging, we can apply them to requirements artifacts in Rational RequisitePro. Let's say that Release 5 of our software is in production, and our project managers are planning the iterations to develop Release 6. The new functionality in Release 6 will require changes to the Create Lead use case. To keep the discussion simple, we will limit our scenario to the serial development of this one use case, keeping in mind that in an actual development cycle we would have to make decisions about how to manage change for all related artifacts.

Baselining the Use Case

When Release 6 development officially begins, the first thing we want to do is make sure the Create Lead use case has been baselined. According to the RUP, a baseline is a "snapshot" in time of one version of each artifact in the project repository. A baselined use case represents the agreed-upon requirements for code currently in production.1 It provides an official standard on which subsequent work is to be based, and to which only authorized changes can be made. Once we baseline our use case, we can proceed with changing it, secure in the knowledge that we have an archived copy of the original.

To baseline the Create Lead use case, we use an Archive utility written in Visual Basic by one of our Rational consultants. The Archive utility uses the Rational RequisitePro and ClearCase COM APIs (Component Object Model Application Program Interfaces), which provide the ability to archive some or all of our RequisitePro documents into a directory in a ClearCase view, and apply labels to the archived files as they are added to the view. In advance of using this utility, ClearCase Archive Views were created for each analyst, based on directory profiles created by our ClearCase administrator. These ClearCase Archive Views allow the analyst to access the structures of the ClearCase directories in which the archived files are stored.
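The utility itself is Visual Basic code against the RequisitePro and ClearCase COM APIs, which we cannot reproduce here; but the ClearCase half of what it does can be approximated in a few lines of Python that shell out to cleartool instead. Treat this as an illustrative sketch: the view path, label, and file names are hypothetical, while checkout, mkelem, mklbtype, mklabel, and checkin are the standard cleartool operations involved.

import shutil
import subprocess
from pathlib import Path

def ct(*args):
    """Invoke cleartool, raising an exception if the command fails."""
    subprocess.run(["cleartool", *args], check=True)

def archive_document(doc_path, view_dir, label):
    """Copy one RequisitePro document into a ClearCase view directory,
    version it (new element or new version), and attach a release label."""
    doc = Path(doc_path)
    target = Path(view_dir) / doc.name

    if not target.exists():
        # First archive of this document: check out the directory,
        # drop the file in, and turn it into a new element.
        ct("checkout", "-nc", str(view_dir))
        shutil.copy2(doc, target)
        ct("mkelem", "-ci", "-nc", str(target))
        ct("checkin", "-nc", str(view_dir))
    else:
        # Subsequent archives create a new version of the element.
        ct("checkout", "-nc", str(target))
        shutil.copy2(doc, target)
        ct("checkin", "-nc", str(target))

    # Create the label type if it does not exist yet (ignore the error
    # when it already does), then label this version.
    subprocess.run(["cleartool", "mklbtype", "-nc", label])
    ct("mklabel", label, str(target))

# Hypothetical usage:
# archive_document("C:/reqpro/CreateLead.UC",
#                  "M:/archive_view/reqs_vob/usecases", "RELEASE_5")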

Preparing an Artifact for Archive

Before we launch the Archive utility, there is something that must be done in Rational RequisitePro to prepare the documents for archive. On the Project Properties menu, we uncheck the "Save documents in RequisitePro Format" checkbox that prevents the documents in RequisitePro from being opened successfully in Microsoft Word outside of RequisitePro (see Figure 2). We want the archive copy of the Create Lead use case to be stored in Microsoft Word format, so it can be checked out and viewed at any time. After the Archive utility has been run, we want to revisit Project Properties and recheck the "Save documents in RequisitePro Format" checkbox. Failing to do this could lead to corruption of the documents in RequisitePro, so we encourage our analysts to ensure that step is not forgotten by creating their own checklist of archive activities. (Perhaps a future version of the Archive utility will do this for us through the RequisitePro API.)

Figure 2: After Archiving, Remember to Recheck the "Save documents in RequisitePro Format" Checkbox

Archiving the Create Lead Use Case

The next step is to launch the executable Archive utility, which we do through a shortcut on our desktops. Figure 3 shows the Archive utility as it appears immediately after launch.

Figure 3: The Archive Utility After Launch

Once the Archive utility has been launched:

● Enter the release name for the Create Lead use case as the ClearCase label, and the full archive directory path name as its location. Also check the checkbox that indicates the directory is part of a ClearCase View.

● Next, select the appropriate Rational RequisitePro project file (*.rqs), using the drive and directory navigation features.

● Once the RequisitePro project has been selected, click the "Get Docs" button to bring up all of the documents in the RequisitePro project. When doing this, you will be required to log in to the RequisitePro database, even if you do not have security turned on. This is because you must supply a username and password to connect to the RequisitePro database via the API.

● Once the documents are retrieved, you can filter them by document type, using the Document Type selector in the lower right corner of the Utility. Figure 4 shows types UC (Use Case), TWA (Test Workload Analysis) and TPL (Test Plan).

● Scan the project document list and click on the Create Lead use case to select it.

● Then click the Archive button. The Archive utility will copy the Create Lead use case into the ClearCase view directory, apply the label to it, and check in the document. (The utility also allows multiple documents to be selected for archiving.)

Figure 4: Archive Utility with Rational RequisitePro Listing

After the Archive utility has completed, and you have rechecked the Project Properties box in RequisitePro, you can easily verify that your document has been archived into ClearCase by using the ClearCase Explorer to locate the file and examine its label. We do this by right-clicking the file and selecting the "Version Tree" menu. This brings up the ClearCase Version Tree Browser and displays the treeview of Version 1 of your document. See Figure 5. (Note: Although the copy of the use case in ClearCase has a UC suffix, it has been saved in Microsoft Word format and can be checked out and opened in Word outside of RequisitePro any time.)

Figure 5: Rational ClearCase Explorer and Version Tree Browser

Creating the TEMP Use Case

Now that you have archived the Create Lead use case, you can proceed with creating the change version of the Create Lead use case in Rational RequisitePro. To allow for this, you need to define an additional document type in RequisitePro to handle change artifacts. At Sears, we chose the name TEMP for this document type, and created the TEMP document type in RequisitePro. The TEMP document type is used for any document that will be changed during development (see Figure 6).

Figure 6: TEMP Document Type Definition

Next, make a "Save As" copy in MS Word of the Create Lead use case, and then import it into a new TEMP document in RequisitePro to update later. The TEMP version of the Create Lead use case is exactly the same as the base version, except that underlined requirement text appears in place of the tagged requirements. This TEMP use case is your "branch" for the branch and merge function (see Figure 7).

Figure 7: Rational RequisitePro "Branch and Merge"

It is in this TEMP version of the Create Lead use case that my team makes all of our changes for the next release. The TEMP document is what we bring to use-case workshops and reviews, and the TEMP document is what the business agrees to at sign-off.

To help identify deletions and changes in the base and TEMP use cases, our analysts invented their own system of color highlighting in Microsoft Word. During use-case reviews, the color highlighting also makes it easier for everyone to identify changes to the use case, which improves understanding and speeds up the review process. One analyst (who has a programming background) uses the MS Word Compare function to verify that no changes were missed, but if you do this, remember that Word Compare does not recognize color highlighting.

To identify what has changed, first look at the base use case (see Figure 8).

Figure 8: The Base Use Case

Figure 9 shows a simple example of the TEMP use case after editing and color highlighting. Yellow indicates the changed text, and blue indicates text that will be deleted. The analyst uses the document Revision History section in the use-case artifact to record the history of the changes as well. Note that this example does NOT use the revision marking features of Microsoft Word, as these are incompatible with Rational RequisitePro.

Figure 9: The TEMP Use Case with Editing and Color Highlighting
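With today's tooling you can even tally the highlighting mechanically. The sketch below uses the python-docx library to list every highlighted run in a TEMP artifact saved in the modern .docx format (the file name is hypothetical, and the yellow/blue meanings simply restate our team convention); a listing like this is a quick cross-check that no change was missed during review.

from docx import Document
from docx.enum.text import WD_COLOR_INDEX

# Map our team's highlighting convention onto python-docx highlight values.
MEANING = {
    WD_COLOR_INDEX.YELLOW: "changed",
    WD_COLOR_INDEX.BLUE: "to be deleted",
}

doc = Document("CreateLead_TEMP.docx")   # hypothetical TEMP artifact
for i, para in enumerate(doc.paragraphs, start=1):
    for run in para.runs:
        meaning = MEANING.get(run.font.highlight_color)
        if meaning:
            print(f"paragraph {i}: [{meaning}] {run.text!r}")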

Once we have completed the requirements work for the use case and are ready to turn the use case over to the Developers and QA, we manually merge the TEMP document into the base document in Rational RequisitePro to create a new version of the base use case.

Doing a Manual Merge

The merge process has three basic activities:

1. The first activity is to identify any changes that were made to the base use case due to requirements defects found in production.

2. The second activity is to identify all of the changes that have been made to the TEMP use case that will have to be applied to the base use case.

3. The third activity is to apply the changes to the base use case, including the tagging and tracing of requirements, to create the new version of the base use case. Be aware that this means there is a delay between approval of the change to the TEMP use case and the actual update of the base use case. Don't let this delay cause something to be forgotten!

Once the base use case has been updated, it is now officially at a higher version. From the new version of the base use case, developers can update sequence diagrams in Rational Rose®, and testers can update test cases.2 During Analysis and Design, and during Test, the new base use case may be updated directly in response to feedback from developers and testers. This again may result in changes to models and test cases. Once the new code has moved through Integration Test and QA Test, to the final build and User Acceptance test, the code is ready for deployment.

Once deployment has occurred, it is time once again to archive the base use case in Rational ClearCase, to prepare for the possibility of a new set of changes in the next release. Package development officially ends when the use case is baselined after deployment (see Figure 10). Remember: For convenience, the copy of the use case in ClearCase has the UC suffix but is saved in Microsoft Word format; it can be checked out and opened in MS Word outside of Rational RequisitePro.

Figure 10: The Serial Development Cycle

A Parallel Example

Now, let's suppose that, for Release 6, the changes to the same use case are split between two analysts to shorten the development timeline. In this scenario, two TEMP copies of the base use case are needed. Again, to keep the discussion simple, we will limit our scenario to the parallel development of this one use case. In an actual development cycle, decisions would have to be made as to how change would be managed for all related artifacts as well.

Determining the Starting Point

The first thing the second analyst does is check out a copy of the Release 5 version of the Create Lead use case from Rational ClearCase and import it into Rational RequisitePro as a TEMP document. This is our additional "branch."

Changing the Second Copy of the Create Lead Use Case

In this second TEMP version of the Create Lead use case, we will make the rest of the changes to the use case. Once the requirements work has been completed and we are ready to turn the use case over to the Developers and QA, the analysts work together to manually merge the two TEMP documents into the base document in Rational RequisitePro to create a new version of the base use case.

Why Use a Requirements Management Tool?

Using an automated requirements management database tool helps to start projects off on the right foot and fosters team unity and efficiency by:

● Providing a consistent and convenient way to gather and organize user requirements in one location.

● Giving developers and testers 24x7 access to clearly defined requirements, which facilitates concurrent development.

● Easing the review and feedback that help to ensure that applications fulfill customer expectations.

● Enabling all team members to quickly trace the impact of requirement changes, and to be quickly notified whenever changes are made to those requirements.

Rational RequisitePro was our requirements management solution because of its extensive requirements management capabilities, as well as its tight integration with the other Rational tools.

Doing a Manual Merge

The merge process for parallel development has six activities:

1. Identify any changes made to the Release 5 base use case due to requirements defects found in production.

2. Identify all changes made to the first TEMP use case.

3. Identify all changes made to the second TEMP use case.

4. Apply the changes from the second TEMP use case to the first TEMP use case.

5. Apply the changes from the first TEMP use case to the Release 5 base use case, to create the new version of the base use case.

6. Update the requirement tagging and tracing.

Once all changes have been applied to the Release 5 base use case, it is now officially at Release 6. From the new base use case, developers can update models in Rational Rose, and testers can update test cases. During the Analysis and Design, and Test disciplines, the new base use case may be updated directly, based on feedback from developers and testers. Again, this may result in changes to models and test cases. Once the new code has moved through Integration Test and QA Test, to the final build and User Acceptance test, the code is ready for internal or external deployment.

Once internal or external deployment has occurred, it is time once again to archive the base use case in Rational ClearCase, to prepare for the possibility of a new set of changes in the next iteration or release. Release development for the use case officially ends when the use case is baselined (see Figure 11).

Figure 11: The Parallel Development Cycle

Making the Decision: Serial or Parallel Development?

The following list of considerations may help you make the decision for or against versioning and parallel development of requirements artifacts in Rational RequisitePro.

Versioning of Requirements Artifacts is optional when:

● Modifications have a negligible impact on the basic requirements.

● The timing of requirements modifications results in negligible impact on remaining workflows to be performed against the requirements.

● Modifications have no impact on existing artifacts, either because there are no existing artifacts or because the current iteration does not impact existing functionality.

Versioning of Requirements Artifacts is highly recommended when:

● While you are finishing one iteration of requirements and performing the remaining workflows against the completed requirement artifacts, you are already capturing the next iteration of requirements -- and that work impacts artifacts currently being realized, constructed, tested, and so on.

● The requirement artifacts are being modified for a current project, and will also need to be modified due to a requirement defect found in production.

● Multiple analysts are updating the same requirement artifacts, either for the same iteration and project, or for different iterations and projects.

If you find yourself in any of the "highly recommended" scenarios described above, then I hope this explanation of versioning and parallel development of requirements artifacts in Rational RequisitePro and Rational ClearCase has intrigued you enough to give it a try. Although it involves a certain amount of manual work and complexity, the analysts on my team were able to incorporate these steps into their Requirements Management activities under the RUP and pass the benefits on to our whole team: We used our resources more efficiently and managed the impact of changes more effectively.

Appendix: Archive Utility

Download a copy of the archive utility mentioned in this article (tested for Rational ClearCase 5.0 and Rational RequisitePro 2002).

Acknowledgments

I wish to thank Suzette Boyce of Empowered Software Solutions for her input on parallel development, and for piloting these parallel development techniques during her contracting engagement as a Systems Analyst with The Great Indoors/Sears. Guy Thier of Empowered Software Solutions provided a project manager's viewpoint on parallel development. Systems Analysts Wendy Smith and Joe Mack, of Sears Roebuck and Co., provided assistance in editing this article.

Editor's Note: John Morrison will deliver a presentation on this topic at RUC 2002. For more information, visit www.rational.com/ruc/ruc2002/presagenda.jsp and click on Session RM12.

Notes

1 The agreed-upon requirements are usually, but not always, exactly in sync with production code. For example, there may be a time lag between a production fix caused by a defect in requirements, and the necessary update to the corresponding use case.

2 Since developers and testers take part in the use-case reviews, they are very familiar with the changes. They also have printed copies of the TEMP artifacts and can access the TEMP artifacts in RequisitePro. Before deleting the TEMP use case, we make sure it is no longer of any use.

Top Ten Ways Project Teams Misuse Use Cases -- and How to Correct Them

Part I: Content and Style Issues

by Ellen Gottesdiener, Principal

EBG Consulting

Use cases are a wonderful and powerful way to define the behavioral requirements of your software. They have evolved in style and form over the past decade and are now used to define user requirements by many requirements analysts, developers, and business experts involved in software projects -- both object-oriented and otherwise. But if you don't fully understand the ins and outs of use cases, it's easy to misuse them or make mistakes that can unintentionally turn them into "abuse cases."

In my work on software projects, I've facilitated numerous requirements workshops and have probably encountered every kind of error people can make in writing use cases. In this two-part series, I will present a view of how use cases can go awry and discuss ways to prevent this from happening. In this first article, we'll begin by defining use cases and their purpose and then identify ten "misguided guidelines" project teams often apply when they actually create use cases. And finally, we'll take a closer look at the first six of those "guidelines" -- which relate to the content and style of use cases -- and explore ways to correct them. Next month, Part II will address the last four misguided guidelines: the most common mistakes teams make when modeling use cases.

What Use Is a Use Case, Anyway?

A use case is a description -- textual, diagrammatic, or both -- of the major functions that the system will perform for external Actors, and of the goals that the system achieves for those Actors along the way. Use-case text can contain different pieces of information, but at a minimum it will include names and a basic course of action. Exception conditions and variation paths are also included in detailed use-case descriptions.

Scenarios describe typical uses of the system as narratives or stories; each narrative can be a few sentences or paragraphs. Scenarios are "played out" in the context of a path through a use case. You can think of a use case as an abstraction of a set of related scenarios.

Ten Misguided Guidelines Teams Follow for Use Cases

Now that we've seen what an ideal use case is supposed to be and do, let's see what happens when people actually try to create use cases. Below is my lighthearted translation of the "misguided guidelines" I've observed people following during my years as a project participant and consultant.

1. Don't bother with any other requirements representations. (Use cases are the only requirements model you'll need!)

2. Stump readers about the goal of your use case. (Name use cases obtusely using vague verbs such as do or process. If you can stump readers about the goal of a use case, then whatever you implement will be fine!)

3. Be ambiguous about the scope of your use cases. (There will be scope creep anyway, so you can refactor your use cases later. Your users will keep changing their minds, so why bother nailing things down?)

4. Include nonfunctional requirements and user interface details in your use-case text. (Not only will this give you a chance to sharpen your technical skills, but it will also make end users dependent on you to explain how things "really work.")

5. Use lots of extends and includes in your initial use-case diagrams. (This allows you to decompose use cases into itty-bitty units of work. After all, these are part of the UML use-case notation, so aren't you supposed to use them?)

6. Don't be concerned with defining business rules. (Even if they come up as you elicit and analyze use cases, you'll probably remember some of them when you design and code. If you must, throw a few business rules into the use case text. You can always make up the rest when you code and test.)

7. Don't involve subject matter experts in creating, reviewing, or verifying use cases. (They'll only raise questions!)

8. If you involve users at all in use case definition, just "do it." (Why bother to prepare for meetings with the users? It just creates a bunch of paperwork, and they keep changing their minds all the time, anyway.)

9. Write your first and only use case draft in excruciating detail. (Why bother iterating with end users when they don't even know what they want, and they only want you to show them meaty stuff, anyway!)

10. Don't validate or verify your use cases. (That will only cause you to make revisions and do more rework, and it will give you change control problems during requirements gathering. So forget about it!)

If you recognize yourself in any of these "guidelines," take heart. The reason I know them so well is that I've made most of these mistakes myself. Pausing to examine your own mistakes is a wonderful way to learn, so now I'll share some of my experiences with use cases.

Correcting Misguided Content and Style Guidelines

As I've noted, the first six misguided guidelines relate to content and style issues, which we'll examine in this article, starting with:

1. Don't Bother with Any Other Requirements Representations

Because use cases are powerful and familiar software engineering tools, many teams mistakenly believe they can employ use cases alone to define user requirements.1 But experience shows that use cases are often insufficient and in some cases inappropriate for this purpose.

Why? From the point of view of a person or system interacting with your software, a use case nicely describes an aspect of its behavior. But no single user requirements model can fully express all of the software's functional requirements: its behavior, structure, dynamics, and control mechanisms. Figure 1 illustrates how these requirements translate into four interrelated views.

Figure 1: Four Views of System Requirements. Employ use cases to define software behavior but other models to describe user requirements.

These views provide complementary mechanisms for analyzing your business domain and modeling it accurately and completely. Suppose, for example, that you're creating a product-ordering application. If you want to represent this domain in terms of use cases, then you might propose a use case such as "Place Order" to capture the flow of the ordering process. This use case might adequately describe this behavior of your system, but it would miss related structural, dynamic, and control elements. The structural view deals with attributes of the order, the order placer, and the customer. The control view encompasses rules for order placement, invoicing, billing, and back ordering. In describing the dynamics of placing an order, you might want to specify the allowable states of the order and actions that occur within those states.

In short, using multiple views gives you a richer context for eliciting user requirements. It also aligns with an important principle of requirements engineering: separation of concerns. Each model describes a specific aspect of your software and omits extraneous information. This means, for example, that your use cases don't include details found in other models, such as data attributes and business rules. Instead, these related models -- whether defined with a diagram or text -- should be traced to your use cases.

One project I worked on impressed me with how important it is to separate concerns. In our initial draft, we wove business rules into the use-case text, along with occasional lists of data attributes. But knowing that this information was sprinkled across multiple use cases, we removed it to other models. We made a list of the business rules in English, and we created a visual domain model that contained a logical data model and an analysis class model. We expected that the requirements would change as we worked and wanted the ability to quickly assess the impact of those changes. The changes rippled beyond a single model, so we used our requirements management tool (Rational® RequisitePro®) to associate the models with each other.

As our users explored their requirements and details evolved, we easily managed the changes. We collected business rules, the data, and the analysis class model at the same time, yet separately, from the use cases. We deftly bounced between models in a requirements workshop, and ultimately we reached closure faster than we would have if we had left the use-case text loaded up with lots of information.

Another project team I worked with learned why relying solely on use cases is problematic. The team was using facilitated workshops to determine requirements for a financial project. The goal was to support plant managers in querying information using a variety of reporting and query rollups of summary data. To model the system, the team planners wanted to create and verify use cases and actors as their primary deliverables. But after careful analysis, we realized that use cases weren't a useful way to express the problem domain. For this purpose, a single use case such as "Query Plant Information" would be far too abstract and all-inclusive. So instead, we used a structural view (data model) and a control view (business rules) to define user requirements. We elicited these models by starting with scenarios in the form of situations and questions that plant managers would need to ask. For that particular project, and for similar data querying systems, use cases would be minimally useful.

In general, it's best to let the business problem domain drive the selection of the best requirements models for representing functional needs. Table 1 provides examples of some business domains and appropriate requirements models. Often, you can create simplified versions of what might otherwise be complex models, such as statecharts and data models, in collaboration with business experts or users during a workshop.

Table 1: Example Business Domains and Appropriate Requirements Models

Business Domain | Primary View (see Figure 1) | Suggested Models

Operations, Administration, Inventory Management, Billing, and Ordering | Behavior | Use Cases, Scenarios, Actor Table and Maps, Domain Models, Event Table, Prototypes

Data Query and Analysis, Data Extraction, Ad hoc and Standard Reporting, Customer Reporting | Structure | Data Model, Scenarios, Business Rules

Workflow, Logistics, Demand Management, Contract Negotiation, and Procurement | Dynamics | Process Maps, Event Table, Statechart, Prototypes, Scenarios

Claim Adjudication, Welfare Eligibility, Mortgage Lending, Clinical Diagnosis | Control | Business Rules, Statecharts, Scenarios, Event Table

Use cases are especially appropriate for highly interactive (behavioral) systems involving end users. Embedded systems, intensively algorithmic systems, data access, and batch systems might start with use cases such as "Provision Line Card," "Compute Dividend," "Query Information," or "Refresh Application Files." However, these types of systems won't benefit from detailed use-case text. Other requirements representations, such as functional hierarchies or precise specifications like Gilb's Planguage,2 are more effective.

Overcoming single-model-itis (the temptation to use use cases alone) will increase the quality of your requirements, reduce rework, and save you time and money. It will also speed requirements development and uncover requirements defects. On one project, we laid out use-case steps on a wall,3 listed business rules below the steps, and listed data attributes nearby on sticky notes. As we discussed use-case steps, we found missing business rules and attributes. Each was separately documented yet traced to the other steps. The result? The project experienced no defects resulting from requirements errors, which gave strong endorsement to our approach.

2. Stump Readers About the Goal of Your Use Case

To paraphrase Alistair Cockburn, the purpose of a use case is to fulfill an Actor's goal in interacting with the system.4 As you review your list of use cases, be sure that the goal and the Actor (the person or thing that has the goal) are clear.

"Process Order" or "Do Inventory" are vague use-case names, leaving a lot of room for interpretation. What is the goal of "Process Order"? Is it to authorize the order? Find available products? Pack and ship the order? Some combination of these? A single use case can't describe all the actions that name might encompass.

The best way to generate use-case names is either by starting with Actors or by listing use cases and then immediately naming each use case's Initiating Actor. Well-named use cases often enable a business customer to easily infer who the Actor is. An unclear name, in contrast, provides few clues. What Actor initiates the use case named "Process Order," for example? An order taker? An inventory replenisher? A shipper?

The following guidelines will help you avoid this naming problem; a small naming-check sketch follows the list.

● Name your use cases using this format: verb + [qualified] object.

● Use active (not passive) verbs.

● Avoid vague verbs such as do or process.

● Avoid low-level, database-oriented verbs such as create, read, update, delete (known collectively by the acronym CRUD), get, or insert.

● The "object" part of the use-case name can be a noun (such as inventory) or a qualified noun (such as in-stock inventory).

● Make sure that the project Glossary defines each object in the use-case name.

● Add each object to the domain model (as a class, entity, or attribute).

● Elicit Actors and use cases concurrently, associating one with the other as you name each.
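These rules are mechanical enough to lint automatically. Here is a toy checker in Python -- the word lists simply restate the guidelines above -- that flags vague or database-oriented verbs in a candidate use-case name before a review even begins.

VAGUE_VERBS = {"do", "process"}
CRUD_VERBS = {"create", "read", "update", "delete", "get", "insert"}

def check_use_case_name(name):
    """Return a list of problems with a 'verb + [qualified] object' name."""
    problems = []
    words = name.strip().split()
    if len(words) < 2:
        problems.append("name should be 'verb + [qualified] object'")
        return problems
    verb = words[0].lower()
    if verb in VAGUE_VERBS:
        problems.append(f"vague verb '{words[0]}': what is the Actor's goal?")
    if verb in CRUD_VERBS:
        problems.append(f"database-oriented verb '{words[0]}': "
                        "Actors think in goals, not table rows")
    return problems

for name in ["Process Order", "Place Order", "Update Customer Row"]:
    print(name, "->", check_use_case_name(name) or "ok")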

One project I know of had sixty-eight use cases because team members created a use case for each database event. But it doesn't make sense to describe database events this way. Use cases are designed to model Actor interactions with your software. Human Actors don't interact with the system in order to CRUD rows in their databases. They don't think in terms of rows and databases; they may not know what a database looks like internally, nor should they have to know that. Rather, Actors think in terms of higher-level goals, such as finding out the discount to give a particular customer; and these goals, in turn, serve business objectives.

Although goals (use cases) can be related, don't make the mistake of blending them together. For example, the use cases "Place Order," "Replenish Stock," "Locate Distributors," and "Ship Order" are related, but each is a distinct use case. If you understand their interdependencies,5 then it will be easier to prioritize them and to plan increments with customers. It will also give you built-in flexibility if you need to trim functionality for a given release. But don't think of them as one big use case.

To help project teams create good use-case names in requirements workshops, I give participants a "cheat sheet" of good verbs to use (see Table 2). I divide the list into informative use cases (those that give information to the Actor) and performative use cases (those that execute a business transaction to deliver value to the customer or that change the state of data in the system).

Table 2: Example Verbs to Use in Use-Case Names

Informative Use Cases: Analyze, Discover, Find, Identify, Inform, Monitor, Notify, Query, Request, Search, Select, State, View

Performative Use Cases: Achieve, Allow, Arrange, Change, Classify, Define, Deliver, Design, Ensure, Establish, Evaluate, Issue, Make, Perform, Provide, Replenish, Request, Set up, Specify

This list can help you to arrive efficiently at a first-cut list of use cases without having to stop and clarify their meaning. In one requirements workshop in which use cases were our primary deliverable, I started with a one-minute definition of the term "use case." Next, I handed out the cheat sheet, and together we named several use cases for the project. Then, while the developers and analyst watched, the three business experts present were able to generate more than fifty use-case names for three related subsystems -- in only twelve minutes! We spent another ten minutes or so clarifying, collapsing, and adding use cases to arrive at a first-cut list of about fourteen use cases per subsystem. The participants went on to practice writing a one-paragraph description of a sample use case, and from there we iterated through the process of detailing each use case, mapping out dependencies, and packaging the use cases for prioritization and release planning.

3. Be Ambiguous About the Scope of Your Use Cases

Use-case scope mistakes are typically of two sorts: Either the use case does not address a single Actor goal, or the use case does not fall within your project's scope and should never have been detailed in the first place. Both types of mistakes waste a lot of time and energy. If you don't scope your use cases appropriately, then development becomes unnecessarily complex, and iterative and incremental development becomes a major chore. If you don't frame each use case clearly, then it's hard to know when a use case starts and ends.

To avoid confusion, weed out out-of-scope use cases by naming each use case well (see the preceding section) and verifying that it belongs within the system's scope. Several other models can help, including the context diagram and event table. All surviving use cases should be checked to ensure that each addresses one or more of the business goals defined in your Vision or Charter.

To keep a single use case in scope -- and thereby address only one Actor goal -- constrain each use case with its triggering event and necessary event response. Events are what cause Actors to initiate use cases. When the event response is achieved, the use case is finished.

Events for scoping use cases come in two flavors: business and temporal. Business events are high-level occurrences that happen at unpredictable times. Although you can estimate, for example, how many book requests or product searches might occur, you can't specifically say when they will occur or how often.

Assign names to business events using a "subject + verb + object" format: for example, "customer requests book." In this example, one event response might be that book information is provided to the customer. As you might guess, the subject part of the business event turns out to be an Initiating Actor, and the verb part gives you clues for naming one or more use cases.

Temporal events, on the other hand, are entirely predictable occurrences, driven by a specific time on the clock. You know exactly (i.e., the month, day, year, hour, or minute) when the use case needs to replenish inventory levels, publish the schedule, post bills, or produce 1099s. Temporal events should be named using the format "time to <verb + object>." The Initiating Actor for these temporal events will be "Clock" or a pseudo-actor name you choose, such as Inventory Controller or Scheduler Manager. Event responses to temporal events can be what McMenamin and Palmer6 call "custodial" -- for example, cleaning up data inside the system by refreshing information -- and they can generate tangible artifacts for actors, as business events often do.
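Because both naming formats are so regular, an event table is easy to represent directly. In this rough sketch (Python, with illustrative names only), a business event carries its subject as the Initiating Actor, while a temporal event supplies a clock-driven pseudo-actor instead:

from dataclasses import dataclass

@dataclass
class BusinessEvent:
    subject: str          # becomes the Initiating Actor
    verb: str
    obj: str
    response: str         # the required event response

    def name(self):
        return f"{self.subject} {self.verb} {self.obj}"

@dataclass
class TemporalEvent:
    verb: str
    obj: str
    response: str
    pseudo_actor: str = "Clock"   # or a name like "Inventory Controller"

    def name(self):
        return f"time to {self.verb} {self.obj}"

events = [
    BusinessEvent("customer", "requests", "book",
                  response="book information is provided to the customer"),
    TemporalEvent("replenish", "inventory levels",
                  response="stock orders are issued",
                  pseudo_actor="Inventory Controller"),
]
for e in events:
    print(f"{e.name()} -> {e.response}")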

It's useful to define events and their responses in your use-case template -- a standard format for documenting use cases. Templates usually are headed by high-level information about the use case. A business or temporal event is not itself a use case; rather, it corresponds to the "trigger" in most use-case templates, and the event response corresponds to the "success outcome" in your use-case header.

Defining events can also help you eliminate use cases that don't belong in your project's scope. Let's look at a few ways.

● Events can be your starting point in defining requirements. In fact, an event name is very similar to a use-case name, simplifying the transition from scope to use cases (see Figure 2). To add rigor to your scoping activity, it's a good idea to use a context diagram or context-level use case to describe events and event responses.

● Drawing a context diagram while simultaneously naming business and temporal events allows everyone to "see" the system's scope. On the context diagram, business events are in-flows to the central bubble (or oval), and event responses are shown as out-flows (the system's response to the external environment). Temporal events can be out-flows, and sometimes also in-flows, when the temporal event requires feeds from external Actors.

● In an hour or less, you can create an event table (a table with one column for events and another for the corresponding event responses) along with a context diagram. It's an hour well spent, because it helps you avoid specifying use cases that don't belong.

● If your team is eager to jump into naming use cases, consider taking a brief detour to the event table and context diagram. By taking five to fifteen minutes to refresh everyone on the project's scope, you will likely uncover missing events or extraneous use cases, saving significant rework later on.

Figure 2: Events Can Help You Frame Use Cases Precisely

4. Include Nonfunctional Requirements and User Interface Details in Your Use-Case Text


A common mistake teams make with use cases is to incorporate nonfunctional requirements, such as response times and throughput information, and user interface or prototype details, such as widget manipulation and references to windows, buttons, and icons. Although use cases are effective tools for eliciting nonfunctional requirements and for envisioning the user interface, you shouldn't insert that information into your use-case text; instead, you should associate it with the relevant use cases. For example, you can define nonfunctional requirements in a nonfunctional requirements specification or supplementary specification document that traces each nonfunctional requirement to its corresponding use case. Prototype sketches, drawings, or screens should also be stored separately and traced to the use cases they envision.
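For instance -- a hypothetical sketch, not a prescribed format; the requirement text and use-case names are invented for illustration -- the trace can be as simple as structured data kept alongside your supplementary specification:

    # Hypothetical sketch: nonfunctional requirements kept outside the use-case
    # text, each traced to the use case(s) it constrains.
    nonfunctional_requirements = {
        "NFR-01": {"text": "Book search results appear within 3 seconds",
                   "traces_to": ["Search for Book"]},
        "NFR-02": {"text": "System supports 500 concurrent customers",
                   "traces_to": ["Search for Book", "Request Book"]},
    }

    def use_cases_constrained_by(nfr_id):
        """Follow the trace from one nonfunctional requirement to its use cases."""
        return nonfunctional_requirements[nfr_id]["traces_to"]

    print(use_cases_constrained_by("NFR-02"))  # ['Search for Book', 'Request Book']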

Nonfunctional requirements include quality attributes (such as performance, throughput, and usability), system requirements (such as security and recoverability), and system constraints (such as database and language). These nonfunctional requirements drive architectural decisions, govern user satisfaction with your final product, and provide competitive advantage in commercial software products.

User quality attributes constrain functional requirements. Use cases, and any other user requirements model describing software functionality, portray the "doing" part of requirements. Their associated nonfunctional requirements describe the "being" part of software. You should strive to separate the doing from the being but also relate the two.

As you begin to specify the functionality needed to achieve a use case, you can uncover some of your nonfunctional requirements, such as response time, throughput, and usability. To do that, ask good questions of users -- or surrogate (stand-in) users -- while eliciting use cases:

● How many users does this Actor represent?

● What is the maximum response time acceptable for <use case>?

● How many <object part of use case> will you need to <verb part of use case name> each day, hour, or week?

● Are there periods in the year when you will see higher volume?

● Will experienced and new users need to learn to use this functionality differently?

The answers will help you begin to nail down the nonfunctional requirements for your use case.

Other nonfunctional requirements -- such as backup, recovery, security, and audits -- relate to multiple use cases. Often you need to define them only once, and they don't belong in your use-case text. Separating them will help your project architects get a comprehensive overview of the technical issues that will drive important design considerations.

Prototypes describe requirements as viewed by direct users, or actors.


Before you code anything, try eliciting and testing use cases by using simple screen navigation maps or mockup dialogs posted in sequence on a wall. These low-fidelity prototypes also help you manage user and customer expectations about the system's look and feel without locking anyone into specific interface designs.

Though it's tempting to incorporate GUI references into use-case text, you shouldn't fall into this trap. It creates design expectations that may prove erroneous or unworkable as you iterate through Elaboration and Construction. In one project I know of, the team had to rework their use-case text because they had embedded specific GUI references to an environment (Java Swing) that changed (to XLS) shortly after they had drafted their use case. The use-case text should apply regardless of implementation environment.

In sum, let your use cases do what use cases do well: describe Actor-system interactions. To describe constraints and quality attributes for those interactions, define and trace nonfunctional requirements apart from your use cases. To describe how the software will look and feel, use a prototype.

5. Use Lots of Extends and Includes in Your Initial Use-Case Diagrams

Extensions and includes are among the most confusing aspects of the use-case diagram. Overzealous attempts to use the notation -- just because it's there -- can lead to analysis paralysis.

In practice, <<include>> use cases aren't revealed until the second or third iteration through all the use cases. On one project I facilitated, we iterated first through all the "Happy Paths" (normal scenarios in which all goes well) and then all the "Unhappy Paths" (assuming errors, exceptions, and infrequently occurring scenarios) over the course of several days. The number of use cases expanded and contracted as we explored the breadth and depth of the project. We began with fifteen use cases, went down to fourteen, and finally settled at twelve. Included use cases became apparent as we iterated through the set, finding patterns of behavior that could be partitioned out and reused by multiple use cases. Jumping to <<include>> use cases too soon leads to the trap of a functional decomposition mindset, eradicating the advantages that Actor- and goal-driven use cases provide.

Extensions inside use-case text add complexity because they often address important errors or exception conditions -- in other words, business rule violations. Occasionally, a set of steps that handles similar extensions turns out to be an included use case. To save time in identifying these patterns, you should define business rules explicitly.

Although the use-case diagram with includes and extensions semantics might be useful for certain complex use cases, it is more productive to spend your time specifying use-case text, visualizing relationships within and between use cases, and using multiple requirements models (sound familiar?). To more easily visualize each use case, ask users to lay out each step on a wall and then step through the use case. At each step, ask them questions designed to uncover attributes, business rules, and elements that might appear on a user prototype.

6. Don't Be Concerned with Defining Business Rules

This guideline is based on a serious misconception: Business rules should be at the heart of your use cases, providing the controls, reasoning, guidance, and decisions behind the processes the use case describes. Business rules exist to enforce higher-level business policies, which in turn serve business goals, objectives, and tactics. If you don't explicitly define and separate your business rules, they will almost surely end up wrong, missing, or difficult to change. Many post-implementation defects relate to problems with business rules.

Business rules are owned and known (or need to be known) by business experts and/or the product and marketing managers who represent them. Technical people have no business trying to guess at business rules unless they are well versed in the business and are authorized to define the rules.

For this reason, early in requirements efforts, I request that a business executive or expert assume the role of project "Rule Czar." This means taking responsibility for defining the rules and noting where they apply. As you elicit use cases, many questions will arise about business rules, and the Rule Czar's job is to help the group reach closure on such questions.

To find out whether business rules are lurking below the surface of your use cases, listen and look for certain verb clues in the text you have written:

● evaluate

● determine

● assess

● verify

● validate

● classify

● decide

● compare

● diagnose

● match

● conclude

● should (as part of verb phrase)

Once you hear these verbs, ask probing questions of your users or customers to uncover the business rules that must be enforced to take the action the verb suggests (e.g., verify, decide, etc.) in the use-case description.


To help you specify business rules precisely, you can use business rule templates, which give you a structured format for writing business rules in natural language. This will help you tease loosely written business rules into atomic business rules. And as you do so, you'll find missing elements from other models, such as the domain model and use-case steps.

There is no agreed-upon taxonomy for business rules, nor does there need to be. Table 3 shows some examples. Each project is unique, so you must select or invent a template that works best for your domain. Be sure to define each term in your business rules in your Glossary. Terms are the building blocks of all your business rules and are used throughout your use cases, so you should nail down their meanings as soon as you can.

Table 3: Sample Business Rules Templates

Term (list in Glossary)
    Template: [property] <noun/business term> is defined as <text definition>
    Example: A manager is defined as a person to whom two or more people report directly.

Fact
    Templates:
        Each <noun/business term> must|may <verb or verb phrase> one and only one|one or more <noun/business term> [<prepositional phrase>]
        <noun/business term1> may|must <verb or verb phrase> <noun/business term2>
        <noun/business term1> has a property of <noun/business term2>
    Examples:
        Each buyer must assign one and only one discount to an order.
        Line items must contain the quantity requested.
        "Web customers" has a property of "userid."

Constraint
    Templates:
        <[qualified] noun/business term> must be true for <condition> [or <condition>]
        [property] <noun/business term> must not/cannot <verb phrase> <constant or non-verb phrase>
    Examples:
        The active ingredient for a finished product must be listed first on the package.
        Total-sale must not exceed $100.
        An underage customer cannot purchase alcoholic beverages from liquor stores.

Derivation
    Template: <noun/business term> is calculated as <arithmetic expression>
    Example: Total excess material is calculated as (total volume input minus total amount used).

Action Enabler (also known as ECA rules: event, condition, action)
    Templates:
        when <condition is true>, then <action>
        if <condition1> [and <condition2>...] then <action>
    Examples:
        When claim arrives after cancellation date, then issue rejection letter.
        If preferred customer and backordered item, then offer 10 percent discount.

Inference
    Template: If <condition1 [true]> [and condition2...] then <conclusion>
    Example: If customer submitted expired credit card, then credit is suspicious.
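To see why separating rules pays off, consider a minimal sketch (Python, purely illustrative) of the action-enabler rule from Table 3, defined once as its own artifact rather than buried in use-case steps:

    # Hypothetical sketch: one action-enabler (ECA) rule from Table 3.
    # The rule lives here, in a single place; the use-case step simply says
    # "determine discount" and defers to it.
    def preferred_customer_discount(is_preferred, is_backordered):
        """If preferred customer and backordered item, then offer 10 percent discount."""
        if is_preferred and is_backordered:
            return 0.10
        return 0.0

    print(preferred_customer_discount(True, True))   # 0.1
    print(preferred_customer_discount(True, False))  # 0.0

Change the discount policy and only this rule changes; the use-case text that traces to it stays intact.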

Be sure to separate business rules from your use cases but still relate them. On one project we included business rules in our use cases from the start, and by the second iteration we found ourselves diving into multiple use cases to change and add business rules. We corrected course by deleting the business rules from the use-case text and turning them into a distinct requirements artifact; then we traced use cases to business rules and vice versa, using traceability matrices. This taught me a valuable lesson: Agility is facilitated not only by simplicity but also by separation.

Write your use cases with no business rules embedded in the text; instead, define the rules in a separate document and trace them to the use cases that must enforce them.

Until Next Time

In this article we've come a bit more than halfway through the list of misguided guidelines. Next month we'll look at the last four, which are pitfalls to watch for in the process itself. Until then, remember that although use-case mistakes are common, it is worth the effort to correct them. Use cases can really work for you if you don't misuse them!

Acknowledgments

I would like to thank the following reviewers for their helpful comments and suggestions: Alistair Cockburn, Gary Evans, Susan Lilly, Bill Nazzaro, Paul Reed, Debra Schratz, Karl Wiegers, and Rebecca Wirfs-Brock.

References

Alistair Cockburn, "Use Cases, Ten Years Later." Software Testing and Quality Engineering Magazine (STQE), Vol. 4, No.2, March/April 2002, pp. 37-40.


Alistair Cockburn, Writing Effective Use Cases. Addison-Wesley, 2000.

Alistair Cockburn, "Using Goal-Based Use Cases." Journal of Object-Oriented Programming, November/December 1997, pp. 56-62.

Martin Fowler, "Use and Abuse Cases." Distributed Computing, April 1998.

Tom Gilb, Competitive Engineering: A Handbook for Systems and Software Engineering Management Using Planguage. Addison-Wesley (forthcoming, 2002).

Ellen Gottesdiener, Requirements by Collaboration: Workshops for Defining Needs. Addison-Wesley, 2002.

Ellen Gottesdiener, "Collaborate for Quality: Using Collaborative Workshops to Determine Requirements." Software Testing and Quality Engineering, March/April 2001, Vol 3, No. 2.

Ellen Gottesdiener, Requirements Modeling with Use Cases and Business Rules (course materials, EBG Consulting, Inc.), 2002.

Daryl Kulak and Eamonn Guiney, Use Cases: Requirements in Context. Addison-Wesley, 2000.

Susan Lilly, "How to Avoid Use Case Pitfalls." Software Development Magazine, January 2000.

Stephen M. McMenamin and John F. Palmer, Essential Systems Analysis. Yourdon Press, 1994.

Rebecca Wirfs-Brock and Alan McKean, "The Art of Writing Use Cases." Tutorial for OOPSLA Conference, 2001. See http://www.wirfs-brock.com/pages/resources.html

Notes

1 "Requirements" define the operational capabilities of a system or process that must exist to satisfy a legitimate business need. The generic term "requirements" covers both functional (functionality users expect) and nonfunctional requirements (quality attributes of the software such as performance, system needs such as security and archiving, and technical constraints such as language and database). Functional requirements evolve from user requirements-tasks that users need to achieve with the software.

2 Planguage is a specification language developed by Tom Gilb. For more information, see www.result-planning.com and Tom Gilb, Competitive Engineering: A Handbook for Systems and Software Engineering Management Using Planguage. Addison-Wesley (forthcoming, 2002).

3 For more about this technique, see "Specifying Requirements with a Wall of Wonder" in the November issue of The Rational Edge.

4 Alistair Cockburn, Writing Effective Use Cases. Addison-Wesley, 2000.


5 Two useful ways to understand use-case dependencies are to show how use cases execute in sequence with a use-case map (see the "Use Cases" section of http://www.ebgconsulting.com/publications.html for an example) and to define pre- and post-conditions clearly for each use case.

6 Stephen M. McMenamin and John F. Palmer, Essential Systems Analysis. Yourdon Press, 1994.


Popular Science

by Joe Marasco

Senior Vice President and General Manager, Rational Software

We find many instances these days of scientific language used to "explain" common phenomena. Unfortunately, these usages are often metaphorically or analogically incorrect, devoid of meaning, or just plain silly. In this article we choose some common examples, show the improper application, and then try to illustrate a better way of saying what the author would like to say, stripped of the jargon intended to impress the layperson.

Houston, We Have a Problem

Plato was the first philosopher to point out that achieving a conceptual understanding of the physical world is trickier than we might think at first. In more recent times, Immanuel Kant should be given credit for trying to figure out what knowledge can be known with and without the "filtering of reality." That is, we have a notion that there is an "objective" reality, but in some sense we can never get to it, because it is filtered through our human apparatus for experiencing it -- our senses, minds, and emotions. What we are capable of experiencing, both as individuals and as a species, is a "subjective" reality -- and that limitation introduces lots of doubt as to what "reality" is. This doubt has permeated our thinking, and sometimes its effects can be less than subtle, or even subconscious. Consider how often we say, "Things are not always what they seem."

Our knowledge of physics, in some sense, compounds the problem. As far back as Galileo and then Newton, scientists have been formulating theories that seemed, in their time, very counterintuitive -- for example, that bodies in motion, left to their own devices, would continue in motion forever. As the centuries passed, however, many (but not all) of these counterintuitive ideas became accepted by educated people as reasonable; the ideas became common knowledge and were assimilated as accepted doctrine. Another trendy way of saying this is that certain ideas achieved "mind share"; that is, they became part of our collective consciousness.

But "modern" physics is only about one hundred years old. Although the rate at which the general population accepts new scientific ideas has accelerated, many of these ideas have not actually been absorbed. There are some very good reasons for this. For one, new theories such as relativity and quantum mechanics, because they deal with realms of reality outside our everyday experience -- velocities near the speed of light and subatomic distances, for instance -- turn out to be governed by laws that are very counterintuitive, even to our modern way of thinking. Both physicists and non-physicists alike recognize that it is very hard to explain these fundamental theories, in part because the mathematical apparatus that makes them clear to the practitioner is simply not accessible to most people. So physicists have a problem: Although their theories are correct and powerful, they are not explainable in any detail to the non-physicist.

Nonetheless, people want to understand. So what happens in most cases is that popularizers attempt to explain by analogy. This is perfectly valid. But over the years, the analogies themselves have acquired the status of fundamental truth for the lay public and are bandied about as common clichés. Each time this happens, people like me who have some understanding of the underlying science become baffled by these overused analogies because we don't see the applicability.

Because of my training as a physicist, I want to be sure to state up front that I don't think physics, or science and mathematics in general, is the exclusive province of some exalted priesthood, and that the layperson "shouldn't talk about what they don't understand." I would like everyone to understand science and technology better. But I am also a realist about the magnitude of the problem. What I would settle for is a better understanding of what those scientific clichés actually mean so that people can either use them only when they are appropriate or replace them with more appropriate arguments. In particular, I would like to discourage the practice of smugly quoting scientific jargon to impress an audience with the correctness of one's position. And, for my money, the best way to do this is to point out when the popular analogies actually apply, and when they don't.

So here goes. See if you can spot the faux pas as we proceed. Let's start with some "leftovers" from classical physics, that is, ideas that are still poorly understood, even after a few centuries of simmering in the intellectual pot.

Fig Newtons

Newton's Laws of Motion (or The Three Laws of Motion) are liberally quoted. Here are some of the things one hears from time to time:

From people in general:

"That object is in equilibrium, so by Newton's First Law, there

Page 90: index.jspCopyright Rational Software 2002€¦ · Editor's Notes: So if you're trying to convince others in your organization that the "activity-Summer Rules! Cover art: Stained glass,

must be no forces acting on it."

From a manager in response to observing a backlash to a recent business initiative:

"We should have known that would happen. Newton's Second Law predicts that for every action, there is an equal and opposite reaction."

From a project manager, remarking on someone else's project:

"That project is definitely in free fall."

Let's look at these one by one.

Misapplication of the First Law

Newton's First Law of Motion says:

A body at rest or in a state of uniform motion (constant velocity) will stay that way unless acted upon by an external force.

Note that the law applies only when there is no net external force acting on the body. Or, to put it another way, there may be external forces acting on the body, but they (the multiple external forces) cancel exactly. When these external forces balance each other, the object is in equilibrium: static equilibrium if the body is at rest, or else equilibrium in uniform motion -- that is, in a straight line at constant velocity. So remember: Equilibrium does not mean "no forces acting." Equilibrium means, "all external forces balance exactly." Of course, internal forces have no effect, as they cancel in pairs by Newton's Third Law, as we shall soon see.

Let us assume that a lump of coal is moving at constant velocity along the surface of a level table. Ignore for a moment how it came to be in motion, but let's assume it is moving at one inch per hour toward the west. Newton's first law tells us that unless we impose some other horizontal force on the lump, it will continue to move at one inch per hour toward the west forever.

Now, as we pointed out earlier, this defies common sense. In our real world, we would expect the lump of coal to slow down for at least two reasons. One, there is air resistance, and two, there is friction with the table's surface; both of these will tend to retard the uniform westward motion. But of course, there is no violation of Newton's first law here at all; both air resistance and friction are external forces acting on the lump of coal, and the first law states very precisely that the rule does not apply if external (net) forces are acting on the body in question. Now a physicist, used to thinking about and stating conditions precisely, would understand that a force is a force, and you can't neglect any of them. To describe the case above precisely, you would have to state: "The lump of coal will continue to move at one inch per hour to the west in a perfect vacuum on a perfectly level, frictionless table." The problem is, most of us are not so precise in describing daily phenomena, so it's easy to understand how ordinary folks might misapply Newton's First Law.

A member of the younger generation of physicists recently pointed out to me that these days, students use deep space as a theoretical framework for working out problems, so that they can quickly dispense with the effects of air resistance, friction, "tables," and the gravitational pull of nearby massive bodies. Although this idealized context simplifies the requirements for understanding mechanics, one wonders what will happen when these students are called on to solve real problems "back on Earth."

Misapplication of the Third Law

Newton's Third Law says:

For every applied external force on a body, the body exerts an equal and opposite force.

When something happens in the business world in reaction to an event, someone is sure to bleat out, "For every action there is an equal and opposite reaction." In fact, it is they who are having a knee-jerk "reaction." Rather than applying any thought to the situation, they quote Newton to justify or validate whatever backlash has taken place. The reaction is postulated as something that "had to happen" according to "the laws of physics." In truth, however, what goes on has nothing to do with physics. Not only is the typical reaction unequal to the effect that produced it; often it is not even delivered in the opposite direction, but is rather off at some tangent. Moreover, it may not have been a result of the original action at all.

Once again, Newton's Law is correct, but we must be precise about the force and the body. Often the "equal and opposite" forces people cite in business situations are really an internal force pair that does not exert any external net force on the body. So whenever you hear someone intone, "For every action there is an equal and opposite reaction," my advice is to check to see what the forces are and what bodies these forces are being applied to.

Misapplication of the Second Law

The Second Law says:

A body will be accelerated by an external force in direct proportion to the force and in inverse proportion to its mass.

This one is often quoted as simply "F = ma," which is just a formulaic restatement.1 It is an unbelievably simple and elegant result that applies over an incredible range of phenomena.

But what does it mean to talk about a project "in free fall"? I think managers mean that it is accelerating under the influence of gravity, which means that it is gaining speed and will inevitably collide, inelastically and catastrophically, with Mother Earth. Splat! I "get" the notion that there is no parachute and no brakes, and a sense of rapidly impending doom. Yet I see here a misuse of the physics analogy. Projects are subject to constraints just as surely as they have mass; the notion that management is so absent that we have effectively yanked the table out from under the lump of coal is certainly disheartening, to say the least.

Everything's Relative

OK. Now that we understand the gravity of mistreating Newton, let's try a couple more popular idioms on for size. Two people disagree on something, and one says:

"Well, it all depends on your frame of reference."

Or, someone wants to make the point that "the old accepted laws of nature are no longer true." The usual expression of this is along the lines of:

"Einstein showed that Newton was wrong."

Well, as they say in the Hertz commercial, not exactly. Einstein did a hell of a job with relativity, but his theory has spawned some strange notions.

Frame of Reference

With respect to the first example above, it's true that things can look different depending on your perspective or point of view; however, that is a perfectly classical phenomenon and has nothing to do with Einstein's Theory of Relativity. In fact, relativity says the opposite: the laws of physics are not different depending on your frame of reference. When you are within a framework that's moving at constant velocity, you cannot know your velocity as perceived by a stationary observer, because everything inside your framework behaves according to the same laws of physics, and all appears to you just as it would if you were stationary. And, by extension, you also cannot distinguish between, say, acceleration and gravitation.

Einstein Proved Newton Wrong?

As for the second claim above, Newton's Laws are perfectly valid at velocities that we encounter in our daily lives. Change comes only when things are moving at or near the speed of light. Then you need to apply different rules. And that's where Einstein comes in. Newton works at low speed (that is, most of the time), and Einstein's Relativity Theory kicks in when you start to go very fast. If you use "just Newton" for too long, then you will get progressively more incorrect results as you approach the speed of light, and your answers will be completely wrong when you actually reach the speed of light. Just remember that, relative to our daily experience, the speed of light is a very, very big number.

The effects of relativity are completely negligible in our common experience. You can compute them if you'd like, using Einstein's Theory of Special Relativity, but you will find that your results don't change at all.

That is because the speed of light is so great. On the other hand, the speed of sound is something we relate to daily. You can observe that the speed of light and the speed of sound are very different by doing this experiment the next time you are playing golf. When you are about 250 yards or more down the fairway (you have just hit your second shot and are walking toward the green), look back and watch (and listen) for the next group's tee shots. You will see the club hit the ball, and then a split second later you will hear the impact. You can compute this discernible interval by using the speed of sound in dry air at sea level,2 and by assuming that the speed of light is infinite; that is, it takes zero time for the light to travel from the golf club to your eyes. This will give you the right answer.3 If you do the calculation using the actual speed of light, then you will get basically the same answer.4 So although you can legitimately apply Einstein's Theory of Relativity here by using a finite speed for light, it won't buy you much.
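If you'd like to check the arithmetic yourself, here is a back-of-the-envelope sketch (Python); the constants are the standard textbook values, with the speed of sound taken at 68 degrees Fahrenheit as in the notes:

    # Sanity check of the golf-course example.
    SPEED_OF_SOUND_FT_S = 1127.3    # dry air at sea level, 68 degrees F
    SPEED_OF_LIGHT_FT_S = 9.836e8   # ~2.998e8 m/s converted to ft/s

    distance_ft = 750.0             # 250 yards

    sound_delay = distance_ft / SPEED_OF_SOUND_FT_S   # ~0.665 s -- easily noticeable
    light_delay = distance_ft / SPEED_OF_LIGHT_FT_S   # ~7.6e-7 s -- imperceptible

    print(f"sound: {sound_delay:.3f} s, light: {light_delay:.1e} s")
    # Treating light as instantaneous changes the answer by under a microsecond.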

There is a real-life situation in which you can experience the speed of light as finite. When making an international phone call, you are sometimes unlucky enough to go up to a geostationary satellite and back down to Earth, and that takes about half a second. That's long enough to give you the impression that your interlocutor is pausing; you might misinterpret that pause as dissent, hesitation, apprehension, or the like, depending on the conversation.
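The satellite number is also easy to verify -- a sketch assuming the standard geostationary altitude of about 35,786 km and ignoring the extra slant-path distance:

    # Why satellite calls feel laggy: light is finite after all.
    C_M_S = 2.998e8            # speed of light, m/s
    GEO_ALTITUDE_M = 35_786e3  # geostationary altitude above the surface, m

    one_way = 2 * GEO_ALTITUDE_M / C_M_S   # you -> satellite -> listener: ~0.24 s
    reply_pause = 2 * one_way              # your words out, their reply back: ~0.48 s

    print(f"one-way hop: {one_way:.2f} s; pause before you hear a reply: {reply_pause:.2f} s")

So the half second you perceive is really two back-to-back hops: your words going out and the reply coming back.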

And Another Thing

When it comes to Einstein, we have just scratched the surface. All the phenomena we've just discussed are manifestations of his Special Theory of Relativity, which holds only for bodies moving at constant velocity. When bodies actually accelerate relativistically, then you have to use his General Theory, with consequent additional heavyweight mathematical baggage. Yet popularizers invoke the General Theory with equal impunity. In fact, there are only a small number of known experimental tests of the General Theory, all of them involving very, very small effects.5

None of this diminishes the magnitude of Einstein's accomplishments. However, applying his brilliant discoveries to situations in which they do not really apply in a sense cheapens them.

Quantum Nonsense

Let's move on from relativity to quantum mechanics. Recently I had someone who was unwilling to make a forecast say to me:

"It's just like quantum mechanics. All I can give you is a probability."

Although the second part of his claim was most assuredly true, I am certain that it had absolutely nothing to do with quantum mechanics.


About twenty years after the relativity revolution, circa 1927, quantum mechanics burst upon the human race with equally momentous and unsettling effect.6

All you have to remember about quantum mechanics is all you have to remember about relativity. Neither theory replaces Newton's Laws. Whereas Einstein's Relativity Theory extends Newton's Laws into the domain of the very fast (velocities near the speed of light), quantum mechanics extends classical physics into the domain of the very small. That is, when we get down to subatomic dimensions, new rules come into play. That's when we need to use quantum mechanics. For everything else, the rules of quantum mechanics still apply, but the effects are so small that they are irrelevant.

The reason it took so long to discover both bodies of knowledge is that we could not measure either stuff that went really fast or things that were really small much before the second half of the nineteenth century. Actually, it was the invention and perfection of the vacuum pump -- an engineering feat -- that facilitated measurement in both arenas. This also explains why the effects that required the application of either Einstein's theory or quantum mechanics were not observed; except for the conundrum about the wave-particle duality of light, nothing in our plodding macroscopic world hinted that anything was "wrong."

The "It's just like quantum mechanicsý" quote reveals an interesting misconception. Because quantum theory involves calculations involving probabilities, many people think that predictions based on quantum mechanics are somehow imprecise. The reality is just the opposite.

For example, we can determine α, the fine structure constant,7 experimentally. Now this number is quintessentially a "modern" physics number: It is made up of, among other things, the charge on the electron, Planck's constant (see more on this later), and the speed of light. When you are measuring it by any method, you are doing quantum mechanics, and the theoretical predictions of the number involve some of the deepest applications of quantum theory we know. Yet we can do experiments that measure its value to about one part in 10^8. Now that is pretty good in anybody's book.

By contrast, G, the universal gravitational constant, a perfectly "classical" quantity known since the time of Newton, has been experimentally measured to only about one part in 10^4. That's not bad either; it corresponds to 0.01 percent precision. Yet we know α with roughly ten thousand times more precision. Somewhat ironic, isn't it?

So much for the probabilistic nature of quantum mechanics and its relation to making predictions.

More Quantum Nonsense

Actually, my pet peeve is the frequent misuse of "Heisenberg's Uncertainty Principle." If you are interested in a particularly droll example of this, see Freddy Riedenschneider's monologue in the Coen brothers' movie "The Man Who Wasn't There."8 It is easy to see why Billy Bob Thornton got the chair after his lawyer tried to use the principle to convince (or confuse) a jury.

A common lament when someone is asked to make a difficult measurement:

"We're screwed. Heisenberg tells us we can't measure something without disturbing it."

Another example: Software people now talk about "Heisenbugs."

"Man, it took us weeks to track down that defect. Turned out to be a Heisenbug."

These are bugs that are very hard to eliminate, because in the process of trying to do so we change the working of the program, and our original bug is further hidden by the actions of the debugging apparatus.

What is really going on here?

Measuring Stuff

The fundamental issue is this: can you measure something without at the same time disturbing the thing you are trying to measure? That is, when you go to take the measurement, do you influence in some way the very thing that you are trying to determine? If so, then you have a problem, because your measurement will be contaminated by your perturbation of the system you are trying to measure.

Now, this is not an extremely "deep" problem. Medical diagnosticians have to deal with it all the time. They spend a lot of time and energy making assessment procedures as minimally invasive as possible. Yet we know that some people's blood pressure goes up the minute a cuff is put on their arm. Ergo, their measured blood pressure is higher than their normal resting blood pressure.

In software, we work very hard to make debuggers "non-intrusive." Nonetheless, sometimes the act of debugging changes something that causes the program to behave differently than when it is running without the debugger. Whether this is the fault of the program or the debugger is somewhat moot; in either case, the programmer has a big problem.

And in the 1920s, Elton Mayo discovered the Hawthorne Effect; he demonstrated that in studies involving human behavior, it is difficult to disentangle the behavior under investigation from the changes that invariably occur when the group under study knows it is being studied.

Note that these phenomena are perfectly "classical"; you don't need quantum mechanics or the Heisenberg Uncertainty Principle to explain them.

Before we delve more into Heisenberg, we might ask the following question: Is it possible to do any measurement, even a macroscopic one, that is totally "non-intrusive"? If we can find just one example, then we can debunk the idea that it is impossible.

So here's my example. I wake up in a hospital bed in a room I have never been in before. I want to figure out how large the room is. So I count the ceiling tiles. There are sixteen running along the length, and twelve along the width. I know that ceiling tiles are standardized to be one foot by one foot. Hence I know that the room measures sixteen feet by twelve feet for an area of 192 square feet. Bingo! I have performed a measurement without even getting off my back, and I claim that I have not disturbed the room at all.

Applying Heisenberg

Where the Heisenberg U.P. applies is in the atomic and subatomic realm. Basically, it posits that it is impossible, quantum mechanically, to specify both the position and momentum of a particle to arbitrary precision. If you want to make your knowledge of the particle's position more exact, then you will have less precision on its momentum, and vice versa.

To observe said particle, you have to "shine a light on it." But in so doing, the light itself affects the particle's momentum, so the very act of pinning down the particle's position disturbs its motion. "Non-intrusiveness" is therefore impossible at quantum dimensions, and Heisenberg supplies a formula to compute just how much the intrusion will affect your measurement.

One caution: Heisenberg's U.P. uses Planck's constant, which is very, very small. So small, in fact, that the Heisenberg U.P. yields nonsensical results once you get into anything greater than atomic and subatomic distances. That is, if you shine a light on an electron, you will affect its position, measurably. On the other hand, if you shine a light on me, you are not going to affect my position much. Shining a light on the ceiling tiles of the hospital room affects them not at all. So using the Heisenberg Uncertainty Principle for macroscopic objects is just nonsense.
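To see the scales involved, here is a minimal sketch (Python) of that formula; the inequality is Δx·Δp ≥ ħ/2, so the smallest possible velocity uncertainty is ħ/(2·m·Δx). The masses and distances below are illustrative round numbers:

    # Heisenberg at two very different scales.
    HBAR = 1.055e-34  # Planck's constant over 2*pi, in joule-seconds

    def min_velocity_uncertainty(mass_kg, position_uncertainty_m):
        """Smallest velocity spread the uncertainty principle allows."""
        return HBAR / (2 * mass_kg * position_uncertainty_m)

    # An electron pinned down to about one atomic diameter:
    print(min_velocity_uncertainty(9.11e-31, 1e-10))  # ~5.8e5 m/s -- enormous

    # A 70 kg person located to within a millimeter:
    print(min_velocity_uncertainty(70.0, 1e-3))       # ~7.5e-34 m/s -- utterly negligible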

Heat Death

On to the last misuse. Some say that there have only been four or five fundamental watersheds in physics. Sandwiched in between Newton and the twentieth-century behemoths of relativity and quantum mechanics is the science of thermodynamics.9

Thermodynamics really shook things up. Here is where our modern ideas of energy conservation come from. Here is where the relationship between work and heat becomes clear. Here is where we show that perpetual motion machines are impossible. But most intriguing of all, here is where we get an entirely new concept: entropy.

So today we hear statements such as this:

"Large companies are doomed to failure, because entropy inevitably takes over."


Entropy is a measure of disorderliness. And one of the key tenets of thermodynamics is that entropy is always increasing. The most common example given to students is intuitively very appealing. Take a box that has a partition down the middle, and fill one side with oxygen molecules and the other side with nitrogen molecules. Remove the partition, and the molecules will continue to move about spontaneously. After some time, we observe that there is a uniform mixture of oxygen and nitrogen in the box. We can wait forever, and the molecules will never, of their own accord, find themselves back in the state with all the oxygen on one side and all the nitrogen on the other.10 The mixed state is considered to be more "random" or disordered; the segregated system more ordered. The entropy of the final system is greater than that of the initial system.
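Footnote 10's "very close to zero" is easy to make vivid with a simple coin-flip model (an idealization -- assume each molecule independently ends up on the left or right half with probability 1/2):

    # Probability that all N molecules spontaneously return to one side.
    def prob_all_on_one_side(n_molecules):
        return 0.5 ** n_molecules

    print(prob_all_on_one_side(10))    # ~9.8e-04: plausible for a mere handful
    print(prob_all_on_one_side(100))   # ~7.9e-31: effectively never
    # For a real box of gas (~1e23 molecules), the number underflows to zero.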

In any closed system, entropy spontaneously and naturally increases. Eventually the system reaches maximum entropy, or total randomness. In applying this phenomenon to the universe, physicists refer to it as "heat death"; hence the title of this section.

Seems logical, and by and large it is. Most systems, left alone, will tend to a more disorderly state. Just look at my desk.

But somewhere along the line, lay people started extending the concept to economic and social systems. And this is where I think it took a wrong turn. Note that the quotation above is not entirely without merit. It is certainly true that the larger an organization becomes, the more communications links it must support; as Kenneth Arrow pointed out many years ago,11 this may eventually limit its growth. Certainly it becomes harder for large organizations to coordinate activities, and even more difficult for them to respond quickly to changing circumstances. On the other hand, it is a mistake to assume that entropy must inevitably win.

Although it's certainly true that a closed system, left alone, will tend to a state of maximum entropy, economic systems -- such as the company you work for -- are not "closed." They are open to the flow of matter and energy. And we don't tend to leave our companies alone. We add raw materials to them all the time; we do work on the system; we expend energy to combat entropy. Just as I work to clean off my desktop and make it more orderly (less entropic), I can invest work in the communications channels and mechanisms in my company to reduce the disorder.

Now this work is roughly equivalent to the energy a machine might expend to overcome friction; it is not, in some sense, "useful" work. On the other hand, it does provide us with a rationale for continuing human enterprise. With the correct balance, we can at least hold off entropy while we make progress on the "real" objectives. One philosopher, long since forgotten, pointed out that we spend our whole lives combating entropy. It's how human beings and societies survive. In effect, the social organizations -- countries and enterprises -- that do a better job of beating back entropy ultimately win over those that are less successful in this fundamental enterprise.

The key thing to ask when someone refers to the inevitability of entropic disorder is: "Is it a closed system?" If not, then the spontaneous and inevitable increase in entropy is not a given.

Other Examples

There are, unfortunately, lots of other examples I could delve into. Each one would require a few paragraphs, and this article grows long already. I have heard numerous misstatements concerning the dual nature (wave-particle) of light. The discoveries over the last forty years in Chaos Theory have been incorrectly quoted to (once again) invalidate Newton's Laws. Gödel's Incompleteness Theorem, around seventy years old, is sometimes used to justify our inability to prove something. And in the computer science arena, Turing's Machine is frequently used to demonstrate undecidability12 in areas where it has no applicability at all. The recently deceased scientist Stephen Jay Gould wrote extensively on how Darwin's Theory of Evolution has been widely quoted and generally misunderstood. All of these fundamental theorems are profound, and all of them are betrayed when used in situations in which they absolutely don't apply.

Good Science

Scientists and mathematicians have given us some incredibly powerful tools to help us understand our physical world. These tools are wonderful triumphs of human intellect, allowing us to start with first principles and explain a wide variety of phenomena, right down to the existence and behavior of elementary particles. As the phenomena get farther and farther away from our common experience, however, the theories become more abstract and require more esoteric mathematics for their exposition. It is at this point that we sacrifice to inaccessibility much of what we stand to gain in fundamental understanding. Nevertheless, although the average Joe cannot really appreciate all of the subtleties, he can certainly benefit from the trickle-down effects of these discoveries, embodied in practical products that come into his life. In this sense, it is all "good science."

What is not good science is using shibboleths from science to explain things that are clearly unrelated to the physical principles that underlie those shibboleths. Human beings are not just like fundamental particles; there is no reason to believe that they obey quantum mechanical laws as macroscopic beings. Such analogies really are misleading, and we should be wary of those who would use them to convince us that their positions are valid.

Likewise, we should all be careful about using pseudoscientific jargon in our daily communications with others. The least harmful result is that they will believe us without thinking, because we have "snowed" them with technical lingo. A more harmful result, which you should carefully consider, is that they will nod in agreement, and secretly conclude that you are a carnival huckster. And you will never know that you have lost credibility when you thought you were gaining it.

Dedicated to Mark Sadler, 1945 - 2002


Notes

1 While "F = ma" is the commonly quoted formula, the more general equation is "F = dp/dt," which says that the force is proportional to the rate of change of momentum. This only matters if the mass of the system does not remain constant, as in the problem of a rocket becoming lighter as it burns its fuel and thus loses mass during its flight. "F = dp/dt" is more general than "F = ma," but the latter formulation is the one you hear the most. Just remember when you hear it that it contains the assumption that the mass doesn't change.
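Spelled out -- a one-line expansion, assuming the mass is constant:

    \[ F \;=\; \frac{dp}{dt} \;=\; \frac{d(mv)}{dt} \;=\; m\frac{dv}{dt} + v\frac{dm}{dt} \;=\; ma \quad \text{when } \frac{dm}{dt} = 0. \]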

2 Notice how precision just starts to creep into our language when we want to be careful about using physics. The speed of sound depends on a lot of things, including the temperature, which I did not specify.

3 I hope you were curious enough to do the calculation yourself. If not, here it is. At 68 degrees Fahrenheit, the speed of sound is 1127.3 ft/sec. Two hundred and fifty yards is 750 feet, so the sound will reach you in 0.665 seconds, or roughly two-thirds of a second. This is a noticeable interval. And by the way, I used English units here, not metric, because golfers, by and large, are "calibrated" in yards, not meters.

4 Another common example of this calculation is determining how far away a lightning bolt is by timing how long it takes to hear the thunderclap after you see the lightning. Same idea.

5 When I was doing physics around thirty years ago, there were only three. They were (and are still today) 1) the precession of the perihelion of the orbit of Mercury, 2) the gravitational bending of light as it passes by a massive object, and 3) the gravitational red shift of light as it climbs out of the gravitational field of a mass. My sources tell me that since then, several more have been added; one involves measurements on binary pulsars. All these effects are extremely small and hard to measure, and have very little connection to our everyday lives.

6 Some date the origin of quantum mechanics back to Planck's work in the early 1900s, which was contemporaneous with that of Einstein. I use 1927, because the papers that Schrödinger published in 1926 were publicized in early 1927, giving us Schrödinger's Equation. That formalized things and really launched the revolution.

7 The fine structure constant comes up when considering the separation of lines observed when doing spectroscopy on the atoms of an element. Quantum mechanics evolved as physicists tried to explain the various separations for different elements; later, quantum theory was used to predict higher-order effects on the spectra when, for example, the atom in question was subjected to an electrical or magnetic field.

8 http://us.imdb.com/Title?0243133

9 We might mention in passing that around the time of the American Civil War, our friend James Clerk Maxwell, building on the empirical work of Michael Faraday before him, recast electromagnetism in a beautiful mathematical formulation. It made electricity and magnetism understandable as aspects of one theory, and it is stunning. It enabled, in some sense, modern telecommunications to be born; for example, Marconi and his radio came after. So it is not to be downplayed. Yet to me Maxwell's equations are a mathematical tour de force; much of the physics and phenomenology was well understood at the time of Maxwell's work. For example, we know that the laying of the transatlantic cable was interrupted by the Civil War, so that telegraphy was in place well before it.

10 Theoretically, you can compute the probability that this will happen. It is very close to zero, believe me.

11 See http://www.amazon.com/exec/obidos/ASIN/0393093239/qid=1022704048/sr=1-6/ref=sr_1_6/002-2515674-9292830

12 The notion of "undecidability" refers to the impossibility of writing a computer program to determine the result of a problem or class of problems.


ClearCase VOB Database Troubleshooting

by Carem Bennett

Editor's Note: Each month, we will feature one or two articles from the Rational Developer Network, just to give you a sense of the content you can find there. If you have a current Rational Support contract, you should join the Rational Developer Network now!

Introduction

ClearCase VOBs use a proprietary database format, the Raima database. Troubleshooting VOB databases can be difficult when the only error you see is "db_vista error -912." How do you troubleshoot ClearCase VOB database problems? What do the error messages mean?

The ClearCase database resides in the db subdirectory of the VOB storage location. All database transactions come through the vobrpc_server and db_server processes. The vobrpc_server process reads and writes data on behalf of the view_server process. The db_server process reads and writes data as a result of cleartool and clearmake commands. The lockmgr process coordinates simultaneous access. As such, the log files where errors are reported are the db_server_log and vobrpc_server_log files. Errors in the scrubber_log and vob_scrubber_log files might also indicate problems internal to the database.
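If you simply want to watch the ends of those logs, a tiny sketch (Python) will do; the path shown is the customary Unix log directory for ClearCase of this era and is an assumption -- substitute your installation's location:

    # Assumption: Unix ClearCase logs live under /var/adm/atria/log;
    # adjust LOG_DIR for your site (Windows installs use a different path).
    import pathlib

    LOG_DIR = pathlib.Path("/var/adm/atria/log")

    for name in ("db_server_log", "vobrpc_server_log"):
        log = LOG_DIR / name
        if log.exists():
            lines = log.read_text(errors="replace").splitlines()
            print(f"== {name}: last 10 lines ==")
            print("\n".join(lines[-10:]))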

Error Messages and What They Mean


Error Number

Possible Cause What To Do Initial Steps to Take

-4 User is not in <vobstore>\db directory while running db utility. User is also not root or administrator while running db utility.

Change to the correct directory. Ensure user is administrator or root.

Change to the correct directory. Ensure user is administrator or root.

-6 bit-flip possible. If everything appears to be working normally, and dbcheck only reports a few errors, it is acceptable to continue using the VOB while Rational gathers information needed to fix the database.

Gather information in Troubleshooting section.

-16 Possible corruption.

If everything appears to be working normally, and dbcheck only reports a few errors, it is acceptable to continue using the VOB while Rational gathers information needed to fix the database. This should only affect DOs.

Gather information in Troubleshooting section.

-20 Running dbcheck with large -p value.

Running dbcheck with large -p value.

Gather information in Troubleshooting section.

-35 Specifying too long of a path to db directory.

Specifying too long of a path to db directory.

Gather information in Troubleshooting section.

Page 103: index.jspCopyright Rational Software 2002€¦ · Editor's Notes: So if you're trying to convince others in your organization that the "activity-Summer Rules! Cover art: Stained glass,

-43 Possible network issue.

This does not appear to be database corruption. Rational will need to gather information to investigate. This may be a network issue.

Gather information in Troubleshooting section.

-900 Disk full Disk full Check for disk space where the db directory is located. Gather information in Troubleshooting section.

-901 There should also appear an Operating System error.

You are experiencing a normal OS error which needs to be fixed. However, this could also indicate database corruption.

Locate the OS error. Gather information in Troubleshooting section.

-902 Not likely to be corruption

Not likely to be corruption

Gather information in Troubleshooting section.

-903 Missing key file Missing key file Ensure that all files in the db directory are present. Restart.

-904 Insufficient Memory

Insufficient Memory How much memory is on the server? Add more memory and\or paging space. If this occurred while running dbcheck, rerun without the -p option.

-905 Not database corruption

Have there been any changes to the VOB storage directory?

Check for permissions and db files. Gather information in Troubleshooting section.

Page 104: index.jspCopyright Rational Software 2002€¦ · Editor's Notes: So if you're trying to convince others in your organization that the "activity-Summer Rules! Cover art: Stained glass,

Error -906
Possible cause: Incorrect permissions on transaction files.
What to do: Determine whether there have been any changes to the VOB storage directory.
Initial steps: Check the permissions on the physical VOB storage directory.

Error -907
Possible cause: Incorrect lock manager permissions.
What to do: Determine whether there have been any changes to the lock manager permissions.
Initial steps: Run as a system account; the clearcase_albd account may not have access.

Error -908
Possible cause: Not likely to be corruption.
What to do: You have encountered a lock problem and will need to restart ClearCase.
Initial steps: Restart ClearCase.

Error -909
Possible cause: The maximum records limit has been reached.
What to do: Run countdb.
Initial steps: Gather the information listed in the Troubleshooting section.

Error -910
Possible cause: Key file inconsistency.
What to do: This is database corruption. If everything appears to be working normally and dbcheck reports only a few errors, it is acceptable to continue using the VOB while Rational gathers the information needed to fix the database.
Initial steps: Gather the information listed in the Troubleshooting section.

Error -911
Possible cause: Not corruption; the lock manager parameters need adjusting.
Initial steps: Run as a system account.

Error -912
Possible cause: Normally not corruption. Likely a disk space problem on the VOB storage mount, a problem with the VOB server processes, or the 2 GB limit on string and transaction files.
Initial steps: Gather the system logs and check for disk space.

Error -914
Possible cause: Normally not corruption. Likely a disk space problem on the VOB storage mount, a problem with the VOB server processes, or the 2 GB limit on string and transaction files.
Initial steps: Gather the system logs and check for disk space.

Error -915
Possible cause: Network error; heavy network traffic may be the cause.
Initial steps: Check the logs for network errors.

Error -916
Possible cause: This does not appear to be database corruption; Rational will need to gather information to investigate.
Initial steps: Gather the information listed in the Troubleshooting section.

Error -917
Possible cause: Insufficient lock manager parameters.
Initial steps: Run as a system account; the clearcase_albd account may not have access to the directory.

Error -918
Possible cause: Normal; not an error.

Error -919
Possible cause: A disk space problem on the VOB storage mount, a problem with the VOB server processes, or the 2 GB limit on string and transaction files.
Initial steps: Check for disk space, then gather the information listed in the Troubleshooting section.

Error -920
Possible cause: The lock manager socket was deleted.

Error -921
Possible cause: This does not appear to be database corruption; Rational will need to gather information to investigate.
Initial steps: Gather the information listed in the Troubleshooting section.

Error -922
Possible cause: The lock manager is busy.
Initial steps: Adjust the lock manager parameters; increase -u.

Error -923
Possible cause: Memory error (Windows only).
What to do: There is a memory error and ClearCase needs to be restarted.
Initial steps: Restart ClearCase. Restart the computer.

Error -925
Possible cause: Corrupt *.tjf or *.taf file.
What to do: There are corrupt transaction files. Contact Support. Any recent updates to the VOB that have not been written to the database may be lost.
Initial steps: Restart ClearCase and contact Support, then:
1. Back up the vista.* files and the logs subdirectory.
2. Lock the VOB (if possible).
3. Stop ClearCase.
4. Remove the vista.* files from the database subdirectory.
5. Start ClearCase.
6. Unlock the VOB.
7. Immediately lock and unlock the VOB again.

Error 1
Possible cause: Possible corruption.
What to do: If everything appears to be working normally and dbcheck reports only a few errors, it is acceptable to continue using the VOB while Rational gathers the information needed to fix the database.
Initial steps: Gather the information listed in the Troubleshooting section.

Error 2
Possible cause: Possible corruption.
What to do: If everything appears to be working normally and dbcheck reports only a few errors, it is acceptable to continue using the VOB while Rational gathers the information needed to fix the database.
Initial steps: Gather the information listed in the Troubleshooting section.

Error 3
Possible cause: Possible corruption.
What to do: If everything appears to be working normally and dbcheck reports only a few errors, it is acceptable to continue using the VOB while Rational gathers the information needed to fix the database.
Initial steps: Gather the information listed in the Troubleshooting section.

Error 5
Possible cause: A utility that requires the VOB to be locked is running.
Initial steps: Gather the information listed in the Troubleshooting section.

All other errors
Possible cause: This does not appear to be database corruption; Rational will need to gather some information to investigate.
Initial steps: Gather the information listed in the Troubleshooting section.
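
When one of these codes appears, it is usually embedded in a db_server_log or vobrpc_server_log entry on the VOB server. As a quick first pass on UNIX, you can scan the logs for database error markers. This is a sketch only: the log directory shown and the db_VISTA marker (the name of the embedded database) are assumptions, and both the location and the exact message wording vary by release.

   # Scan the database server logs for embedded database error codes
   cd /var/adm/rational/clearcase/log    # older releases use /var/adm/atria/log
   grep -n 'db_VISTA' db_server_log vobrpc_server_log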

Troubleshooting

Gather the following information before contacting Support to troubleshoot VOB database issues.

1. Obtain the Operating System and ClearCase version information. What version of the operating system is in use? Are there any service packs or patches installed? What version of ClearCase is installed?

2. When was the last time dbcheck was run before the corruption?

3. Has the system experienced a power failure since the last time dbcheck was run? If so, when?

4. Has the system recently experienced a crash or hang that required the computer to be restarted? If so, when?

5. Is RAID 5 in use, or any other RAID configuration? Is it hardware or software based?

6. Save and send in operating system logs from the VOB server. On Windows, send in the System and Application logs from the Event Viewer.

7. On Windows, run CCDoctor on the VOB server and save the output to a file with a .ccdoc extension.

8. On Windows, send in a WinMSD report in complete mode. On Windows 2000, send in a System Information report in text format.

9. On Windows, run cleartool getlog -a > <path>\getlog.txt on the VOB server.

10. Perform a dbcheck as Administrator or root on the VOB server. (A scripted sketch of this step and the upload in step 11 appears after this list.)

a. Log on as Administrator or root.

b. Lock the VOB. If there are errors locking the VOB, copy the database to another location and run dbcheck there.

c. Change to the db directory on the VOB server.

d. Run dbcheck. On Windows: <atriahome>\etc\utils\dbcheck -a -k -p8192 vob_db > c:\tmp\dbcheck.txt

e. On UNIX: <ATRIAHOME>/etc/utils/dbcheck -a -p8192 vob_db > /tmp/dbcheck.txt

Note: the string vob_db is not an abbreviation, and should be entered literally. Make sure the dbcheck.txt output file indicates that each of the seven database files was processed.

11. Sign in to the Rational FTP server and put the output into your directory. To upload files to the FTP site:

a. At the command prompt, enter: ftp exchange.rational.com

b. Login: anonymous

c. Password: your full e-mail address

d. Enter bin to switch to binary mode before putting the files into the ftp directory.

e. Put the zipped files into the directory.

f. Quit.

Or, you can e-mail your files to [email protected] with your SR number in the subject line. After you have placed the requested information on Rational's FTP server, please call the Support Representative you have been working with and tell them that the information has been uploaded. If you cannot contact the Support Representative directly, call our general Support number at (800) 433-5444.
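
On a UNIX VOB server, steps 10 and 11 can be strung together in a small shell script. This is a minimal sketch, not a supported procedure: the VOB tag /vobs/myvob and the storage path /vobstore/myvob.vbs are hypothetical placeholders, you@example.com stands in for your e-mail address, and your FTP client's options may differ. Adapt the paths to the ones you used in steps 10 and 11.

   #!/bin/sh
   # Sketch of step 10: lock the VOB, run dbcheck from the db directory, unlock.
   cleartool lock vob:/vobs/myvob
   cd /vobstore/myvob.vbs/db
   $ATRIAHOME/etc/utils/dbcheck -a -p8192 vob_db > /tmp/dbcheck.txt
   cleartool unlock vob:/vobs/myvob

   # Sketch of step 11: scripted anonymous upload (-n suppresses auto-login).
   {
     echo "user anonymous you@example.com"
     echo "binary"
     echo "put /tmp/dbcheck.txt"
     echo "quit"
   } | ftp -n exchange.rational.com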
