Building a Learning Organisation
Evaluating Performance Criteria through Simulation Games
Keith T Linard, Senior Lecturer, Department of Civil Engineering
Keithlinard#@#yahoo.co.uk (remove hashes to email)
Games are crucial to children's educative process. Play helps integrate the cognitive, emotional and social dimensions in thinking and behaviour. It fosters initiative and problem solving, and helps children explore interpersonal relationships and integrate experiences in a non-threatening milieu.
As these innocents grow up to be managers, however, their games tend towards individual orientation, destructive for themselves, their colleagues and the business. Organisational performance criteria foster such dysfunctional games by focusing on individual reward structures. But large organisations are aggregations of management units operating as a system, where the efficiency and effectiveness of the system are far more than the sum of the individual parts.
Systems thinking helps us map qualitatively the organisational interactions, highlighting how managers' responses to performance criteria produce unanticipated feedback problems for others. An advance on this is system dynamics modelling, which shows in quantitative terms the consequences of the interacting positive and negative feedback loops.
The Powersim™ system dynamics software advances us yet further, enabling management learning laboratories with interacting sub-models of an organisation under the control of the respective managers. Managers can learn how performance criteria affect not only their own decisions but also the performance of colleagues and of the organisation.
The use of dynamic models, particularly in a gaming environment, adds critical dimensions to evaluation. They enable managers to test the validity of performance indicators.
KEYWORDS: Evaluation; performance indicators; simulation games; management learning laboratory; learning organisation; systems thinking; system dynamics.
INTRODUCTION
Linard, in his presentation to the 1995 AES Conference¹, argued that the underlying program evaluation paradigm is in large measure flawed because it fails to take into account the impact of feedback mechanisms, and especially of delayed feedback.
Studies at MIT demonstrate that there is systematic misperception of feedback, especially when there are delays in the system. Replication of these simulations at the Australian Defence Force Academy confirms that highly educated managers invariably fail to comprehend the significance of feedback in the face of delay-induced dynamics. Linard also referred to the 1994 MIT research report by Diehl and Sterman², which argued that there '... is a fundamental bound on human rationality - our cognitive capabilities do not include the ability to solve systems with delay induced dynamics'.
Diverse studies suggest that, where non-trivial delayed feedback mechanisms are likely to exist, the program element should be considered from a systems thinking perspective and that computer modelling is essential to understanding the dynamics.
1 Linard, K., Dancing Towards Disaster -- The Danger of Using Intuitive Indicators of Performance. Proc. 1995 International Conference of the Australasian Evaluation Society.
2 Diehl, E. and Sterman, J., Effects of Feedback Complexity on Dynamic Decision Making. MIT Sloan School of Management, Research Report D-4401-1, March 1994.
This paper first describes the accepted framework for program and performance evaluation in the public sector, using the detailed guidelines for Navy by way of illustration.
The paper then introduces the key elements of systems thinking, contrasting this approach with the essentially static program logic framework in the context of the traditional program evaluation paradigm.
The paper then addresses the issue of achieving management confidence in computer models, given the twin dangers of skepticism and 'black box' syndrome. The balance of the paper focuses on the development of a management learning laboratory, where managers are introduced to the concepts of delayed feedback through simple (yet profound) simulation games, and then participate in the development and calibration of gaming models of their business areas.
Setting the Context -- Evaluation in Defence
Program and performance evaluation in the Australian Defence Organisation are similar to those in the rest of the bureaucracy, understandably so since they draw heavily on evaluation paradigms developed by the Federal Department of Finance. For simplicity, Navy's evaluation framework is described.³ The comments broadly apply to the other Services and indeed to the rest of the bureaucracy.
At the corporate level, program evaluations perform three main functions.⁴ They:
a) provide information for strategic planning -- program outcomes are assessed against program objectives, with the resulting judgements on effectiveness and capabilities providing an input to the planning process;
b) support the annual five year program planning and budgeting process by reviewing the adequacy of resource requirements, areas of management flexibility and financial priorities; and
c) provide a mechanism to review effectiveness and efficiency in achieving program objectives.
Program evaluation and strategic planning are thus intimately interwoven.
Performance evaluation focuses on finding new and better ways of conducting activities.⁵ It has been integrated into Navy's Total Quality Management cycle, where the Plan-Do-Check-Act cycle underpins continuous quality improvement. Processes are continually evaluated to find areas for improvement. Data is collected and evaluated to help improve decisions. Planning and evaluation are two sides of the same coin: progress against strategic and business plans is regularly assessed to ensure that goals/objectives are being achieved, and so to assist the planning process in achieving further improvement.
3 This section draws heavily on: Royal Australian Navy, Directorate of Corporate Management, ABR 2010, The Navy Quality Manager. Defence Centre, Canberra. Second Edition, 1996, Chapter 11. That document in turn draws heavily on Linard, K.T., Evaluating Government Programs - A Handbook. AGPS, Canberra, 1987. Figures 1 to 3 are from ABR 2010.
4 Ibid., para 1108.
5 Ibid., para 1111.
Figure 1: Performance Evaluation in Navy Quality Management Cycle
Each business plan includes performance indicators to measure and evaluate performance against planned objectives. This evaluation information then feeds back into the planning cycle, providing information for the situation analysis phase of business and strategic planning.⁶
In Figure 2, performance evaluation has two distinct dimensions -- efficiency and effectiveness.
Efficiency or Process Evaluation places an emphasis on analysing inputs, processes and outputs to measure efficiency, as illustrated in Figure 3. In the very optimistic words of Navy's evaluation guidelines: 'Evaluation of processes is relatively easy (measures of inputs, outputs, processing times etc) and this gives an indication of efficiency.'⁷ As will be discussed, delays and wider interdependencies render it anything but easy to evaluate without dynamic simulation.
Effectiveness or Impact Evaluation is concerned with determining whether programs are achieving their objectives, for example whether the allocation of resources is consistent with operational readiness directives.
In this system there are delays of months or years between the outputs and measurable outcomes, and years from that to achieving measurable objectives. Interdependencies are even greater. Meanwhile decisions continue to be made on the basis of static performance data.
Mental Models Which Shape Our Perceptions of the World
Our failure to address the key system effects of time dynamics (the effect of delayed feedback on decisions) and interdependencies with decisions being made elsewhere is arguably a consequence of prevailing mental models in the Western world. The prevailing analytical paradigms have their origins in the philosophy of Plato, reinforced by the scholastic philosophers of the 13th century and embedded in concrete by the philosophers of the 17th and 18th centuries, especially Descartes.
6 Ibid., para 1112.
7 Ibid., para 1114.
Figure 2: Performance Evaluation and Business Planning -- Two Sides of the Same Coin
Figure 3: Efficiency or Process Evaluation
Figure 4: Effectiveness or Impact Evaluation
Shopping for causality⁸
Typically, if one asks a 'what is/was the cause of ...?' type question, one generally gets a 'shopping list' of causal factors in response. Review of the press or TV reports of the 1997 Canberra Hospital demolition disaster, which resulted in the death of a young girl, reveals such a list:
- political pressure to stage a spectacle
- loss of corporate memory due to downsizing / corporatisation
- detailed plans not being available
- managerial focus on devolution and outsourcing of professional expertise
- inadequate regulatory environment
- organisational culture in the bureaucracy
- etc, etc
Similarly, press discussion of the 1997 Thredbo ski resort disaster produced a veritable shopping
list of organisations, people, decisions and factors to blame.
Implicitly, people also weight each factor in the shopping list: this one is most important, this one is second, and so on. This kind of mental modelling has been given analytical expression as a multiple regression equation. Many here would be familiar with this type of expression:
y = a₀ + a₁X₁ + a₂X₂ + . . . + aₙXₙ
where: y = dependent variable
Xᵢ = independent variable(s)
aᵢ = coefficient (or weighting factor) for each independent variable
Notice the implicit assumptions in the shopping list thinking process: each factor contributes as a cause to the effect (i.e., causality is uni-directional); each factor acts independently; the weighting factor of each is fixed; and the way in which each factor works to cause the effect is left implicit, represented only by the sign of the coefficients (i.e., it is simply a product of statistical correlation and may have no direct bearing on how the system works).
The shopping list mental model implicitly assumes:
- straight-line (as distinct from circular) causality
- independent factors (c.f. interdependent relationships)
- fixed weighting (c.f. changing strength of relationships)
- a correlational (c.f. operational) perspective
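This mental model can be expressed directly in code. The following sketch uses purely illustrative data (not from any study cited here): the 'effect' is generated as a fixed-weight sum of independent factors, and a regression dutifully recovers those weights.

```python
import numpy as np

# The "shopping list" mental model as code: the effect is assumed to be a
# fixed-weight sum of independent causal factors. Data are purely illustrative.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                 # three "independent" factors
true_weights = np.array([0.5, 0.3, 0.1])      # fixed, ranked importance
y = 1.0 + X @ true_weights + 0.05 * rng.normal(size=200)

# The regression recovers the weights -- but says nothing about circular
# causality, interdependence, or how the system actually operates.
A = np.column_stack([np.ones(200), X])
coefficients, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coefficients.round(2))                  # approximately [1.0, 0.5, 0.3, 0.1]
```

The point is not that regression is wrong, but that it encodes exactly the four assumptions listed above, whatever feedback structure is actually at work.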
The Board of Inquiry into the 1996 Blackhawk helicopter disaster, in which 18 service personnel were killed, did not produce a shopping list. Rather, it concluded that the cause lay in interrelated systemic problems over a period of years. In essence there was a series of interrelated decisions (budgetary, personnel, operational etc) which, alone, would not have resulted in a crash, but taken together made the disaster inevitable.
Mental models, systems thinking and shopping lists
8 This section draws heavily on: Richmond, B., Systems thinking: critical thinking skills for the 1990s and beyond. System Dynamics Review, Vol. 9, No. 2 (Summer 1993): 113-133.
Figure 5: The typical answer to a causal question is a shopping list (causal factors 1 to n, each pointing independently to the effect)
The 'shopping list' view of the world is a mental model. It is a set of assumptions by which we try to simplify the multiplicity of real world factors so that we can better understand what decisions to make. In the words of Deming, 'all models are wrong, some models are useful'. The shopping list model has limited usefulness in understanding complex policy problems where delays and interdependencies are endemic.
The systems thinking paradigm is also a partial view of reality, but it has greater utility for program evaluators and planners. It is based on a radically different mental model of 'how the world works'.
Uni-directional straight-line (as distinct from circular) causality
The program logic framework was originally developed by USAID in the 1950s for evaluating the effectiveness of aid programs. It was adapted by federal and state bureaucracies in the 1980s as a basic paradigm for undertaking program evaluation. This framework has been valuable in exposing wishful thinking in program planners. However, the linear causal assumptions limit one's ability to understand dynamic phenomena. The 'logic framework' is a static picture of a dynamic world.
Most programs are worlds of change, with delayed feedback of information about that change. A typical program time cycle might be:
1991-92 >>> Program logic helps develop program & indicators, based on 1980s research.
June 92 >>> Program commences, but with teething problems due to budget changes.
June 95 >>> Indicators for 1993/4 suggest implementation problems. Fixed, so ignored.
1995-96 >>> ABS survey data (6-9 months old) suggest marginal impact. Further studies deemed necessary. Program evaluation suggests changes.
1996-97 >>> Policy changes proposed; debated in Party room; Cabinet Submission prepared; amending legislation developed etc.
Nov 97 >>> Legislation passed (based on 2-3 year old data).
Program managers, facing such delays, are in a similar situation to the driver depicted in Figure 7.
Figure 6: Linear uni-directional causal mental model (Program Logic Framework). Resource inputs, within the political, cultural and legal environment, are presumed to bring about ongoing program activities, which are presumed to bring about short-term outputs, which are presumed to bring about achievement of objectives. 'Program Logic is . . . systematic study of the presumed relationship between . . .'
Figure 7: Program planning & evaluation is like this ... Imagine driving a car where there is a 2 second delay before the steering responds, a 4 second delay before the brakes respond, and a 6 second delay before the indicators respond.
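The driving analogy can be made concrete with a small simulation (a sketch under assumed parameters, not taken from the paper): a decision maker repeatedly corrects towards a target, but each correction only takes effect after a delay.

```python
from collections import deque

# A decision maker steers a system toward a target, but each correction only
# takes effect after a delay. Gain and target values are illustrative.
def simulate(delay_steps, gain=0.5, target=100.0, steps=30):
    state, history = 0.0, []
    pipeline = deque([0.0] * delay_steps)         # corrections still "in the mail"
    for _ in range(steps):
        pipeline.append(gain * (target - state))  # react to the current gap
        state += pipeline.popleft()               # only old corrections arrive
        history.append(round(state))
    return history

print(simulate(delay_steps=0)[:8])   # converges smoothly to the target
print(simulate(delay_steps=4)[:16])  # overshoots, then oscillates increasingly
```

With no delay the system settles quickly; with a four-step delay exactly the same decision rule produces overshoot and worsening oscillation, which is the systematic misperception of feedback noted in the introduction.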
If we consider causality from a systems perspective, for example 'why did Bill suffer that nervous breakdown?', we might see an initial analysis akin to Figure 8. Here, dynamic phenomena unfold over time, and the unfolding itself feeds back to influence the subsequent course of unfolding. Thus, initially, there is a temporary increase in workload. The resulting overwork in turn increases fatigue, leading to lower productivity. The 'to do' list continues to get bigger . . . and round we go again. Our analysis might then consider other sets of interrelated factors, e.g., stress or motivation.
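This reinforcing loop is easily simulated. In the sketch below every rate is an illustrative assumption; the point is only the loop structure: a one-off workload spike feeds overwork, overwork feeds fatigue, fatigue cuts productivity, and the 'to do' list keeps growing long after the spike has passed.

```python
# The loop in Figure 8 as a toy simulation. All rates are assumptions.
todo, fatigue = 40.0, 0.2                      # open tasks; fatigue on 0-1 scale
for week in range(12):
    arrivals = 9 + (20 if week == 0 else 0)    # temporary increase in workload
    productivity = 12 * (1 - fatigue)          # tasks completed per week
    overwork = max(0.0, todo - 40) / 40        # pressure above the normal load
    fatigue = min(1.0, max(0.0, fatigue + 0.3 * overwork - 0.05 * (1 - overwork)))
    todo = max(0.0, todo + arrivals - productivity)
    print(f"week {week:2d}: todo={todo:5.1f} fatigue={fatigue:.2f}")
```

Without the spike the unit is stable; with it, the loop dominates and the backlog spirals, exactly the 'round we go again' behaviour described above.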
Independent factors (c.f. interdependent relationships)
The "shopping list" paradigm implicitly assumes that factors exert their influence independently of
each other. Systems thinking mental models instead view the world as a web of interdependentfactors, Figure 9, where the separate loops from the initial analysis above are interconnected.
The shift from one-way to circular causality, and from independent factors to interdependent relations, is a profound one. In effect, it is a shift from viewing the world as a set of static, stimulus-response relations to viewing it as an ongoing, interdependent, self-sustaining, dynamic process.⁹
Static ordering of relative importance
The third assumption implicit in the shopping list paradigm is that the relative importance of the factors remains constant over time. The systems thinking mental model focuses on the relationships between factors, rather than on events. Thus the strength of the closed-loop relations in Figure 9 is assumed to wax and wane over time. Certainly, the temporary jump in workload may act as a catalyst. But then one loop will dominate at first, others will then take over, and so on. Addressing a problem is not seen as a quick fix (prescribing Valium or temporarily removing part of the workload). Rather, it is considered necessary to think in terms of ongoing, interdependent relations whose strengths vary over time, partly in response to interventions that may have been implemented in the system.¹⁰
Correlational (c.f. operational) perspective
The final assumption of the shopping list paradigm is that correlation explains how a system works. The systems thinking paradigm instead focuses on the processes or relationships that make the system work the way it does.
The 'shopping list' mental model, by focusing on the 'event' and the factors associated with the event, leads to treating the 'symptoms' rather than the causes that are embedded in the system relationships. A classic example is the eight decades of quick fixes to stop the end-of-year spending binge by federal Departments, due to a constitutional requirement that all unspent funds be returned to Consolidated Revenue. By focusing on the systemic relationships,
9 Richmond (1993), op. cit.
10 Ibid.
Figure 8: The 'cause' of a 'breakdown' -- from a systems perspective (a feedback loop linking overwork, the 'to do' list, productivity and fatigue)
Figure 9: Systems thinking perspective sees a web of interdependencies (the loop of Figure 8 interconnected with further loops involving stress, motivation, lack of personal recognition and attitude towards work)
the root cause of the problem was addressed in 1986 through agreements to reimburse Departments for underspending in the previous year. Of course it took (takes?) time for the mental models of managers across the bureaucracy to align with the changed situation.
The systems thinking mental model thus focuses on the relationships that generate the events. This operational focus helps identify the leverage points for modifying the performance of the system.
Simulation Games and System Dynamics Models
Black boxes and user confidence
Our experience in modelling is typically that, if the answer coincides with the expectations of the managers, the answer is accepted (although the question may be raised: why did we need a consultant to tell us what we already knew?). If the answer is counter-intuitive, the model is just another black box, and strong management support for program implementation or change cannot be guaranteed. Until recently this was also the case with system dynamics models.
With the development of system dynamics modelling packages with graphical design interfaces, it is easier to communicate the underlying logic of the system and hence the reasons for counter-intuitive system responses.
Figure 10, for example, is the structure of a simulation model for evaluating alternative policies for parachute training.¹¹ The model points to significant changes to current practice, and could be expected to be subject to significant skepticism.
The boxes represent stocks of staff in different categories at any given time. The pipelines feed staff into or out of a stock. Those familiar with the organisation or problem quickly learn to read a stock-flow diagram. The logic of the model is easy to communicate, both to subject area experts, who can see whether the structure correctly captures the way things actually work, and to senior executives, who simply want confidence in the model so that they can concentrate on the implications of the model outputs.
The problem modelled in Figure 10 is that a minimum number of training jumps is required each year to maintain competency, but there is a known, and significant, risk of injury with each jump. The longer one is in the unit (e.g. trainers), the greater the risk of injury. Trainers need more skill and so need more jumps, but there is feedback, both in the injury rate and in the consequent reluctance to remain in the unit for an extended period. Cutting the number of jumps affects force readiness.
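Read as a stock-flow system, the essence of the problem can be sketched in a few lines (the rates below are assumptions for illustration; the real Powersim model is far richer). The loop body is, in effect, the numerical integration that the software generates automatically from a stock-flow diagram.

```python
# Stocks of fit and injured parachutists; flows of intake, injury, recovery
# and separation. More jumps per year sustain competency but drain the 'fit'
# stock faster. All rates are illustrative assumptions.
def simulate(jumps_per_year, years=10, intake=20, injury_per_jump=0.002):
    fit, injured = 100.0, 0.0
    for _ in range(years):
        injuries = fit * jumps_per_year * injury_per_jump   # outflow from 'fit'
        recoveries = 0.5 * injured                          # inflow back to 'fit'
        separations = 0.15 * fit                            # postings out of unit
        fit += intake + recoveries - injuries - separations
        injured += injuries - recoveries
    return round(fit), round(injured)

for jumps in (10, 25, 50):
    print(jumps, simulate(jumps))   # the competency/injury trade-off in numbers
```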
11 Figure 10 is a marginal simplification of the full model. About 10 variables have been temporarily switched off to aid presentation. These can be included, at the click of a button, for progressive elaboration.
Figure 10: Basic structure of parachute training model
In drawing the structure as shown, the software automatically generates the complex differential equations necessary to solve the time-varying dynamics problem. Senior managers, of course, are rarely interested in the mathematical equations. They do want to understand the logic of the model.
User confidence in the structure is but the start. Confidence in understanding systemic interactions requires that the key decision makers be part of the modelling loop. It is not sufficient for technical experts to 'drive' the model for them. Thus, the 'control panel' to drive the model should desirably be as simple as that of a modern car rather than that of a Boeing 747.
The Powersim software allows the design of interfaces geared to the users' requirements. Figure 11 illustrates our approach to the parachute training problem. The key input and output parameters are linked by causal loops, to assist understanding of the reasons for the interactions. The user can run 'what if' simulations by varying the manning targets, the number of jumps per year and/or the personnel posting policy for each of three rank levels. Other interfaces are illustrated later in the paper.
What is a Management Learning Laboratory?¹²
In sports, the arts and the military, practice sessions are normal, simulating real events so that members learn from mistakes and from each other's examples before going live. In business and government we practice on the real thing, as illustrated in Figures 1 and 2, which show Navy's evaluation and planning framework. This is perhaps why over one third of all Fortune 500 industrials listed in 1970 had vanished by 1983¹³, and why nearly one third of companies go bankrupt within 5 years.
A 'learning laboratory' is the term used by the MIT Center for Organizational Learning for a training workshop where managers from a particular organisation or unit come to develop new managerial skills, cycling back and forth between 'war gaming', using boards, dice and counters
12 This and the following sections draw on a wide variety of readings and local research; the more important are listed in the bibliography at the end of the paper. Formal acknowledgement is only made for specific quotes.
13 de Geus, A.P., Planning as Learning. Harvard Business Review, March-April 1988, p. 71.
Figure 11: Control panel for parachute training simulation model
8/9/2019 Linard_1997-AES_Learning Organisation System Dynamics & Management Learning Laboratory
9/13
for example, or computer models of varying complexity. They then have debriefing sessions where they seek, collectively, to understand why the system behaved the way it did.
A learning laboratory differs from traditional training in that it simulates the dynamics of business,
allowing management teams to practice together and make mistakes without penalty or pressures of
performance. Like players practising for a team sport, they learn about their mutual dependence on
other elements of the business, especially those outside their control.
In business and government, specialisation results in walls or stovepipes that separate different
functions into independent and often warring fiefdoms. A learning laboratory offers the possibility,
in a non-threatening environment, for the respective managers to understand how their performance
(on which they are judged) is impacted by the activities of other units. It enables testing of program
performance indicators, to see how they promote or inhibit decisions for the good of the whole.
The problem of organisational stovepipes was illustrated in work done by ADFA for Navy Training Command in 1996. Navy's evaluation procedures indicated a problem in training sufficient helicopter pilots. An obvious quick-fix solution was to purchase more helicopters. In modelling the problem it quickly became apparent that there were three players whose policy making was somewhat independent of each other, despite clear system linkages: Personnel Division, responsible for pilot postings; Training Command; and Aircraft Maintenance.
Figure 12 illustrates the conflicting objectives that can arise between different organisational units. This relates to our modelling of the systemic interrelationships affecting the capability of Army's 5th Aviation Regiment to undertake its tasks. (There are direct parallels with Navy pilot training.)
The left side of Figure 12 depicts Personnel Division's key objective, to foster a supply of officers with good corporate skills. These skills decay the longer the officer is in a specialist posting, so Personnel Division wants to optimise the time spent in corporate management tasks. The right side relates to Training Command's key objective, developing and maintaining specialist flying skills, which in fact decay much faster than general management skills. Because flying skills decay when pilots are posted to corporate duties, personnel policies impact on training policy decisions, without the training policy decision makers being able to adjust personnel policies.
Similarly, there are trade-offs between training policy and aircraft maintenance policies. Action to address training deficiencies by increasing the rate of flying training will in turn impact on the serviceability of the aircraft. This is critical, as a Blackhawk helicopter requires 12 weeks' servicing after 300 flying hours and 6 months' servicing after 600 hours, and there are limited maintenance facilities. Thus we might reach a situation where pilots are ready for an emergency deployment, but no helicopters are available.
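The interaction can be sketched as follows (all rates, fleet numbers and the skill index are illustrative assumptions, not the ADFA model): flying effort sustains pilot skill, which otherwise decays quickly, but each 300 flying hours sends an aircraft into 12 weeks of servicing, cutting the hours the fleet can actually deliver.

```python
# Pilot skill versus aircraft serviceability under a flying-effort policy.
FLEET, SERVICE_WEEKS, HOURS_TO_SERVICE = 10, 12, 300

def simulate(target_hours_per_week, weeks=52):
    skill = 0.5                              # average flying-skill index, 0-1
    hours = [0.0] * FLEET                    # hours flown since last service
    in_service = {}                          # aircraft id -> weeks remaining
    for _ in range(weeks):
        in_service = {i: w - 1 for i, w in in_service.items() if w > 1}
        flyable = [i for i in range(FLEET) if i not in in_service]
        flown = min(target_hours_per_week, 30 * len(flyable))  # 30 h/wk/aircraft cap
        for i in flyable:
            hours[i] += flown / len(flyable)
            if hours[i] >= HOURS_TO_SERVICE:
                in_service[i], hours[i] = SERVICE_WEEKS, 0.0
        skill = min(1.0, 0.95 * skill + 0.0003 * flown)  # fast decay vs. practice
    return round(skill, 2), len(in_service)

for hours_per_week in (100, 200, 300):
    print(hours_per_week, simulate(hours_per_week))  # skill vs. aircraft in the shop
```

Pushing the flying rate up lifts skill until whole batches of airframes hit their 300-hour limits together, at which point flying hours collapse and skill decays: pilots ready, no helicopters.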
The Purpose of the Learning Laboratory
Computer modelling has almost become synonymous with prediction. Many people automatically assume that the business simulation is designed to predict 'what will happen if ...?'. Certainly, at some stage, the simulations may be taken to that level. The problem with this
Figure 12: Interrelationship between Personnel policy and Training policy for pilots. Two stocks, corporate management skill level and role (flying) skill level, each have an 'acquire skill' inflow and a 'skill decay' outflow, driven respectively by time in corporate management and time in role. The Personnel Policy Maker's posting decision governs the former; the Training Policy Maker's rate of flying effort governs the latter.
mentality is that we might simply model an organisational structure or a policy framework that is fundamentally flawed. All we will achieve is doing the wrong job more efficiently.
Fundamentally, the learning laboratory is designed to assist people involved in a function to learn about the interrelationships and time dynamics. It is not a static process of playing SimCity a thousand times until you get a higher score than your children. The process of learning surfaces our mental models, which in turn may challenge the very assumptions underlying the model as constructed. It challenges key assumptions about relationships, things which have always been taken for granted. It challenges the validity of performance indicators.
The key issue is not precise prediction of results, but the direction of change, the time to effect
change, and the stability of such change likely to result from different decisions. These feedbacks,
when reflected on by the group, are critical to learning.
This is why the various researchers in the field emphasise the critical importance of making the managers themselves critical components of the simulation. Wherever possible, crucial decisions are not modelled into a black box algorithm in the computer. Instead, the responsible manager is placed at that junction, with feedback information generated by the computer (resulting from the decisions of others). This feedback can be clear and well targeted, or it may replicate the real life environment: incomplete, ambiguous, late and with lots of irrelevant padding.
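In outline (an assumed structure for illustration, not the ADFA simulator), the session loop looks like this: each period the human player supplies the crucial decision, the computer advances the model, and the resulting feedback can be returned late to replicate real reporting chains.

```python
from collections import deque

# Keeping the manager in the loop: decisions come from the player, feedback
# from the model, with an adjustable reporting delay. Numbers are illustrative.
def run_session(decide, periods=12, report_delay=2):
    state = 0.0
    reports = deque([0.0] * report_delay)   # feedback still "in the post"
    for period in range(periods):
        stale = reports.popleft()           # the player sees old information
        decision = decide(stale)            # the decision is NOT black-boxed
        state += decision - 0.1 * state     # toy model of the business unit
        reports.append(state)
        print(f"period {period:2d}: reported={stale:6.1f} actual={state:6.1f}")

# A player chasing a target of 100 while seeing only the delayed report.
run_session(lambda seen: 0.5 * (100 - seen))
```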
As understanding develops, the simulation model develops. It may be that eventually it becomes a predictive model. It may not be necessary.
Designing a Management Flight Simulator
Airlines such as Ansett or Australian Airlines never put a new pilot at the controls of their aircraft without demonstrated competence in a mock-up aircraft, a flight simulator. In the management learning laboratory, the 'management flight simulator' is the equivalent mock-up of the business or organisational unit.
The management flight simulator might be a board game or a computer simulation which faithfully reflects the key interactions and the key time dynamics of the business operations. The key features of the major flight simulator developed at ADFA to date are:
- the outcome to be tested, which determines whether the organisation is capable of doing its job, given all the environmental, resourcing and other influences
- the critical decisions that affect the ultimate objectives, and the validation of this choice
- the sectors of the organisation to be included (this flows from the previous point)
- the key players who will make the decisions on behalf of the different sectors
- the decisions to be automated in the computer, and the validation of the relationships
- the decisions to be left to the players, and the presentation of the control mechanism
- the feedback to be given to each player, and its presentation
The Learning Laboratory and Evaluation
Referring back to the earlier section, 'Evaluation in Defence', the learning laboratory and its primary tool, the flight simulator, can provide a major input into many of the evaluation functions. They can also provide more proactive guidance, especially for ex-ante evaluation and implementation analysis. This is summarised in Table 1.
| Evaluation function | Potential limitations of current practice | Value added of the management learning laboratory |
| Program / effectiveness evaluation | | Creates a shared vision and community of commitment |
| Program logic | Untestable leaps in logic. | Demands specification of relationships (even if this is qualitative) and exposes these to scrutiny. Tests sensitivity of results to such assumptions. |
| Program performance indicators | No rigorous basis for their selection. No way of testing the consequences of using performance information. No way of testing the consequences of using a mix of indicators. | Tests the likely result for program output of decisions made on the basis of given indicators. Tests the likely result for program output of using a mix of indicators. |
| Outcomes assessment against objectives | After the event (often by years). Helps learn from mistakes but is like driving by looking in the rear view mirror. | Gives a forward looking perspective. Enables testing of the validity and impact of indicators. Helps create proactive management. |
| Support for planning & budgeting by review of resource adequacy etc | Years late. The world has changed. Like driving by looking in the rear view mirror. | Gives a forward looking perspective. Enables pro-active testing of options to give more confidence in results. |
| Efficiency or process evaluation | Cannot account for dynamics induced by delays in the system. Difficult to account for impacts by other agencies. | Can be used pro-actively to identify potential dynamic impacts. Can be used in retrospect to understand why problems occurred. |
TABLE 1: Value added for evaluation of the Learning Laboratory
A Case Study of a Business Simulation of the 5th Aviation Regiment
This simulation was based on Army's 5th Aviation Regiment, the unit involved in the 1996 disaster. It was chosen because of the obvious systemic interrelationships, and also because it was one of the more complex military units: if we could model that, we should have little difficulty with others.
The outcome to be evaluated
The fundamental objective of any peacetime Australian military unit is to be able to reach a defined level of capability (defined, in part, in terms of current competency in the skills required) for any given pre-set scenario.
It should be noted that a recent evaluation of Army was critical of Army's lack of rigorous indicators of its capability and readiness, and of the absence of any rigorous link between its resourcing and these objectives.
Figure 13: Defence outcome to be evaluated. The simulation addresses the critical question: 'Given the history of resources management, policy and operational decisions, can the Unit achieve scenario readiness within the directed time frame?' (The figure plots level of capability against time: from the peacetime target capability level, in response to strategic warning and an expansion directive, capability is raised through an assembly period and a workup period to the target operations capability level within the deployment time, followed by the operational period.)
In this flight simulator three operational scenarios were included to evaluate readiness: support for counter-terrorist operations, Brigade support, and general air mobile support for operational units.
The critical decisions and sectors of the organisation to include
Figure 14 depicts the key players in the flight simulator: top level Resourcing, Personnel Policy, Training Policy, Operational Tasking (in essence, day-to-day management) and the Games Master. The decisions exercised explicitly by each player ('controls') and the main feedback information received by each are shown.
Top level Resourcing and Personnel Policy were seen as critical. These make decisions infrequently, but with long lead times, and hence their implications for operational management are not intuitively obvious. At the operational level, the directives of Training Policy, which include critical assumptions, and the day-to-day Operational Tasking of the Unit Commander are incorporated.
Finally, the Games Master can issue an instruction to deploy according to preset scenarios. This command may be issued at any time. At that point, whether the Unit Commander can succeed within the timeframe depends not only on her/his skills, but on the net result of all previous decisions.
Lessons thus far
The flight simulator concept has attracted strong support from Defence top management. The process of development has highlighted a number of invaluable lessons, including:
- it is important to involve users in the development and validation of the simulation
- seemingly rigorous data sets are not always what they seem to be; the data may in fact be of limited value because of a lack of clear definition, quality control or both
- every system has implicit, and often very critical, assumptions, based only on professional judgement and experience, which are so much part of the culture that they are never challenged
- where simulations seek to link top level resourcing with lower level operations, the difference in decision timeframes may make it desirable to run two separate simulators, in different laboratory sessions, with the results of each feeding the other
Figure 14: The 'players' in the 'flight simulator' of 5th Aviation Regiment capability -- the web of relationships impacting on readiness:
- CDF (Games Master). Controls: deployment parameters (time allowed, number of airframes, role). Feedback: aircraft availability, required effort to deploy, actual time to deploy.
- Personnel Policy. Controls: times in unit and other duties, time to promotion, recruiting. Feedback: personnel location by rank and career type.
- Resources. Controls: initial budget allocation of hours, changes to budget. Feedback: achieved rate of operational effort, aircraft availability, required effort to deploy.
- Training Policy. Controls: skill acquisition and decay rates, activity contribution rates, crew work limits. Feedback: personnel location by skill level, aircraft availability.
- Tasking. Controls: distribution of budgeted hours. Feedback: achieved rate of operational effort, aircraft availability, required effort to deploy.
- the design of input and feedback formats is important and should, as far as possible, replicate the formats used by the players in their normal jobs
The key factors to be taken into account in designing the flight simulator are:
- the nature of the problem: is it more related to the technical complexity of the issues, to the organisational boundaries, or to both?
- what are the payoffs: is this critical to the core business or simply an interesting experiment?
- who is the champion: this is closely related to the next point. If the simulation is likely to challenge key aspects of corporate culture or organisational power structures, a champion at the highest level is critical.
- the level of the decision makers: is it an issue which touches on top corporate policy or is it an operational issue?
- can we get the real decision makers: if the problem strongly crosses institutional boundaries, top executive participation is critical.
- are the program performance indicators significant factors in the dynamics: if they are, outside bodies (e.g. the federal Department of Finance) might need to be brought into the learning loop.
- what are the characteristics of the learners: issues such as their motivation, socio-cultural profile (e.g., their decision making style), their knowledge profile, and their psychological profile (e.g., their particular learning style, how they relate in team situations etc).
- what are the appropriate tools: this cannot be readily answered as it depends on all of the foregoing. There is significant research in this field around the world. We at ADFA, in collaboration with ANU and Flinders University, and a number of private sector firms including Computer Sciences Corporation, have a number of research projects. Suffice it to say that a range of tools, including board games and computer simulations, are required.
- what are the planned validation processes and methods to build user confidence in the simulation: these should be established at the outset, although they will invariably change.
____________________________________________
Bibliography:
Bakken, B., Gould, J. and Kim, D.: Experimentation in learning organisations -- a management flight simulator approach. European J. of Operational Research 59 (1992) 167-182.
Diehl, E.W.: Participatory simulation software for managers. European J. of Operational Research 59 (1992) 201-215.
Forrester, J.: Designing social and managerial systems. Sloan School of Management Research Report D-4209 (1991).
Garvin, D.A.: Building a learning organisation. Harvard Business Review, July-Aug 1993.
Graham, A.K., Morecroft, J.D., Senge, P.M. and Sterman, J.D.: Model supported case studies for management education. European J. of Operational Research 59 (1992) 151-166.
Isaacs, W.N.: Taking flight: collective thinking and organizational learning. Organizational Dynamics, Winter 1993, 24-39.
Kreutzer, W.B. and Kreutzer, D.P.: Applying the principles of human computer interaction to the design of management flight simulators. http://gka.com/papers.
Packer, D.W. and Glass-Husain, W.: Interactive multi-user learning laboratories. Proc. 1997 International System Dynamics Conference.
Roth, G.L. and Senge, P.M.: From Theory to Practice: Research Territory, Processes and Structure at the MIT Center for Organizational Learning. (Submitted to Journal of Organizational Change Management, 1995.)
Schein, E.H.: How Can Organizations Learn Faster? The Challenge of Entering the Green Room. Sloan Management Review, 34 (1993) 85-92.
Schein, E.H.: The Three Cultures of Management: Implications for Organizational Learning. Sloan Management Review, in press (1997).
Senge, P.: The Fifth Discipline. N.Y.: Doubleday, 1990.