Explicit Conceptual Models: Synthesizing Divergent and Convergent Thinking
EXPLICIT CONCEPTUAL MODELS: SYNTHESIZING DIVERGENT AND CONVERGENT THINKING
SHANNON L. FERRUCCI
A Thesis
Submitted to the Faculty of Mercyhurst College
In Partial Fulfillment of the Requirements for
The Degree of
MASTER OF SCIENCE IN
APPLIED INTELLIGENCE
DEPARTMENT OF INTELLIGENCE STUDIES
MERCYHURST COLLEGE
ERIE, PENNSYLVANIA
MAY 2009
DEPARTMENT OF INTELLIGENCE STUDIES
MERCYHURST COLLEGE
ERIE, PENNSYLVANIA
EXPLICIT CONCEPTUAL MODELS: SYNTHESIZING DIVERGENT AND CONVERGENT THINKING
A Thesis
Submitted to the Faculty of Mercyhurst College
In Partial Fulfillment of the Requirements for
The Degree of
MASTER OF SCIENCE IN
APPLIED INTELLIGENCE
Submitted By:
SHANNON L. FERRUCCI
Certificate of Approval:
___________________________________
Kristan J. Wheaton
Assistant Professor
Department of Intelligence Studies

___________________________________
William J. Welch
Instructor
Department of Intelligence Studies

___________________________________
Phillip J. Belfiore
Vice President
Office of Academic Affairs
May 2009
Copyright © 2009 by Shannon L. Ferrucci
All rights reserved.
ACKNOWLEDGMENTS
I would like to thank Kristan J. Wheaton, my thesis advisor and primary reader, for his
continued guidance and encouragement throughout the course of this work. I would also
like to thank Professor Hemangini Deshmukh for her patience and assistance with the
statistical analysis of this work; it was greatly appreciated.
ABSTRACT OF THE THESIS
Conceptual Modeling: Missing Link In The Analytic Process
By
Shannon L. Ferrucci
Master of Science in Applied Intelligence
Mercyhurst College, 2009
Professor Kristan J. Wheaton, Chair
Explicit conceptual modeling (ECM) within intelligence analysis is a topic on
which very little specific research has thus far been done. However, when considering
the complexity and depth of most intelligence requirements, it becomes evident that
consideration of this topic is both crucial and long overdue. This thesis examines what
little literature exists on conceptual modeling within intelligence analysis, in addition to
discussing relevant studies from other fields that help to shed light on the need for, and
value of, incorporating this technique into intelligence analysis. After examining the
relevant literature, an experiment was conducted to test the hypothesis that intelligence
analysts who engage in ECM will generate better analytic products, as evaluated by
thoroughness of process and accuracy of product, than analysts who do not. However,
despite a wealth of literature strongly suggesting that ECM will improve analysis, the
results of this study’s experiment did not support that notion. The author ends by
drawing conclusions from the experimental data, highlighting the notion that ECM
requires a combination of robust divergent and convergent thinking techniques to be
successful.
TABLE OF CONTENTS
COPYRIGHT PAGE
ACKNOWLEDGEMENTS
ABSTRACT
TABLE OF CONTENTS
LIST OF FIGURES

CHAPTER
1 INTRODUCTION
    Conceptual Models
    Explicit Modeling and Intelligence Analysis
2 LITERATURE REVIEW
    Constructivist Roots
    Mental Models and Intelligence
    Memory Limitations
    Group Intellect
    Combating Groupthink
    Related Mapping Disciplines
    Mind Maps
    Concept Maps
    Technology Aids
    Learning Styles
    Hypotheses
3 METHODOLOGY
    Research Design
    Subjects
    Preliminaries
    Control Group: Day 1
    Experimental Group: Day 1
    Bubbl.us
    Control Group: Day 2
    Experimental Group: Day 2
    Data Analysis Procedures
4 RESULTS
    Significance Testing
    Pre- and Post-Questionnaire Results
    Process – Conceptual Model Findings
    Process – Logic/Quality of Supporting Evidence Findings
    Product – Forecasting Findings
    Quality Of Supporting Evidence Vs. Forecasting Accuracy
    Product – Source Reliability and Analytic Confidence Findings
5 CONCLUSIONS
    Excessive Possibilities Confuse
    Importance of Convergence
    Final Thoughts
    Future Research

BIBLIOGRAPHY
APPENDICES
    Appendix 1: Experiment Sign-Up Form
    Appendix 2: IRB Research Proposal
    Appendix 3: Control Group Consent Form
    Appendix 4: Experimental Group Consent Form
    Appendix 5: Research Question
    Appendix 6: Important Supporting Information
    Appendix 7: Experiment Answer Sheet
    Appendix 8: Control Group Expectation Sheet
    Appendix 9: Experimental Group Expectation Sheet
    Appendix 10: Pre-Experiment Questionnaire
    Appendix 11: Contact Information
    Appendix 12: Conceptual Modeling Lecture
    Appendix 13: Bubbl.us Instruction Sheet For Experimental Group
    Appendix 14: Bubbl.us Instruction Sheet For Control Group
    Appendix 15: Structured Conceptual Modeling Exercise
    Appendix 16: Control Group Post-Experiment Questionnaire
    Appendix 17: Experimental Group Post-Experiment Questionnaire
    Appendix 18: Control Group Debriefing Sheet
    Appendix 19: Experimental Group Debriefing Sheet
    Appendix 20: Significance Testing Results
LIST OF FIGURES
Figure 2.1 Example Conceptual Model
Figure 2.2 Example Mind Map
Figure 2.3 Example Concept Map
Figure 3.1 Subject Education Level
Figure 3.2 Subject Education Level By Group
Figure 3.3 Original Control Group Vs. Actual Control Group
Figure 3.4 Original Experimental Group Vs. Actual Experimental Group
Figure 3.5 Bubbl.us Screenshot
Figure 4.1 Pre- Vs. Post Experiment: Time Dedicated To Experiment
Figure 4.2 Control Vs. Experimental: Learning Style
Figure 4.3 Bubbl.us Screenshot With Concept/Connection Labels
Figure 4.4 Experimental Group: Average Concepts And Connections
Figure 4.5 Control Vs. Experimental: Average Concepts And Connections
Figure 4.6 Control Group Conceptual Model Example
Figure 4.7 Experimental Group Conceptual Model Example
Figure 4.8 Control Vs. Experimental: Accuracy Of Forecasting
Figure 4.9 Forecasting Accuracy: Top Vs. Bottom Half Process Rankings
Figure 4.10 Control Forecasting Accuracy By Process Ranking
Figure 4.11 Experimental Forecasting Accuracy By Process Ranking
Figure 4.12 Control Vs. Experimental: Source Reliability
Figure 4.13 Control Vs. Experimental: Analytic Confidence
CHAPTER I:
INTRODUCTION
In an introduction to The Jefferson Bible: The Life and Morals of Jesus of
Nazareth, Forrest Church, Minister of Public Theology at the Unitarian Church of All
Souls in New York City, tells the reader of an historical offer made by Thomas Jefferson
to Congress.1 Jefferson’s offer consisted of selling his personal library to replace the
volumes in the Library of Congress burned by the British during the War of 1812. While
some might find the most interesting aspect of Jefferson’s proposal to be the reaction it
elicited from members of Congress who were insulted by the specific makeup of the
collection, Church places importance on a different aspect of the story altogether.
According to Church, “Jefferson’s scheme of classification as formulated in the catalog
that he submitted to Congress” was more telling than anything else.2
Furthering a method established by Francis Bacon in 1605, Jefferson categorized
his books by “the process of mind employed on them.”3 Therefore, books having to do
with philosophy were classified under reason, history books could be found under the
label of memory and books focused on fine art could be located under a section entitled
imagination.4 However, Jefferson did not stop there. Under each of the overarching
categories mentioned above was a variety of intricate subdivisions that further organized
Jefferson’s collection. Based on Church’s retrospective examination of the library, it
***This research partially funded by the Mercyhurst College Academic Enrichment Fund***

1 Forrest Church, Introduction to The Jefferson Bible: The Life and Morals of Jesus of Nazareth, by Thomas Jefferson (Boston: Beacon Press, 1989), 1.
2 Church, The Jefferson Bible, 2.
3 Ibid.
4 Ibid.
appears that the true value of Jefferson’s system of categorization lies in its ability to
provide a glimpse into the inner thoughts and beliefs of Jefferson himself. Due to the
detail with which the library was constructed, Church was able to surmise Jefferson’s
viewpoint on a variety of issues, particularly religion, based on the placement and
relationship amongst books within Jefferson’s hierarchical structure.
Conceptual Models
What Bacon in the early 1600s, and Jefferson in the early 1800s, were essentially
doing through their systems of classification was attempting to make their individual
mental models of the world around them explicit. Surprisingly enough, not only do the
likes of Bacon and Jefferson develop such mental models, but each and every one of us
carries out this same exercise on a variety of different levels numerous times per day.
For example, we construct mental models of the route we take on the way to the grocery
store and of our routine for getting ready in the morning.
This implicit modeling is extremely interesting in the context of intelligence
analysis, when considering that we also build models when faced with questions.
Whether the issue is a simple one, such as what to do on our day off, or as complex as an
intelligence requirement set forth by a decision maker, the human mind automatically
attempts to model the question and arrive at possible preliminary answers. Oftentimes in
doing this, we are able to recognize not only what we currently know about a given
situation, but also what we think we need to know in order to arrive at a comprehensive
answer.
According to an article from the journal of Information Research by Kalervo
Jarvelin, Academy Professor in the Department of Information Studies at the University
of Tampere in Finland, and T.D. Wilson, Professor Emeritus at the University of
Sheffield in the United Kingdom:
All research has an underlying model of the phenomena it investigates, be it tacitly assumed or explicit. Such models called conceptual frameworks, or conceptual models… may and should map reality, guide research and systematize knowledge. A conceptual model provides a working strategy, a scheme containing general, major concepts and their interrelations. It orients research towards specific sets of research questions.5
Obviously, the more complex the question, the more intricate the subsequent model tends
to be. This is especially true of requirements posed to intelligence analysts, which often
entail the understanding of multifaceted relationships between people, states,
organizations, industries, etc. Therefore, the odds of any analyst being able to develop a
complete model of an intelligence requirement on their first try are very slim. More often
than not, analysts are able to fill in pieces of their model with information they already
know, but are forced to fill in the rest with topics they recognize they need to understand
more about.
Explicit Modeling and Intelligence Analysis
The complexity of intelligence requirements leads to the core purpose of this
study: determining the value of making these conceptual models explicit within the
analytic process. In considering the scope of most intelligence requirements, it becomes
obvious that the vast majority of related conceptual models will become too complex to
be held in an individual’s memory, and hence would benefit from being made explicit.
This is especially true when considering that conceptual models are not static, but
actually quite amorphous, constantly evolving and adapting to new information and
5 Kalervo Jarvelin and T.D. Wilson, “On Conceptual Models for Information Seeking and Retrieval Research,” Information Research 9, no. 1 (2003), http://informationr.net/ir/9-1/paper163.html (accessed January 15, 2009).
improving knowledge on a specific topic. Consequently, at present, an exploration of
explicit conceptual modeling’s (ECM) place within the field of intelligence is both
crucial and long overdue.
Within the intelligence community, this topic is primarily of interest to
intelligence professionals holding managerial positions, intelligence educators, and
individual intelligence analysts (both students and practitioners). For these groups the
incorporation of ECM into the analytic process would likely be beneficial on a variety of
fronts. First, explicit modeling may increase efficiency in the analyst’s collection
process, in addition to aiding in the identification of knowledge gaps. Second, by
organizing ideas and information in a simple, straightforward and graphic way,
managers at the head of small analytic teams might more easily grasp what needs to be
done, the best method for doing it, and the most efficient way to originally task project
analysts. In addition, after initial areas of responsibility are assigned to each analyst it is
likely that managers might find it easier to supervise analysts, due to the organizational
foundation provided by the model.
ECM may also be useful in helping analysts to assess their level of analytic
confidence in the estimate produced. In addition, analysts may share, compare and
discuss models amongst themselves and with other professionals. Finally, ECM would
likely be useful for after-the-fact review and in providing a solid starting point for any
related questions posed to an analyst in the future. However, while these are only some
of the potential benefits stemming from the incorporation of ECM into the analytic
process, this thesis will show that obtaining the abovementioned results is not easy.
Furthermore, this study will call into question conventional wisdom regarding what
makes a good explicit conceptual model.
Taken as a whole, this thesis will argue that despite the relative dearth of studies
focused specifically on conceptual modeling within the field of intelligence, literature and
examples from other fields will shed light on the need for, and value of, incorporating
this technique into intelligence analysis. As one study on conceptual modeling within the
field of intelligence has stated, “Conceptual models both fix the mesh of the nets that the
analyst drags through the material in order to explain a particular action or decision and
direct him to cast his net in select ponds, at certain depths, in order to catch the fish he is
after.”6
6 Graham T. Allison, “Conceptual Models and the Cuban Missile Crisis,” in The Sociology of Organizations: Classic, Contemporary and Critical Readings, ed. Michael Jeremy Handel (Thousand Oaks: SAGE, 2003), 185.
CHAPTER II:
LITERATURE REVIEW
The following is a review of literature relevant to conceptual modeling and
intelligence analysis. The section begins with a discussion of constructivism’s bearing on
conceptual modeling and an illustration of the importance of a given analyst’s mental
model to the analysis produced. Next is a discussion of the relationships between
intelligence requirements, human memory limitations, group intelligence and groupthink
and the support they provide for the need to make our mental models explicit. This is
followed by a segment on the various methods for making these models explicit, in
particular mind maps, concept maps and explicit conceptual models. The benefit of using
technological aids to assist in the creation of explicit conceptual models is also
mentioned, as is the notion that the utility of ECM may be affected by varying individual
learning styles. Finally, this section concludes with the author’s original hypotheses for
this study.
Constructivist Roots
The array of concepts and relationships between them, illustrated through
conceptual models, is closely related to constructivist notions of knowledge formation.
In particular, the work of the famed Swiss psychologist Jean Piaget is relevant, as his
viewpoint holds that individuals continually construct cognitive models to make sense of
the world around them by organizing and connecting their ideas, observations and
experiences.7 Additionally, according to Piaget these cognitive models are always
7 John W. Santrock, Adolescence, 8th ed. (New York: McGraw-Hill, 2001), 102.
evolving to include new information that aids individuals in furthering their
understanding of the world around them.8 Piaget called the constructs for assembling
these models schema or “a concept or framework that exists in the individuals’ mind to
organize and interpret information.”9 While a discussion of the pros and cons of
constructivist theory is outside the scope of this paper, using this theory to aid in
thinking about the development of models within our minds is actually quite useful.
Mental Models and Intelligence
When taking constructivist theory and applying it to the field of intelligence
analysis it becomes clear that each analyst’s cognitive, or mental model, is uniquely
shaped by the context and purpose of the requirement posed to them, along with the
summation of that individual’s prior experiences, schooling, cultural values, professional
position and organizational standards.10 As stated in “Intelligence Analysis: Once
Again,” by Charles A. Mangio, of Shim Enterprise, Inc., and Bonnie J. Wilkinson of the
Air Force Research Laboratory:
Given the importance of the mental model in influencing and shaping the analysis (i.e., from problem exploration and formulation, to purpose refinement, through data acquisition and evaluation, and ultimately determining meaning and making judgments), it is not surprising how it influences the discussion of intelligence analysis.11
However, despite the importance of a well-defined and thorough mental model to an
analyst’s subsequent analysis, it is rarely touched upon in intelligence literature.
8 Ibid.
9 Ibid.
10 Charles A. Mangio and Bonnie J. Wilkinson, “Intelligence Analysis: Once Again” (paper presented at the annual international meeting of the International Studies Association, San Francisco, California, 26 March, 2008): 8.
11 Ibid.
Memory Limitations
Despite the lack of attention given to mental models in the intelligence
community, the formulation of these models can significantly impact the process of
intelligence analysis. However, eventually it becomes obvious that as the amount of
concepts and relationships included in the analyst’s mental model continues to grow, it
becomes difficult to store all of that information accurately in working memory. In “The
Magical Number Seven, Plus or Minus Two: Some Limits on our Capacity for Processing
Information,” George A. Miller, Professor Emeritus of Psychology at Princeton
University, argues that we only have the ability to hold 7 things (plus or minus 2) in our
mind at a given time without making mistakes in differentiation.12 Having said that, there
are methods individuals can use to surpass these known limits, and there are also a variety of exceptions to the rule. In Psychology of Intelligence
Analysis, Richards Heuer, a former CIA staff officer and contractor for almost 45
years, discusses one such method for aiding analysts in exceeding memory constraints.
Essentially, what Heuer describes is none other than making an individual’s mental
model explicit:
“The recommended technique for coping with this limitation of working memory is called externalizing the problem—getting it out of one’s head and down on paper in some simplified form that shows the main elements of the problem and how they relate to each other…Breaking down a problem into its component parts and then preparing a simple ‘model’ that shows how the parts relate to the whole. When working on a small part of the problem, the model keeps one from losing sight of the whole.”13
12 George A. Miller, “The Magical Number Seven, Plus or Minus Two: Some Limits on our Capacity for Processing Information,” Psychological Review 63 (1956): 81-97.
13 Richards J. Heuer, Psychology of Intelligence Analysis (Center for the Study of Intelligence, 1999), 27.
Regardless of the specific number of concepts that our working memory can
handle, the idea that there is an upper limit is quite evident, as is the fact that most
intelligence requirements will easily exceed that maximum. Limits on working memory,
although one of the major concerns considered in the argument for making mental
models explicit, are only one of the factors behind the need for engaging in the process of
ECM. Another reason strengthening the argument for explicit modeling stems from the
inherent complexity in intelligence requirements, which oftentimes leads analysts to work
in groups in order to tackle compound issues and topics.
Group Intellect
People often criticize the collective judgment of groups as unreliable, viewing
individual conclusions as being much more sensible and sound. In fact, groups are often
viewed as bringing out the worst in individuals, resulting in illogical and foolish
behavior. However, in The Wisdom of Crowds, James Surowiecki, a staff writer at The
New Yorker, actually defends group decisions, noting that the four conditions
distinguishing wise crowds are:
Diversity of opinion (each person should have some private information, even if it is just an eccentric interpretation of the known facts), independence (people’s opinions are not determined by the opinions of those around them), decentralization (people are able to specialize and draw on local knowledge), and aggregation (some mechanism exists for turning private judgments into a collective decision). If a group satisfies those conditions, its judgment is likely to be accurate.14
According to Surowiecki, the collective intelligence of groups often far surpasses the
individual intelligences of the people making up that group.15 However, group work does
entail its own unique set of problems, one of which is groupthink.
14 James Surowiecki, The Wisdom of Crowds (New York: Anchor Books, 2004), 10.
15 Surowiecki, Wisdom of Crowds, XIII.
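Surowiecki’s fourth condition, aggregation, lends itself to a quick numerical illustration. The simulation below is a hypothetical sketch (it is not part of this study’s experiment): it assumes a crowd of 500 people making independent, unbiased but noisy estimates of a single quantity, and shows that the averaged judgment tends to sit far closer to the truth than the typical individual one.

```python
import random

random.seed(42)
TRUE_VALUE = 100.0  # the quantity the crowd is trying to estimate

# Diversity and independence: each person errs on their own, modeled
# here as unbiased Gaussian noise around the true value.
estimates = [TRUE_VALUE + random.gauss(0, 20) for _ in range(500)]

# Aggregation: a mechanism for turning private judgments into a
# collective decision -- here, a simple average.
collective = sum(estimates) / len(estimates)

collective_error = abs(collective - TRUE_VALUE)
avg_individual_error = sum(abs(e - TRUE_VALUE) for e in estimates) / len(estimates)

print(f"collective error:     {collective_error:.2f}")
print(f"avg individual error: {avg_individual_error:.2f}")
```

Under these assumptions the collective error comes out far smaller than the average individual error; correlated errors, of the kind groupthink produces, would erode exactly this advantage.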
Combating Groupthink
Most classrooms and professional environments are made up of a mix of
individuals ranging from those inclined to chime in to discussions and offer opinions ad
nauseam to those who shudder at the thought of speaking up. While there can of course
be a wide variety of reasons certain individuals are hesitant to actively participate in
classroom discussion or workplace meetings, a common fear is that their responses will
somehow be inadequate, causing them to embarrass themselves in front of others. In
McKeachie’s Teaching Tips, Wilbert J. McKeachie suggests that, “Asking students to
take a couple of minutes to write out their initial answers to a question can help. If a
student has already written an answer, the step to speaking is much less than answering
when asked to respond immediately.”16
Essentially, the notion is that even the most timid will contribute when simply
asked to read off what they have already written down. While McKeachie, Professor
Emeritus of Psychology at the University of Michigan, speaks solely of students, this
same concept applies to professionals. By asking all individuals to jot down answers to a
proposed question, then focusing on each person in turn and having them voice those
ideas out loud, equal involvement is fostered. No one is allowed to passively soak up the
information being offered by others, while at the same time a select few individuals are
prevented from dominating the discussion.
This method of systematically focusing on each group member’s opinion also
helps to combat instances of groupthink. In Groupthink: Psychological Studies of Policy
16 Wilbert J. McKeachie, McKeachie’s Teaching Tips (Boston: Houghton Mifflin Company, 2002), 42.
Decisions and Fiascoes, Irving L. Janis defines groupthink as, “A mode of thinking that
people engage in when they are deeply involved in a cohesive in-group, when the
members’ strivings for unanimity override their motivation to realistically appraise
alternative courses of action.”17 According to Janis, Professor of Psychology at Yale
University prior to his death in 1990, there are three main categories of groupthink: group
overestimations of its power and morality, closed-mindedness and pressures toward
group uniformity.18 In situations where groupthink prevails, teams of individuals often
have trouble successfully completing the requirements placed upon them, subsequently
failing to meet their goals. By having all group members write down their thoughts and
then repeat those thoughts aloud, the phenomenon of individuals keeping quiet so as not
to voice unpopular or contrasting views is limited. Furthermore, this method also limits
having only a select few outspoken individuals’ perspectives heard and considered.
Therefore, the process of ECM is likely beneficial not only in surpassing memory
limitations but also in combating groupthink, one of the most common problems plaguing
group work.
Related Mapping Disciplines
While ECM is the method used for explicitly visualizing information and
knowledge in this study, a variety of related mapping disciplines with similar functions
do exist. Two in particular that warrant a brief discussion are mind maps and concept
maps, both of which are highly analogous to conceptual modeling. For the purposes of
this study however, conceptual models were found to be the most functional and user-
friendly method for experiment participants to learn and understand in a short time
17 Irving L. Janis, Groupthink: Psychological Studies of Policy Decisions and Fiascoes (Boston: Houghton Mifflin Company, 1982), 9.
18 Janis, Groupthink, 174-175.
period. Because the methods of mind and concept mapping are both
essentially “coined” exercises, their creation involves following a set of predetermined
criteria (see below mind map and concept mapping sections for further detail).
ECM on the other hand, simply focuses on the visualization of concepts and their
relationships without the added emphasis on the specific construction of the model,
allowing individuals the maximum freedom possible to organize the model in whatever
way was most helpful to them. As a result, the method and design for conceptual model
construction used within this thesis has been operationalized from the relevant literature
(see the methodology section for further detail regarding development of the models).
Please refer to Figure 2.1 below, taken from Bubbl.us, for an illustration of a conceptual
model created by experiment participants in this study.
Figure 2.1 Example Conceptual Model

Of course, taking a somewhat abstract concept and translating it into a concrete and measurable product is not without its problems. First, this author’s interpretation of the physical creation of the conceptual models may differ from the interpretation of others. Additionally, whereas this author treats the notion of conceptual modeling as distinct and unique, others may disagree, believing it to be simply a subset of a related mapping discipline.
Mind Maps
The popular exercise of mind mapping that many individuals are familiar with
today got its start in the 1960s with its originator, Tony Buzan. In “Mind Maps as
Classroom Exercises,” John Budd, professor in the Industrial Relations Center at the
University of Minnesota’s Carlson School of Management, provides a thorough
description of the accepted format for creating such maps:
As with a traditional outline, a mind map is based on organizing information via hierarchies and categories. But in a mind map, the hierarchies and associations flow out from a central image in a free-flowing, yet organized and coherent, manner. Major topics or categories associated with the central topic are captured by branches flowing from the central image. Each branch is labeled with a key word or image. Lesser items within each category stem from the relevant branches.19
Additionally, a strong emphasis is placed on the incorporation of colors and images into
the creation of a mind map.20 Therefore, the essential function of a mind map is very
similar to that of the explicit conceptual model, but the process for developing one is
more formalized. Please see Figure 2.2 below, taken from the TechKNOW Tools
website, for an illustration of a typical mind map.
19 John W. Budd, “Mind Maps as Classroom Exercises,” Journal of Economic Education (Winter 2004): 36.
20 Budd, “Mind Maps as Classroom Exercises.”
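Budd’s format — a central topic, major branches flowing from it, and lesser items stemming from each branch — is, structurally, a tree. The sketch below is illustrative only (the topic names are invented and appear in neither Budd’s article nor this study); it renders such a tree as an indented outline:

```python
# A mind map as a nested dict: the single central topic at the root,
# branches as keys, sub-branches as nested dicts (topics are invented).
mind_map = {
    "intelligence requirement": {
        "actors": {"states": {}, "organizations": {}},
        "known information": {"open sources": {}},
        "knowledge gaps": {"collection plan": {}},
    }
}

def outline(node, depth=0):
    """Render the map as an indented outline, one line per topic."""
    lines = []
    for topic, children in node.items():
        lines.append("  " * depth + topic)
        lines.extend(outline(children, depth + 1))
    return lines

print("\n".join(outline(mind_map)))
```

The single root is what makes this a mind map rather than a concept map, which may be organized around several concepts at once.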
Figure 2.2 Example Mind Map

Concept Maps

According to Alberto Canas, Associate Director of the Institute for Human and Machine Cognition, and Joseph Novak, known for his development of concept mapping in the 1970s:

Concept maps are graphical tools for organizing and representing knowledge. They include concepts, usually enclosed in circles or boxes of some type, and relationships between concepts indicated by a connecting line linking two concepts. Words on the line, referred to as linking words or linking phrases, specify the relationship between the two concepts.21

21 Joseph Novak and Alberto Canas, The Theory Underlying Concept Maps and How to Construct and Use Them, Technical Report IHMC Cmap Tools 2006-01 Rev 01-2008, Florida Institute for Human and Machine Cognition, 2008. http://cmap.ihmc.us/Publications/ResearchPapers/TheoryUnderlyingConceptMaps.pdf (accessed August 8, 2008).
The detailing of the relationship between concepts by naming links accordingly is very
important to the notion of concept mapping and helps to distinguish this form of mapping
from mind mapping and conceptual modeling. Another difference between concept maps
and mind maps is that the latter are organized around one central concept, whereas the
former tend to be organized around several. Like conceptual models, concept maps are
intended to evolve over time as an individual’s understanding of a topic increases.
However, once again, while concept mapping serves a very similar purpose to that of
conceptual modeling, its construction is much more structured in nature. Please see
Figure 2.3 below, taken from the cited work of Novak and Canas, for an illustration of a
typical concept map.
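Because a concept map is defined by concepts plus labeled linking phrases, it maps naturally onto a set of (concept, link, concept) triples — a labeled directed graph. The sketch below is purely illustrative (participants in this study built models in Bubbl.us, not in code, and the concepts shown are invented):

```python
# A concept map as (concept, linking phrase, concept) triples.
concept_map = [
    ("conceptual model", "externalizes", "mental model"),
    ("mental model", "is limited by", "working memory"),
    ("working memory", "holds roughly", "7 +/- 2 items"),
    ("conceptual model", "helps identify", "knowledge gaps"),
]

def concepts(triples):
    """Every distinct concept appearing anywhere in the map."""
    return sorted({c for subj, _, obj in triples for c in (subj, obj)})

def links_from(triples, concept):
    """The labeled relationships leaving a given concept."""
    return [(label, obj) for subj, label, obj in triples if subj == concept]

for label, obj in links_from(concept_map, "conceptual model"):
    print(f"conceptual model --{label}--> {obj}")
```

Nothing in this representation forces a single central concept, and every edge carries a linking phrase — the two properties that distinguish concept maps from mind maps.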
Figure 2.3 Example Concept Map

Technology Aids
At this point, after a discussion of the abovementioned techniques for information
visualization, one may be left to wonder if making such models explicit is hard to do. In
truth, the answer to this question is no, in large part due to technological advances that do
away with much of the burden of creating these models. In fact, the relatively recent
emergence of technological aids designed to augment the creation of conceptual models,
concept maps and mind maps has brought about an increased interest in these techniques.
Compared to the traditional pencil and paper construction, software programs make the
editing and formatting of such models considerably easier. This evolution in
functionality encourages users to revise and expand their maps as their knowledge base
regarding a specific topic grows and changes.22
Christina De Simone, in “Applications of Concept Mapping,” states that “by
externalizing information as they create concept maps, students are better able to detect
and correct gaps and inconsistencies in their knowledge.”23 However, concept maps developed in response to detailed questions and requirements can prove difficult, lengthy and disorganized to create and revise by hand. De Simone,
Assistant Professor of Education at the University of Ottawa, further states that many
students in her classes “find electronic concept mapping …very useful, as it minimizes
the cumbersome and time-consuming activity of erasing, revising, and beginning anew.
It allows them greater freedom to adjust their conceptual thinking and mapped
representations.”24 Technologically based conceptual modeling aids, of which a wide
22 Josianne Basque and Beatrice Pudelko, “Using a Concept Mapping Software as a Knowledge Construction Tool in a Graduate Online Course,” in Proceedings of ED-MEDIA 2003, World Conference on Educational Multimedia, Hypermedia & Telecommunications, Honolulu, June 23-28, 2003, ed. D. Lassner and C. McNaught (Norfolk: Association for the Advancement of Computing in Education, 2003), 2268-2274.
23 Christina De Simone, “Applications of Concept Mapping,” Journal of College Teaching 55, no. 1 (2007): 34.
24 De Simone, “Applications of Concept Mapping,” 35.
variety exist, are likely the most useful and advantageous tools for intelligence students
and analysts to utilize in the construction of complex and fluid conceptual models.
Learning Styles
However, even with the increased efficiency in creating conceptual models
brought about through technological aids, it is important to note that some individuals
may be better suited to this type of exercise than others. Research conducted by
Josianne Basque and Beatrice Pudelko, from the LICEF Research Center in Canada,
showed that some graduate students claiming to be auditory learners found little to no
utility in constructing a visual representation of knowledge in the form of a concept
map.25 However, other students identifying themselves as visual learners claimed to
better understand a topic when concepts were structured in such a visual way.26
Hypotheses
Based on a review of the above literature, the following hypotheses were formed:
First, intelligence analysts who engage in ECM will generate better analytic products, as
evaluated by thoroughness of process and accuracy of product, than analysts who do not.
Second, the individual-to-group method employed to create the conceptual models in this study will affect, either positively or negatively, the models’ ability to aid in intelligence analysis.
25 Basque and Pudelko, “Using a Concept Mapping Software,” 2268-2274.
26 Ibid.
CHAPTER III:
METHODOLOGY
In order to test the stated hypotheses I conducted an experiment that examined the
value of ECM as it applies to the quality of the analysis produced. The experiment was
designed to determine if intelligence analysts who engaged in ECM would generate better
analytic products, as evaluated by thoroughness of process and accuracy of product, in
comparison to analysts who did not. The following methodology section will provide the
details of this experiment.
Research Design
In conducting my experiment, I divided the subjects into an experimental and a
control group. Efforts were made to ensure that conditions in both groups were identical,
with the exception of the addition of ECM in the experimental group. The experimental
group was instructed to use a structured ECM approach, facilitated by the use of the
open-source program Bubbl.us (a free, internet-based conceptual modeling program), to
aid in their analysis. The control group, on the other hand, was not instructed to use any particular method in conducting their analysis, as it served as a baseline against which to measure the experimental group.
Subjects
In order to better relate the findings of my experiment to the United States
Intelligence Community, I chose to draw from the undergraduate and graduate student
population at the Mercyhurst College Institute for Intelligence Studies (MCIIS). The aim
of this program is to produce graduates qualified to enter the government or private sector
as entry-level intelligence analysts, and it was therefore considered an appropriate
pool from which to draw a sample.
Mercyhurst College houses the oldest institution in the world dedicated
specifically to the study of intelligence analysis. The program offers coursework in the
fields of national security, law enforcement and competitive business analysis. Students
in both the undergraduate and graduate programs are subjected to a rigorous academic
curriculum during their time at Mercyhurst. They are expected to meet certain foreign
language proficiency and internship requirements and are often faced with accelerated
project deadlines for real-world decision makers in the field of intelligence. MCIIS
considers its students to be experts in the exploitation of open source information for
analytic purposes. For these reasons, MCIIS turns out capable and well-trained entry-
level analysts with a wide variety of analytic skills and abilities.
To obtain participants for my experiment I contacted professors within the
Intelligence Studies program at Mercyhurst College via email to ask if they would be
willing to let me briefly speak to their classes in order to recruit students. I then followed
up in person with all professors who responded positively to my emails to set up a
schedule for class visits that was convenient for them.
After establishing this schedule I made appearances in a variety of undergraduate
and graduate intelligence classes where I gave a broad overview of the experiment and
passed out individual signup sheets to interested students (see Appendix 1). These sheets
asked for the student’s name, email address, class year and available time slots for
experiment participation. Students were given a total of twelve time slots to choose from,
with six slots on the first day of the experiment and six slots on the second day. The only requirement placed on students was that they choose two time slots, one on each day of the experiment: one lasting ninety minutes and the other lasting thirty minutes.
Students could either turn these sheets in to me on the spot or drop them off at their convenience at my worksite within the Intelligence Studies building.
After all signup sheets were received, students were divided into subgroups based on
class year. Individuals within each subgroup were then randomly divided between the
control and the experimental group in an attempt to control for the educational level of
participants. For example, after the subgroup of freshmen who signed up for the
experiment was established, individuals within the group were randomly assigned to
either the control or experimental group.
Students were then notified of their designated time slots for participation via an
email, which included information on the location of the experiment. The email also
stated that participation would be rewarded with extra credit from select professors within
the Intelligence Studies Department, in addition to free refreshments and pizza on the
second day of the experiment. Finally, my contact data was included in the email so all
participants would easily be able to access me if they had any questions or concerns in
the days leading up to the experiment. Intelligence Studies students of all grade levels
were welcome to participate as any information needed to complete the experiment
would be provided during the sessions.
Testing took place approximately three-quarters of the way through the fall term,
unfortunately falling at a time when students were especially busy trying to meet the
demands of their coursework. A total of 47 students (25 control and 22 experimental)
actually participated in, and finished, this experiment. For a breakdown of participants
by educational level and group, please see Figures 3.1 & 3.2 below.
Figure 3.1
Figure 3.2
Preliminaries
Prior to conducting the experiment, I had to submit a detailed research proposal to
the Mercyhurst College Institutional Review Board (IRB) for approval (see Appendix 2).
It is Mercyhurst College policy that any student conducting research involving the use of
human subjects be granted permission by the IRB. In order to receive a green light from
the IRB students must provide a description of the proposed research, its purpose, and an
explanation of any potential dangers (physical or psychological) that could befall
individuals as a result of their participation in the experiment.
However, not only did I need to secure the consent of the IRB I also needed the
consent of each individual experiment participant. Therefore, on the first day of the
experiment (for both control and experimental sessions) all participants were given a
formal consent form upon arrival (see Appendix 3 for control group consent form and
Appendix 4 for experimental group consent form). This form outlined what would be
expected of them as participants, along with the fact that there were no foreseen dangers
or risks associated with involvement in the experiment. The form also asked for basic
contact information such as name, class year, and telephone number.
Control Group: Day 1
Control group participants were asked to attend two experiment sessions. The
first session was slotted for thirty minutes and the next session, scheduled for one week
later, was slotted for ninety minutes. Three control group sessions were run on both days of
the experiment, for a total of six sessions, in order to make it as convenient as possible
for subjects to schedule participation into their busy agendas. Out of those who
originally signed up for the experiment and were assigned to the control group (37
individuals), a total of 25 actually attended and completed the experiment. Please see
Figure 3.3 below for a graphic representation of this breakdown by class year.
Both groups were given the same question to analyze, regarding October 2008
presidential elections in Zambia (see Appendix 5). The control group was simply asked
to forecast the winner of the elections and to provide a list of the main pieces of evidence
that aided them in their analysis. Both groups were provided with information regarding
source reliability and analytic confidence as they were asked to supply measures of both
in their final product (see Appendix 6). Also, both groups were given a semi-structured
answer sheet with space for their name, a pre-written forecast with built-in words of estimative probability, presidential candidates to choose from, and space for a bulleted discussion (see Appendix 7).

Figure 3.3

The bottom of the answer sheet also asked them to identify their source reliability and analytic confidence on a scale from low to high and to provide
the names of their professors offering extra credit for participation in the experiment.
Lastly, participants were given a sheet of expectations for the second and final
session of the experiment one week later (see Appendix 8) and were asked to fill out a
short pre-experiment questionnaire (see Appendix 10). They were also once again
provided with my contact information in case they encountered problems or had
questions while working on their analysis during the course of the week (see Appendix
11).
Experimental Group: Day 1
Experimental group participants were also asked to attend two sessions. The first
session was slotted for ninety minutes and the next session, scheduled for one week later,
was slotted for thirty minutes. Three experimental group sessions were run on both days
of the experiment, for a total of six sessions, once again to make scheduling more
convenient for participants. Out of those who originally signed up for the experiment and
were assigned to the experimental group (37 individuals), a total of 22 actually attended
and completed the experiment. Please see Figure 3.4 below for a graphic representation
of this breakdown by class year.
The experimental group was given the same tasking as the control group (see
Appendix 5). This group was also provided with the same information regarding source
reliability and analytic confidence as the control group (see Appendix 6), along with the
same answer sheet (see Appendix 7). In addition to forecasting the winner of the elections, however, this group was also required to use ECM to assist them in completing this task.
Therefore, I began by giving a lecture, approximately ten minutes in length, to
familiarize experimental group participants with what conceptual models are, how they
can be used and the proposed value of making them explicit in the field of intelligence
(see Appendix 12). Following the lecture, I had all participants sign in to a computer, after which I led them through a step-by-step tutorial of the program Bubbl.us (see
Appendix 13). Once everyone was comfortable with how the program worked, I began a
structured ECM exercise.
Figure 3.4
Participants began the exercise by making individual lists of concepts they felt
would be important to answering the question asked of them. After individual lists were
completed, participants were asked to read their lists aloud one at a time. As concepts
were read off, they were written on a whiteboard at the front of the room, creating one
combined group list. For every time a concept was repeated a check mark was placed
next to it in order to highlight the most commonly thought of concepts. Also, concepts
that the group immediately recognized as useful but that were mentioned only once or
twice were made note of as well. Due to limitations on time, participants were not asked
to engage in a convergent thinking exercise as a group, whereby they would critically
evaluate the master list before producing their own conceptual models. Instead,
participants were simply asked to use Bubbl.us to construct a conceptual model based on
their individual list and thoughts as well as that of the collaborative group list. Finally,
participants were given a chance to briefly look at the way others around them had
assembled their models and were then asked to electronically share what they had created
with me through the collaboration function within Bubbl.us (see Appendix 15).
Following this task participants were given a sheet of expectations for the second
and final session of the experiment one week later (see Appendix 9) and were asked to
fill out a pre-experiment questionnaire (see Appendix 10). Lastly, they were provided
with a sheet of my contact information (see Appendix 11).
Bubbl.us
Bubbl.us is a free, internet-based, conceptual modeling program. It was chosen
for use in this experiment due to its extremely simple user interface. Not only did the program encompass all the functions necessary to complete the conceptual modeling segment of my experiment, it could also be taught and learned within the minimal amount of time available during sessions. Basic functions include the creation of
bubbles and lines to illustrate concepts and their relationships, internet-based sharing of
work with other Bubbl.us users, and exporting finished products as photos or embedding
them into a web page. Please see Figure 3.5 below for a sample product taken from the
Bubbl.us website, illustrating a variety of the program’s features.
Control Group: Day 2
On the second day of the experiment, the control group was expected to arrive at
their designated session with their completed answer sheets ready to turn in. All research
and analysis was to be done prior to arriving at this session. Since the control group had
not received any training on Bubbl.us in the first session, I began their second session by
asking them to login to a computer and follow along with me as I taught them the basic
functions of the program.

Figure 3.5

However, they still received no lectures detailing conceptual modeling background information and were not given any specifics regarding how to
actually construct their models in Bubbl.us (see Appendix 14).
After this group became familiar with the program they were asked to illustrate
the concepts and relationships they found to be important over the course of the week in
answering the question posed to them. This was done in order to draw a comparison
between the quality of conceptual models made after background information was
provided and those made with little to no prior instruction. Additionally, it compared the
quality of conceptual models made pre-collection and updated throughout the analytic
process with those made post-analysis. To bring the session to a close, participants were
asked to fill out a post-experiment questionnaire (see Appendix 16) and were given a
debriefing sheet thanking them for their time and further explaining the purpose of the
experiment (see Appendix 18).
Experimental Group: Day 2
Once again, the experimental group was expected to arrive on the second day of
the experiment with their research and analysis completed and a finished answer sheet
ready to be handed in. Additionally, they were expected to have electronically updated
the conceptual models made during the first session of the experiment throughout the
course of the week to reflect their expanding knowledge base in regard to the question at hand. After handing in their answer sheets, this group was asked to fill out a
follow-up questionnaire (see Appendix 17) and was then provided with the same
debriefing sheet as the control group (see Appendix 19).
Data Analysis Procedures
Since the intent of this experiment was to test the value of ECM as it applies to
analysis, control and experimental group results were compared in terms of quality and
accuracy of process and product. To evaluate and compare the processes of the two
groups, three MCIIS second year graduate students independently ranked the discussion
section of each participant’s answer sheet from best to worst. Students were used in lieu of professors, who tend to have unique grading styles, as the students had all received the same training regarding what makes a sound analysis and were thus thought to be on more equal footing. All identifying information, including the student’s name, class year and group, was removed prior to evaluation. Additionally, the students doing the
evaluating did not know the outcome of the elections at the time of ranking in order to
keep measures of process and product independent of one another. The product measure
was derived through a simple tally of whether or not the participant forecast the outcome correctly. Finally, the actual conceptual models created were compared in terms
of complexity, based on how many concepts and connections between concepts each
encompassed.
All measurements and rankings were compiled into a Microsoft Excel
Spreadsheet along with subject education level breakdowns and information from the
pre- and post-experiment questionnaires. Statistical analyses were then undertaken to
determine whether or not the experiment results were statistically significant. The
control group was generally expected to fall lower in the graduate student process
rankings than the experimental group and was also expected to be less accurate overall in
terms of forecasting the outcome of the election.
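The group comparison described above can be illustrated in code. The sketch below is a minimal, hypothetical example only (the actual analysis was performed on the real rankings in the Excel spreadsheet): it uses a two-sample permutation test, one simple way to check whether a difference in mean ranking between two groups could plausibly have arisen by chance. The rankings shown are invented for illustration.

```python
import random
import statistics

def permutation_test(control, experimental, n_perm=10_000, seed=0):
    """Two-sample permutation test on the difference in mean rank.

    Returns an approximate two-sided p-value: the share of random
    relabelings whose mean difference is at least as extreme as the
    observed one.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(control) - statistics.mean(experimental))
    pooled = list(control) + list(experimental)
    n = len(control)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n]) - statistics.mean(pooled[n:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm

# Hypothetical rankings (1 = best product); not the experiment's real data.
control_ranks = [2, 5, 8, 11, 19, 22]
experimental_ranks = [4, 13, 16, 25, 28, 31]

p = permutation_test(control_ranks, experimental_ranks)
print(f"approximate p-value: {p:.3f}")
```

A permutation test is used here because it makes no distributional assumptions, which suits small samples like the per-group counts in this study.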
CHAPTER IV:
RESULTS
The ECM experiment generated a variety of interesting and surprising results. The next section will first provide a brief explanation of the statistical
significance testing conducted throughout this thesis and will then detail the results
derived from the analysis of pre- and post-experiment questionnaires. Next, findings
from the experiment itself will be discussed as a function of process and product, with
experimental and control group findings initially reported on individually, followed by a
comparison of the groups to one another.
Significance Testing
All significance tests related to this thesis were conducted at the 0.10 significance
level (see Appendix 20). This level, which some may consider rather lax, was chosen because the research conducted in this thesis regarding ECM and intelligence analysis is exploratory in nature. According to G. David
Garson, Professor of Public Administration at North Carolina State University, in Guide
to Writing Empirical Papers, Theses, and Dissertations, “It is inappropriate to set a
stringent significance level in exploratory research (a .10 level is acceptable in
exploratory research).”27 Although debate remains lively amongst researchers regarding
proper significance levels based on situation, this author felt that a 0.10 level was most
appropriate when dealing with this particular set of research and data.
27 G. David Garson, Guide to Writing Empirical Papers, Theses, and Dissertations (New York: CRC Press, 2002), 199.
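The decision rule implied by this significance level reduces to a one-line comparison. The sketch below applies it to three p-values reported later in this chapter; it is an illustration of the threshold logic, not part of the original analysis.

```python
ALPHA = 0.10  # exploratory-research significance level used throughout this thesis

def is_significant(p_value, alpha=ALPHA):
    """Reject the null hypothesis of no group difference when p < alpha."""
    return p_value < alpha

# p-values reported in the Results chapter
for label, p_val in [("interest in study (pre-experiment)", 0.851),
                     ("utility of structured approaches (pre-experiment)", 0.127),
                     ("concepts, pre- vs post-analysis models", 0.056)]:
    print(f"{label}: p = {p_val:.3f} -> significant at 0.10: {is_significant(p_val)}")
```

Only the last comparison crosses the 0.10 threshold, matching the finding reported below for the number of concepts in pre- versus post-analysis models.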
Pre- and Post-Questionnaire Results
Prior to the experiment participants were asked whether or not they thought they
would be able to dedicate a sufficient amount of time to completing the experiment over
the course of the next week. In response, 64.3% of control group participants thought
that they would have ample time, 32.1% were not sure and 3.6% did not expect to be able
to dedicate a sufficient amount of time to the experiment. On the other hand, 58.3% of
experimental group participants expected to have enough time and 41.7% were unsure.
When asked post-experiment whether or not they had actually been able to devote a
sufficient amount of time to the experiment over the course of the past week, 68% of
control group participants claimed they had, 16% were unsure and 16% said that they had
not. In the experimental group 60% claimed to have had enough time to dedicate to the
experiment, 20% were unsure and 20% had not.
While the percentage of participants in both the control and experimental group
claiming to have been able to dedicate a sufficient amount of time to completion of the
experiment increased slightly from pre- to post-experiment, the percentage of those who
claimed that they did not have a sufficient amount of time to dedicate to the experiment
increased from pre- to post-experiment more substantially. Post-experiment 20% of the
experimental group claimed that they had not had enough time (up from 0% pre-
experiment) and 16% of control group participants claimed the same (up from 3.6% pre-
experiment). As previously stated in the methodology section of this thesis, the experiment fell during an extremely busy period in the participants’ trimester, serving as a limitation to this study, as students had to balance the experiment with their class
work and other responsibilities. Please see Figure 4.1 below for a graphic display of this
data.
Additionally, pre-experiment all participants were asked to identify how
interested they were in the study, with 1 being not interested and 5 being extremely
interested. The average response for both groups was a 3.5, illustrating that both control
and experimental groups were on average equally interested in the experiment at its
onset. The difference between control and experimental group responses to this question
was not found to be statistically significant at the 0.10 level (p-value = 0.851).
When asked the same question post-experiment, the average response of the
control group increased to 3.8, while the average response of the experimental group
remained the same at 3.5. This shows a slight average increase in interest on the part of
the control group from pre- to post-experiment.

Figure 4.1

However, once again the difference in control and experimental group responses was not found to be statistically significant at
the 0.10 level (p-value = 0.267).
Another question asked of participants prior to their involvement in the
experiment was how useful they felt structured approaches to the analytic process were,
with 1 being not useful and 5 being extremely useful. On average, control group
participants responded with a 3.8 and experimental group participants responded with a
4.2. Although a significant difference at the 0.10 level was not found between control
and experimental group responses to this question, the results did approach significance
(p-value = 0.127).
Faced with the same question post-experiment the average control group response
increased to a 4.1, while the experimental group response maintained steady at 4.2.
However, the difference in control and experimental group post-experiment responses
was not found to be significant at the 0.10 level (p-value = 0.617). Results for this
question show that the experimental group found structured approaches to the analytic
process to be more useful than did the control group, both pre- and post-experiment. The
control group’s feelings regarding the utility of structured approaches to the analytic
process grew throughout the course of the experiment, whereas the experimental group’s
did not.
When experiment participants were asked to identify the learning style they most
closely associated with, over half of both control and experimental group participants
identified themselves as visual learners. Since ECM is a visual learning aid, it is likely
that the exercise was generally more beneficial to those claiming to be visual learners
than to those who chose an alternative learning style. Please see Figure 4.2 below for the
full range of control and experimental group responses to the question regarding learning
styles.
Also, post-experiment both control and experimental groups were asked to gauge
their level of understanding of conceptual modeling prior to the study on a scale of 1 to 5,
with 1 being extremely low and 5 being extremely high. The average control group
response to this question was a 3.6, whereas the average experimental group response
was a 3.2. While the difference between control and experimental responses was not
found to be significant at the 0.10 level, the results did approach significance (p-value =
0.187).
Post-experiment both groups were also asked to identify their understanding of
conceptual modeling following the experiment, using the same scale. Post-experiment
the average response for both the control and experimental group was a 4.0 and was
therefore not statistically significant at the 0.10 level (p-value = 1.0).

Figure 4.2

Results from this question show that post-experiment both control and experimental group participants
claimed to have the same understanding of conceptual modeling, an increase for both
groups from their pre-experiment knowledge on the topic. However, the experimental
group claimed to have less of an understanding of conceptual modeling than the control
group at the onset of the experiment, signaling that on average the experiment raised their
knowledge of conceptual modeling more than it did the control group’s.
In relation to the above question, post-experiment all participants were asked how
often ECM had been a part of their personal analytic process prior to the experiment on a
scale of 1 to 5, with 1 being never and 5 being every time they produced an intelligence
estimate. On average the control group responded with a 2.8 and the experimental group
responded with a 2.75. The difference between control and experimental group responses
was not found to be statistically significant at the 0.10 level (p-value = 0.872).
However, post-experiment both groups were also asked how often they plan to
incorporate ECM into their personal analytic process in the future, using the same scale.
In response to this question, the control group average was a 3.5 and the experimental
group average was a 3.6. Once again, the difference between experimental and control
group responses was not found to be statistically significant at the 0.10 level (p-value =
0.617). Results from the above question highlight that although the control group
claimed to employ ECM in their analytic process prior to the experiment on average
slightly more than the experimental group, the experimental group claimed that they will
employ ECM in their analytic process on average more than the control group in the
future.
Post-experiment the experimental group was asked a series of four questions
regarding their specific responsibilities within the experiment. First, the experimental
group was asked to rate whether or not they found that ECM aided them in developing a
more thorough and nuanced intelligence analysis in this experiment. In response, 33% of
experimental group participants claimed that ECM definitely aided them in their analysis,
and 66.7% claimed that it helped them somewhat, with no participants responding that it
did not help at all.
The experimental group was also asked post-experiment to rate how useful they
found the conceptual modeling training provided at the beginning of the experiment to
be, with 1 being not at all helpful and 5 being extremely helpful. On average,
experimental group participants responded to this question with a 3.9. Additionally, the
experimental group was asked how effective they found the conceptual modeling method
used in this experiment, inclusive of both individual work and group collaboration, to be.
The average response to this question was a 3.7. Finally, the experimental group was
asked how useful they found the technology aid, Bubbl.us, to be in creating and updating
their conceptual models, with 1 being not useful and 5 being extremely useful. Overall,
experimental group participants found Bubbl.us to be quite valuable, averaging a
response of 4.1.
Process - Conceptual Model Findings
The 2007 Intelligence Community Directive Number 203 on analytic standards
states that, “To the extent possible, analysis should incorporate insights from the
application of structured analytic technique(s) appropriate to the topic being analyzed.”28
28 United States Government, Intelligence Community Directive Number 203, June 21, 2007, http://www.fas.org/irp/dni/icd/icd-203.pdf (accessed January 26, 2009).
As such, the conceptual models resulting from the experiment were analyzed in terms of
complexity by simply tallying the number of concepts and connections between concepts
found in each model (see Figure 4.3 below for the distinction between concepts
and connections). Control group conceptual models averaged 12.6 concepts and 12.9
connections, per model.
Pre-analysis experimental group conceptual models averaged 25 concepts and
28.9 connections, per model. Post-analysis experimental group conceptual models
averaged 30.9 concepts and 31 connections, per model. Significance testing for the number of concepts in pre- versus post-analysis experimental group conceptual models was significant at the 0.10 level (p-value = 0.056); however, the number of connections was not found to be significant (p-value = 0.231). As illustrated below in
Figure 4.4, both the average number of concepts and the average number of connections
between concepts increased between pre-analysis and post-analysis conceptual models
within the experimental group.
Figure 4.3
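The complexity tally applied to each model can be sketched in a few lines. Here a conceptual model is represented as a mapping from each concept to the concepts it links to; the concept names are hypothetical and not drawn from any participant's actual model.

```python
# A conceptual model as an adjacency structure: each key is a concept,
# each value the set of concepts it links to. All names are invented.
model = {
    "Zambia election": {"candidates", "economy", "voter turnout"},
    "candidates": {"incumbent party", "opposition"},
    "economy": {"copper prices"},
    "voter turnout": set(),
    "incumbent party": set(),
    "opposition": set(),
    "copper prices": set(),
}

def complexity(model):
    """Tally the two complexity measures used in this thesis:
    the number of concepts (nodes) and connections (edges)."""
    concepts = len(model)
    connections = sum(len(links) for links in model.values())
    return concepts, connections

concepts, connections = complexity(model)
print(f"{concepts} concepts, {connections} connections")  # 7 concepts, 6 connections
```

Counting nodes and edges this way reproduces the simple tally described above, under the assumption that each drawn line is a single directed link.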
When looking at experimental group conceptual models as a whole (not
distinguishing between pre- and post-analysis) the models averaged 27.9 concepts and
29.9 connections, per model. As illustrated below in Figure 4.5, when comparing these
experimental group averages with control group averages the difference in complexity
between the two groups’ models becomes quite obvious.
Figure 4.4
Figure 4.5
The control group’s models were much simpler, consisting on average of fewer than half the number of concepts and connections between concepts present in experimental group models. Significance testing for the number of concepts in control versus experimental group conceptual models was significant at the 0.10 level (p-value = 0.000); the number of connections was also found to be
significant (p-value = 0.000). Please see Figure 4.6 below for an illustration of a typical
conceptual model made by a control group participant and Figure 4.7 for an illustration of
a typical conceptual model made by an experimental group participant.
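The complexity comparison above amounts to a two-sample test on per-model counts. The thesis does not reproduce the raw counts or name the exact test used, so the sketch below is only an illustration: the `control` and `experimental` lists are hypothetical values chosen to echo the reported group averages (roughly 12.6 versus 27.9 concepts), and the statistic is a standard Welch t statistic.

```python
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances)."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2  # sample variances
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / ((va / na + vb / nb) ** 0.5)

# Hypothetical per-model concept counts (NOT the study's raw data,
# which is not reproduced in the text).
control = [11, 13, 12, 14, 12, 13, 14, 12]
experimental = [26, 29, 31, 27, 30, 28, 25, 27]

print(round(welch_t(experimental, control), 2))
```

With any data showing a gap this wide relative to its spread, the statistic is far into the significant range, which is consistent with the reported p-value of 0.000.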
Figure 4.6
Process – Logic/Quality of Supporting Evidence Findings
Intelligence Community Directive Number 203 highlights the need for logical
argumentation within analytic products, stating that, “Analytic presentation should
facilitate clear understanding of the information and reasoning underlying analytic
judgments.”29 As a result, three Intelligence Studies graduate students independently
ranked the analytic products of all 47 participants from best to worst, based on the quality
and logic of evidence supporting the analyst’s estimate. Based on the rankings assigned
by each graduate student, the overall average ranking of control group participants was
22.4 and the overall average ranking of experimental group participants was 25.8.
Since lower rankings indicate better products, control group participants ranked
approximately 3 places better than experimental group participants in terms of the
reasoning used to substantiate their estimates, implying that participants not using ECM
to aid in their analysis were able to formulate slightly better supporting arguments than
participants who did in fact use ECM.

29 Ibid.

Figure 4.7
Correlation scores among the three graduate student rankers underlying this
finding were relatively high across the board (0.76, 0.72 and 0.63), illustrating
consistency in experiment participant scoring.30 As Jacob Cohen, an influential
statistician and Professor Emeritus at New York University before his death in 1998,
discusses in Statistical Power Analysis for the Behavioral Sciences, correlations above
0.5 are conventionally considered large within the social sciences.31 This lends support
to the Mercyhurst method for the evaluation of analytic products, as the ranking
consistency of the three graduate student raters was high.
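Because the rankings are ordinal (1 = best), one standard way to measure agreement of this kind is Spearman’s rho, which for rank data equals Pearson’s correlation applied to the rank vectors. The thesis does not state which correlation formula was used, so the sketch below is an assumption, and the two short rank vectors are purely hypothetical.

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation; applied to rank vectors this equals
    Spearman's rho."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical rankings of eight products by two raters (1 = best).
rater1 = [1, 2, 3, 4, 5, 6, 7, 8]
rater2 = [2, 1, 4, 3, 5, 7, 6, 8]

print(round(pearson(rater1, rater2), 2))
```

Two raters who mostly agree on ordering, as in the hypothetical vectors above, produce a correlation well above the 0.5 threshold Cohen describes as large.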
Product – Forecasting Findings
According to Intelligence Community Directive Number 203, analytic products
should “make accurate judgments and assessments.”32 Therefore, not only must the
quality of the analyst’s process be accounted for, but the correctness of estimates must be
measured as well. In terms of forecasting the correct outcome of the October 2008
Zambian presidential elections, 68% of control group participants predicted accurately,
whereas only 40.9% of experimental group participants did. Therefore, individuals who
did not use ECM to aid in their analysis were able to identify the actual outcome of the
research question posed to them much more often than individuals who did incorporate
30 The common measures of inter-rater reliability, known as Cohen’s and Fleiss’ Kappa, were not used when conducting tests of correlation in this study because both measures are designed for categorical data (e.g., yes vs. no) and were therefore felt by the author to be inappropriate for the ordinal data present.
31 Jacob Cohen, Statistical Power Analysis for the Behavioral Sciences (Philadelphia: Lawrence Erlbaum Associates, 1988).
32 United States Government, Intelligence Community Directive Number 203.
ECM into their analytic process. Forecasting result differences were found to be
statistically significant at the 0.10 level (p-value = 0.065). Please see Figure 4.8 below
for a graphic representation of this data.
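The group sizes implied by the percentages (25 control, 22 experimental) make the correct-forecast counts 17 and 9. The thesis does not name the significance test it used, but a plain pooled two-proportion z-test on those counts, sketched below, lands close to the reported figure.

```python
from math import erf, sqrt

def two_prop_z(success_a, n_a, success_b, n_b):
    """Two-sided, pooled two-proportion z-test; returns (z, p)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

# Counts implied by the text: 68% of 25 control participants and
# 40.9% of 22 experimental participants forecasted correctly.
z, p = two_prop_z(17, 25, 9, 22)
print(round(z, 2), round(p, 3))  # z ≈ 1.86, p ≈ 0.06
```

The exact p-value depends on the test chosen (a chi-square or exact test would differ slightly), but any of them places the result near the reported 0.065, significant at the 0.10 level.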
Figure 4.8

Quality Of Supporting Evidence Vs. Forecasting Accuracy Findings

After looking at both the quality of the evidence supporting the participants’
analysis and the participants’ forecasting accuracy separately, the opportunity to compare
the two findings presented itself. Therefore, the following supplementary conclusion
regarding graduate student process rankings and forecasting accuracy, although not
directly related to the hypotheses of this experiment, was thought to be interesting enough
to warrant mentioning here.

Out of curiosity, graduate student process rankings, essentially measuring the
qualitative strength of the individual’s assessment, were compared against whether or not
the individual correctly forecasted the outcome of the election. To make this comparison,
the process rankings were simply split in half and the number of individuals in
the top half and bottom half who forecasted the elections correctly was tallied. This
measurement was carried out three times: first for the group as a whole, second for just
the control group and lastly for just the experimental group. The expectation following
this comparison was that individuals in the top half of the graduate student process
rankings would forecast the winner of the elections correctly considerably more often
than individuals falling in the bottom half of the rankings. However, this was not found
to be the case. In fact, results of the tally showed little difference in forecasting accuracy
between those who were ranked better qualitatively and those who were not.
When looking at the group as a whole, 14 individuals in the top half of the
graduate student process rankings forecasted the outcome of the elections correctly and 9
individuals forecasted incorrectly, compared with an even split in the bottom half of 12
individuals forecasting correctly and 12 forecasting incorrectly. When looking at the
control group alone, 9 individuals in the top half of the graduate student process rankings
forecasted the outcome of the elections correctly and 4 individuals did not, compared to 8
individuals who forecasted correctly in the bottom half and 4 who did not. Finally, when
looking at the experimental group on its own, there was an even split of 5 individuals in
the top half of the graduate student process rankings who forecasted correctly and 5 who
did not, compared to 4 individuals in the bottom half who forecasted correctly with 8 who
did not. Please see Figures 4.9, 4.10 and 4.11 below for graphic representations of this
data.
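One way to confirm the “little difference” reading of the whole-group tally is a Pearson chi-square on the 2x2 table of top/bottom half versus correct/incorrect forecast. This check is not part of the thesis methodology, just a back-of-the-envelope calculation on the counts reported above; the statistic falls far below the 2.706 cutoff required for significance at the 0.10 level with one degree of freedom.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction)
    for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row_totals = (a + b, c + d)
    col_totals = (a + c, b + d)
    stat = 0.0
    for i, row in enumerate(((a, b), (c, d))):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (obs - expected) ** 2 / expected
    return stat

# Whole-group tallies from the text: top half 14 correct / 9 incorrect,
# bottom half 12 correct / 12 incorrect.
print(round(chi_square_2x2(14, 9, 12, 12), 3))
```

With a statistic of roughly 0.56 against a critical value of 2.706, the data give no reason to think the better-ranked half forecasted any more accurately.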
Figure 4.9
Figure 4.10
Figure 4.11
Although a strict methodology was not applied in reaching this particular
conclusion, making it difficult to ascertain the extent to which extraneous variables may
have impacted it, the basic point still stands.
Therefore, this conclusion suggests that individuals who are better writers, or who
are able to craft more convincing arguments, are not necessarily any more likely to
forecast correctly than individuals who are lacking in those skills. This notion is an
offshoot of the general argument made in Philip Tetlock’s Expert Political Judgment,
which holds that the way in which we reason or think about things is more important
than our backgrounds and accomplishments or even our belief systems.33 How we
think, then, appears to be more important than what we think when it comes to being
proficient forecasters.
33 Philip E. Tetlock, Expert Political Judgment (Princeton: Princeton University Press, 2005).

Product – Source Reliability and Analytic Confidence Findings
Intelligence Community Directive Number 203 states that analytic products
should “properly describe quality and reliability of underlying sources” and “properly
caveat and express uncertainties or confidence in analytic judgments.”34 As a result, all
experiment participants were asked to assess both their source reliability and analytic
confidence on a scale from low to high. Control group findings regarding source
reliability illustrate that 4% of participants claimed low source reliability, 80% claimed
medium and 16% claimed high. Findings for the experimental group show 4.5% of
participants claimed low reliability, 68.2% claimed medium and 27.3% claimed high.
Although percentages reveal that approximately 11% more experimental group
participants claimed to have high source reliability than did control group participants, it
is necessary to note that this difference is a function of just two participants. As a result,
source reliability findings were not found to be significant at the 0.10 level (p-value =
0.451). Please see Figure 4.12 below for a graphic representation of this data.
34 Ibid.

Figure 4.12

In terms of analytic confidence, 12% of control group participants claimed low
analytic confidence, 80% claimed medium and 8% claimed high. On the other hand,
22.7% of experimental group participants claimed low confidence, 63.6% claimed
medium and 13.6% claimed high. Although percentages reveal that approximately 6%
more experimental group participants claimed to have high analytic confidence than did
control group participants, it is necessary to note that this difference is a function of just
one participant. As a result, findings for analytic confidence were not found to be
significant at the 0.10 level (p-value = 0.745). Please see Figure 4.13 below for a graphic
representation of this data.
Figure 4.13
CHAPTER V:
CONCLUSIONS
As previously stated, the purpose of this study was to determine the value of ECM
within the analytic process. This was accomplished by requiring the experimental group
to incorporate the use of conceptual models, created through a structured individual to
group approach, into their analytic process. The control group, on the other hand, was
simply asked to analyze the question posed to them, using no particular method.
While the control group correctly predicted the outcome of the elections 68% of
the time, the experimental group forecasted the outcome correctly only 40.9% of the
time. This result was found to be statistically significant at the 0.10 level (p-value =
0.065). Additionally, although the difference was marginal, a larger percentage of
experimental group participants claimed to have high source reliability and analytic
confidence than did control group participants. Therefore, the experimental group
members performed considerably worse in correctly forecasting the result of the
elections, yet felt they had more reliable sources and were more confident in their
assessment. However, neither source reliability (p-value = 0.451) nor analytic confidence
(p-value = 0.745) results were found to be statistically significant at the 0.10 level. Even
so, in terms of product measures only, these results paint a bleak picture of the role of
ECM within the analytic process.
Turning to process measurements, results of graduate student rankings placed
control group participants, on average, roughly 3 places better than experimental group
participants in terms of the quality and logic of the evidence used to support their
analysis. Correlation scores among the three graduate student rankers were relatively
high across the board (0.76, 0.72 and 0.63), illustrating consistency in experiment
participant scoring. Furthermore, the experimental group’s conceptual models were
appreciably more complex than the control group’s in terms of the number of concepts
and relationships between concepts. This result was found to be statistically significant
at the
0.10 level (p-value = 0.000). Therefore, although the experimental group’s conceptual
models appear to be more complex and thorough than the control group’s models, they
scored lower on average in regards to the reasoning used to substantiate their estimates.
Once again, these results appear largely to invalidate any suggested value of ECM within
intelligence analysis. Since the literature strongly suggests that ECM will improve
analysis, what could account for these counter-intuitive results?
Excessive Possibilities Confuse
Sheena S. Iyengar and Mark R. Lepper, in “When Choice is Demotivating: Can
One Desire Too Much of a Good Thing?,” state that, “It is a common supposition in
modern society that the more choices the better—that the human ability to manage, and
the human desire for, choice is infinite.”35 Traditionally, research has tended to support
the concept that having some choice produces better outcomes than having no choice.
However, a growing body of literature concludes that when the number of choices
available becomes too large, people have a very hard time managing that complexity.
As a result, Iyengar, professor in the Management Department of the Columbia
Business School, and Lepper, professor of psychology at Stanford University, conducted
a field experiment at an upscale grocery store whereby they observed the outcome of
consumers visiting one of two tasting booths. One tasting booth displayed only 6 jams,
35 Sheena S. Iyengar and Mark R. Lepper, “When Choice is Demotivating: Can One Desire Too Much of a Good Thing?,” Journal of Personality and Social Psychology 79, no. 6 (2000): 995.
while the other displayed a variety of 24 different flavored jams. Iyengar and Lepper’s
findings showed that initially, shoppers who encountered the booth with 24 flavors were
more attracted to the display (stopping 60% of the time) than shoppers who encountered
the booth with only 6 (stopping only 40% of the time).36 Additionally, even though one
booth displayed only 6 flavors and the other displayed 24, there were no significant
differences in the number of jams sampled by visitors to each of the booths.37
Finally, almost 30% of consumers who stopped at the 6 flavor booth bought a jar of jam,
while only 3% of consumers who stopped at the booth with 24 flavors did.38 This
suggests that although individuals originally found the booth with the plethora of flavors
more attractive, it hampered their ability and motivation to make a choice when it
came time to purchase the product.
A similar point is made in Expert Political Judgment, when Tetlock discusses a
series of scenario exercises tested on a group of experts comprised of individuals he
refers to as hedgehogs (those who know one big thing) and foxes (those who know many
little things).39 Essentially participants were provided with an exhaustive variety of
possible future scenarios in regards to a particular country and were asked to forecast the
scenario that was most likely. This presentation of scenarios did not substantially affect
the predictions of hedgehogs who were quite easily able to reject scenarios that they
believed would not actually happen.40 However, the foxes, being more open-minded,
found it very difficult not to consider even the strange or implausible scenarios.41
Therefore, for this group in particular, the danger of attributing limited resources to the
36 Iyengar and Lepper, “When Choice is Demotivating,” 997.
37 Ibid.
38 Ibid.
39 Tetlock, Expert Political Judgment, 190.
40 Ibid.
41 Ibid.
contemplation of a plethora of possibilities did little more than send them on a wild goose
chase. This illustrates that “foxes become more susceptible than hedgehogs to a serious
bias: the tendency to assign so much likelihood to so many possibilities that they become
entangled in self-contradictions.”42
Importance of Convergence
Conventional wisdom has long been a proponent of the process of divergent
thought, focusing on the need for thinking outside the box, maintaining an open mind and
encouraging an ever-increasing flow of ideas. In fact, convergent thinking has long
received a bad rap. However, research has recently begun to unearth the
benefits of a combined approach including both divergent and convergent thinking.
Arthur Cropley, in “In Praise of Convergent Thinking,” states that, “Convergent thinking
is oriented toward deriving the single best (or correct) answer to a clearly defined
question…Divergent thinking, by contrast, involves producing multiple or alternative
answers from available information.”43 Cropley, visiting professor of psychology at the
University of Latvia for the past eleven years, argues that divergent thinking is essential
to the creation of novel ideas, but that convergent thinking is then vital to the exploration
of those ideas.44 Truly utilitarian creative thought, says Cropley, can only be achieved
through the generation of ideas through divergence, followed by the criticism and
evaluation of those ideas through convergence.45
According to Michael Handel, as quoted by Stephen Marrin, “While the absence
of competition and variety in intelligence is a recipe for failure, its institution does not
42 Ibid.
43 Arthur Cropley, “In Praise of Convergent Thinking,” Creativity Research Journal 18, no. 3 (2006), 391.
44 Cropley, “In Praise,” 398.
45 Ibid.
guarantee success.”46 Handel, joint founding editor of the journal Intelligence and
National Security, further notes that while divergent thinking exercises lead to an
increased number of opinions for consideration, they are not able to aid in ascertaining
the best alternative.47 Richard Betts, director of the Institute of War and Peace Studies,
and the director of the International Security Policy Program at Columbia University,
makes a similar point in “Analysis, War, and Decision: Why Intelligence Failures Are
Inevitable.” Betts states that, “To the extent that multiple advocacy works, and succeeds
in maximizing the number of views promulgated and in supporting the argumentative
resources of all contending analysts, it may simply highlight the ambiguity rather than
resolve it.”48 Essentially, both Handel and Betts acknowledge that while divergent
thinking methods may indeed be useful and necessary within intelligence analysis, they
are not without their limitations. Specifically, the generation of numerous ideas alone does
not automatically result in better answers, highlighting the need for a combination of both
divergent and convergent approaches to intelligence analysis.
Final Thoughts
The jam experiment and scenario exercises discussed above tie directly into the
findings of this experiment, showing that although experimental group conceptual models
were significantly larger than control group conceptual models, the control group did
significantly better than the experimental group in forecasting the correct outcome of the
research question posed. As a result of the individual to group conceptual modeling
46 Michael I. Handel, “Intelligence and the Problem of Strategic Surprise,” The Journal of Strategic Studies (September 1984), 268. In Stephen Marrin, “Preventing Intelligence Failures by Learning from the Past,” International Journal of Intelligence and CounterIntelligence 17, no. 4 (2004), 665.
47 Ibid.
48 Richard K. Betts, “Analysis, War, and Decision: Why Intelligence Failures Are Inevitable,” World Politics 31 (1978): 76.
method employed in this study, the generation of a multitude of ideas seemed to do little
more than confuse and overwhelm experimental group participants. Faced with such a
large number of concepts, experimental group participants appeared to struggle to make
sense of the relevant relationships and to identify adequately the information most
important to answering the question.
At this juncture, the discussion of convergent and divergent thinking detailed
above becomes relevant. Experimental group participants, having been involved
in a structured individual to group divergent thinking exercise, were left with many more
options to consider than control group participants who simply set off to research the
question on their own. Traditionally, this would be considered an excellent outcome, as
the divergent thinking exercise appeared to work, leaving the experimental group with a
much wider array of concepts to consider. However, this experiment found that while
this is true, it is not enough. Divergent thinking on its own appears to be a handicap,
without some form of convergent thinking to counterbalance it.
Therefore, it is likely that experimental group participants, once faced with the
plethora of ideas generated by the group, would have greatly benefitted from a structured
convergent thinking exercise before creation and development of their conceptual
models. The goal of this exercise would be to critically evaluate the ideas proposed,
possibly eliminating those concepts that were clearly off base and prioritizing what was
left into a meaningful arrangement. One way this convergent aspect is often
accomplished in groups is based on the number of members within the group. For
example, a group with four team members will often find a way to organize and break
down information into four separate sections, thereby allowing them to assign one
section to each team member. While this is certainly not the ideal way to engage in convergent
thinking, it is likely better than not incorporating it at all. However, at this point the best
method for engaging in convergent thinking within the process of ECM has yet to be
determined.
Future Research
Future research should explore the importance of incorporating both divergent
and convergent thinking into the method for the construction of conceptual models.
While divergence was sufficiently accounted for in this study, due to time limitations
convergence was not. Therefore, it would be interesting to see the effect of taking the
individual to group conceptual modeling method employed in this study one step further.
As Surowiecki, McKeachie and Janis tell us, starting the process at the individual level
and then moving into group collaboration is very valuable. However, once all of the
possibilities the group can think of have been accounted for, the group must narrow
the scope of the conceptual model to only the most relevant
concepts and relationships and then organize them accordingly. Therefore, additional
research is needed to establish the best method for carrying out this task.
BIBLIOGRAPHY
Allison, Graham T. “Conceptual Models and the Cuban Missile Crisis.” In The Sociology of Organizations: Classic, Contemporary and Critical Readings, ed. Michael Jeremy Handel. Thousand Oaks: SAGE, 2003.
Basque, Josianne and Beatrice Pudelko. “Using a Concept Mapping Software as a Knowledge Construction Tool in a Graduate Online Course.” In Proceedings of ED-MEDIA 2003, World Conference on Educational Multimedia, Hypermedia & Telecommunications, Honolulu, June 23-28, 2003, edited by D. Lassner and C. McNaught. Norfolk: Association for the Advancement of Computing in Education, 2003.
Betts, Richard K. “Analysis, War, and Decision: Why Intelligence Failures Are Inevitable.” World Politics 31 (1978).
Budd, John W. “Mind Maps as Classroom Exercises.” Journal of Economic Education (Winter 2004).
Church, Forrest. Introduction to The Jefferson Bible: The Life and Morals of Jesus of Nazareth, by Thomas Jefferson, 1-31. Boston: Beacon Press, 1989.
Cohen, Jacob. Statistical Power Analysis for the Behavioral Sciences. Philadelphia: Lawrence Erlbaum Associates, 1988.
Cropley, Arthur. “In Praise of Convergent Thinking.” Creativity Research Journal 18, no. 3 (2006).
De Simone, Christina. “Applications of Concept Mapping.” Journal of College Teaching 55, no. 1 (2007).
Garson, G. David. Guide to Writing Empirical Papers, Theses, and Dissertations. New York: CRC Press, 2002.
Heuer, Richards J. Psychology of Intelligence Analysis. Center for the Study of Intelligence, 1999.
Iyengar, Sheena S. and Mark R. Lepper. “When Choice is Demotivating: Can One Desire Too Much of a Good Thing?” Journal of Personality and Social Psychology 79, no. 6 (2000).
Janis, Irving L. Groupthink: Psychological Studies of Policy Decisions and Fiascoes. Boston: Houghton Mifflin Company, 1982.
Jarvelin, Kalervo and T.D. Wilson. “On Conceptual Models for Information Seeking and Retrieval Research.” Information Research 9, no. 1 (October 2003), http://informationr.net/ir/9-1/paper163.html (accessed January 15, 2009).
Mangio, Charles A. and Bonnie J. Wilkinson. “Intelligence Analysis: Once Again.” Paper presented at the annual international meeting of the International Studies Association, San Francisco, California 26 March, 2008.
Marrin, Stephen. “Preventing Intelligence Failures by Learning from the Past.” International Journal of Intelligence and CounterIntelligence 17, no. 4 (2004).
McKeachie, Wilbert J. McKeachie’s Teaching Tips. Boston: Houghton Mifflin Company, 2002.
Miller, George A. “The Magical Number Seven, Plus or Minus Two: Some Limits on our Capacity for Processing Information.” Psychological Review 63 (1956).
Novak, Joseph and Alberto Canas. The Theory Underlying Concept Maps and How to Construct and Use Them, Technical Report IHMC Cmap Tools 2006-01 Rev 01-2008, Florida Institute for Human and Machine Cognition, 2008. http://cmap.ihmc.us/Publications/ResearchPapers/TheoryUnderlyingConceptMaps.pdf (accessed August 8, 2008).
Santrock, John W. Adolescence, 8th ed. New York: McGraw-Hill, 2001.
Surowiecki, James. The Wisdom of Crowds. New York: Anchor Books, 2004.
Tetlock, Philip E. Expert Political Judgment. Princeton: Princeton University Press, 2005.
United States Government. Intelligence Community Directive Number 203. June 21, 2007. http://www.fas.org/irp/dni/icd/icd-203.pdf (accessed January 26, 2009).
APPENDICES
Appendix 1: Experiment Sign-Up Form
Full Name: _______________________________________________________
Email Address: ____________________________________________________
Class Year: ________________________________________________________
Please sign up for at least 1 time slot in column A & 1 time slot in Column B
20 October 2008 Time Slot A 27 October 2008 Time Slot B
11:00 – 12:30 1:00 – 1:30
2:00 – 3:30 4:00 – 4:30
5:00 – 6:30 7:00 – 7:30
Please sign up for at least 1 time slot in column A & 1 time slot in Column B
20 October 2008 Time Slot A 27 October 2008 Time Slot B
1:00 – 1:30 11:00 – 12:30
4:00 – 4:30 2:00 – 3:30
7:00 – 7:30 5:00 – 6:30
Even though you are signing up for multiple spots, you will only be asked to come in once on the 20th and once on the 27th.
Contact Information:
Shannon Ferrucci
[email protected]
(315) 525-3967
Appendix 2: IRB Research Proposal
Date Submitted: 9/24/2008
Advisor’s Name (if applicable): Kristan Wheaton
Investigator(s): Shannon Ferrucci
Advisor’s Email: [email protected]
Investigator Address: 5117 Belle Village Drive East, Erie, PA 16509
Advisor’s Signature Of Approval: [X]
Investigator(s) E-mail: [email protected]
Investigator Telephone Number: 315-525-3967
Title Of Research Project: Conceptual Modeling And Intelligence Analysis
Date Of Initial Data Collection: October 20, 2008
Please describe the proposed research and its purpose, in narrative form:
The purpose of this study is to assess whether or not explicit conceptual modeling improves collection and the subsequent analysis. The more complicated a question is, and the more concepts that play a part in answering it, the harder it is to recall all of those concepts simply from memory. Therefore, putting these concepts and their relationships to each other down on paper can be extremely useful. Explicit conceptual modeling prior to collection should help to improve the efficiency of the collection process, as the model provides you with a basis for what types of information to look for. Also, since the conceptual model is not static, but a fluid diagram that evolves as you learn more about a certain topic, the model should help to highlight and minimize gaps in knowledge. These improvements in collection should then improve the subsequent analysis.
Additionally, this experiment is designed to test a specific method for developing conceptual models. The approach starts at the individual level and then moves to the group collaboration level. This method supports the age old idea that two minds are better than one. Furthermore, this method should limit the problems associated with group think by encouraging equal participation from all individuals, even those who may be less inclined to voice their opinions or participate in group settings.
Indicate the materials to be used:
Consent Form
Debriefing Form
Conceptual Modeling Training Information
Research Question
Free Online Conceptual Modeling Software
Analytic Confidence And Source Reliability Information
Format For Written Product
Writing Utensils
Post-Test Questionnaire
Procedure:
One week prior to the start of the experiment, I will make an appearance in various undergraduate and graduate intelligence studies classes in order to promote my experiment and have students sign up. The students will be asked to provide their availability on two separate dates. I will then email them a designated time slot for both dates. Date and time assignments, as well as group assignments, will be random.

On the first day of the experiment, the control group will show up for half an hour. They will be provided with a research question regarding the upcoming 2008 Zambian presidential elections and a semi-structured format for a written intelligence product. I will also give them explanations of both source reliability and analytic confidence. At the end of the half hour they will be sent home and given one week to research and analyze the question posed to them.

The experimental group, on the other hand, will be asked to come in for an hour and a half on the first day of the experiment. They will be provided with the same research question as the control group, will be given the same format for a written intelligence product and will also receive the same information regarding source reliability and analytic confidence. However, this group will undergo a short training session regarding what conceptual modeling is and how to create one using a piece of free conceptual modeling software, such as Mindomo. After the training session the participants will each be asked to make a list of concepts, off the top of their heads, that they feel are relevant to the research question provided. Next, we will go through and make a master list, consolidating all of the participants’ individual lists and highlighting concepts that were commonly found, significant differences in opinion and uncommon but highly useful concepts.
Finally, each individual will then be asked to use this master list to create a conceptual model using software that shows the perceived relationships between concepts. Participants will then discuss the models they have created with the individual sitting next to them in order to share their ideas and see how another person visualized the same information. They will then print out a copy for themselves and a copy for me.
On the second day of the experiment, a week from the first day, both the experimental and control groups will come back in. The experimental group will come in for half an hour. They will hand in their written analysis and an updated conceptual model that they have modified to reflect what they learned through their research. After doing this they will answer a short post-experiment questionnaire and will then be debriefed. The control group on the other hand will come in for an hour and a half on this day. They will hand in their written analysis and will then simply be told to visualize the concepts that were important in answering the research question using bubbles and lines. After completion they will hand in their visualization, complete a post-experiment questionnaire and then be debriefed.
Participants who successfully complete all experiment responsibilities will receive extra credit from intelligence professors and pizza and soda will be offered to all those who participate in the study. Three second-year intelligence studies graduate students, who were not experiment participants, will then evaluate the written analyses using the same criteria that students are graded against in the Intelligence Communications class that these graduate students have already successfully completed.
1. Do you have external funding for this research (money coming from outside the College)? Yes[ ] No[X]
Funding Source (if applicable): N/A
2. Will the participants in your study come from a population requiring special protection; in other words, are your subjects someone other than Mercyhurst College students (i.e., children 17-years-old or younger, elderly, criminals, welfare recipients, persons with disabilities, NCAA athletes)? Yes[ ] No[X]
If your participants include a population requiring special protection, describe how you will obtain consent from their legal guardians and/or from them directly to insure their full and free consent to participate.
N/A
Indicate the approximate number of participants, the source of the participant pool, and recruitment procedures for your research:
I plan to have approximately 60-80 participants. I intend to recruit undergraduate and graduate students in the intelligence studies department through sign-up sheets which I will pass around to students when I visit their classes to promote the experiment.
Will participants receive any payment or compensation for their participation in your research (this includes money, gifts, extra credit, etc.)? Yes[X] No[ ]
If yes, please explain: Students will obtain extra credit from the intelligence professors willing to grant it for participation in an experiment and all participants will be offered pizza and refreshments at the end of the experiment.
3. Will the participants in your study be at any physical or psychological risk (risk is defined as any procedure that is invasive to the body, such as injections or drawing blood; any procedure that may cause undue fatigue; any procedure that may be of a sensitive nature, such as asking questions about sexual behaviors or practices) such that participants could be emotionally or mentally upset? Yes[ ] No[X]
Describe any harmful effects and/or risks to the participants' health, safety, and emotional or social well being, incurred as a result of participating in this research, and how you will insure that these risks will be mitigated:
None.
4. Will the participants in your study be deceived in any way while participating in this research? Yes[ ] No[X]
If your research makes use of any deception of the respondents, state what other
alternative (e.g., non-deceptive) procedures were considered and why they weren't chosen:
N/A
5. Will you have a written informed consent form for participants to sign, and will you have appropriate debriefing arrangements in place? Yes[X] No[ ]
Describe how participants will be clearly and completely informed of the true nature and purpose of the research, whether deception is involved or not (submit informed consent form and debriefing statement):
Prior to the start of the experiments, participants will be provided with a general overview of what will occur during the session as well as the consent form, which will also describe what is expected of them. Following the experiment, participants will be asked to fill out an administrative questionnaire and will then be provided with a debriefing statement that will explain how the results from the session will be used (please see forms at the end of this proposal).
Please include the following statement at the bottom of your informed consent form: “Research at Mercyhurst College which involves human participants is overseen by the Institutional Review Board. Questions or problems regarding your rights as a participant should be addressed to Mr. Tim Harvey, Institutional Review Board Chair; Mercyhurst College; 501 East 38th Street; Erie, Pennsylvania 16546-0001; Telephone (814) 824-3372.”
6. Describe the nature of the data you will collect and your procedures for insuring that confidentiality is maintained, both in the record keeping and presentation of this data:
Names are not required for my research and thus no names will be used in the recording of the results or the presentation of my data. Names will only be used to notify professors of participation in order for them to correctly assign extra credit.
7. Identify the potential benefits of this research on research participants and humankind in general.
Potential benefits include:
For participants:
An opportunity to practice the intelligence analysis skills they have learned in the classroom in an experiment aimed at testing the value of explicit conceptual modeling as it applies to the quality of the analysis produced. Students are often asked to complete short written intelligence assignments with quick turnaround times in Intelligence Studies courses. This experiment hopes to validate a particular method for the creation of conceptual models, which, if used by intelligence students, should increase efficiency in collection and accuracy in analysis.
For the Intelligence Community:
Currently, collection takes up quite a large amount of an analyst’s time due to information overload. This experiment hopes to demonstrate that pre-collection conceptual modeling will not only make the collection process more efficient, it will also help to minimize gaps in knowledge, as those gaps will be recognized earlier on. This process will then in turn help to improve the analysis stemming from the collection.
Appendix 3: Control Group Consent Form
The purpose of this research is to test the value of variations in analytic approaches as they apply to the quality of the analysis produced.
Your participation involves the development of a short analytic product, the completion of a data visualization exercise, and the filling out of a post-experiment questionnaire. This process will require your onsite attendance today and one week from today during the pre-determined time slot designated to you. In total, time spent onsite should not exceed two hours; however, some participation on your own time is required throughout the week. Your name WILL NOT appear in any information disseminated by the researcher. Your name will only be used to notify professors of your participation in order for them to assign extra credit.
There are no foreseeable risks or discomforts associated with your participation in this study. Participation is voluntary and you have the right to opt out of the study at any time for any reason without penalty.
I, ____________________________, acknowledge that my involvement in this research is voluntary and agree to submit my data for the purpose of this research.
_________________________________ __________________
Signature Date
_________________________________ __________________
Printed Name Class
Telephone Number: ____________________________________
Researcher’s Signature: ___________________________________________________
If you have any further questions about this research, you can contact me at [email protected]
Research at Mercyhurst College which involves human participants is overseen by the Institutional Review Board. Questions or problems regarding your rights as a participant should be addressed to Tim Harvey; Institutional Review Board Chair; Mercyhurst College; 501 East 38th Street; Erie, Pennsylvania 16546-0001; Telephone (814) 824-3372. [email protected]
Appendix 4: Experimental Group Consent Form
The purpose of this research is to test the value of explicit conceptual modeling as it applies to the quality of the analysis produced. Furthermore, the following experiment will test the effectiveness of a structured individual-to-group conceptual modeling method.
Your participation involves an instruction period and training exercise, the development of a short analytic product with an accompanying model, and the filling out of a post-experiment questionnaire. This process will require your onsite attendance today and one week from today during the pre-determined time slots designated to you. In total, time spent onsite should not exceed two hours; however, some participation on your own time is required throughout the week. Your name WILL NOT appear in any information disseminated by the researcher. Your name will only be used to notify professors of your participation in order for them to assign extra credit.
There are no foreseeable risks or discomforts associated with your participation in this study. Participation is voluntary and you have the right to opt out of the study at any time for any reason without penalty.
I, ____________________________, acknowledge that my involvement in this research is voluntary and agree to submit my data for the purpose of this research.
_________________________________ __________________
Signature Date
_________________________________ __________________
Printed Name Class
Telephone Number: ____________________________________
Researcher’s Signature: ___________________________________________________
If you have any further questions about conceptual modeling or this research, you can contact me at [email protected]
Research at Mercyhurst College which involves human participants is overseen by the Institutional Review Board. Questions or problems regarding your rights as a participant should be addressed to Tim Harvey; Institutional Review Board Chair; Mercyhurst College; 501 East 38th Street; Erie, Pennsylvania 16546-0001; Telephone (814) 824-3372. [email protected]
Appendix 5: Research Question
Due to the 19 August 2008 death of Zambia’s President Mwanawasa, early presidential elections will take place on 30 October 2008, in accordance with the Zambian constitution, which requires new elections to be held within 90 days of a president's untimely departure from office. Who will win this upcoming Zambian presidential election (Rupiah Banda, Michael Sata or Hakainde Hichilema) and why?
Appendix 6: Important Supporting Information
Source Reliability:
Source Reliability reflects the accuracy and reliability of a particular source over time. Sources with high reliability have produced accurate, consistently reliable information in the past. Sources with low reliability lack the accuracy and proven track record of more reliable sources.

o In this experiment, source reliability will be measured on a low to high scale conveying the reliability of the sources used for that piece of intelligence/report.
o For more information regarding internet source reliability, please refer to: http://www.library.jhu.edu/researchhelp/general/evaluating/
Analytic Confidence:
Analytic Confidence reflects the level of confidence an analyst has in his or her estimates and analyses. It is not the same as using words of estimative probability, which indicate likelihood. It is possible for an analyst to suggest an event is virtually certain based on the available evidence, yet have low confidence in that forecast due to a variety of factors, or vice versa.

o In this experiment, analytic confidence will be measured on a low to high scale.
o For more information regarding factors contributing to the assessment of analytic confidence, see the Peterson Table of Analytic Confidence provided below.
Peterson Table Of Analytic Confidence Assessment
Use Of Structured Method(s) In Analysis
Overall Source Reliability
Source Corroboration/Agreement: Level Of Conflict Amongst Sources
Level Of Expertise On Subject/Topic & Experience
Amount Of Collaboration
Task Complexity
Time Pressure: Time Given To Make Analysis
Appendix 7: Experiment Answer Sheet
NAME:
FORECAST:
It is (likely, highly likely, almost certain) that (Rupiah Banda, Michael Sata, Hakainde Hichilema) will win the 30 October 2008 Zambian presidential elections.
BULLETED DISCUSSION:
SOURCE RELIABILITY (CIRCLE ONE): LOW   MEDIUM   HIGH
ANALYTIC CONFIDENCE (CIRCLE ONE): LOW   MEDIUM   HIGH
NAME OF PROFESSOR(S) GIVING EXTRA CREDIT:
Appendix 8: Control Group Expectation Sheet
EXPECTATIONS FOR 27 OCTOBER 2008
GROUP B
• Completed answer sheet ready to be turned in
  o Inclusive of: Name, Forecast, Bulleted Discussion, Source Reliability, Analytic Confidence, Name of Professor(s) Giving Extra Credit
• Complete short data visualization exercise onsite, based on past week of collection and analysis
• Order of Events for 27 October 2008 (45 minute maximum)
  o Complete short data visualization exercise & hand in
  o Hand in answer sheet
  o Answer short post-experiment questionnaire
  o Pass out debriefing sheets
Appendix 9: Experimental Group Expectation Sheet
EXPECTATIONS FOR 27 OCTOBER 2008
GROUP A
• Completed answer sheet ready to be turned in
  o Inclusive of: Name, Forecast, Bulleted Discussion, Source Reliability, Analytic Confidence, Name of Professor(s) Giving Extra Credit
• Updated Bubbl.us conceptual model
  o Over the course of the week, please update your models as you learn more about the topic (ex. fill in knowledge gaps in the model, highlight areas that turned out to be more important than originally thought, place less focus on those areas that turned out not to be as important as originally thought). Allow your model to evolve alongside your analysis.
  o I will ask you to sign into the computer and share the final version of your conceptual model with me once you arrive on the 27th
• Order of Events for 27 October 2008 (30 minute maximum)
  o Hand in answer sheet
  o Share final conceptual model
  o Answer short post-experiment questionnaire
  o Pass out debriefing sheets
Appendix 10: Pre-Experiment Questionnaire
Thank you for agreeing to participate in this study! Please take a few moments to answer the following questions. Your feedback is greatly appreciated.
1. Do you feel as though you will be able to dedicate a sufficient amount of time to working on this experiment over the next week?
Yes Maybe No
2. What type of learner do you primarily consider yourself to be?
Auditory Visual Kinesthetic & Tactile Unsure
3. Please rate how interested you are in this study, with 1 being not interested and 5 being extremely interested.
1 2 3 4 5
4. Please rate how useful you feel structured approaches to the analytic process are, with 1 being not useful and 5 being extremely useful.
1 2 3 4 5
5. What do you think the purpose of this experiment is?
6. What are your reasons for participating in this experiment?
Appendix 11: Contact Information
CONTACT INFORMATION
Please feel free to get a hold of me during the week if you have any questions or problems! Thank you again for your participation.
Name: Shannon Ferrucci
Email: [email protected]
Telephone: (315) 525-3967
Location: CIRAT Lab
Appendix 12: Conceptual Modeling Lecture
• Everybody builds implicit models on a daily basis
  o We build models of the world around us
     For example: when we drive to the grocery store we model the route in our minds, and when we get ready each morning we model our routine
  o We also build models when we are faced with questions
     We try to come up with preliminary answers and conclusions
     However, we often recognize gaps in our knowledge signaling that we need more information
  o Our models are unique to each of us
     They draw from our experiences, interests, opinions, etc.
     For example: If I were to ask all of you, a room of intelligence students, what first comes to mind when faced with the question of what intelligence is, your answers would likely be very different than if I went out into the community and asked the same question of the first ten people I saw.
• Models can become extremely complex
  o Intelligence requirements (questions or topics posed to the analyst by a decision maker) often entail understanding complex relationships between people, states, organizations, industries, etc.
  o Therefore, it is very rare that an analyst will develop a complete model of the requirements right off the bat. Generally, an analyst is able to fill in pieces of the model with things he or she already knows, but must fill in the rest with topics he or she recognizes the need to understand more about.
• Use of conceptual models
  o Conceptual models highlight key concepts and their relationships to each other
  o These models appear to be a useful way for intelligence professionals to model knowledge and come to terms with compound requirements
  o Conceptual modeling within the field of intelligence must include both what an individual already knows about a topic and what that individual thinks he or she needs to know about it in order to sufficiently answer the requirements posed
• Don’t be afraid to explore
  o The first construction of the conceptual model is not factual but exploratory
     We may identify areas that we think are important to answering the requirement, but until we have collected that information we have no way of knowing whether or not they truly are
     When this happens it should not be viewed as a setback or waste of time
     Exploring various concepts helps the analyst recognize what is critical to answering the question and what is not
     Even when an explored concept does not directly apply to a requirement, it often provides context, background and understanding of the requirement
• Importance of relationships
  o Concepts alone are not enough; relationships amongst concepts must be included in the model as well
     Imagine that you are modeling a drug trafficking organization. Learning about the suppliers and the runners separately is a good start; however, you must then learn about the relationship between the two.
• Need for explicit models
  o When dealing with models as complex as those usually necessary within the field of intelligence, the need to make the models explicit becomes obvious
  o Many psychological studies suggest that there are upper limits on our working memory
     It is often cited that an individual can hold only seven things (plus or minus two) in memory at any given time
     While there are exceptions to this rule, it is clear that most intelligence models will eventually become too complex to be stored solely in memory and must instead be made explicit
• Importance of the model to analysts
  o Share and compare models with fellow analysts and professionals
  o Assess level of confidence in the analysis produced
  o From a management standpoint, can help in tasking a team of analysts and divvying up responsibilities
  o Aids efficiency in the collection process and helps to identify gaps in knowledge
  o Useful for after-the-fact review
  o Provides a good starting point for future related questions
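The lecture's core ideas (explicit concepts, directed relationships between them, and flagged knowledge gaps) can be sketched as a simple data structure. The following is an illustrative sketch only, not part of the experiment materials; the node names are hypothetical examples:

```python
# Illustrative sketch: an explicit conceptual model as a directed graph.
# "known" marks concepts the analyst already understands, so nodes flagged
# False are the knowledge gaps that become collection targets.

model = {
    "Zambian election": {"known": True, "links": ["candidates", "voter turnout"]},
    "candidates": {"known": True, "links": ["candidate support"]},
    "candidate support": {"known": False, "links": []},  # gap: needs collection
    "voter turnout": {"known": False, "links": []},      # gap: needs collection
}

def knowledge_gaps(model):
    """Return concepts flagged as not yet understood, in sorted order."""
    return sorted(name for name, node in model.items() if not node["known"])

print(knowledge_gaps(model))
```

Making the model explicit in this way is what allows the gap list to be computed mechanically rather than held in working memory.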
Appendix 13: Bubbl.us Instruction Sheet For Experimental Group
Experiment Day 1 – Group A
Bubbl.us Instructional Sheet
• Please sign into the computers and go to Bubbl.us
• Click “start brainstorming”
• Click “save” on the right hand side of the screen
  o Fill out the create-account steps and hit submit
  o This will allow you to save and share your project with me later on
• To start, you will see that there is a single bubble in the center of the screen
  o If you click on the text saying “start here” you can replace it with whatever words or concept you deem appropriate
  o In this case, since it is the first bubble, it is important to start by entering the specific requirements you need to answer based on the research question provided to you
• Now, if you simply place your cursor over the center of the bubble you will see a choice of 6 icons. Let’s start at the top left hand corner.
  o If you click on the cross with arrows you can move the bubble anywhere you like on the screen
  o Moving to the top right, if you click on the X your bubble will disappear (to get it back simply click the undo button at the top left of your screen)
  o Clicking on the middle icon on the right hand side of the bubble allows you to create a new sibling bubble (i.e., a bubble that does not spring from the first bubble, but is entirely separate)
  o The blue circles icon allows you to show directional relationships through the use of arrowed lines. By clicking on the icon and dragging your cursor to the sibling bubble you just made in the previous step you can see an example of this
  o Clicking on the middle bottom icon of one of the bubbles you have created allows you to make a child balloon (i.e., a bubble that does spring from a previous bubble; generally these bubbles have some sort of direct relationship to each other, with the concept in the child balloon being a sub-concept of the original parent balloon)
  o Lastly, clicking on the bottom left hand icon allows you to change the color of the balloon
• Just a few more tips:
  o If you click the center button at the upper left, your entire conceptual model will be centered on the page
  o If you would like to print your conceptual model, you can click the “set print area” button at the upper left. This will help you to ensure your entire conceptual model is within the printable area of the page
  o To zoom in and out, you can scroll up and down on your mouse or hit the plus and minus buttons at the upper left
• Now take about 5 minutes to familiarize yourself with the software on your own
  o To do this, begin to craft a practice conceptual model of important things to consider when buying a new car (ex. gas mileage)
  o Practice using the different icons that we just went over and try to incorporate each function into your conceptual model at least once
  o I will walk around to offer suggestions and take questions
Appendix 14: Bubbl.us Instruction Sheet For Control Group
Experiment Day 2 – Group B
Bubbl.us Instructional Sheet
• Please sign into the computers and go to Bubbl.us
• Click “start brainstorming”
• Click “save” on the right hand side of the screen
  o Fill out the create-account steps and hit submit
  o This will allow you to save and share your project with me later on
• To start, you will see that there is a single bubble in the center of the screen
  o If you click on the text saying “start here” you can replace it with whatever words or concept you deem appropriate
• Now, if you simply place your cursor over the center of the bubble you will see a choice of 6 icons. Let’s start at the top left hand corner.
  o If you click on the cross with arrows you can move the bubble anywhere you like on the screen
  o Moving to the top right, if you click on the X your bubble will disappear (to get it back simply click the undo button at the top left of your screen)
  o Clicking on the middle icon on the right hand side of the bubble allows you to create a new sibling bubble (i.e., a bubble that does not spring from the first bubble, but is entirely separate)
  o The blue circles icon allows you to show directional relationships through the use of arrowed lines. By clicking on the icon and dragging your cursor to the sibling bubble you just made in the previous step you can see an example of this
  o Clicking on the middle bottom icon of one of the bubbles you have created allows you to make a child balloon (i.e., a bubble that does spring from a previous bubble; generally these bubbles have some sort of direct relationship to each other, with the child balloon being subordinate to the original parent balloon)
  o Lastly, clicking on the bottom left hand icon allows you to change the color of the balloon
• Just a few more tips:
  o If you click the center button at the upper left, your entire conceptual model will be centered on the page
  o To zoom in and out, you can scroll up and down on your mouse or hit the plus and minus buttons at the upper left
Appendix 15: Structured Conceptual Modeling Exercise
Structured Conceptual Modeling Exercise
• Give students 2 minutes to identify requirements and create a list of concepts they feel are relevant to those requirements
  o What do they think they need to learn more about in order to answer the question? Ex. Governmental Type And Structure
  o What are the big, moving pieces? Ex. Level Of Candidate Support
• Come together as a group and consolidate individual lists into a group list on the board
  o Go around the room with each student reading off his or her list
  o Emphasize commonalities with plus signs
  o Highlight legitimate differences in opinion as food for thought
  o Take note of “AHA” moments
     Very important concepts that only a select few thought of, but all recognize as essential to the question
• Go back onto Bubbl.us and create your own conceptual model combining your individual list and thoughts with the collaborative group list
  o Remember to start with requirements and build from there
  o Highlight relationships between concepts and the directional flow of those relationships where applicable
• Briefly look at the way someone sitting next to you has set up their conceptual model
  o Take away ideas for your own
  o Offer suggestions or alternatives
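The consolidation step above (marking commonalities with plus signs and flagging rare "AHA" concepts) amounts to a simple tally over the participants' individual lists. A minimal sketch, with hypothetical concept lists standing in for real student input:

```python
from collections import Counter

# Hypothetical individual concept lists from three participants.
lists = [
    {"candidate support", "economy", "tribal affiliations"},
    {"candidate support", "economy", "media coverage"},
    {"candidate support", "succession rules"},
]

# Count how many participants named each concept.
counts = Counter(concept for concepts in lists for concept in concepts)

# Concepts named by everyone correspond to the "plus sign" commonalities;
# concepts named only once are candidates for "AHA" review by the group.
common = sorted(c for c, n in counts.items() if n == len(lists))
rare = sorted(c for c, n in counts.items() if n == 1)

print("common:", common)
print("rare:", rare)
```

Whether a rare concept is truly an "AHA" moment is still a group judgment; the tally only surfaces the candidates.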
Appendix 16: Control Group Post-Experiment Questionnaire
Follow-Up Questionnaire B
Thanks for your participation! Please take a few moments to answer the following questions. Your feedback is greatly appreciated.
1. Please rate your understanding of conceptual modeling prior to this study, with 1 being extremely low and 5 being extremely high.
1 2 3 4 5
2. Please rate your understanding of conceptual modeling following this study, with 1 being extremely low and 5 being extremely high.
1 2 3 4 5
3. Please rate how often explicit conceptual modeling has been a part of your personal analytic process prior to this experiment, with 1 being never and 5 being every time you produce an intelligence estimate.
1 2 3 4 5
4. Please rate how often you plan to incorporate explicit conceptual modeling into your personal analytic process in the future, with 1 being never and 5 being every time you produce an intelligence estimate.
1 2 3 4 5
5. Were you able to dedicate a sufficient amount of time to working on this experiment over the past week?
Yes Maybe No
6. Please rate your interest in the study after having completed the experiment, with 1 being not interested and 5 being extremely interested.
1 2 3 4 5
7. Based on your experience in this experiment, how useful do you feel structured approaches to the analytic process are, with 1 being not useful and 5 being extremely useful?
1 2 3 4 5
8. What do you think the purpose of this experiment was?
9. Please provide any additional comments you may have regarding conceptual modeling in general or any particular part of this experiment.
Appendix 17: Experimental Group Post-Experiment Questionnaire
Follow-Up Questionnaire A
Thanks for your participation! Please take a few moments to answer the following questions. Your feedback is greatly appreciated.
1. Please rate your understanding of conceptual modeling prior to this study, with 1 being extremely low and 5 being extremely high.
1 2 3 4 5
2. Please rate your understanding of conceptual modeling following this study, with 1 being extremely low and 5 being extremely high.
1 2 3 4 5
3. Please rate how useful you found the conceptual modeling training provided at the beginning of this experiment to be, with 1 being not at all helpful and 5 being extremely helpful.
1 2 3 4 5
4. Please rate how often explicit conceptual modeling has been a part of your personal analytic process prior to this experiment, with 1 being never and 5 being every time you produce an intelligence estimate.
1 2 3 4 5
5. Please rate how often you plan to incorporate explicit conceptual modeling into your personal analytic process in the future, with 1 being never and 5 being every time you produce an intelligence estimate.
1 2 3 4 5
6. Please rate whether or not you found that explicit conceptual modeling in this experiment aided you in developing a more thorough and nuanced intelligence analysis.
Definitely Somewhat Not At All
7. Please rate how effective you think the conceptual modeling method used in this experiment, inclusive of both individual work and group collaboration, was.
1 2 3 4 5
8. Please rate how useful you found the use of the technology aid Bubbl.us to be in creating and updating your conceptual models, with 1 being not useful and 5 being extremely useful.
1 2 3 4 5
9. Were you able to dedicate a sufficient amount of time to working on this experiment over the past week?
Yes Maybe No
10. Please rate your interest in the study after having completed the experiment, with 1 being not interested and 5 being extremely interested.
1 2 3 4 5
11. Based on your experience in this experiment, how useful do you feel structured approaches to the analytic process are, with 1 being not useful and 5 being extremely useful?
1 2 3 4 5
12. What do you think the purpose of this experiment was?
13. Please provide any additional comments you may have regarding conceptual modeling in general or any particular part of this experiment.
Appendix 18: Control Group Debriefing Sheet
Participation Debriefing B
Thank you for participating in this research. I appreciate your contribution and willingness to support the student research process.
This experiment was designed to test the specific part of the analytic process termed conceptual modeling. Currently there has been little research done on the topic of conceptual modeling within the field of intelligence analysis, and this study hopes to take the first of many steps in establishing the importance of explicit conceptual modeling within the analytic process.
Within the Intelligence Community collection takes up a significant amount of an analyst’s time due to information overload stemming from both open and classified sources. This experiment hopes to demonstrate that pre-collection conceptual modeling will not only make the collection process more efficient, it will also help to minimize gaps in knowledge as those gaps will be recognized earlier on. This process should then in turn help to improve the subsequent analysis.
If you have any further questions about conceptual modeling or this research you can contact me at [email protected].
Appendix 19: Experimental Group Debriefing Sheet
Participation Debriefing A
Thank you for participating in this research. I appreciate your contribution and willingness to support the student research process.
The purpose of this study was to test the value of explicit conceptual modeling as it applies to the quality of the analysis produced. Furthermore, this experiment tested the effectiveness of a structured individual-to-group conceptual modeling method. Currently, there has been little research done on the topic of conceptual modeling within the field of intelligence analysis, and this study hopes to take the first of many steps in establishing the importance of explicit conceptual modeling within the analytic process.
Within the Intelligence Community collection takes up a significant amount of an analyst’s time due to information overload stemming from both open and classified sources. This experiment hopes to demonstrate that pre-collection conceptual modeling will not only make the collection process more efficient, it will also help to minimize gaps in knowledge as those gaps will be recognized earlier on. This process should then in turn help to improve the subsequent analysis.
If you have any further questions about conceptual modeling or this research you can contact me at [email protected].
Appendix 20: Significance Testing Results
***The following results are based on a 0.05 level of significance. However, due to the fact that this research is exploratory in nature, a 0.10 level of significance was deemed most appropriate and is therefore reflected in the text of this thesis.***
Results:
Is there a difference between control and experimental for source reliability?
Null: there is no difference between control and experimental for source reliability.
Alternative: there is a difference between control and experimental for source reliability.
Will be using t-test for independent samples.
Testing normality assumption as sample sizes are less than 30.
Box plot shows outliers for control group. Even with the presence of outliers, normality is satisfied (See below). Also these values are important for the analysis. Thus the decision is not to remove the outliers.
[Box plot: Response (1.00–3.00) by Group (Control, Experimental); outlier cases labeled 1, 22, 23, 24, and 25]
Most points are close to the line thus the assumption of normality is satisfied for the group Control.
Most points are close to the line thus the assumption of normality is satisfied for the group Experimental.
[Normal Q-Q plots of Response for group = Control and group = Experimental.]
Group Statistics (Response)
Group         N   Mean    Std. Deviation  Std. Error Mean
Control       25  2.1200  .43970          .08794
Experimental  22  2.2273  .52841          .11266
Here we need to check whether the assumption of equal variances is satisfied.
According to Levene’s test, (P-value = 0.142) > (α = 0.05), thus the assumption of equal variances is satisfied.
According to the Independent Samples Test table, t-test value = -0.76, P-value = 0.451.
Since (P-value = 0.451) > (α = 0.05), the null hypothesis is not rejected.
Conclusion: At 5% level, there is no difference between control and experimental for source reliability.
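As an illustrative aside (not part of the original SPSS analysis), the same t-test can be reproduced in Python with SciPy directly from the Group Statistics table, since the raw scores are not listed in this appendix:

```python
from scipy.stats import ttest_ind_from_stats

# Reproduce the source-reliability t-test from the summary statistics above.
t, p = ttest_ind_from_stats(
    mean1=2.1200, std1=0.43970, nobs1=25,  # Control
    mean2=2.2273, std2=0.52841, nobs2=22,  # Experimental
    equal_var=True,  # pooled variances, per Levene's test
)
print(round(t, 3), round(p, 3))  # matches the reported t = -0.760, p = 0.451
```

The same call with different summary statistics reproduces each of the pooled-variance t-tests in this appendix.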
Is there a difference between control and experimental for analytic confidence?
Null: there is no difference between control and experimental for analytic confidence.
Alternative: there is a difference between control and experimental for analytic confidence.
Will be using a t-test for independent samples.
Testing the normality assumption as sample sizes are less than 30.
Independent Samples Test (Response)
                             Levene's Test          t-test for Equality of Means
                             F      Sig.    t      df      Sig. (2-tailed)  Mean Difference
Equal variances assumed      2.234  .142    -.760  45      .451             -.10727
Equal variances not assumed                 -.751  41.052  .457             -.10727
Box plot shows outliers for both groups. Even with the presence of outliers, normality is satisfied (see below). Also, these values are important for the analysis, thus the decision is not to remove the outliers.
Most points are close to the line, thus the assumption of normality is satisfied for the group Control.
Most points are close to the line, thus the assumption of normality is satisfied for the group Experimental.
[Box plot: Response for Analytical Confidence by Group (Control, Experimental); outliers flagged in both groups.]
[Normal Q-Q plots of Response for Analytical Confidence for group = Control and group = Experimental.]
Here we need to check whether the assumption of equal variances is satisfied.
According to Levene’s test, (P-value = 0.137) > (α = 0.05), thus the assumption of equal variances is satisfied.
According to the Independent Samples Test table, t-test value = 0.327, P-value = 0.745.
Since (P-value = 0.745) > (α = 0.05), the null hypothesis is not rejected.
Conclusion: At 5% level, there is no difference between control and experimental for analytic confidence.
Is there a difference between control and experimental for forecast results?
Null: there is no difference between control and experimental for forecast results.
Alternative: there is a difference between control and experimental for forecast results.
Will be using a t-test for independent samples.
Testing the normality assumption as sample sizes are less than 30.
Group Statistics (Response for Analytical Confidence)
Group         N   Mean    Std. Deviation  Std. Error Mean
Control       25  1.9600  .45461          .09092
Experimental  22  1.9091  .61016          .13009
Independent Samples Test (Response for Analytical Confidence)
                             Levene's Test          t-test for Equality of Means
                             F      Sig.    t     df      Sig. (2-tailed)  Mean Difference
Equal variances assumed      2.287  .137    .327  45      .745             .05091
Equal variances not assumed                 .321  38.491  .750             .05091
Box plot shows no outliers for either group.
Most points are not close to the line, thus the assumption of normality is not satisfied for the group Control.
Most points are not close to the line, thus the assumption of normality is not satisfied for the group Experimental.
[Box plot: Response for Forecast Results by Group (Control, Experimental); no outliers.]
[Normal Q-Q plots of Response for Forecast Results for group = Control and group = Experimental.]
Cannot use the independent-samples t-test as normality is not satisfied. Need to use the Wilcoxon rank-sum test, a non-parametric test.
Wilcoxon rank-sum test value = -1.844, P-value = 0.065.
Since (P-value = 0.065) > (α = 0.05), the null hypothesis is not rejected.
Conclusion: At 5% level, there is no difference between control and experimental for forecast results.
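As an illustrative aside (not part of the original SPSS analysis), the rank-sum test can be reproduced in Python with SciPy. The raw forecast scores are not listed in this appendix; the per-group counts of 1.00s and 2.00s below are inferred from the Ranks table and should be treated as an assumption:

```python
from scipy.stats import mannwhitneyu

# Reconstructed scores (assumption: inferred from the Ranks table; every
# response is 1.00 or 2.00). Control: 8 ones, 17 twos; Experimental: 13 ones, 9 twos.
control = [1.0] * 8 + [2.0] * 17
experimental = [1.0] * 13 + [2.0] * 9

# Asymptotic test without continuity correction, matching SPSS's
# "Asymp. Sig. (2-tailed)" figure of .065.
u, p = mannwhitneyu(control, experimental, alternative="two-sided",
                    use_continuity=False, method="asymptotic")
print(u, round(p, 3))
```

Note that SciPy reports the U statistic for the first sample (349.5 here), while SPSS reports the smaller of the two U values (200.5); both describe the same test.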
Is there a difference between control and experimental for question, “Please rate how interested you are in this study, with 1 being not interested and 5 being extremely interested.”?
Null: There is no difference between control and experimental for question, “Please rate how interested you are in this study, with 1 being not interested and 5 being extremely interested.”
Descriptive Statistics
                               N   Mean    Std. Deviation  Minimum  Maximum
Response for Forecast Results  47  1.5532  .50254          1.00     2.00
Group                          47  1.4681  .50437          1.00     2.00
Ranks (Response for Forecast Results)
Group         N   Mean Rank  Sum of Ranks
Control       25  26.98      674.50
Experimental  22  20.61      453.50
Total         47
Test Statistics(a) (Response for Forecast Results)
Mann-Whitney U          200.500
Wilcoxon W              453.500
Z                       -1.844
Asymp. Sig. (2-tailed)  .065
a. Grouping Variable: Group
Alternative: There is a difference between control and experimental for question, “Please rate how interested you are in this study, with 1 being not interested and 5 being extremely interested.”
Will be using a t-test for independent samples.
Testing the normality assumption as sample sizes are less than 30.
Box plot shows no outliers for either group.
Most points are close to the line, thus the assumption of normality is satisfied for the group Control.
[Box plot: Response for Q. 1 in Pre-Experiment by Group for Questionnaire Results (Control, Experimental).]
[Normal Q-Q plot of Response for Q. 1 in Pre-Experiment for groupq = Control.]
Here we need to check whether the assumption of equal variances is satisfied.
According to Levene’s test, (P-value = 0.476) > (α = 0.05), thus the assumption of equal variances is satisfied.
According to the Independent Samples Test table, t-test value = 0.189, P-value = 0.851.
Since (P-value = 0.851) > (α = 0.05), the null hypothesis is not rejected.
Most points are close to the line, thus the assumption of normality is satisfied for the group Experimental.
[Normal Q-Q plot of Response for Q. 1 in Pre-Experiment for groupq = Experimental.]
Group Statistics (Response for Q. 1 in Pre-Experiment)
Group for Questionnaire Results  N   Mean    Std. Deviation  Std. Error Mean
Control                          28  3.5357  .63725          .12043
Experimental                     24  3.5000  .72232          .14744
Independent Samples Test (Response for Q. 1 in Pre-Experiment)
                             Levene's Test         t-test for Equality of Means
                             F     Sig.    t     df      Sig. (2-tailed)  Mean Difference
Equal variances assumed      .516  .476    .189  50      .851             .03571
Equal variances not assumed                .188  46.352  .852             .03571
Conclusion: At 5% level, there is no difference between control and experimental for question, “Please rate how interested you are in this study, with 1 being not interested and 5 being extremely interested.”
Is there a difference between control and experimental for question, “Please rate your interest in the study after having completed the experiment, with 1 being not interested and 5 being extremely interested.”?
Null: There is no difference between control and experimental for question, “Please rate your interest in the study after having completed the experiment, with 1 being not interested and 5 being extremely interested.”
Alternative: There is a difference between control and experimental for question, “Please rate your interest in the study after having completed the experiment, with 1 being not interested and 5 being extremely interested.”
Will be using a t-test for independent samples.
Testing the normality assumption as sample sizes are less than 30.
Box plot shows an outlier for the Experimental group. Even with the presence of the outlier, normality is satisfied (see below). Also, this value is important for the analysis, thus the decision is not to remove the outlier.
[Box plot: Response for Q. 1 in Post-Experiment by Group for Questionnaire Results for Post (Control, Experimental); one outlier flagged in the Experimental group.]
Here we need to check whether the assumption of equal variances is satisfied.
According to Levene’s test, (P-value = 0.548) > (α = 0.05), thus the assumption of equal variances is satisfied.
Most points are close to the line, thus the assumption of normality is satisfied for the group Control.
Except for one point, most points are close to the line, thus the assumption of normality is satisfied for the group Experimental.
[Normal Q-Q plots of Response for Q. 1 in Post-Experiment for grouppostq1 = Control and grouppostq1 = Experimental.]
Independent Samples Test (Response for Q. 1 in Post-Experiment)
                             Levene's Test         t-test for Equality of Means
                             F     Sig.    t      df      Sig. (2-tailed)  Mean Difference
Equal variances assumed      .367  .548    1.124  43      .267             .26000
Equal variances not assumed                1.107  38.079  .275             .26000
According to the Independent Samples Test table, t-test value = 1.124, P-value = 0.267.
Since (P-value = 0.267) > (α = 0.05), the null hypothesis is not rejected.
Conclusion: At 5% level, there is no difference between control and experimental for question, “Please rate your interest in the study after having completed the experiment, with 1 being not interested and 5 being extremely interested.”
Is there a difference between control and experimental for question, “Please rate how useful you feel structured approaches to the analytic process are, with 1 being not useful and 5 being extremely useful.”?
Null: There is no difference between control and experimental for question, “Please rate how useful you feel structured approaches to the analytic process are, with 1 being not useful and 5 being extremely useful.”
Alternative: There is a difference between control and experimental for question, “Please rate how useful you feel structured approaches to the analytic process are, with 1 being not useful and 5 being extremely useful.”
Will be using a t-test for independent samples.
Testing the normality assumption as sample sizes are less than 30.
Group Statistics (Response for Q. 1 in Post-Experiment)
Group for Questionnaire Results for Post  N   Mean    Std. Deviation  Std. Error Mean
Control                                   25  3.7600  .72342          .14468
Experimental                              20  3.5000  .82717          .18496
Box plot shows outliers for the Control group. Even with the presence of outliers, normality is satisfied (see below). Also, these values are important for the analysis, thus the decision is not to remove the outliers.
Except for one point, most points are close to the line, thus the assumption of normality is satisfied for the group Control.
Most points are close to the line, thus the assumption of normality is satisfied for the group Experimental.
[Box plot: Response for Q. 2 in Pre-Experiment by Group for Questionnaire Results for Pre (Control, Experimental); outliers flagged in the Control group.]
[Normal Q-Q plots of Response for Q. 2 in Pre-Experiment for grpreq2 = Control and grpreq2 = Experimental.]
Here we need to check whether the assumption of equal variances is satisfied.
According to Levene’s test, (P-value = 0.541) > (α = 0.05), thus the assumption of equal variances is satisfied.
According to the Independent Samples Test table, t-test value = -1.554, P-value = 0.127.
Since (P-value = 0.127) > (α = 0.05), the null hypothesis is not rejected.
Conclusion: At 5% level, there is no difference between control and experimental for question, “Please rate how useful you feel structured approaches to the analytic process are, with 1 being not useful and 5 being extremely useful.”
Is there a difference between control and experimental for question, “Based on your experience in this experiment, how useful do you feel structured approaches to the analytic process are, with 1 being not useful and 5 being extremely useful.”?
Null: There is no difference between control and experimental for question, “Based on your experience in this experiment, how useful do you feel structured approaches to the analytic process are, with 1 being not useful and 5 being extremely useful.”
Alternative: There is a difference between control and experimental for question, “Based on your experience in this experiment, how useful do you feel structured approaches to the analytic process are, with 1 being not useful and 5 being extremely useful.”
Will be using a t-test for independent samples.
Testing the normality assumption as sample sizes are less than 30.
Independent Samples Test (Response for Q. 2 in Pre-Experiment)
                             Levene's Test          t-test for Equality of Means
                             F     Sig.    t       df      Sig. (2-tailed)  Mean Difference
Equal variances assumed      .378  .541    -1.554  50      .127             -.38690
Equal variances not assumed                -1.595  48.358  .117             -.38690
Box plot shows outliers for the Control group. Even with the presence of outliers, normality is satisfied (see below). Also, these values are important for the analysis, thus the decision is not to remove the outliers.
Most points are close to the line, thus the assumption of normality is satisfied for the group Control.
Most points are close to the line, thus the assumption of normality is satisfied for the group Experimental.
Group Statistics (Response for Q. 2 in Post-Experiment)
Group for Questionnaire Results for Post  N   Mean    Std. Deviation  Std. Error Mean
Control                                   25  4.0800  .81240          .16248
Experimental                              20  4.2000  .76777          .17168
Independent Samples Test (Response for Q. 2 in Post-Experiment)
                             Levene's Test         t-test for Equality of Means
                             F     Sig.    t      df      Sig. (2-tailed)
Equal variances assumed      .123  .727    -.504  43      .617
Equal variances not assumed                -.508  41.758  .614
Here we need to check whether the assumption of equal variances is satisfied.
According to Levene’s test, (P-value = 0.727) > (α = 0.05), thus the assumption of equal variances is satisfied.
According to the Independent Samples Test table, t-test value = -0.504, P-value = 0.617.
Since (P-value = 0.617) > (α = 0.05), the null hypothesis is not rejected.
Conclusion: At 5% level, there is no difference between control and experimental for question, “Based on your experience in this experiment, how useful do you feel structured approaches to the analytic process are, with 1 being not useful and 5 being extremely useful.”
Is there a difference between control and experimental for question, “Please rate your understanding of conceptual modeling prior to this study, with 1 being extremely low and 5 being extremely high.”?
Null: There is no difference between control and experimental for question, “Please rate your understanding of conceptual modeling prior to this study, with 1 being extremely low and 5 being extremely high.”
Alternative: There is a difference between control and experimental for question, “Please rate your understanding of conceptual modeling prior to this study, with 1 being extremely low and 5 being extremely high.”
Will be using a t-test for independent samples.
Testing the normality assumption as sample sizes are less than 30.
Box plot shows no outliers for either group.
Most points are close to the line, thus the assumption of normality is satisfied for the group Control.
Most points are close to the line, thus the assumption of normality is satisfied for the group Experimental.
Independent Samples Test (Response for Q. 3 in Pre-Experiment)
                             Levene's Test         t-test for Equality of Means
                             F     Sig.    t      df      Sig. (2-tailed)
Equal variances assumed      .137  .713    1.340  43      .187
Equal variances not assumed                1.335  40.189  .189
Here we need to check whether the assumption of equal variances is satisfied.
According to Levene’s test, (P-value = 0.713) > (α = 0.05), thus the assumption of equal variances is satisfied.
Group Statistics (Response for Q. 3 in Pre-Experiment)
Group for Questionnaire Results for Q. 2, 3, and 4  N   Mean    Std. Deviation  Std. Error Mean
Control                                             25  3.5600  1.00333         .20067
Experimental                                        20  3.1500  1.03999         .23255
According to the Independent Samples Test table, t-test value = 1.34, P-value = 0.187.
Since (P-value = 0.187) > (α = 0.05), the null hypothesis is not rejected.
Conclusion: At 5% level, there is no difference between control and experimental for question, “Please rate your understanding of conceptual modeling prior to this study, with 1 being extremely low and 5 being extremely high.”
Is there a difference between control and experimental for question, “Please rate your understanding of conceptual modeling following this study, with 1 being extremely low and 5 being extremely high.”?
Null: There is no difference between control and experimental for question, “Please rate your understanding of conceptual modeling following this study, with 1 being extremely low and 5 being extremely high.”
Alternative: There is a difference between control and experimental for question, “Please rate your understanding of conceptual modeling following this study, with 1 being extremely low and 5 being extremely high.”
Will be using a t-test for independent samples.
Testing the normality assumption as sample sizes are less than 30.
Both box plots show outliers. Even with the presence of outliers, normality is satisfied (see below). Also, these values are important for the analysis, thus the decision is not to remove the outliers.
Most points are close to the line, thus the assumption of normality is satisfied for the group Control.
Most points are close to the line, thus the assumption of normality is satisfied for the group Experimental.
Independent Samples Test (Response for Q. 3 in Post-Experiment)
                             Levene's Test         t-test for Equality of Means
                             F     Sig.    t     df      Sig. (2-tailed)
Equal variances assumed      .944  .337    .000  43      1.000
Equal variances not assumed                .000  43.000  1.000
Here we need to check whether the assumption of equal variances is satisfied.
According to Levene’s test, (P-value = 0.337) > (α = 0.05), thus the assumption of equal variances is satisfied.
According to the Independent Samples Test table, t-test value = 0.000, P-value = 1.000.
Since (P-value = 1.000) > (α = 0.05), the null hypothesis is not rejected.
Conclusion: At 5% level, there is no difference between control and experimental for question, “Please rate your understanding of conceptual modeling following this study, with 1 being extremely low and 5 being extremely high.”
Is there a difference between control and experimental for question, “Please rate how often explicit conceptual modeling has been a part of your personal analytic process prior to this experiment, with 1 being never and 5 being every time you produce an intelligence estimate.”?
Null: There is no difference between control and experimental for question, “Please rate how often explicit conceptual modeling has been a part of your personal analytic process prior to this experiment, with 1 being never and 5 being every time you produce an intelligence estimate.”
Alternative: There is a difference between control and experimental for question, “Please rate how often explicit conceptual modeling has been a part of your personal analytic process prior to this experiment, with 1 being never and 5 being every time you produce an intelligence estimate.”
Will be using a t-test for independent samples.
Testing the normality assumption as sample sizes are less than 30.
Box plot shows no outliers for either group.
Most points are close to the line, thus the assumption of normality is satisfied for the group Control.
Most points are close to the line, thus the assumption of normality is satisfied for the group Experimental.
Independent Samples Test (Response for Q. 4 in Pre-Experiment)
                             Levene's Test         t-test for Equality of Means
                             F     Sig.    t     df      Sig. (2-tailed)
Equal variances assumed      .002  .968    .162  43      .872
Equal variances not assumed                .162  41.211  .872
Here we need to check whether the assumption of equal variances is satisfied.
According to Levene’s test, (P-value = 0.968) > (α = 0.05), thus the assumption of equal variances is satisfied.
Group Statistics (Response for Q. 4 in Pre-Experiment)
Group for Questionnaire Results for Q. 2, 3, and 4  N   Mean    Std. Deviation  Std. Error Mean
Control                                             25  2.8000  1.04083         .20817
Experimental                                        20  2.7500  1.01955         .22798
According to the Independent Samples Test table, t-test value = 0.162, P-value = 0.872.
Since (P-value = 0.872) > (α = 0.05), the null hypothesis is not rejected.
Conclusion: At 5% level, there is no difference between control and experimental for question, “Please rate how often explicit conceptual modeling has been a part of your personal analytic process prior to this experiment, with 1 being never and 5 being every time you produce an intelligence estimate.”
Is there a difference between control and experimental for question, “Please rate how often you plan to incorporate explicit conceptual modeling into your personal analytic process in the future, with 1 being never and 5 being every time you produce an intelligence estimate.”?
Null: There is no difference between control and experimental for question, “Please rate how often you plan to incorporate explicit conceptual modeling into your personal analytic process in the future, with 1 being never and 5 being every time you produce an intelligence estimate.”
Alternative: There is a difference between control and experimental for question, “Please rate how often you plan to incorporate explicit conceptual modeling into your personal analytic process in the future, with 1 being never and 5 being every time you produce an intelligence estimate.”
Will be using a t-test for independent samples.
Testing the normality assumption as sample sizes are less than 30.
Box plot shows an outlier for the Experimental group. Even with the presence of the outlier, normality is satisfied (see below). Also, this value is important for the analysis, thus the decision is not to remove the outlier.
Most points are close to the line, thus the assumption of normality is satisfied for the group Control.
Except for one point, most points are close to the line, thus the assumption of normality is satisfied for the group Experimental.
Independent Samples Test (Response for Q. 4 in Post-Experiment)
                             Levene's Test         t-test for Equality of Means
                             F     Sig.    t      df      Sig. (2-tailed)
Equal variances assumed      .086  .770    -.504  43      .617
Equal variances not assumed                -.501  39.631  .619
Here we need to check whether the assumption of equal variances is satisfied.
According to Levene’s test, (P-value = 0.770) > (α = 0.05), thus the assumption of equal variances is satisfied.
Group Statistics (Response for Q. 4 in Post-Experiment)
Group for Questionnaire Results for Q. 2, 3, and 4  N   Mean    Std. Deviation  Std. Error Mean
Control                                             25  3.4800  .77028          .15406
Experimental                                        20  3.6000  .82078          .18353
According to the Independent Samples Test table, t-test value = -0.504, P-value = 0.617.
Since (P-value = 0.617) > (α = 0.05), the null hypothesis is not rejected.
Conclusion: At 5% level, there is no difference between control and experimental for question, “Please rate how often you plan to incorporate explicit conceptual modeling into your personal analytic process in the future, with 1 being never and 5 being every time you produce an intelligence estimate.”
Conceptual Modeling Results:
Null: The numbers of bubbles in the pre- and post-experiment conceptual models (CMs) are not different.
Alternative: The numbers of bubbles in the pre- and post-experiment conceptual models (CMs) are significantly different.
Will be using a t-test for independent samples.
Testing the normality assumption as sample sizes are less than 30.
Box plot shows outliers for both groups. Even with the presence of outliers, normality is satisfied (see below). Also, these values are important for the analysis, thus the decision is not to remove the outliers.
[Box plot: Bubbles by Group (Pre, Post); outliers flagged in both groups.]
The Kolmogorov-Smirnov test gives P-values < (α = 0.05), thus the normality assumption is not satisfied for either sample, so we look at the Shapiro-Wilk test. The Shapiro-Wilk P-value for group Pre is > (α = 0.05), thus the normality assumption is satisfied for group Pre. The Shapiro-Wilk test gives a P-value < (α = 0.05) for group Post, thus the normality assumption is not satisfied for group Post, and we need to look at the normal probability plot for group Post.
Most points are close to the line, thus the assumption of normality is satisfied for the group Pre.
Tests of Normality (Bubbles)
       Kolmogorov-Smirnov(a)        Shapiro-Wilk
Group  Statistic  df  Sig.   Statistic  df  Sig.
Pre    .183       24  .036   .927       24  .086
Post   .193       23  .026   .863       23  .005
a. Lilliefors Significance Correction
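The normality-screening sequence used throughout this appendix (Kolmogorov-Smirnov, then Shapiro-Wilk, then the normal probability plot) can be sketched in Python with SciPy. This is an illustrative aside, not part of the original SPSS analysis; the sample below is synthetic because the raw bubble counts are not reproduced here, and plain `kstest` does not apply the Lilliefors correction that SPSS uses:

```python
import numpy as np
from scipy.stats import shapiro, kstest

rng = np.random.default_rng(7)
# Illustrative sample only -- drawn to mimic the Pre group's reported
# mean (23.0) and standard deviation (about 11.9).
bubbles = rng.normal(loc=23.0, scale=11.9, size=24)

# One-sample K-S against a standard normal after studentizing the sample.
z = (bubbles - bubbles.mean()) / bubbles.std(ddof=1)
ks_stat, ks_p = kstest(z, "norm")

# Shapiro-Wilk on the raw sample.
sw_stat, sw_p = shapiro(bubbles)
print(ks_p, sw_p)
```

A P-value below the chosen significance level on either test would send the analyst to the normal probability plot, as in the procedure above.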
[Normal Q-Q plot of Bubbles for group = Pre.]
Here we need to check whether the assumption of equal variances is satisfied.
According to Levene’s test, (P-value = 0.207) > (α = 0.05), thus the assumption of equal variances is satisfied.
Most points are close to the line with no pronounced curvature, thus the assumption of normality is satisfied for the group Post.
[Normal Q-Q plot of Bubbles for group = Post.]
Group Statistics (Bubbles)
Group  N   Mean     Std. Deviation  Std. Error Mean
Pre    24  23.0000  11.90542        2.43018
Post   23  30.9130  15.60569        3.25401
Independent Samples Test (Bubbles)
                             Levene's Test          t-test for Equality of Means
                             F      Sig.    t       df      Sig. (2-tailed)
Equal variances assumed      1.637  .207    -1.960  45      .056
Equal variances not assumed                 -1.948  41.143  .058
According to the Independent Samples Test table, t-test value = -1.96, P-value = 0.056.
Since (P-value = 0.056) > (α = 0.05), the null hypothesis is not rejected.
Conclusion: At 5% level, the numbers of bubbles in the pre- and post-experiment conceptual models are not different.
Null: The numbers of lines and arrows in the pre- and post-experiment conceptual models are not different.
Alternative: The numbers of lines and arrows in the pre- and post-experiment conceptual models are significantly different.
Will be using a t-test for independent samples.
Testing the normality assumption as sample sizes are less than 30.
Box plot shows an outlier for the Post group. Even with the presence of the outlier, normality is satisfied (see below). Also, this value is important for the analysis, thus the decision is not to remove the outlier.
[Box plot: Lines and Arrows by Group (Pre, Post); one outlier flagged in the Post group.]
The Kolmogorov-Smirnov test gives a P-value > (α = 0.05) for group Pre, thus the normality assumption is satisfied for group Pre. The Kolmogorov-Smirnov test gives a P-value < (α = 0.05) for group Post, thus the normality assumption is not satisfied for Post, so we look at the Shapiro-Wilk test for group Post. The Shapiro-Wilk test gives a P-value < (α = 0.05), thus the normality assumption is not satisfied for group Post, and we need to look at the normal probability plot for group Post.
Most points are close to the line with no pronounced curvature, thus the assumption of normality is satisfied for the group Post.
Tests of Normality (Lines and Arrows)
       Kolmogorov-Smirnov(a)        Shapiro-Wilk
Group  Statistic  df  Sig.   Statistic  df  Sig.
Pre    .128       24  .200*  .942       24  .181
Post   .200       23  .018   .895       23  .020
*. This is a lower bound of the true significance.
a. Lilliefors Significance Correction
[Normal Q-Q plot of Lines and Arrows for group = Post.]
Group Statistics (Lines and Arrows)
Group  N   Mean     Std. Deviation  Std. Error Mean
Pre    24  26.5000  12.39916        2.53097
Post   23  30.9565  12.77952        2.66471
Here we need to check whether the assumption of equal variances is satisfied.
According to Levene’s test, (P-value = 0.708) > (α = 0.05), thus the assumption of equal variances is satisfied.
According to the Independent Samples Test table, t-test value = -1.213, P-value = 0.231.
Since (P-value = 0.231) > (α = 0.05), the null hypothesis is not rejected.
Conclusion: At 5% level, the numbers of lines and arrows in the pre- and post-experiment conceptual models are not different.
Null: The numbers of bubbles in the Control and Experimental conceptual models are not different.
Alternative: The numbers of bubbles in the Control and Experimental conceptual models are significantly different.
Will be using a t-test for independent samples.
Testing the normality assumption as sample sizes are less than 30.
Independent Samples Test (Lines and Arrows)
                             Levene's Test          t-test for Equality of Means
                             F     Sig.    t       df      Sig. (2-tailed)
Equal variances assumed      .142  .708    -1.213  45      .231
Equal variances not assumed                -1.213  44.757  .232
The Kolmogorov-Smirnov test gives P-values < (α = 0.05), thus the normality assumption is not satisfied for either sample, so we look at the Shapiro-Wilk test. The Shapiro-Wilk P-value for group Control is > (α = 0.05), thus the normality assumption is satisfied for group Control. The Shapiro-Wilk test gives a P-value < (α = 0.05) for group Experimental, thus the normality assumption is not satisfied for group Experimental, and we need to look at the normal probability plot for group Experimental.
Box plot shows outliers for both groups. Even with the presence of outliers, normality is satisfied (see below). Also, these values are important for the analysis, thus the decision is not to remove the outliers.
[Box plot: Bubbles by Group (Control, Experimental); outliers flagged in both groups.]
Tests of Normality: Bubbles

                Kolmogorov-Smirnov(a)         Shapiro-Wilk
Group           Statistic   df   Sig.         Statistic   df   Sig.
Control         .202        24   .012         .920        24   .057
Experimental    .191        47   .000         .893        47   .000

a. Lilliefors Significance Correction
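The two normality tests reported in this table can be approximated outside SPSS; note that SPSS applies the Lilliefors correction to its Kolmogorov-Smirnov column, while a plain `kstest` against a fitted normal does not, so its p-value is only indicative. A sketch on placeholder data, assuming Python with SciPy:

```python
import numpy as np
from scipy.stats import shapiro, kstest

# Placeholder sample; the real analysis uses the coded CM counts
rng = np.random.default_rng(42)
sample = rng.normal(loc=25.0, scale=12.0, size=24)

# Shapiro-Wilk test (the second pair of columns in the SPSS table)
w_stat, p_sw = shapiro(sample)

# Kolmogorov-Smirnov test against a normal fitted to the sample
# (uncorrected; SPSS's column uses the Lilliefors correction)
z = (sample - sample.mean()) / sample.std(ddof=1)
ks_stat, p_ks = kstest(z, "norm")

# p > 0.05 means normality is not rejected at the 5% level
print(f"Shapiro-Wilk p = {p_sw:.3f}, K-S p = {p_ks:.3f}")
```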
Here we need to check whether the assumption of equal variances is satisfied.
According to Levene's test, (P-value = 0.000) < (α = 0.05); thus the assumption of equal variances is not satisfied.
Most points are close to the line, with no pronounced curvature; thus the assumption of normality is satisfied for group Experimental.
[Figure: Normal Q-Q Plot of Bubbles for group Experimental]
Group Statistics: Bubbles

Group           N    Mean      Std. Deviation   Std. Error Mean
Control         24   12.5833   3.46306          .70689
Experimental    47   26.8723   14.25942         2.07995
Independent Samples Test: Bubbles (Control vs. Experimental)

                              Levene's Test          t-test for Equality of Means
                              F        Sig.          t        df       Sig. (2-tailed)
Equal variances assumed       17.117   .000          -4.821   69       .000
Equal variances not assumed                          -6.504   55.753   .000
According to the table above (equal variances not assumed), the t-test value = -6.504, P-value = 0.000.
Since (P-value = 0.000) < (α = 0.05), the null hypothesis is rejected.
Conclusion: At the 5% level, the number of bubbles for the Control and Experimental CMs is significantly different.
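Because Levene's test rejected equal variances, the row that matters here is the unequal-variances (Welch) t-test, and it too can be reproduced from the Group Statistics (a minimal sketch, assuming Python with SciPy):

```python
from scipy.stats import ttest_ind_from_stats

# Group Statistics reported above for Bubbles (Control vs. Experimental)
t, p = ttest_ind_from_stats(
    mean1=12.5833, std1=3.46306, nobs1=24,   # Control
    mean2=26.8723, std2=14.25942, nobs2=47,  # Experimental
    equal_var=False,  # Levene's test (p < 0.001) rejected equal variances
)
print(f"t = {t:.2f}, p = {p:.3f}")
```

To within the rounding of the summary statistics, this reproduces the SPSS "equal variances not assumed" row: t = -6.504, Sig. (2-tailed) = .000.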
Null: The number of lines and arrows for the Control and Experimental CMs is not different.
Alternative: The number of lines and arrows for the Control and Experimental CMs is significantly different.
We will use the t-test for independent samples.
We test the normality assumption because the Control sample size (n = 24) is less than 30.
The box plot shows outliers for both groups. Even with the outliers present, normality is satisfied (see below), and these values are important to the analysis, so the decision is not to remove them.
[Figure: Box plot of Lines and Arrows by Group (Control, Experimental); outliers flagged at cases 59 and 23]
The Kolmogorov-Smirnov test gives a p-value > (α = 0.05) for group Control, so the normality assumption is satisfied for that group. For group Experimental, the Kolmogorov-Smirnov test gives a p-value < (α = 0.05), so we turn to the Shapiro-Wilk test; its p-value is also < (α = 0.05), so the normality assumption is not satisfied for group Experimental, and we need to examine the normal probability plot for that group.
Most points are close to the line, with no pronounced curvature; thus the assumption of normality is satisfied for group Experimental.
Tests of Normality: Lines and Arrows

                Kolmogorov-Smirnov(a)         Shapiro-Wilk
Group           Statistic   df   Sig.         Statistic   df   Sig.
Control         .129        24   .200*        .942        24   .182
Experimental    .174        46   .001         .943        46   .025

*. This is a lower bound of the true significance.
a. Lilliefors Significance Correction
[Figure: Normal Q-Q Plot of Lines and Arrows for group Experimental]
Group Statistics: Lines and Arrows

Group           N    Mean      Std. Deviation   Std. Error Mean
Control         24   12.9167   3.88885          .79381
Experimental    46   29.3043   12.03859         1.77499
Here we need to check whether the assumption of equal variances is satisfied.
According to Levene's test, (P-value = 0.000) < (α = 0.05); thus the assumption of equal variances is not satisfied.
According to the table below (equal variances not assumed), the t-test value = -8.428, P-value = 0.000.
Since (P-value = 0.000) < (α = 0.05), the null hypothesis is rejected.
Conclusion: At the 5% level, the number of lines and arrows for the Control and Experimental CMs is significantly different.
Independent Samples Test: Lines and Arrows (Control vs. Experimental)

                              Levene's Test          t-test for Equality of Means
                              F        Sig.          t        df       Sig. (2-tailed)
Equal variances assumed       17.268   .000          -6.475   68       .000
Equal variances not assumed                          -8.428   60.097   .000
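The fractional degrees of freedom (60.097) in the "equal variances not assumed" row come from the Welch-Satterthwaite approximation, and both it and the Welch t value can be checked directly from the Group Statistics above (a sketch in plain Python arithmetic; no SPSS needed):

```python
import math

# Group Statistics reported above for Lines and Arrows (Control vs. Experimental)
n1, m1, s1 = 24, 12.9167, 3.88885    # Control
n2, m2, s2 = 46, 29.3043, 12.03859   # Experimental

# Per-group variance of the sample mean
v1 = s1 ** 2 / n1
v2 = s2 ** 2 / n2

# Welch t statistic (does not pool the variances)
t = (m1 - m2) / math.sqrt(v1 + v2)

# Welch-Satterthwaite degrees of freedom
df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))

print(f"t = {t:.3f}, df = {df:.3f}")
```

To within the rounding of the summary statistics, this reproduces the SPSS row: t = -8.428, df = 60.097.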