CRESST ONR/NETC Meetings, 17-18 July 2003, v1
17 July, 2003
ONR Advanced Distributed Learning
Greg Chung
Bill Bewley
UCLA/CRESST

Ontologies and Bayesian Networks in Assessment

© 2003 Regents of the University of California

Problem Statement

• How do you link information from assessments to individualized instructional recommendations in a DL context?
  – Content is going online
  – Assessments are going online
  – Couple content and assessment

Ontologies

• An ontology is a conceptual representation of a domain expressed in terms of concepts and the relationships among the concepts
  – Support knowledge capture, representation, sharing
  – Fielded technology
    • Medical, engineering, e-commerce, military…
    • Development tools and APIs available today (e.g., Protégé)
    • Support rapid development and testing of prototypes (a minimal sketch follows)
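
A minimal sketch, in Python, of how such a concept-and-relation structure might be represented. The concept names, relation type, and content bindings are hypothetical, not drawn from the actual CRESST ontology; the slides point to dedicated tooling such as Protégé rather than hand-rolled structures like this.

```python
# Tiny illustrative ontology: concepts (classes) with typed relations and
# bound content items. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    content: list = field(default_factory=list)    # bound text/video/graphics
    relations: dict = field(default_factory=dict)  # relation name -> [Concept]

    def relate(self, relation: str, other: "Concept") -> None:
        self.relations.setdefault(relation, []).append(other)

# Fragment: "Breath control" is a part of "Marksmanship fundamentals"
fundamentals = Concept("Marksmanship fundamentals")
breath = Concept("Breath control", content=["breathing_overview.txt"])
fundamentals.relate("has_part", breath)
```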

Bayesian Networks

• Graphical modeling of the causal structure of a phenomenon in terms of nodes and relations
  – Nodes represent states, links represent the influence relations
  – Supports fusion of observable data (e.g., “correct on item 1”) into high-level hypotheses (e.g., “understands breath control”); see the sketch below
  – Fielded technology
    • Development tools available (HUGIN, MSBNx)
    • ONR-funded research
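
A minimal sketch of the evidence fusion described above, reduced to one observable item and one hypothesis node. All probabilities are invented for illustration; a full network (e.g., built in HUGIN or MSBNx) would chain many such updates through the graph.

```python
# Bayes' rule update: an observable item response revises belief in a
# high-level hypothesis. All probabilities are illustrative.
p_understands = 0.5          # prior P(understands breath control)
p_correct_given_u = 0.9      # P(correct on item 1 | understands)
p_correct_given_not = 0.3    # P(correct on item 1 | does not understand)

# Observe: correct on item 1
p_correct = (p_correct_given_u * p_understands
             + p_correct_given_not * (1 - p_understands))
posterior = p_correct_given_u * p_understands / p_correct
print(f"P(understands | correct on item 1) = {posterior:.2f}")  # 0.75
```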

Assessment Application Example

• Content recommendation
  – Deliver individualized instructional content based on assessment results
• Approach
  – Use a domain ontology to represent content
  – Use assessments to measure students’ knowledge of the domain
  – Use a Bayesian network to model knowledge dependencies

Rifle Marksmanship Ontology

• Capture hierarchical structure of a domain
  – Field manuals, doctrine, training videos
  – Bind content to structure (text, video, graphics)
• Capture conceptual representation
  – Experts (coaches, snipers, rifle team)
  – Upper-level ontology captured using knowledge maps

Hierarchical Representation

• Based on corpus of marksmanship literature and doctrine
• Currently 168 concepts (classes)
• Content directly bound to each node
  – Important if you want to make use of the information

Binding Content to Structure
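
The slide itself is a figure; as a rough sketch of the underlying idea, binding content items directly to ontology nodes might look like the following (all file and concept names hypothetical).

```python
# Hypothetical content bindings: each ontology concept points to the
# text, video, or graphic assets that can be served for it.
CONTENT_BINDINGS = {
    "Breath control":  ["breathing_overview.txt", "breathing_demo.mp4"],
    "Trigger control": ["trigger_control.txt"],
    "Sight alignment": ["sight_alignment.png", "sight_alignment.txt"],
}

def content_for(concept: str) -> list:
    """Look up the content bound to a concept (empty if none bound)."""
    return CONTENT_BINDINGS.get(concept, [])
```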

Application of Ontology

• Marksmanship ontology serves as testbed to evaluate feasibility of approach
  – Pilot test of approach: 2nd Lts. undergoing entry-level marksmanship training
  – Design
    • Individualized content recommendation vs. control (no recommendation)
  – Examine shooting outcome, learning outcomes, changes in BN due to instruction, Marines’ perceptions of learning

Linking Assessment and Instruction

• Approach
  – Depict knowledge dependencies among marksmanship concepts using a Bayesian network
  – Administer assessment to gather information on Marines’ understanding of rifle marksmanship
  – Take assessment results (item-level data) and update the BN, as sketched below
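
A hedged sketch of that update step: each item is treated as independent evidence about a single concept, with invented item-to-concept mappings and likelihoods. A real BN (e.g., in HUGIN) would also propagate the evidence along the dependency links between concepts.

```python
# Fold item-level assessment results into per-concept beliefs via Bayes'
# rule. ITEM_MODEL and all probabilities are illustrative.
ITEM_MODEL = {
    # item id: (concept, P(correct | knows), P(correct | does not know))
    "item_01": ("Breath control", 0.90, 0.30),
    "item_02": ("Trigger control", 0.85, 0.25),
}

def update_beliefs(priors, responses):
    """Apply one Bayes update per answered item; returns new beliefs."""
    beliefs = dict(priors)
    for item, correct in responses.items():
        concept, p_know, p_guess = ITEM_MODEL[item]
        p = beliefs[concept]
        like_k = p_know if correct else 1 - p_know     # likelihood if knows
        like_n = p_guess if correct else 1 - p_guess   # likelihood if not
        beliefs[concept] = like_k * p / (like_k * p + like_n * (1 - p))
    return beliefs

beliefs = update_beliefs({"Breath control": 0.5, "Trigger control": 0.5},
                         {"item_01": True, "item_02": False})
```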

Linking Assessment and Instruction

• Approach (continued)
  – Identify concepts that have low probabilities in the BN (interpreted as poor understanding)
  – Make use of cognitive demands of tasks and items to infer depth of a Marine’s understanding
  – Deliver different content based on depth of understanding (see the sketch below)
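
A rough sketch of the recommendation step under the same assumptions as the update sketch above; the probability threshold, depth labels, and content table are all illustrative.

```python
# Flag concepts whose belief falls below a threshold and serve content
# matched to the inferred depth of understanding. All values illustrative.
THRESHOLD = 0.6

CONTENT_BY_DEPTH = {
    "Breath control": {"shallow": "breath_basics.html",
                       "deep": "breath_advanced.html"},
}

def recommend(beliefs, depth):
    """Return content for low-probability concepts, keyed by depth."""
    return [CONTENT_BY_DEPTH[c][depth.get(c, "shallow")]
            for c, p in beliefs.items()
            if p < THRESHOLD and c in CONTENT_BY_DEPTH]

print(recommend({"Breath control": 0.35}, {"Breath control": "shallow"}))
# -> ['breath_basics.html']
```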

Content Recommendation

Example of Feedback

Preliminary Results

• Within-group analyses
  – BN probabilities increased for concepts that had instructional content served up
  – BN probabilities did not change for concepts that did not have instructional content
  – High-level BN topics correlated with the measures they were derived from as well as with a reasoning measure
  – BN “scores” corresponded with Marines’ self-ratings of their level of knowledge (80% agreement)

Preliminary Results

• Between-group analyses inconclusive
  – Small sample size (n = 16)
  – Experimental-condition Marines
    • Qualified in a thunderstorm
    • Learned more from classroom training than expected (i.e., > 70% of topics “correct”)
    • Knowledge map scores appear to be increasing at a faster rate than the control group’s, but differences are not statistically significant

Summary

• An important opportunity of online assessments is the potential to measure many aspects of human behavior under a variety of conditions
• An important challenge is extracting meaningful information from (potentially) voluminous amounts of data
• Bayesian networks and ontologies may be one approach to addressing this challenge