Learning Agents Laboratory, Computer Science Department
George Mason University
Prof. Gheorghe Tecuci

MULTISTRATEGY LEARNING

SUMMARY
This tutorial presents an overview of methods, systems and applications of multistrategy learning. Multistrategy learning is concerned with developing learning systems that integrate two or more inferential and/or computational strategies. For example, a multistrategy system may combine empirical induction with explanation-based learning, symbolic and neural net learning, deduction with abduction and analogy, quantitative and qualitative discovery, or symbolic and genetic algorithm-based learning. Due to the complementary nature of the various strategies, multistrategy learning systems have a potential for a wide range of applications. This tutorial describes representative multistrategy learning systems and their applications in areas such as automated knowledge acquisition, planning, scheduling, manufacturing, technical and medical decision making, and computer vision.
OUTLINE
1. Multistrategy concept learning: methods, systems and applications
2. Multistrategy knowledge base improvement: methods, systems and applications
3. Summary, current trends and frontier research
4. References
1. MULTISTRATEGY CONCEPT LEARNING: METHODS, SYSTEMS AND APPLICATIONS
1.1 The class of learning tasks
1.2 Integration of empirical inductive learning and explanation-based learning
1.3 Integration of empirical inductive learning, explanation-based learning, and learning by analogy
1.4 Integration of genetic algorithm-based learning and symbolic empirical inductive learning
1.1 MULTISTRATEGY CONCEPT LEARNING
The Class of Learning Tasks
INPUT: one or more positive and/or negative examples of a concept
BK: a weak, incomplete, partially incorrect, or complete domain theory (DT)
GOAL: learn a concept description characterizing the example(s) and consistent with the DT, by combining several learning strategies
Illustration of a Learning Task
INPUT: examples of the CUP concept
cup(o1) ← color(o1, white), shape(o1, cyl), volume(o1, 8), made-from(o1, plastic), light-mat(plastic), has-handle(o1), has-flat-bottom(o1), up-concave(o1).
BK: a theory of vessels (domain rules)
cup(x) ← liftable(x), stable(x), open-vessel(x).
liftable(x) ← light(x), graspable(x).
stable(x) ← has-flat-bottom(x).
GOAL: learn an operational concept description of CUP
cup(x) ← made-from(x, plastic), light-mat(plastic), graspable(x), has-flat-bottom(x), up-concave(x).
1.2 INTEGRATION OF EMPIRICAL INDUCTIVE LEARNING AND EXPLANATION-BASED LEARNING
• Empirical Inductive Learning (EIL)
• Explanation-Based Learning (EBL)
• Complementary nature of EIL and EBL
• Types of integration of EIL and EBL
EMPIRICAL INDUCTIVE LEARNING
• Compares the examples in terms of their similarities and differences, and creates a generalized description of the similarities of the positive examples
• Is data intensive (requires many examples)
• Performs well in knowledge-weak domains

Positive examples of cups: P1, P2, ...
Negative examples of cups: N1, ...
Description of the cup concept: has-handle(x), ...
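The induction step on this slide can be sketched in a few lines: treat each example as a set of features, keep what all positive examples share, and check that the result covers no negative example. This is a minimal illustrative sketch (the cup feature names are invented for the example), not the algorithm of any particular EIL system.

```python
# Minimal sketch of empirical inductive learning over attribute sets:
# the hypothesis is the conjunction of features shared by all positive
# examples, rejected if it also covers a negative example.

def induce(positives, negatives):
    # Start from the features common to every positive example.
    hypothesis = set.intersection(*positives)
    # The conjunction must exclude every negative example; a hypothesis
    # covers an example iff all its features appear in that example.
    for neg in negatives:
        if hypothesis <= neg:
            return None
    return hypothesis

positives = [
    {"has-handle", "light", "up-concave", "has-flat-bottom"},
    {"has-handle", "light", "up-concave", "has-flat-bottom", "white"},
]
negatives = [{"light", "has-flat-bottom"}]  # e.g. a plate

print(sorted(induce(positives, negatives)))
# ['has-flat-bottom', 'has-handle', 'light', 'up-concave']
```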
EXPLANATION-BASED LEARNING
• Proves that the training example is an instance of the target concept and generalizes the proof
• Is knowledge intensive (requires a complete DT)
• Needs only one example

Prove that o1 is a CUP:
cup(o1)
  stable(o1) ← ...
  liftable(o1)
    light(o1) ← light-mat(plastic), made-from(o1, plastic)
    graspable(o1) ← has-handle(o1)

Generalize the proof:
cup(x)
  stable(x) ← ...
  liftable(x)
    light(x) ← light-mat(y), made-from(x, y)
    graspable(x) ← has-handle(x)

• "has-handle(o1)" is needed to prove "cup(o1)"
• "color(o1,white)" is not needed to prove "cup(o1)"
• the material the cup is made from need not be "plastic"
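The prove-then-generalize idea can be sketched as backward chaining that records the operational leaves of the proof. The rule base mirrors the CUP domain theory in a simplified one-argument form; this is an illustrative sketch, not a full unification-based EBL implementation.

```python
# Toy EBL sketch: prove the example goal with backward chaining over the
# domain theory, collect the operational leaves, and emit them as the body
# of a learned operational rule (constants generalized to a variable x).

RULES = {  # head predicate -> body predicates (all about the same object)
    "cup": ["liftable", "stable", "open-vessel"],
    "liftable": ["light", "graspable"],
    "stable": ["has-flat-bottom"],
    "open-vessel": ["up-concave"],
    "graspable": ["has-handle"],
}

def prove(goal, facts, leaves):
    """True if goal is provable; records operational leaves in order.
    (A failed branch may leave spurious leaves; fine for this sketch.)"""
    if goal in facts:                 # operational: directly observed
        leaves.append(goal)
        return True
    if goal in RULES:                 # expand with a domain-theory rule
        return all(prove(b, facts, leaves) for b in RULES[goal])
    return False

facts_o1 = {"light", "has-handle", "has-flat-bottom", "up-concave", "white"}
leaves = []
assert prove("cup", facts_o1, leaves)
# "white" is observed but never used in the proof, so it does not appear:
print("cup(x) <- " + ", ".join(f"{p}(x)" for p in leaves))
# cup(x) <- light(x), has-handle(x), has-flat-bottom(x), up-concave(x)
```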
COMPLEMENTARY NATURE OF EIL AND EBL

                           EIL              EBL               MSL (EIL+EBL)
Examples                   many             one               several
Domain theory              weak is enough   needs to be       incomplete or
                                            complete          partially incorrect
Effect on system's         improves         improves          improves competence
behavior                   competence       efficiency        & efficiency
Type of inference          induction        deduction         induction and deduction
MSL METHODS INTEGRATING EIL AND EBL
• Explanation before induction
• Induction over explanations
• Combining EBL with Version Spaces
• Induction over unexplained
• Guiding induction by domain theories
EXPLANATION BEFORE INDUCTION
OCCAM (Pazzani, 1988, 1990)
OCCAM is a schema-based system that learns to predict the outcome of events by applying EBL or EIL, depending on the available background knowledge and examples. It may answer questions about the outcome of hypothetical events.
A learned economic sanctions schema: When a country threatens a wealthy country by refusing to sell a commodity, the sanctions will fail because an alternative supplier will want to make a large profit by selling the commodity at a premium.
Integration of EBL and EIL in OCCAM
For each new example: if the example can be explained from the existing schemata, learn a schema by EBL (Schema = EBL(Example)); otherwise remember the example and, when enough similar examples have accumulated, learn a schema by EIL (Schema = EIL(Examples)).
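The control flow in the figure can be sketched as a small loop; the threshold ENOUGH and the explainable flag are illustrative placeholders (in OCCAM the system decides explainability by attempting an explanation).

```python
# Schematic sketch of OCCAM's EBL/EIL control loop. The schema
# representation (tagged tuples) is a stand-in for OCCAM's schemata.

ENOUGH = 3                 # unexplained examples needed before induction
memory, schemata = [], []  # remembered examples / learned schemata

def process(example, explainable):
    """explainable: whether the background knowledge can explain the
    example (decided in OCCAM by attempting an explanation)."""
    if explainable:
        schemata.append(("EBL", example))            # knowledge-rich path
    else:
        memory.append(example)                       # remember the example
        if len(memory) >= ENOUGH:
            schemata.append(("EIL", tuple(memory)))  # knowledge-poor path
            memory.clear()

for ex, ok in [("e1", True), ("e2", False), ("e3", False), ("e4", False)]:
    process(ex, ok)
print(schemata)  # [('EBL', 'e1'), ('EIL', ('e2', 'e3', 'e4'))]
```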
INDUCTION OVER EXPLANATIONS (IOE)
WYL (Flann and Dietterich, 1989)
Limitations of EBL
• The learned concept might be too specific because it is a generalization of a single example
• Requires a complete DT
IOE
• Learns from a set of positive examples
• May discover concept features that are not explained by the DT (i.e. incomplete DT)
The IOE Method
• Build an explanation tree for each example
• Find the largest common subtree
• Apply EBL to generalize the subtree and retain the leaves as an intermediate concept description (ICD)
• Specialize ICD to reflect the similarities between the training examples:
  - replace variables with constants (e.g. v = c)
  - introduce equality constraints (e.g. v1 = v2)
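The final specialization step can be sketched as an intersection over per-example variable bindings: a constant is reintroduced wherever every example agrees on its value (e.g. y = plastic in the CUP illustration). In a real system the bindings would be read off the explanation trees; here they are given directly.

```python
# Sketch of IOE's specialization of the intermediate concept description
# (ICD): keep only the variable/constant constraints shared by all
# training examples.

def specialize(bindings_per_example):
    """bindings_per_example: one {variable: constant} dict per example.
    Returns the constraints on which every example agrees."""
    common = dict(bindings_per_example[0])
    for bindings in bindings_per_example[1:]:
        common = {v: c for v, c in common.items() if bindings.get(v) == c}
    return common

examples = [{"y": "plastic"}, {"y": "plastic"}]
print(specialize(examples))  # {'y': 'plastic'} -> specialize ICD with y = plastic
```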
Illustration

Explanation tree for example1:
cup(o1)
  stable(o1) ← has-flat-bottom(o1)
  liftable(o1)
    light(o1) ← light-mat(plastic), made-from(o1, plastic)
    graspable(o1) ← has-handle(o1)
  open-vessel(o1) ← up-concave(o1)

Explanation tree for example2:
cup(o2)
  stable(o2) ← has-flat-bottom(o2)
  liftable(o2)
    light(o2) ← light-mat(plastic), made-from(o2, plastic)
    graspable(o2) ← width(o2, small), insulating(plastic), made-from(o2, plastic)
  open-vessel(o2) ← up-concave(o2)
Generalization of the common subtree (ICD):
cup(x)
  stable(x) ← has-flat-bottom(x)
  liftable(x)
    light(x) ← light-mat(y), made-from(x, y)
    graspable(x)
  open-vessel(x) ← up-concave(x)

Specialization of ICD:
in example1: (y = plastic)
in example2: (y = plastic)
in ICD: (y = plastic)

Learned concept:
cup(x) ← made-from(x, plastic), light-mat(plastic), graspable(x), has-flat-bottom(x), up-concave(x).
COMBINING EBL WITH VERSION SPACES (EBL-VS)
(Hirsh, 1989, 1990)
Limitations of IOE
• Learns only from positive examples
• Needs an "almost" complete domain theory (DT)
EBL-VS
• Learns from positive and negative examples
• Can learn with an incomplete DT
• Can learn with a special type of incorrect DT
• Can learn with different amounts of knowledge, from knowledge-free to knowledge-rich
The EBL-VS Method
• Apply EBL to generalize the positive and the negative examples
• Replace each example that has been generalized with its generalization
• Apply the version space method (or the incremental version space merging method) to the new set of examples
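The first two steps can be sketched by replacing each example with the features the domain theory actually uses, then computing the most specific consistent conjunction (the S boundary of the version space). This simplification stands in for full incremental version-space merging, and the feature set is an invented illustration.

```python
# Sketch of EBL-VS: EBL-generalize each example (keep only the features
# the domain theory can use in a proof), then run version-space learning
# on the generalized examples. Here the version-space step is reduced to
# computing the S boundary for conjunctions of boolean features.

THEORY_FEATURES = {"light", "graspable", "has-flat-bottom", "up-concave"}

def ebl_generalize(example):
    # Irrelevant observations (e.g. color) disappear from the example.
    return example & THEORY_FEATURES

positives = [
    {"light", "graspable", "has-flat-bottom", "white"},
    {"light", "graspable", "has-flat-bottom", "red"},
]
generalized = [ebl_generalize(p) for p in positives]
# Most specific hypothesis covering all generalized positives:
S = set.intersection(*generalized)
print(sorted(S))  # ['graspable', 'has-flat-bottom', 'light']
```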
INDUCTION OVER UNEXPLAINED (IOU)
(Mooney and Ourston, 1989)
Limitations of EBL-VS
• Assumes that at least one generalization of an example is correct and complete
IOU
• DT could be incomplete but correct
  - the explanation-based generalization of an example may be incomplete
  - the DT may explain negative examples
• Learns concepts with both explainable and conventional aspects
The IOU Method
• Apply EBL to generalize each positive example
• Disjunctively combine these generalizations (this is the explanatory component Ce)
• Disregard the negative examples not satisfying Ce and remove the features mentioned in Ce from all the examples
• Apply EIL to determine a generalization of the reduced set of simplified examples (this is the nonexplanatory component Cn)
The learned concept is Ce & Cn
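How the two components combine at classification time can be sketched directly. The predicates follow the cup illustration, with attribute names simplified to dictionary keys; the definitions of Ce and Cn are transcribed from the example, not computed.

```python
# Sketch of IOU's learned concept C = Ce & Cn: the explanatory part Ce
# comes from EBL over the positives, the nonexplanatory part Cn from EIL
# over the Ce-reduced examples; membership requires both.

def Ce(x):   # explanatory component (from the domain theory)
    return x["flat_bottom"] and x["light"] and x["up_concave"] and (
        (x["width_small"] and x["insulating"]) or x["has_handle"])

def Cn(x):   # nonexplanatory component (induced from the examples)
    return x["vol_small"]

def is_cup(x):
    return Ce(x) and Cn(x)

cup1 = dict(flat_bottom=True, light=True, up_concave=True,
            width_small=False, insulating=False, has_handle=True,
            vol_small=True)
mug1 = dict(cup1, vol_small=False)   # satisfies Ce but not Cn
print(is_cup(cup1), is_cup(mug1))    # True False
```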
Illustration
Positive examples of cups: Cup1, Cup2
Negative examples: Shot-Glass1, Mug1, Can1
Domain Theory: incomplete (contains a definition of drinking vessels but no definition of cups)
Ce = has-flat-bottom(x) & light(x) & up-concave(x) & {[width(x,small) & insulating(x)] ∨ has-handle(x)}
Ce covers Cup1, Cup2, Shot-Glass1, Mug1 but not Can1
Cn = volume(x,small)
Cn covers Cup1, Cup2 but not Shot-Glass1, Mug1
C = Ce & Cn
GUIDING INDUCTION BY DOMAIN THEORY
The ENIGMA System (Bergadano, Giordana, Saitta et al. 1988, 1990)
Limitations of IOU
• DT rules have to be correct
• Examples have to be noise-free
ENIGMA
• DT rules could be partially incorrect
• Examples may be noisy
The Learning Method
(trades off the use of DT rules against the coverage of examples)
• Successively specialize the abstract definition D of the concept to be learned by applying DT rules
• Whenever a specialization of the definition D contains operational predicates, compare it with the examples to identify the covered and the uncovered ones
• Decide between performing:
  - a DT-based deductive specialization of D
  - an example-based inductive modification of D
The learned concept is a disjunction of the leaves of the specialization tree that is built.
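The decision between a deductive and an inductive step can be sketched as a coverage test over the examples. The heuristic below is an illustrative assumption, not ENIGMA's actual statistical criterion.

```python
# Sketch of ENIGMA-style control at one node of the specialization tree:
# count the positives and negatives covered by the current candidate
# definition (a set of operational predicates) and choose the next step.

def coverage(definition, examples):
    pos = sum(1 for feats, label in examples if definition <= feats and label)
    neg = sum(1 for feats, label in examples if definition <= feats and not label)
    return pos, neg

def choose_step(definition, examples):
    pos, neg = coverage(definition, examples)
    if neg > 0:        # still covers negatives: specialize with a DT rule
        return "deductive specialization"
    if pos == 0:       # lost all positives: repair inductively instead
        return "inductive modification"
    return "accept as a disjunct of the learned concept"

examples = [({"light", "up-concave"}, True), ({"light"}, False)]
print(choose_step({"light"}, examples))                # deductive specialization
print(choose_step({"light", "up-concave"}, examples))  # accept as a disjunct ...
```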
Examples (4 pos, 4 neg)*
Positive example4 (p4):
Cup(o4) ← light(o4), support(o4, b), body(o4, a), above(a, b), up-concave(o4).
Domain Theory
Cup(x) ← Liftable(x), Stable(x), Open-vessel(x).
Liftable(x) ← light(x), has-handle(x).
Stable(x) ← has-flat-bottom(x).
Stable(x) ← body(x, y), support(x, z), above(y, z).
Open-vessel(x) ← up-concave(x).
DT: - overly specific (explains only p1 and p2)
    - overly general (explains n3)
* Operational predicates start with a lower-case letter
The specialization tree (deduction vs. induction steps, with the examples covered at each node):

Cup(x)
  deduction → Liftable(x), Stable(x), Open-vessel(x)
    deduction → light(x), has-handle(x), Stable(x), Open-vessel(x)   [p1, p2, n3]
      induction (drop has-handle(x)) → light(x), Stable(x), Open-vessel(x)   [p1, p2, p3, p4, n2, n3, n4]
        deduction → has-flat-bottom(x), light(x), Open-vessel(x)   [p1, p3, n2, n3]
          deduction → has-flat-bottom(x), light(x), up-concave(x)   [p1, p3, n2, n3]
            induction → has-flat-bottom(x), light(x), has-small-bottom(x)   [p1, p3]
        deduction → light(x), body(x, y), support(x, z), above(y, z), Open-vessel(x)   [p2, p4, n4]
          deduction → light(x), body(x, y), support(x, z), above(y, z), up-concave(x)   [p2, p4]
The Learned Concept
Cup(x) ← light(x), has-flat-bottom(x), has-small-bottom(x).
Covers p1, p3
Cup(x) ← light(x), body(x, y), support(x, z), above(y, z), up-concave(x).
Covers p2, p4
Application of ENIGMA (Bergadano et al. 1990)
• Diagnosis of faults in electro-mechanical devices through an analysis of their vibrations
• 209 examples and 6 classes
• Typical example: 20 to 60 noisy measurements taken at different points and under different conditions of the device
• A learned rule:
IF the shaft rotating frequency is w0 and the harmonic at w0 has high intensity and the harmonic at 2w0 has high intensity in at least two measurements
THEN the example is an instance of C1 (problems in the joint), C4 (basement distortion) or C5 (unbalance)
Comparison Between the KB Learned by ENIGMA and the Hand-coded KB of the Expert System MEPS

           Ambiguity of rules   Recognition rate   Development time
ENIGMA     1.21                 0.95               4 months
MEPS       1.46                 0.95               18 months
1.3 INTEGRATION OF EMPIRICAL INDUCTIVE LEARNING, EXPLANATION-BASED LEARNING, AND LEARNING BY ANALOGY
DISCIPLE (Tecuci, 1988; Tecuci and Kodratoff, 1990)

[Diagram: DISCIPLE combines an explanation-based mode, an analogy-based mode, and an empirical learning mode. An explanation of problem1/solution1 yields an analogy criterion that relates further problem/solution pairs (problem2/solution2, ..., problemn/solutionn), from which a general rule is induced.]
1.4 INTEGRATION OF GENETIC ALGORITHMS AND SYMBOLIC INDUCTIVE LEARNING
GA-AQ (Vafaie and K. DeJong, 1991)

[Diagram: the Genetic Algorithm searches the space of feature subsets of the initial feature set; each candidate subset is scored by an EIL-based evaluation function measuring the goodness of recognition on the training examples; the best feature subset is returned.]

Application: texture recognition (18 initial features, 9 final features)
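The loop in the figure can be sketched with bit-mask individuals and a stand-in evaluation function (leave-one-out 1-nearest-neighbour instead of an AQ learner). All parameters, the toy data, and the fitness penalty are illustrative assumptions.

```python
# Sketch of GA-based feature selection in the spirit of GA-AQ:
# individuals are bit masks over the feature set; fitness is recognition
# accuracy using only the selected features, minus a small size penalty.
import random

random.seed(0)
N_FEATURES = 6
# Toy data: the label depends only on features 0 and 2.
data = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(40)]
data = [(x, x[0] ^ x[2]) for x in data]

def fitness(mask):
    # Leave-one-out 1-NN accuracy restricted to the selected features.
    def dist(a, b):
        return sum(mask[i] and a[i] != b[i] for i in range(N_FEATURES))
    correct = 0
    for i, (x, y) in enumerate(data):
        nearest = min((j for j in range(len(data)) if j != i),
                      key=lambda j: dist(x, data[j][0]))
        correct += data[nearest][1] == y
    return correct / len(data) - 0.01 * sum(mask)

def evolve(generations=30, pop_size=20):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_FEATURES)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:               # mutation
                i = random.randrange(N_FEATURES)
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best)  # with luck, the mask keeps the informative features 0 and 2
```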
2. MULTISTRATEGY KNOWLEDGE BASE IMPROVEMENT (THEORY REVISION): METHODS, SYSTEMS AND APPLICATIONS
2.1 The class of learning tasks
2.2 Cooperating learning modules
2.3 Integrating elementary inferences
2.4 Applying learning modules in a problem solving environment
2.5 Applying different computational strategies
2.1 MULTISTRATEGY KNOWLEDGE BASE IMPROVEMENT (THEORY REVISION)
The class of learning tasks

[Diagram: a Multistrategy Theory Reviser takes the Training Examples and the current Knowledge Base (DT) used by the Inference Engine, and produces an Improved Knowledge Base (DT) for the Inference Engine.]
Types of Theory Errors (in a rule-based system)

Incorrect Theory
  Overly Specific (some positive examples are not explained)
    - Missing Rule (e.g. the rule graspable(x) ← has-handle(x) is absent)
    - Additional Premise (e.g. graspable(x) ← width(x, small), insulating(x), shape(x, round).)
  Overly General (some negative examples are explained)
    - Extra Rule (e.g. graspable(x) ← shape(x, round).)
    - Missing Premise (e.g. graspable(x) ← insulating(x).)

(The correct rules being graspable(x) ← width(x, small), insulating(x). and graspable(x) ← has-handle(x).)
2.2 COOPERATING LEARNING MODULES
(deduction, abduction and empirical induction)
EITHER (Mooney and Ourston, 1991)

[Diagram: the Deduction Module applies the Knowledge Base to the examples. Unprovable positive examples go to the Abduction Module, whose partial proofs feed a Minimal Cover and Antecedent Retractor producing generalized rules; ungeneralizable rules go to the Empirical Induction Module, which produces new rules. Proofs of negative examples feed a Minimal Cover and Rule Retractor producing deleted rules; undeletable rules go to the Empirical Induction Module, which produces specialized rules.]
Positive and negative examples of cups
cup(o1) ← width(o1, small), light(o1), color(o1, red), styrofoam(o1), shape(o1, hem), has-flat-bottom(o1), up-concave(o1), volume(o1, 8).
Imperfect Theory of Vessels
cup(x) ← stable(x), liftable(x), open-vessel(x).
stable(x) ← has-flat-bottom(x).
liftable(x) ← light(x), graspable(x).
graspable(x) ← has-handle(x).
graspable(x) ← width(x, small), insulating(x).
insulating(x) ← styrofoam(x).
insulating(x) ← ceramic(x).
open-vessel(x) ← up-concave(x).
Applications of EITHER
I. Molecular Biology: recognizing promoters and splice-junctions in DNA sequences
[Learning curve: correctness on test data (40-100%) versus number of training examples (0-80); EITHER reaches higher accuracy than ID3.]
II. Plant Pathology: diagnosing soybean diseases
2.3 INTEGRATING ELEMENTARY INFERENCES
MTL-JT (Tecuci, 1993, 1994)
• Deep integration of learning strategies: integration of the elementary inferences that are employed by the single-strategy learning methods (e.g. deduction, analogy, empirical inductive prediction, abduction, deductive generalization, inductive generalization, inductive specialization, analogy-based generalization)
• Dynamic integration of learning strategies: the order and the type of the integrated strategies depend on the relationship between the input information, the background knowledge and the learning goal
• Different types of input (e.g. facts, concept examples, problem solving episodes)
• Different types of DT knowledge pieces (e.g. facts, examples, implicative relationships, plausible determinations)
Assumptions for the Developed Framework and Method
Input:
• correct (noise free)
• one or several examples, facts, or problem solving episodes
Knowledge Base:
• incomplete and/or partially incorrect
• may include a variety of knowledge types (facts, examples, implicative or causal relationships, hierarchies, etc.)
Learning Goal:
• extend, update and/or improve the KB so as to integrate the new input information
An Illustration from the Domain of Geography
Knowledge Base
Facts:
terrain(Philippines, flat), rainfall(Philippines, heavy), water-in-soil(Philippines, high)
Examples (of fertile soil):
soil(Greece, fertile-soil) ← soil(Greece, red-soil)
soil(Egypt, fertile-soil) ← terrain(Egypt, flat) & soil(Egypt, red-soil)
Plausible determination:
water-in-soil(x, z) <≈ rainfall(x, y)
Deductive rules:
soil(x, fertile-soil) ← soil(x, loamy)
temperature(x, warm) ← climate(x, subtropical)
temperature(x, warm) ← climate(x, tropical)
grows(x, rice) ← water-in-soil(x, high) & temperature(x, warm) & soil(x, fertile-soil)
Positive and negative examples of "grows(x, rice)"
Positive Example 1: grows(Thailand, rice)
rainfall(Thailand, heavy) & climate(Thailand, tropical) & soil(Thailand, red-soil) & terrain(Thailand, flat) & location(Thailand, SE-Asia)
Positive Example 2: grows(Pakistan, rice)
rainfall(Pakistan, heavy) & climate(Pakistan, subtropical) & soil(Pakistan, loamy) & terrain(Pakistan, flat) & location(Pakistan, SW-Asia)
Negative Example 3: ¬grows(Jamaica, rice)
rainfall(Jamaica, heavy) & climate(Jamaica, tropical) & soil(Jamaica, loamy) & terrain(Jamaica, abrupt) & location(Jamaica, Central-America)
The learned knowledge
Improved KB
New facts:
water-in-soil(Thailand, high), water-in-soil(Pakistan, high)
New plausible rule:
soil(x, fertile-soil) ← soil(x, red-soil)
Specialized plausible determination:
water-in-soil(x, z) <≈ rainfall(x, y), terrain(x, flat)
Concept definitions
Operational definition of "grows(x, rice)":
grows(x, rice) ← rainfall(x, heavy) & terrain(x, flat) & [climate(x, tropical) ∨ climate(x, subtropical)] & [soil(x, red-soil) ∨ soil(x, loamy)]
Abstract definition of "grows(x, rice)":
grows(x, rice) ← water-in-soil(x, high) & temperature(x, warm) & soil(x, fertile-soil)
Abstraction of Example 1:
grows(Thailand, rice) ← water-in-soil(Thailand, high) & temperature(Thailand, warm) & soil(Thailand, fertile-soil)
The Learning Method
• For the first positive example I1:
  - build a plausible justification tree T of I1
  - build the plausible generalization Tu of T
  - generalize the KB so as to entail Tu
• For each new positive example Ii:
  - generalize Tu so as to cover a plausible justification tree of Ii
  - generalize the KB so as to entail the new Tu
• For each new negative example Ii:
  - specialize Tu so as not to cover any plausible justification of Ii
  - specialize the KB so as to entail the new Tu without entailing the previous Tu
• Learn different concept definitions:
  - extract different concept definitions from the general justification tree Tu
Build a plausible justification of the first example

Example 1: grows(Thailand, rice) ← rainfall(Thailand, heavy) & soil(Thailand, red-soil) & terrain(Thailand, flat) & location(Thailand, SE-Asia) & climate(Thailand, tropical)

grows(Thailand, rice)   [deduction and abduction]
  water-in-soil(Thailand, high) ← rainfall(Thailand, heavy)   [analogy]
  temperature(Thailand, warm) ← climate(Thailand, tropical)   [deduction]
  soil(Thailand, fertile-soil) ← soil(Thailand, red-soil)   [inductive prediction]
Multitype generalization

[Diagram: each node of the justification tree is generalized according to the inference that produced it: water-in-soil(x2, high) by generalization based on analogy from rainfall(x2, heavy); temperature(x3, warm) by deductive generalization from climate(x3, tropical); soil(x4, fertile-soil) by inductive generalization from soil(x4, red-soil); and the top node grows(x1, rice) by deductive generalization from water-in-soil(x1, high), temperature(x1, warm), soil(x1, fertile-soil).]
Build the plausible generalization Tu of T

grows(x, rice)   [deductive generalization]
  water-in-soil(x, high) ← rainfall(x, heavy)   [generalization based on analogy]
  temperature(x, warm) ← climate(x, tropical)   [deductive generalization]
  soil(x, fertile-soil) ← soil(x, red-soil)   [inductive generalization]
The plausible generalization tree corresponding to the three input examples

grows(x, rice)   [deductive generalization]
  water-in-soil(x, high) ← rainfall(x, heavy), terrain(x, flat)   [generalization based on analogy; inductive specialization added terrain(x, flat)]
  temperature(x, warm) ← climate(x, tropical) OR climate(x, subtropical)   [deductive generalization]
  soil(x, fertile-soil) ← soil(x, red-soil) OR soil(x, loamy)   [inductive and deductive generalization]
Features of the MTL-JT method
• Is general and extensible
• Integrates dynamically different elementary inferences
• Uses different types of generalizations
• Is able to learn from different types of input
• Is able to learn different types of knowledge
• Is supported by the research in Experimental Psychology
• Exhibits synergistic behavior
• May behave as any of the integrated strategies
2.4 APPLYING LEARNING MODULES IN A PROBLEM SOLVING ENVIRONMENT
PRODIGY (Carbonell, Knoblock and Minton, 1989)
• Performance engine: planner based on state space search
• Learning strategies:
  - Explanation-based learning
  - Learning by analogy
  - Learning by abstraction
  - Learning by experimentation
• Applications:
  - Machine-shop scheduling
  - High-level robotic planning
The PRODIGY Architecture

[Architecture diagram: the PROBLEM SOLVER operates on the Problem, Domain Knowledge, and Control Knowledge, producing a PS Trace and a Solution that are evaluated through the User Interface and External Processes. Learning modules work on the trace: EBL and Compression produce Control Knowledge; the Derivation Extractor builds derivations for Replay from the Derivation Library; the Abstraction Builder maintains a Multi-Level Abstraction Hierarchy; the Experimenter refines the domain knowledge; learned plans are stored in the Plan Library.]
Independent Learning Systems in a Uniform Environment
MLT (LRI, ISoft, CGE-LdM, INRIA, BAe, Aberdeen, Turing Institute, GMD, Siemens, Coimbra, Forth)

10 independent ML systems (e.g. APT (DISCIPLE), MOBAL, KBG, NewID, ...) loosely integrated through:
• a common interface;
• a consultant;
• a common knowledge representation language (for communication)
2.5 APPLYING DIFFERENT COMPUTATIONAL STRATEGIES
(symbolic rules and neural networks)
KBANN (Towell and Shavlik, 1991)

[Diagram: an imperfect initial Symbolic KB is converted by the Rules-to-Network Translator into an initial Network; the Network is trained on the Training Examples; the Trained Network is converted by the Network-to-Rules Translator into an improved Symbolic KB.]
Rules to Network Translator

A ← B, C.
B ← not H.
B ← not F, G.
C ← I, J.

[Diagram: the rules are mapped onto a network with output unit A, units B and C, hidden units X and Y (one per rule for B), and inputs F, G, H, I, J, K; the links mirror the rule antecedents.]
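The translation can be sketched with the usual KBANN convention: weight w for positive antecedents, -w for negated ones, and a bias set so the unit fires exactly when its rule fires; a predicate with several rules gets an OR unit. The value w = 4 and the omission of the added low-weight links are simplifying assumptions.

```python
# Sketch of KBANN's rules-to-network translation for:
#   A <- B, C.   B <- not H.   B <- not F, G.   C <- I, J.
W = 4.0

def rule_to_unit(pos_antecedents, neg_antecedents):
    weights = {a: W for a in pos_antecedents}
    weights.update({a: -W for a in neg_antecedents})
    # Fires iff all positive antecedents are on and all negated ones off:
    bias = -(len(pos_antecedents) - 0.5) * W
    return weights, bias

def or_unit(inputs):
    # Fires iff at least one disjunct fires.
    return {a: W for a in inputs}, -0.5 * W

def activate(weights, bias, values):
    return sum(w * values[a] for a, w in weights.items()) + bias > 0

b1 = rule_to_unit([], ["H"])        # B <- not H
b2 = rule_to_unit(["G"], ["F"])     # B <- not F, G
c  = rule_to_unit(["I", "J"], [])   # C <- I, J
b  = or_unit(["b1", "b2"])          # B is the disjunction of b1, b2
a  = rule_to_unit(["B", "C"], [])   # A <- B, C

v = {"F": 0, "G": 1, "H": 1, "I": 1, "J": 1}
v["b1"] = int(activate(*b1, v))     # H is on, so this rule does not fire
v["b2"] = int(activate(*b2, v))     # G on and F off: fires
v["B"] = int(activate(*b, v))
v["C"] = int(activate(*c, v))
print(int(activate(*a, v)))         # 1: A is provable, as in the logic
```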
Network to Rules Translator (N of M rules)

Z ← 2 of {A, C, F}

[Diagram: a trained unit with bias 10.9 and incoming weights of about 6.1 on A, C, F and about 1.1 on B, D, E, G is simplified by clustering the weights; since 6.1 × 2 > 10.9, the unit is rewritten as the N-of-M rule Z ← 2 of {A, C, F}.]
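Evaluating an extracted N-of-M rule is a simple count: the rule fires when at least N of its M antecedents hold, which is why a unit with two roughly equal large weights (~6.1 each against a bias of 10.9) reduces to "2 of {A, C, F}".

```python
# Sketch of evaluating an extracted N-of-M rule such as Z <- 2 of {A, C, F}.
def n_of_m(n, antecedents, values):
    return sum(values[a] for a in antecedents) >= n

print(n_of_m(2, ["A", "C", "F"], {"A": 1, "C": 0, "F": 1}))  # True
print(n_of_m(2, ["A", "C", "F"], {"A": 1, "C": 0, "F": 0}))  # False
```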
Error Rates

[Bar charts: error rates (%) on the training and testing sets for the KBANN network, the N-of-M rules, and EITHER. Promoter Domain: training-set errors range from 0.0 to 3.8, testing-set errors up to 12.0; Splice-Junction Domain: training-set errors 0.0 to 2.3, testing-set errors 7.2 to 7.4. Error rate of the initial KB: 50% in the Promoter Domain, 39% in the Splice-Junction Domain.]
3. SUMMARY, CURRENT TRENDS AND FRONTIER RESEARCH
• Summary of application domains
• Issues in selecting a multistrategy learning method
• Current trends in multistrategy learning
• Areas of frontier research
SUMMARY OF APPLICATION DOMAINS
• Classification: DNA concepts (EITHER, KBANN); texture recognition (AQ-GA, GA-AQ)
• Diagnosis: mechanical trouble-shooting (ENIGMA); plant pathology (EITHER)
• Manufacturing: loudspeakers (DISCIPLE1)
• Planning: high-level robot planning (PRODIGY)
• Prediction: economic sanctions (OCCAM)
• Scheduling: machine-shop scheduling (PRODIGY)
• Critiquing: military courses of action (Disciple-COA)
• Identification: military center of gravity (Disciple-COG)
SOME ISSUES IN SELECTING A MULTISTRATEGY LEARNING METHOD
• Learning problem: concept learning; theory revision
• Input data: positive examples only; positive and negative examples; noisy examples
• Domain theory: weak; complete; incomplete; partially incorrect
CURRENT TRENDS IN MSL
• Comparisons of learning strategies
• New ways of integrating learning strategies
• Dealing with incomplete or noisy examples
• General frameworks for MSL
• Integration of MSL and knowledge acquisition
• Integration of MSL and problem solving
• Applications of MSL systems
FRONTIER RESEARCH
• Synergistic integration of a wide range of learning strategies
• The representation and use of learning goals in MSL systems
• Evaluation of the certainty of the learned knowledge
• Investigation of human learning as MSL
• More comprehensive theories of learning
4. REFERENCES
• Proceedings of the International Conferences on Machine Learning, ICML-87, …, ICML-02, Morgan Kaufmann, San Mateo, 1987-2002.
• Proceedings of the International Workshops on Multistrategy Learning, MSL-91, MSL-93, MSL-96, MSL-98.
• Special Issue on Multistrategy Learning, Machine Learning Journal, 1993.
• Special Issue on Multistrategy Learning, Informatica, vol. 17, no. 4, 1993.
• Machine Learning: A Multistrategy Approach, Volume IV, Michalski R.S. and Tecuci G. (Eds.), Morgan Kaufmann, San Mateo, 1994.
Bibliography
Bala, J., DeJong, K. and Pachowicz, P., "Multistrategy Learning from Engineering Data by Integrating Inductive Generalization and Genetic Algorithms," in Proc. of the First Int. Workshop on Multistrategy Learning, Nov. 1991. Also in Machine Learning: A Multistrategy Approach, Volume 4, R.S. Michalski and G. Tecuci (Eds.), San Mateo, CA, Morgan Kaufmann, 1994.
Bergadano, F., Giordana, A. and Saitta, L., "Automated Concept Acquisition in Noisy Environments," IEEE Transactions on Pattern Analysis and Machine Intelligence, 10 (4), pp. 555-577, 1988.
Bergadano, F., Giordana, A., Saitta, L., De Marchi, D. and Brancadori, F., "Integrated Learning in a Real Domain," in B.W. Porter and R.J. Mooney (Eds.), Proceedings of the 7th International Machine Learning Conference, Austin, TX, 1990.
Bergadano, F. and Giordana, A., "Guiding Induction with Domain Theories," in Machine Learning: An Artificial Intelligence Approach, Volume 3, Y. Kodratoff and R.S. Michalski (Eds.), San Mateo, CA, Morgan Kaufmann, 1990.
Birnbaum, L. and Collins, G. (Eds.), Machine Learning: Proceedings of the Eighth International Workshop, Chicago, IL, Morgan Kaufmann, 1991.
Bloedorn E., Wnek, J. and Michalski, R.S. “Multistrategy Constructive Induction,” in Proceedings of the Second International Workshop on Multistrategy Learning, R.S. Michalski and G. Tecuci (Eds.), Center for AI, George Mason University, May, 1993.
Carbonell, J.G., Knoblock, C.A. and Minton, S., "PRODIGY: An Integrated Architecture for Planning and Learning," Research Report, CMU-CS-89-189, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, 1989.
Cohen, W., "The Generality of Overgenerality," in Machine Learning: Proceedings of the Eighth International Workshop, L. Birnbaum and G. Collins (Eds.), Chicago, IL, Morgan Kaufmann, pp. 490-494, 1991.
Danyluk, A.P., "The Use of Explanations for Similarity-Based Learning," Proceedings of the International Joint Conference on Artificial Intelligence, Milan, Italy, Morgan Kaufmann, pp. 274-276, 1987.
DeJong, G. and Mooney, R.J., "Explanation-Based Learning: An Alternative View," Machine Learning, Vol. 1, pp. 145-176, 1986.
Flann, N., and Dietterich, T., "A Study of Explanation-based Methods for Inductive Learning," Machine Learning, Vol. 4, pp. 187-266, 1989.
Genest, J., Matwin, S. and Plante, B., "Explanation-based Learning with Incomplete Theories: A Three-step Approach," in Proceedings of the 7th International Machine Learning Conference, B.W. Porter and R.J. Mooney (Eds.), Austin, TX, Morgan Kaufmann, 1990.
Gordon, D.F., "An Enhancer for Reactive Plans," in Machine Learning: Proceedings of the Eighth International Workshop, L. Birnbaum, and G. Collins (Eds.), Chicago, IL, Morgan Kaufmann, pp. 505-508, 1991
Hieb, M. and Michalski, R.S. “Knowledge Representation for Multistrategy Task-adaptive Learning: Dynamic Interlaced Hierarchies,” in Proceedings of the Second International Workshop on Multistrategy Learning, R.S. Michalski and G. Tecuci (Eds.), Center for AI, George Mason University, May, 1993.
Hirsh, H., "Combining Empirical and Analytical Learning with Version Spaces," in Proc. of the Sixth International Workshop on Machine Learning, A. M. Segre (Ed.), Cornell University, Ithaca, New York, June 26-27, 1989.
Hirsh, H., "Incremental Version-space Merging," in Proceedings of the 7th International Machine Learning Conference, B.W. Porter and R.J. Mooney (Eds.), Austin, TX, 1990.
Kedar-Cabelli, S. T., "Issues and Case Studies in Purpose-Directed Analogy," Ph.D. Thesis, Rutgers University, New Brunswick, NJ, 1987.
Kodratoff, Y. and Michalski, R.S. (Eds.) Machine Learning: An Artificial Intelligence Approach Volume 3, Morgan Kaufmann Publishers, Inc., 1990.
Kodratoff, Y. and Tecuci, G., “DISCIPLE-1: Interactive Apprentice System in Weak Theory Fields,” Proceedings of IJCAI-87, pp. 271-273, Milan, Italy, 1987.
Laird, J.E., (Ed.), Proceedings of the Fifth International Conference on Machine Learning, University of Michigan, Ann Arbor, June 12-14, 1988.
Laird, J.E., Rosenbloom, P.S. and Newell A., “Chunking in SOAR: the Anatomy of a General Learning Mechanism,” Machine Learning, Vol. 1, No. 1, pp. 11-46, 1986.
Lebowitz, M., "Integrated Learning: Controlling Explanation," Cognitive Science, Vol. 10, pp. 219-240, 1986.
Mahadevan, S., "Using Determinations in Explanation-based Learning: A Solution to Incomplete Theory Problem," in Proceedings of the Sixth International Workshop on Machine Learning, A. Segre (Ed.), Ithaca, NY, Morgan Kaufman, 1989.
Michalski, R.S., “Theory and Methodology of Inductive Learning,” in Machine Learning: An Artificial Intelligence Approach, R.S. Michalski, J.G. Carbonell, T.M. Mitchell (Eds.), Tioga Publishing Co. (now Morgan Kaufmann), 1983.
Michalski, R.S., “Inferential Theory of Learning as a Conceptual Framework for Multistrategy Learning,” Machine Learning Journal, Special Issue on Multistrategy Learning, vol.11, no. 2&3, Guest Editor: R.S. Michalski, 1993.
Michalski, R.S., "Inferential Learning Theory: Developing Theoretical Foundations for Multistrategy Learning," in Machine Learning: A Multistrategy Approach Volume 4, R.S. Michalski and G. Tecuci (Eds.), San Mateo, CA, Morgan Kaufmann, 1994.
Michalski, R.S. and Tecuci, G. (Eds.), Proceedings of the First International Workshop on Multistrategy Learning, Harpers Ferry, WV, November 7-9, Center for Artificial Intelligence, George Mason University, 1991.
Michalski, R.S. and Tecuci, G. (Eds.), Proceedings of the Second International Workshop on Multistrategy Learning, Harpers Ferry, WV, May 26-29, Center for Artificial Intelligence, George Mason University, 1993.
Michalski, R.S. and Tecuci, G. (Eds.), Machine Learning: A Multistrategy Approach Volume 4, Morgan Kaufmann Publishers, San Mateo, CA, 1993.
Michalski, R.S. and Tecuci, G., "Multistrategy Learning," in Encyclopedia of Microcomputers, Vol. 12, Marcel Dekker, New York, 1993.
Mitchell, T.M., Keller, R. and Kedar-Cabelli, S., "Explanation-Based Generalization: A Unifying View," Machine Learning, Vol. 1, pp. 47-80, 1986.
Mitchell, T.M., "Version Spaces: an Approach to Concept Learning," Doctoral Dissertation, Stanford University, 1978.
Mooney, R.J. and Ourston, D., "Induction Over Unexplained: Integrated Learning of Concepts with Both Explainable and Conventional Aspects," in Proc. of the Sixth International Workshop on Machine Learning, A.M. Segre (Ed.), Cornell University, Ithaca, New York, June 26-27, 1989.
Mooney, R.J. and Ourston, D., "A Multistrategy Approach to Theory Refinement," in Proceedings of the First International Workshop on Multistrategy Learning, Center for AI, George Mason University, November, 1991. Also in Machine Learning: A Multistrategy Approach Volume 4, R.S. Michalski and G. Tecuci (Eds.), San Mateo, CA, Morgan Kaufmann, 1994.
Morik, K., "Balanced Cooperative Modeling," in Machine Learning: A Multistrategy Approach Volume 4, R.S. Michalski and G. Tecuci (Eds.), San Mateo, CA, Morgan Kaufmann, 1994.
Pazzani, M.J., "Integrating Explanation-Based and Empirical Learning Methods in OCCAM," Proceedings of the Third European Working Session on Learning, Glasgow, Scotland, pp. 147-166, 1988.
Pazzani, M.J., Creating a Memory of Causal Relationships: an Integration of Empirical and Explanation-Based Learning Methods, Lawrence Erlbaum, New Jersey, 1990.
Porter, B.W. and Mooney, R.J. (Eds.), Proc. of the 7th International Machine Learning Conference, Austin, TX, 1990.
De Raedt, L. and Bruynooghe, M., "CLINT: a Multistrategy Interactive Concept Learner and Theory Revision System," in Proceedings of the First International Workshop on Multistrategy Learning, R.S. Michalski and G. Tecuci (Eds.), Center for AI, George Mason University, November, 1991.
Ram A. and Cox M., "Introspective Reasoning using Meta-explanations for Multistrategy Learning," in Machine Learning: A Multistrategy Approach Volume 4, R.S. Michalski and G. Tecuci (Eds.), San Mateo, CA, Morgan Kaufmann, 1994.
Russell, S., The Use of Knowledge in Analogy and Induction, Morgan Kaufmann Publishers, San Mateo, CA, 1989.
Saitta, L. and Botta, M., "Multistrategy Learning and Theory Revision," Machine Learning, Vol. 11, No. 2-3, Special Issue on Multistrategy Learning, R.S. Michalski (Guest Ed.), 1993.
Segen, J., "GEST: A Learning Computer Vision System that Recognizes Hand Gestures," in Machine Learning: A Multistrategy Approach Volume 4, R.S. Michalski and G. Tecuci (Eds.), San Mateo, CA, Morgan Kaufmann, 1994.
Segre, A.M. (Ed.), Proceedings of the Sixth International Workshop on Machine Learning, Cornell University, Ithaca, New York, June 26-27, 1989.
Shavlik, J.W. and Towell, G.G., "An Approach to Combining Explanation-based and Neural Learning Algorithms," in Readings in Machine Learning, J.W. Shavlik and T. Dietterich (Eds.), Morgan Kaufmann, 1990.
Sleeman, D. and Edwards, P. (Eds.), Proceedings of the Ninth International Workshop on Machine Learning (ML92), Aberdeen, Scotland, July 1-3, Morgan Kaufmann Publishers, 1992.
Tecuci, G., DISCIPLE: A Theory, Methodology, and System for Learning Expert Knowledge. Ph.D. Thesis, University of Paris-South, 1988.
Tecuci, G., "An Inference-Based Framework for Multistrategy Learning," in Machine Learning: A Multistrategy Approach Volume 4, R.S. Michalski and G. Tecuci (Eds.), San Mateo, CA, Morgan Kaufmann, 1994.
Tecuci, G. and Kodratoff, Y., "Apprenticeship Learning in Imperfect Theory Domains," in Machine Learning: An Artificial Intelligence Approach Volume 3, Y. Kodratoff and R.S. Michalski (Eds.), San Mateo, CA, Morgan Kaufmann, 1990.
Tecuci, G. and Michalski, R.S., "A Method for Multistrategy Task-adaptive Learning Based on Plausible Justifications," in Machine Learning: Proceedings of the Eighth International Workshop, L. Birnbaum and G. Collins (Eds.), Chicago, IL, Morgan Kaufmann, pp. 549-553, 1991.
Towell, G.G. and Shavlik, J.W., "Refining Symbolic Knowledge Using Neural Networks," in Proceedings of the First International Workshop on Multistrategy Learning, Center for AI, George Mason University, November, 1991. Also in Machine Learning: A Multistrategy Approach Volume 4, R.S. Michalski and G. Tecuci (Eds.), San Mateo, CA, Morgan Kaufmann, 1994.
Vafaie, H. and DeJong, K., "Improving the Performance of a Rule Induction System Using Genetic Algorithms," in Proceedings of the First International Workshop on Multistrategy Learning, Center for AI, George Mason University, Nov. 1991. Also in Machine Learning: A Multistrategy Approach Volume 4, R.S. Michalski and G. Tecuci (Eds.), San Mateo, CA, Morgan Kaufmann, 1994.
Veloso, M. and Carbonell, J.G., "Automating Case Generation, Storage, and Retrieval in PRODIGY," in Machine Learning: A Multistrategy Approach Volume 4, R.S. Michalski and G. Tecuci (Eds.), San Mateo, CA, Morgan Kaufmann, 1994.
Whitehall, B.L., "Knowledge-based Learning: Integration of Deductive and Inductive Learning for Knowledge Base Completion," Ph.D. Thesis, University of Illinois at Urbana-Champaign, 1990.
Whitehall, B.L. and Lu, S.C-Y., "Theory Completion using Knowledge-Based Learning," in Machine Learning: A Multistrategy Approach Volume 4, R.S. Michalski and G. Tecuci (Eds.), San Mateo, CA, Morgan Kaufmann, 1994.
Widmer, G., "A Tight Integration of Deductive and Inductive Learning," Proceedings of the Sixth International Workshop on Machine Learning, A. Segre (Ed.), Ithaca, NY, Morgan Kaufmann, 1989.
Widmer, G., "Learning with a Qualitative Domain Theory by Means of Plausible Explanations," in Machine Learning: A Multistrategy Approach Volume 4, R.S. Michalski and G. Tecuci (Eds.), San Mateo, CA, Morgan Kaufmann, 1994.
Wilkins, D.C., Clancey, W.J. and Buchanan, B.G., An Overview of the Odysseus Learning Apprentice, Kluwer Academic Press, New York, NY, 1986.
Wilkins, D.C., "Knowledge Base Refinement as Improving an Incorrect and Incomplete Domain Theory," in Machine Learning: An Artificial Intelligence Approach Volume 3, Y. Kodratoff and R.S. Michalski (Eds.), San Mateo, CA, Morgan Kaufmann, 1990.
Wnek, J. and Hieb, M., "Bibliography of Multistrategy Learning Research," in Machine Learning: A Multistrategy Approach Volume 4, R.S. Michalski and G. Tecuci (Eds.), San Mateo, CA, Morgan Kaufmann, 1994.