
LEARNING MODEL FOR CONSTRUCTION OF THE BEST DECISION SEQUENCE USING PRIOR KNOWLEDGE

Thesis Presentation

by Lilit Yenokyan

PROBLEM STATEMENT

The task of this thesis is the development of a learning system capable of learning from prior solution instances of a problem and of developing new solution procedures when a similar problem is introduced to the system.

The system involves multiple decision makers and multiple decision sequences.

PROBLEM STATEMENT (cont.)

Applications
• Medical field
• Product assembly
• Assisting drivers in navigation
• Other tasks presented in terms of the system (decision makers, decision sequences)

Stages of the Approach – Constructing the Knowledge Base

1. The initial approach builds a learning system to document the steps of decision sequences in problem solving, as well as the information/data used for decision making.

2. A Directed Acyclic Graph (DAG) is used to represent these decision sequences, where the nodes in the graph represent the actions taken by the problem solver and the directed edges represent the order of the decisions (steps).

3. In mathematics, a DAG is a directed graph with no directed cycles; that is, for any vertex v, there is no nonempty directed path that starts and ends at v.

Stages of the Approach – Finding the Best Decision Sequence

1. A dynamic programming approach is proposed to quickly examine the various solution alternatives documented in the DAG and develop a new, improved solution for the problem.

2. The notion of the “best” solution depends on the problem and the application, e.g., shortest completion time, shortest distance traveled, least load or cost required, and so on.

Available Learning Models

Artificial Intelligence Agent – the actor in the system.

Multi-agent Systems
• Cooperative
• Concurrent

Supervised
• Correct output is provided to the system.

Unsupervised
• Ability to recognize patterns in streams of input

Reinforcement
• ‘Reward’ for correct decisions, ‘punishment’ for incorrect decisions

Semi-Supervised Learning
• Combination of Supervised and Unsupervised Learning

Available Learning Models (cont.)

Neural Networks
• Inspired by and modeled on brain neurons.
• Variants: feed-forward and recurrent neural networks.

Decision Trees
• Classification learning. Every branch is a choice between a number of alternatives, and each leaf node represents a classification or decision.

Explanation-Based Learning
• Uses prior knowledge to explain an example, so that a single example is sufficient for learning.

Relevance-Based Learning
• Generalizes the information from prior knowledge and uses hypotheses to create new learning examples.

Differences and similarities

Our approach includes elements of Reinforcement Learning, Semi-Supervised Learning, and Decision Trees.

Methods used: Graph Theory, Dynamic Programming.

Notations

Types of nodes

Regular Nodes
• Basic entities – nodes with no incoming edges.
• Intermediate entities – nodes with incoming and outgoing edges.
• Final entities – nodes with no outgoing edges.

NodeSets
• Nodes with references to regular nodes.

A knowledge base presented as a Directed Acyclic Graph with two distinct types of nodes, NodeSets and Regular Nodes, is called a Hypertree.

Introducing concept of NodeSet

The decision that results from merging nodes i1 and i2 with attribute x is the outcome A.

(Figure: two separate decision diagrams, {i1, i2} → A with attribute x and {i3, i4} → A with attribute y.)

The decision that results from merging nodes i3 and i4 with attribute y is also the outcome A.

Introducing concept of NodeSet (cont.)

NodeSet notation used to illustrate this case:

$A_1 : \{i_1, i_2\} \xrightarrow{\{x\}} A \qquad\qquad A_2 : \{i_3, i_4\} \xrightarrow{\{y\}} A$

(Figure: the two decisions merged into a single Hypertree fragment, with NodeSets A1(x) and A2(y) both pointing to node A.)

Introducing concept of NodeSet (cont.)

Example – Two Decision sequences

First Decision Sequence

i1, i2 → A (1);  A, i3, i4 → B (3);  B, i7 → Solution1 (2)

DECISION SEQUENCE 1

Second Decision Sequence

i1, i5 → B (2);  B, i6 → C (1);  C, i7 → Solution2 (4)

DECISION SEQUENCE 2

Example – Two Decision sequences
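To make the example concrete, here is a minimal sketch of how these two decision sequences could be encoded as input data; the tuple format and the variable names are illustrative assumptions, not the thesis’ actual representation.

```python
# Illustrative encoding (assumption): each decision is a tuple
# (input entities, resulting entity, attribute value).
decision_sequence_1 = [
    (["i1", "i2"], "A", 1),
    (["A", "i3", "i4"], "B", 3),
    (["B", "i7"], "Solution1", 2),
]

decision_sequence_2 = [
    (["i1", "i5"], "B", 2),
    (["B", "i6"], "C", 1),
    (["C", "i7"], "Solution2", 4),
]
```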

Theory and Algorithms

Knowledge Base Construction Algorithm

Decision made by decision maker N, where A is the resulting decision, {e1, …, en} are the decisions at the input that produce A, and {a} is the attribute value of the decision:

$D_{N} : \{e_1, \dots, e_n\} \rightarrow A,\ \{a\}$

Decision sequence of decision maker N:

$DS_N = [\, D_{N,\{e_c \dots e_n\} \rightarrow A}\{a\},\ D_{N,\{\dots\} \rightarrow B}\{b\},\ \dots,\ D_{N,\{e_g \dots e_h\} \rightarrow M}\{m\} \,]$

A NodeSet in the Hypertree, where A is the resulting intermediate or final entity, {e1, …, en} are the basic or intermediate entities at the input that produce A, and {a} is the attribute value of the NodeSet:

$\{e_1, \dots, e_n\} \xrightarrow{\{a\}} A$

Knowledge Base Construction Algorithm

Input: Set of decision sequences $DS_1, DS_2, \dots, DS_S$, each consisting of decisions $D_m : \{e_1, \dots, e_n\} \rightarrow A,\ \{a\}$

Output: Hypertree of knowledge base H_tree

Step 1: Initialize H_tree = Ø

Step 2: For each decision sequence DS_m where m = 1, 2, …, S
  For each decision in decision sequence DS_m do: go to Step 3
  Endfor
Endfor
Return H_tree

Step 3: For each entity e_i in {e_1, …, e_n} do:
  If e_i is not in H_tree then add new node e_i to H_tree
Endfor
If A is not in H_tree then add new node A to H_tree
Add new NodeSet $\{e_1, \dots, e_n\} \xrightarrow{\{a\}} A$ to H_tree, with references to the resulting node {A}, the entity nodes {e_1, …, e_n}, and the attribute value {a}.
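A minimal Python sketch of the construction algorithm above, assuming the illustrative decision encoding from the earlier example; the class and field names are my own, not the thesis’.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class NodeSet:
    inputs: tuple      # entity nodes {e1, ..., en}
    result: str        # resulting entity node A
    attribute: float   # attribute value {a}

@dataclass
class Hypertree:
    nodes: set = field(default_factory=set)        # regular nodes
    nodesets: list = field(default_factory=list)   # NodeSets

def build_hypertree(decision_sequences):
    h_tree = Hypertree()                          # Step 1: H_tree = empty
    for ds in decision_sequences:                 # Step 2: each decision sequence DS_m
        for inputs, result, attribute in ds:      # each decision in DS_m
            for e in inputs:                      # Step 3: add missing entity nodes
                h_tree.nodes.add(e)
            h_tree.nodes.add(result)              # add the resulting node if missing
            ns = NodeSet(tuple(inputs), result, attribute)
            if ns not in h_tree.nodesets:         # add the NodeSet with its references
                h_tree.nodesets.append(ns)
    return h_tree
```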

Example – Constructing Hypertree

Hypertree built based on the two decision sequences

Example –KBC (Step 1)

First decision of the blue decision sequence

Example – KBC (Step 2)

Second decision of blue decision sequence

Example – KBC (Step 3)

Final decision of blue decision sequence

Example – KBC (Step 4)

First decision of the green decision sequence

Example – KBC (Step 5)

Second decision of green decision sequence

Example – KBC (Step 6)

Final decision of the green decision sequence

Example – Constructing Hypertree

Constructing Knowledge Base – Complete

Best Decision Sequence

The Best Decision Sequence of node A is defined as:

$BDS(A) = \min_{ns_i,\ i = 1, \dots, N} \Big( a_i + \sum_{j = 1, \dots, m} BDS(e_j) \Big), \quad e_j \in ns_i$

where: BDS is the Best Decision Sequence, N is the number of NodeSets for the entity node A, NodeSet $ns_i$ is the i-th NodeSet of node A, $a_i$ is the attribute value of NodeSet $ns_i$, $\{e_j\}$ is the j-th entity node in NodeSet $ns_i$ resulting in A, and m is the number of entity nodes in NodeSet $ns_i$ that result in A.
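For instance, applying this recurrence to node A from the NodeSet example above (NodeSet A1 with attribute x over {i1, i2} and NodeSet A2 with attribute y over {i3, i4}), and taking the BDS value of a basic entity to be 0, as in the worked example that follows, gives:

$BDS(A) = \min\big(\, x + BDS(i_1) + BDS(i_2),\ \ y + BDS(i_3) + BDS(i_4) \,\big) = \min(x,\, y)$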

Algorithm 2 – Construction of Best Decision Sequences for a List of Outcomes

Input: Set of final outcomes to examine: $Out\_set = [\, O_{1,\{ns_i \dots ns_j\}},\ O_{2,\{\dots\}},\ \dots,\ O_{m,\{ns_k \dots ns_h\}} \,]$

Output: Best decision sequence for each final outcome in Out_set

Step 1: Initialize the array of solution paths: Des_array[1,..,m] = Ø

Step 2: Calculate BDS for all the solution sequences
  For each outcome $O_{n,\{ns_k \dots ns_j\}}$ where n = 1, 2, …, m
    Calculate and store $Des\_array[n] = BDS(O_{n,\{ns_k \dots ns_j\}})$
  End For

Step 3: Return Des_array[1,..,m]

Algorithm 3 – Best Decision Sequence

Input: Node A

Output: Best Decision Sequence

Step 1: If BDS(A) has already been calculated then return BDS(A)

Step 2: Initialize: current_value = +∞, branch_value = +∞

Step 3: For each NodeSet of node A do
  Set branch_value = {a} + $\sum_{i = 1, \dots, n} BDS(e_i)$
  Set current_value = min(current_value, branch_value)
  Record the index k of the NodeSet (and its entity nodes {e_k}) which minimized current_value
Endfor

Step 4: Return BDS(A)
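A minimal Python sketch of Algorithms 2 and 3, reusing the assumed Hypertree representation from the earlier sketch. It returns only the minimum BDS values (recovering the actual sequences would additionally record the minimizing NodeSet, as Step 3 does), and basic entities are assumed to contribute 0, consistent with the worked example below.

```python
def best_decision_sequence(h_tree, node, memo=None):
    """Algorithm 3 (sketch): minimum BDS value of `node` in the Hypertree."""
    if memo is None:
        memo = {}
    if node in memo:                      # Step 1: already calculated
        return memo[node]
    incoming = [ns for ns in h_tree.nodesets if ns.result == node]
    if not incoming:                      # basic entity: assumed value 0
        memo[node] = 0
        return 0
    current_value = float("inf")          # Step 2: initialize
    for ns in incoming:                   # Step 3: each NodeSet of node
        branch_value = ns.attribute + sum(
            best_decision_sequence(h_tree, e, memo) for e in ns.inputs)
        current_value = min(current_value, branch_value)
    memo[node] = current_value            # Step 4: return BDS(node)
    return current_value

def best_decision_sequences(h_tree, out_set):
    """Algorithm 2 (sketch): BDS value for every final outcome in out_set."""
    memo = {}
    return {outcome: best_decision_sequence(h_tree, outcome, memo)
            for outcome in out_set}
```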

Example – Illustration of Algorithm 2 & 3

(Figure: example Hypertree with basic entities i1–i7; intermediate entities A, B, C; final entities Solution 1, Solution 2, Solution 3; and NodeSets A1(1), B1(3), B2(2), C1(1), S1(2), P1(4), K1(14).)

Given the Hypertree from the previous example, extended with a new decision sequence: i6, i7 → Solution 3 (14).

Construction of Best Decision Sequences (Algorithm 2) is called with {Solution 1, Solution 2, Solution 3} on its input.
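Using the sketches above on this extended example (again, illustrative code, not the thesis’ software) reproduces the values derived on the following slides:

```python
sequences = [
    decision_sequence_1,
    decision_sequence_2,
    [(["i6", "i7"], "Solution3", 14)],   # the new decision sequence
]
h_tree = build_hypertree(sequences)
print(best_decision_sequences(h_tree, ["Solution1", "Solution2", "Solution3"]))
# {'Solution1': 4, 'Solution2': 7, 'Solution3': 14}
```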

Example – BDS Solution 3

$BDS(\text{Solution 3}) = \min\{\, a_{K1} + i_6 + i_7 \,\} = \min\{14\} = 14$

(Figure: the Hypertree annotated with BDS(Solution 3) = 14.)

Example – BDS Solution 1


Example – BDS Solution 1 (Step 1)

$BDS(\text{Solution 1}) = \min\{\, a_{S1} + BDS(B) + i_7 \,\}$

Example – BDS Solution 1 (Step 2)

$BDS(\text{Solution 1}) = \min\{\, a_{S1} + BDS(B) + i_7 \,\}$
$= \min\big[\, a_{S1} + a_{B1} + BDS(A) + i_3 + i_4 + i_7,\ \ a_{S1} + a_{B2} + i_1 + i_5 + i_7 \,\big]$

Example – BDS Solution 1 (Step 3)

$BDS(\text{Solution 1}) = \min\{\, a_{S1} + BDS(B) + i_7 \,\}$
$= \min\big[\, a_{S1} + a_{B1} + BDS(A) + i_3 + i_4 + i_7,\ \ a_{S1} + a_{B2} + i_1 + i_5 + i_7 \,\big]$
$= \min\big[\, a_{S1} + a_{B1} + a_{A1} + i_1 + i_2 + i_3 + i_4 + i_7,\ \ a_{S1} + a_{B2} + i_1 + i_5 + i_7 \,\big]$


Example – BDS Solution 1 (Final)

$BDS(\text{Solution 1}) = \min\big[\, a_{S1} + a_{B1} + a_{A1} + i_1 + i_2 + i_3 + i_4 + i_7,\ \ a_{S1} + a_{B2} + i_1 + i_5 + i_7 \,\big]$
$= \min\big[\, 2 + 3 + 1,\ \ 2 + 2 \,\big] = \min\big[\, 6,\ 4 \,\big] = 4$ (basic entities contribute 0)

Example – BDS Solution 2

Example – BDS Solution 2 (Step 1)

$BDS(\text{Solution 2}) = \min\{\, a_{P1} + BDS(C) + i_7 \,\}$

Example – BDS Solution 2 (Step 2)

$BDS(\text{Solution 2}) = \min\{\, a_{P1} + BDS(C) + i_7 \,\} = \min\big[\, a_{P1} + a_{C1} + BDS(B) + i_6 + i_7 \,\big]$

Example – BDS Solution 2 (Step 3)

$BDS(\text{Solution 2}) = \min\{\, a_{P1} + BDS(C) + i_7 \,\}$
$= \min\big[\, a_{P1} + a_{C1} + BDS(B) + i_6 + i_7 \,\big]$
$= \min\big[\, a_{P1} + a_{C1} + a_{B1} + BDS(A) + i_3 + i_4 + i_6 + i_7,\ \ a_{P1} + a_{C1} + a_{B2} + i_1 + i_5 + i_6 + i_7 \,\big]$

Example – BDS Solution 2 (Step 4)

$BDS(\text{Solution 2}) = \min\{\, a_{P1} + BDS(C) + i_7 \,\}$
$= \min\big[\, a_{P1} + a_{C1} + BDS(B) + i_6 + i_7 \,\big]$
$= \min\big[\, a_{P1} + a_{C1} + a_{B1} + BDS(A) + i_3 + i_4 + i_6 + i_7,\ \ a_{P1} + a_{C1} + a_{B2} + i_1 + i_5 + i_6 + i_7 \,\big]$
$= \min\big[\, a_{P1} + a_{C1} + a_{B1} + a_{A1} + i_1 + i_2 + i_3 + i_4 + i_6 + i_7,\ \ a_{P1} + a_{C1} + a_{B2} + i_1 + i_5 + i_6 + i_7 \,\big]$

Example – BDS Solution 2 (Final)

$BDS(\text{Solution 2}) = \min\big[\, a_{P1} + a_{C1} + a_{B1} + a_{A1} + i_1 + i_2 + i_3 + i_4 + i_6 + i_7,\ \ a_{P1} + a_{C1} + a_{B2} + i_1 + i_5 + i_6 + i_7 \,\big]$
$= \min\big[\, 4 + 1 + 3 + 1,\ \ 4 + 1 + 2 \,\big] = \min\big[\, 9,\ 7 \,\big] = 7$

Applications and Examples

Medical Application

Input - Various sequences of tests prescribed by doctors for patients with similar symptoms and their final diagnoses

Goal - Identify the best sequence of tests for developing an accurate diagnosis of patients with certain symptoms. The sequence is based on specified criteria.

Criteria - Cost, accuracy, lead time to diagnose

Product Assembly Application

Input – Various feasible assembly sequences are captured in a knowledge base.

Goal – Construct an assembly sequence, based on the sequences given at the input, that is optimal with respect to certain criteria.

Criteria – Assembly time, cost, number of setups

Assisting Drivers Finding Best Driving Directions Application

Input - various driving routes between the origin and destination.

Goal - examine those routes and produce a new path that is optimal with respect to specific criteria

Criteria - time, distance traveled, cost of gas

Medical Application

The study involves the medical history of a group of patients with similar initial symptoms and complaints.

It is possible that during the examination of a patient, several doctors prescribe different tests but arrive at the same conclusion.

Medical Application

Consider the diagnostic history of patients with similar symptoms. The history of symptoms and diagnoses is documented for the patients, and later the patients are divided by a medical professional into groups of people who had similar symptoms.

During the examination of a patient from a group, every test prescribed by the doctor is recorded; the doctor also supplies to the system all the possible conclusions or intermediate decisions he/she makes based on the results of each test.

Medical Application

The operation is performing the test.

Initial entities in the knowledge base model are medical tests

Intermediate entities are the decisions of doctors based on initial tests

Final entities are final diagnoses.

Medical Application

Construction of knowledge base for tests prescribed for patients with similar symptoms

Medical Application

Construction of Best Decision Sequence for Diagnosis 1 and Diagnosis 2

Medical Application

Diagnosis 1 - The value (cost) of checking a patient against Diagnosis 1 is 11, versus initial costs of 15 and 13. This is an improvement of about 15% over the lower initial cost ((13 − 11)/13 ≈ 15%).

Diagnosis 2 - The value for checking a patient against Diagnosis 2 is the same as it was initially, 15.

Medical Application

The system is intended to assist the medical professional, not to replace him or her.

If, during patient examination, the doctor decides to examine a diagnosis that is not in the knowledge base, the learning system records the new diagnosis and the tests prescribed by the doctor for checking it.

At any step, if the test results do not satisfy the condition and diagnosis k should no longer be considered, the sequence of tests stops, as a negative decision about diagnosis k has been reached.

Assembly Application

Given all the necessary mechanical parts of the desk, a number of professionals are assigned to independently assemble it.

The professionals perform the assembly in various orders, requiring various amounts of time. The assumption is that each professional has the same level of competency and that the time required to perform a certain task is the same for all of them.

The time of each action performed is available, i.e., the system tracks the time it takes a professional to perform each step of the assembly.

Assembly Application

An operation in this application is joining several parts of the product together.

Basic entities in the knowledge base are the parts of the final product.

Intermediate entities are subassemblies obtained by a sequence of operations.

The final entity is the assembled product; in this assembly example, the desk is the final entity.

Assembly Application

Assembly Application

RED LINE

1) A, B → AB (2)
2) C, AB → ABC (2)
3) D, ABC → ABCD (4)
4) K, F → KF (3)
5) G, KF → KFG (6)
6) H, GKF → KFGH (6)
7) L, E → EL (5)
8) EL, KFGH → Drawer (5)
9) M, ABCD → ABCDM (7)
10) N, ABCDM → Table (7)
11) Table, Drawer → Desk (6)

Total: 53 minutes

Assembly Application

GREEN LINE

1) A, M → AM (2)
2) C, N → CN (2)
3) AM, B → ABM (2)
4) CN, ABM → ABCNM (2)
5) D, ABCNM → Table (5)
6) F, K → KF (3)
7) H, FK → KFH (3)
8) G, KFH → KFGH (5)
9) E, KFGH → Drawer_no_handle (2)
10) Drawer_no_handle, Table → Desk_no_handle (6)
11) L, Desk_no_handle → Desk (14)

Total: 47 minutes

Assembly Application

BLUE LINE

1) A, M → AM (2)
2) C, N → CN (2)
3) D, AM → ADM (2)
4) ADM, CN → ACDNM (3)
5) B, ACDNM → Table (1)
6) F, K → KF (3)
7) G, KF → KFG (6)
8) H, KFG → KFGH (6)
9) L, E → EL (5)
10) Table, KFGH → Desk_no_front (4)
11) Desk_no_front, EL → Desk (11)

Total: 45 minutes

Assembly Application

Assembly Application

Combined best sequence (the color indicates which input sequence each step comes from):

1) A, M → AM (2) [blue]
2) C, N → CN (2) [blue]
3) D, AM → ADM (2) [blue]
4) ADM, CN → ACDNM (3) [blue]
5) B, ACDNM → Table (1) [blue]
6) F, K → KF (3) [green]
7) H, FK → KFH (3) [green]
8) G, KFH → KFGH (5) [green]
9) L, E → EL (5) [red]
10) EL, KFGH → Drawer (5) [red]
11) Table, Drawer → Desk (6) [red]

Total: 37 minutes
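As a quick sanity check, the reported totals follow directly from summing the per-step times; a minimal sketch with the values transcribed from the red, blue, and combined sequences above:

```python
red      = [2, 2, 4, 3, 6, 6, 5, 5, 7, 7, 6]     # red line
blue     = [2, 2, 2, 3, 1, 3, 6, 6, 5, 4, 11]    # blue line
combined = [2, 2, 2, 3, 1, 3, 3, 5, 5, 5, 6]     # combined best sequence

for name, steps in [("red", red), ("blue", blue), ("combined", combined)]:
    print(name, sum(steps), "minutes")           # 53, 45, 37
```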

Deadlock Elimination

Software Screenshot (Desk Assembly)

Research at the UofM Dearborn

Mirror symmetric reconstruction and Matching

1. Right leg 2. Left leg missing patella

3. Mirror reconstructed left leg 4. Generated mesh for missing sections 5. Zoomed view

1. Right ribs 2. Left ribs 5th rib broken

3. Mirror reconstructed left ribs 4. Generated mesh for missing rib 5. Generated solid

Mirror symmetric reconstruction and Matching

Partial Symmetry Reconstruction

1. Right hand damaged middle finger 2. Damaged area intersection with box 3. Middle finger separate view in box

4. Candidate reconstruction area 5. Damaged bone view 6. Reconstructed finger bone

Density Analysis – Damage Area Recognition

1. Ulna bone 2. Ulna vertex mapping 3. Ulna edges mapping

4. Left ribs 5th rib broken 5. Left ribs vertex mapping 6. Left ribs edges mapping

Thank you!

Questions and Answers