Notes on Inquiring Systems, generated from discussions between
E. A. Feigenbaum, Richard Watson and C. W. Churchman
C. W. Churchman, University of California, Berkeley
9-26-63
Imagine an "object model" that can create outputs as a result of various
"queries" and other types of inputs. For a subclass of inputs, the object
model will respond with "success."
Next, think of a device—call it a problem solver for the moment—that
queries the object model with the aim of generating a "success" output. It
is interested in doing this under various constraints—e.g., the conditions
of a theorem.
Consider now inside the device a "soft model," just like the object model
in kind, but much easier to query or to obtain "success" from.
To see how this soft model is used, suppose the problem solver to be
capable of setting itself a set of new problems (a problem is finding any
constrained set of inputs that produce "success") such that if the new
problems all reach "success," then a string of the final inputs of these
problems will produce "success" for the original problem. The soft model
is used to search for a suitable set of "new problems." We call it (the soft
model) inferentially rich if it generates correct sets of new problems.
Finally, consider an observer model of this system—that queries whether
the soft model is correct—but also may query whether the object model is
correct.
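In modern terms, the arrangement just described can be sketched as a short program. Everything here (the numeric queries, the sum-to-ten "success" condition, the two-sub-problem decomposition) is an invented toy for illustration, not anything from the notes.

```python
# Toy sketch of the object-model / soft-model arrangement described above.
# The query format and the "success" condition are invented assumptions.

def object_model(query):
    """Expensive oracle: answers "success" for a subclass of inputs."""
    return "success" if sum(query) == 10 else "failure"

def soft_model(query):
    """Same in kind as the object model, but much cheaper to query."""
    return "success" if sum(query) == 10 else "failure"

def solver(allowed_inputs):
    """Search the soft model for a pair of sub-problem inputs whose
    string produces "success", then confirm on the object model."""
    for a in allowed_inputs:
        for b in allowed_inputs:
            if soft_model((a, b)) == "success":   # cheap search
                return object_model((a, b))       # confirm for real
    return "failure"

print(solver(range(11)))  # prints "success"
```

An "inferentially rich" soft model is one whose proposed sub-problems really do combine into a solution of the original problem, as the confirmation step checks here.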
Contrast of Problem Solvers and Inquiring Systems in terms of ranking
questions vs. asking questions in further depth.
10-23-63
Now call "O" in the notes of 9-26-63 an "Executive"—his resources are
different methodological schemes (including languages) that the "PS"—call
him now the inquiring unit—can use. To illustrate, suppose the inquiring
system is to determine the rules of checkers. At the disposal of the
Executive are two schemes: a simple inducer and the complete rules of chess.
The Executive instructs the inquirer to proceed as though checkers were the
same as chess, and as each piece of information is provided, to modify the
rules according to a given scheme. The information is the output of a
question the inquirer poses. The "object model" will be called the "informant"
in what follows.
The simple inducer behaves as follows. Any n "events" of the
same kind, with no counter instances, lead to an "hypothesis." (The Executive
controls n.) The hypothesis is to be accepted if it does not contradict
anything in the "chess-rule routine"—at a given stage. Otherwise the
hypothesis is stored, and all stored hypotheses are checked with each new
piece of evidence, and all are recalled and rechecked with each change of the
rules of the chess-rule routine. The chess-rule routine knows only the
language of chess, though (perhaps) we should let it modify the language
somewhat by (say) talking about "bishop-like" pieces. The simple inducer
knows a way of identifying objects and can talk about "this-there-thing" as
opposed to "that-there-thing," but it may be wrong in the distinctions it
makes (it codes objects as 1, 2, 3, etc.). Anyway, the illustration proceeds
as follows (this is not a program, of course, just a dialogue):
E (Executive): Try the chess-rule routine.
Inq [consults the rules of chess and begins with the board; asks of Inf
(Informant)]: Is the board the same?
NB: This question assumes the Informant knows chess—but if not—a
translation routine into geometrical language could be used.
Inf: Yes.
[Inq stores Rule #1: Checkers is played on a chess board.]
E: Continue to use chess-rule routine.
Inq: Are there five kinds of pieces?
Inf: No.
E: Use the simple inducer routine with n = 1.
Inq: Describe the pieces.
Inf: There are twelve black pieces and twelve red pieces, all alike
in appearance.
Simple inducer routine proposes two hypotheses:
1. All pieces are alike.
2. Twelve pieces on each side at beginning.
These contradict the original rules of chess, but the rule on the pieces
has been cancelled by the Inf's response, so the hypotheses can be made into
Rule 2. Both sides have twelve pieces.
Rule 3. All pieces are alike at the beginning.
E: Go back to the chess-rule routine.
Inq: How are the pieces placed?
Inf: Piece #1 on black square, piece #2, etc.
Inq: (Rejects Rule 3).
E: Use the S.I.
Inq does, and generates hypotheses which now become:
Rule 4. Pieces are placed initially on black squares, etc.
E: Return to chess-rule.
Etc.
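The simple inducer routine acted out in the dialogue might be programmed roughly as follows. The rule representation and the toy contradiction test are assumptions made for illustration, not anything specified in the notes.

```python
# Illustrative sketch of the "simple inducer": n events of the same kind
# with no counter-instances yield a hypothesis; hypotheses contradicting
# the current rules are stored and rechecked when the rules change.

class SimpleInducer:
    def __init__(self, n=1):
        self.n = n                 # the Executive controls n
        self.counts = {}           # confirming observations per candidate
        self.counters = set()      # candidates with counter-instances
        self.stored = []           # hypotheses that contradicted the rules

    def observe(self, event, rules):
        """Returns an accepted hypothesis, or None."""
        self.counts[event] = self.counts.get(event, 0) + 1
        if event in self.counters or self.counts[event] < self.n:
            return None
        if self.contradicts(event, rules):
            self.stored.append(event)   # store; recheck on rule changes
            return None
        return event                    # accepted as a new rule

    def counter_instance(self, event):
        self.counters.add(event)

    def recheck(self, rules):
        """All stored hypotheses are recalled with each change of the rules."""
        accepted = [h for h in self.stored if not self.contradicts(h, rules)]
        self.stored = [h for h in self.stored if self.contradicts(h, rules)]
        return accepted

    @staticmethod
    def contradicts(hypothesis, rules):
        # Toy test: a hypothesis contradicts a rule that explicitly negates it.
        return ("not " + hypothesis) in rules

si = SimpleInducer(n=1)
print(si.observe("twelve pieces per side", rules={"not twelve pieces per side"}))
print(si.recheck(rules=set()))  # hypothesis accepted once the rule is cancelled
```

This mirrors the dialogue: the "twelve pieces" hypothesis is blocked while the chess rule on pieces stands, then becomes a rule once the Informant's answer cancels it.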
Finally Inq ends up with something like the following:
Checkers is a game played only with black bishops, initially placed
on all the black squares except the two center rows. The bishops can only
move one square at a time forward until they "queen," when they can move
one square in any free direction; the exception occurs when they are adjacent
to an enemy piece with an empty square behind, in which case they can move
two squares and the enemy piece is removed, and can continue this process in
any broken or straight line on the same move. The object of the game is to
(the game is over when) remove all enemy pieces. A stalemate is a win for
the stalemated player.
Notes: The SI should permit generalizations even for counter instances
(n ≥ 1, m ≥ 0; n = confirming, m = non-confirming). E adjusts
these "knobs."
E should be stretching the endurance of his available "soft"
models—i.e., trying to subject them to the toughest tests.
General: Who controls E?
Who judges the Inf. to be bad, and how is he replaced?
Singer's postulates for E:
(1) Use some method (image) to "adjust" informant's responses—
i.e., to determine which responses are to be used as they
originally appeared, which are to be modified, which unused but
stored, which rejected. I.e., invariance criteria for responses.
(2) If the system makes n successful tries and m = 0 unsuccessful
ones, then find a more difficult set of questions to pose to
the informant.
(3) If no model available to E can pass the more difficult
test, then a) add to the model more points, or change the
"pattern"—or change the method of adjusting observations or
the data to be adjusted.
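Singer's three postulates read like a control loop for E. A minimal sketch, with models reduced to numeric "endurance" scores and the test battery to a difficulty threshold — both invented simplifications, not the notes' own formulation:

```python
# Sketch of one cycle of the Executive under Singer's postulates (2) and (3).
# Models are toy numeric scores; "passing" means score >= difficulty.

def executive_step(models, difficulty, enrich):
    """One Executive cycle: escalate difficulty after unbroken success;
    enrich the models when none can pass the harder test."""
    if all(m >= difficulty for m in models):        # (2) n successes, m = 0 failures
        difficulty += 1                             #     pose harder questions
    if not any(m >= difficulty for m in models):    # (3) no model passes
        models = [enrich(m) for m in models]        #     add to the model more points
    return models, difficulty

# After uniform success, the test gets harder and the models are enriched:
print(executive_step([5, 5], 5, enrich=lambda m: m + 1))  # → ([6, 6], 6)
```

The point of the sketch is postulate-stretching: E keeps subjecting its available soft models to the toughest tests they can still survive.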
Two references: Kuhn (introduction to The Structure of Scientific
Revolutions).
G. Polya's Chapter IX in Vol. 1 of Mathematics and Plausible
Reasoning—where he discusses Bernoulli's discovery of the
Brachistochrone—using an optical soft model to give a clue
about the minimal time path for a body to drop from A to B.
11-1-63
We discussed the problem of the descriptors at the nodes of an exploring
net. We imagine Executive (E) to "visit" a set of nodes. Each node has a
profile, and E has a preliminary profile of the area to be studied (e.g., a
set of descriptors of checkers). Presumably E compares this set with the node's
profile and goes deeper in the node if the agreement is good ("going deeper"
means using a routine like the one illustrated above for checkers and chess).
But now we should consider a more sophisticated process. First, the
descriptors are in the language of the speciality of the node; should there be
a translator into a "primitive" language? Will this result in making the
lists too long—or making comparisons of agreement difficult? Alternatively,
each node might supply the kind of terms and relations that make it unusual—
i.e., provide exploration by the method of rare events rather than overlapping
events. In any event, could the exploration of a node provide a better basis
for the "indicators" of the other nodes that E might visit later on?
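The profile-comparison step (exploration by the method of overlapping events) might look like this; the overlap measure and the 0.5 threshold are illustrative assumptions only:

```python
# Sketch of E comparing its preliminary profile with node profiles
# and selecting nodes worth "going deeper" into.

def agreement(e_profile, node_profile):
    """Fraction of E's descriptors shared with the node (overlapping events)."""
    return len(e_profile & node_profile) / len(e_profile)

def nodes_to_explore(e_profile, net, threshold=0.5):
    """Nodes whose profiles agree well enough with E's preliminary profile."""
    return [name for name, profile in net.items()
            if agreement(e_profile, profile) >= threshold]

e_profile = {"board", "pieces", "moves", "capture"}
net = {"checkers": {"board", "pieces", "moves", "capture", "kings"},
       "poetry":   {"meter", "rhyme"}}
print(nodes_to_explore(e_profile, net))  # → ['checkers']
```

The alternative raised in the notes — exploration by rare events — would instead score nodes by the descriptors that make them unusual (set difference rather than intersection).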
11-14-63
We have tried out a tick-tack-toe (t-t-t) player learning "Go-Bang."
The procedure works reasonably well, except that there seems to be no way in
which the t-t-t player can come to recognize an indefinitely expanding board—nor would he acquire all the rules, since the more-or-less exceptional cases
come up only in the play. The really interesting case is the draw—a very
familiar phenomenon to a t-t-t. He asks the informant for an instance of a
draw, and receives none. This illustrates the difficulty we have in
understanding the design of the Informant. If we make him very passive, he becomes
less interesting—but if we let him do things—like give strong hints—then
what role does he have? For example, does he say "there are none," or "this
isn't one, and this isn't one, etc."? To say "there are no draws" is in
fact to supply a whole rule. Our difficulties with the informant are fami-
liar ones. Kant merely had him introduce a formless stream of inputs—vague
mutterings from the given. If the informant is the input-box, then he is
not a part of the system's design—but if he is part of design, how can he
inform? Consider the system—Brahe-Kepler. Brahe is the informer—so Kepler
merely checks against Brahe's records to see if his new scheme "works"—is
Brahe-Kepler a (degenerate) case of an inquiring system? At least K could have
asked B for more data? Could he have told B how to collect the data? For the
moment, we forget this vexing problem, and let the informant be a simple
input, in order to create the first example.
Nevertheless a list of desirable properties of I.S. should be kept in
mind. This means a classification of the IS's resources and procedures, as well
as his goals.
Resources:
(a) a language (logic, syntax, semantics)
(b) an identification procedure.
(c) a storage of information (the "informant")
(d) a storage of formal-structures.
Procedures:
(a) a method of querying the informant;
(b) a method of analogizing (asking questions in terms of
formal structure);
(c) a sense of success and of failure;
(d) an idea of what to do next when failure occurs;
(e) an idea about modifying any part of itself.
Goals:
(a) a way of identifying its own goals;
(b) a way of modifying its goals;
(c) a way of understanding why its goals are legitimate.
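Read as a design document, the classification above amounts to an interface any inquiring system would have to satisfy. A speculative sketch, with every method name invented here rather than taken from the notes:

```python
# Speculative interface for an inquiring system, following the
# resources / procedures / goals classification above.

from typing import Protocol

class InquiringSystem(Protocol):
    # Resources (the language itself is the vocabulary these signatures use)
    def identify(self, obj: object) -> int: ...              # (b) identification
    def store(self, item: object) -> None: ...               # (c), (d) storage
    # Procedures
    def query_informant(self, question: str) -> str: ...     # (a) querying
    def analogize(self, structure: object) -> str: ...       # (b) formal analogy
    def succeeded(self, response: str) -> bool: ...          # (c) sense of success
    def on_failure(self) -> None: ...                        # (d) what to do next
    def modify_self(self) -> None: ...                       # (e) self-modification
    # Goals
    def current_goals(self) -> list: ...                     # (a) identify goals
    def modify_goals(self) -> None: ...                      # (b) modify goals
    def justify_goals(self) -> str: ...                      # (c) legitimacy
```

A structural protocol fits the notes' spirit: what makes something an inquiring system is the set of capacities it exhibits, not any particular construction.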
12-9-63
Some considerable time has been spent on the informant—who is clearly
the most difficult character to construct in this scenario. The point is,
what does he know and how does he know it? The informant of the problem
solvers is a rationalist type—all ideas are "innate." This is pretty much
the same character that we first introduced—he knew the rules of checkers and
hence could answer any question as long as it was clearly put. Indeed, in the
example above, the I.S. is a clarifier, much as Descartes was. Now EPAM is a
system that learns to perceive and discriminate. It can take something like
a set of letters, identify the letters, identify syllables, and ascribe
properties and shapes. It does this by a built-in a priori—identification +
sub-part routine + property assigning. It is therefore quite Kantian, but
with an inability to reflect (hence it cannot by itself tell what it is like).
So possibly what we seek in IS1 is an EPAM-like informant. Schematically,
IS1 operates by its sensing EPAM-like structures. But it has initially stored
in it a number of other structures with their own "base terms"—these are all
processed through EPAM, so that links are established as illustrated below:
EPAM to which chess        Checkers + Rules    Number theory
has been applied
piece                      piece               number
board                      board               empty matrix, etc.
move (S with O's)          move                number pair
end (part property)        end                 solution
position (S with O's)      position            filled matrix
IS1 poses questions of EPAM, using the routines already described, but now
restricted to asking EPAM-like questions.
Illustration (symbolic logic). EPAM has been presented with a series of
wff's of some formal system, and metasyntactical assertions about these:
"in T" and "not-in-T." Thus
'p ⊃ p' in T
'p' not-in-T
etc.
IS1 has stored PM (+ other systems). The initial translations are as
follows. PM is (p, q, r (alphabet); V, ~, ⊃, < (subparts); in T,
not-in-T). The links are as illustrated. EPAM's store includes, e.g.:
'p ⊃ q' ⊃ ('~q' ⊃ '~p') in T
'~p' not-in-T
'~~p' ⊃ '~p' in T
'p & q' ⊃ 'p' in T
Thus PM proceeds to find that all its axioms are stored in the form Ax-in-T
in EPAM provided it makes + into V. But making & into ~ does not evolve
any stored theorems—e.g., PM asks whether EPAM has stored '~~p ⊃ p' in T—
a test of ~~p ⊃ p. EPAM has not. IS1 now tries a second source, etc.
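The translation test IS1 performs — map PM's symbols through the links, then look the result up in EPAM's store — can be sketched as follows; the stored formulas and the link table are toy assumptions:

```python
# Sketch of IS1 testing a symbol translation of PM against EPAM's store.
# The store's contents and the link table are invented for illustration.

epam_store = {"'p' V '~p'": "in T",
              "'~p'": "not-in-T"}

def translate(formula, links):
    """Rewrite a PM formula into EPAM's vocabulary through the links."""
    for pm_symbol, epam_symbol in links.items():
        formula = formula.replace(pm_symbol, epam_symbol)
    return formula

def axioms_confirmed(axioms, links):
    """A translation survives only if every axiom shows up stored as in-T."""
    return all(epam_store.get(translate(ax, links)) == "in T" for ax in axioms)

links = {"+": "V"}   # e.g., "provided it makes + into V"
print(axioms_confirmed(["'p' + '~p'"], links))  # → True
```

A failed lookup is exactly the situation in the notes: the translated formula is simply absent from the store, and IS1 must try a second source.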
Some Notes on Leibnizian I.S.'s.
1. Perceptions are all stored in some form (i.e., the
reception-process is irrelevant. L simply puts it into the activity of God—but a
design way to look at it is to say that there is no design problem of the
sensor).
2. There are some principles at work that "make perceptions change"
(called appetition)—that is, there is a flow of perceptions going on.
Presumably L means that the state of the system keeps changing, but he is
certainly not clear as to what this state means.
3. One principle operates by "efficient causes"—this means that the
perception-system has its own built-in, mechanical process of keeping the
perceptions moving.
4. The second principle operates by "final causes"—i.e., the system
can scan alternative ways of changing its state, evaluate each one, and pick
the one that seems best. E.g., it can recall from memory, or it can "imagine"
—i.e., combine perceptions that were never so combined in the efficient-
cause stream, or it can abstract—i.e., play with the properties only—this
second principle L calls apperception.
5. The goal is to find the true primitives, definitions and axioms
which make all the perceptions "true."
6. A perception is true contingently if it has a sufficient reason.
This means that the I.S. searches for "if, then" relations—i.e., p ⊃ q, where
p is a stored perception and q is the candidate for being a "fact." Thus q
is a fact according to the principle of sufficient reason if there is a fact
that implies q. Presumably this infinite regressus does not bother the I.S.
At no stage could any perception qualify as an irrevocable fact in the system
of sufficient reasons. Indeed, we can think of the I.S. as constructing a
number of alternative "fact-nets."
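The search for sufficient reasons amounts to chasing implications backward through a fact-net until a stored perception is reached (or the regress runs out). A toy sketch, with the net invented for illustration:

```python
# Sketch of the principle of sufficient reason over a toy fact-net:
# q counts as a fact if some stored perception implies it, possibly
# through a chain of "if, then" relations (p ⊃ q links).

def has_sufficient_reason(q, implications, perceptions):
    """True if some stored perception implies q, directly or via a chain."""
    frontier = [p for p, c in implications if c == q]   # what implies q?
    seen = set()
    while frontier:
        fact = frontier.pop()
        if fact in perceptions:
            return True                                  # grounded in a perception
        seen.add(fact)
        frontier += [p for p, c in implications
                     if c == fact and p not in seen]     # regress one step
    return False

implications = [("p1", "q1"), ("q1", "q2")]   # p1 ⊃ q1, q1 ⊃ q2
print(has_sufficient_reason("q2", implications, perceptions={"p1"}))  # → True
```

As the notes observe, nothing here ever makes a perception an irrevocable fact — the grounding is always relative to the chosen net, so the I.S. can maintain several alternative fact-nets at once.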
7. A statement is an assertion about a class of perceptions. A state-
ment is true analytically (a truth of Reason, ¶33) if its opposite is a
contradiction. The goal is to find that one fact-net which is analytically
true. NB: the goal is to find the one formal system that can stand the test
of consistency.
8. Generalised Leibnizian I.S.'s. In L's philosophy, the problem is
one of searching for a way to link perceptions that will satisfy some basic
criterion that is wanted (in his case by the Supreme Designer). Therefore,
we imagine a system stored with perceptions—e.g., an EPAM capable of
establishing nets. We have a class of binary relations Ri. The system is
capable of verifying whether or not p Ri q holds for any perceptions p, q,
and any relation belonging to the class. A net is any connected set of relational
sentences, where every sentence p Ri q is connected with another sentence
r Rj p (with the possible exception of a "first sentence"). Now the designer
wants to find from among the