Transcript of CYBERNETICS, Dr. Tom Froese

  • Slide 1
  • CYBERNETICS Dr. Tom Froese
  • Slide 2
  • The nature of “equilibrium” We have three objects on the table before us: one is a cube resting on one face, the second is a sphere, and the third is an inverted cone exactly balanced on its point. They correspond to the usual stable, neutral and unstable equilibria respectively. The criterion used to distinguish the types of equilibria is that we apply a small disturbance to the object and see what happens. Ashby (1940, p. 479)
  • Slide 3
  • Three kinds of equilibria
  • Slide 4
  • Stable equilibrium We may now define stable equilibrium more precisely: a variable is in stable equilibrium if, when it is disturbed, reactive forces are set up which act back on the variable so as to oppose the initial disturbance. (Ashby 1940, p. 479) Stable attractor
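
A minimal sketch, not from the slides, of Ashby's disturbance criterion: integrate a one-dimensional system dx/dt = f(x) from a small initial disturbance and watch whether the disturbance decays (stable), persists (neutral), or grows (unstable). The function choices, step size, and number of steps are illustrative assumptions.

```python
# Ashby's disturbance criterion for the three kinds of equilibrium,
# illustrated with a one-dimensional system dx/dt = f(x).
# Stable (cube):    f(x) = -x   reactive force opposes the disturbance
# Neutral (sphere): f(x) = 0    no reactive force; disturbance persists
# Unstable (cone):  f(x) = +x   reactive force amplifies the disturbance

def simulate(f, x0=0.01, dt=0.01, steps=1000):
    """Euler-integrate dx/dt = f(x) starting from a small disturbance x0."""
    x = x0
    for _ in range(steps):
        x += dt * f(x)
    return x

cases = {
    "stable (cube)":    lambda x: -x,
    "neutral (sphere)": lambda x: 0.0,
    "unstable (cone)":  lambda x: +x,
}

for name, f in cases.items():
    print(f"{name:18s} disturbance 0.01 -> {simulate(f):.4f}")
```
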
  • Slide 5
  • Three kinds of equilibria
  • Slide 6
  • Necessary for existence Finally, there is one point of fundamental importance which must be grasped. It is that stable equilibrium is necessary for existence, and that systems in unstable equilibrium inevitably destroy themselves. Consequently, if we find that a system persists, in spite of the usual small disturbances which affect every physical body, then we may draw the conclusion with absolute certainty that the system must be in stable equilibrium. Ashby (1940, p. 482)
  • Slide 7
  • Stable equilibrium = adaptedness It is suggested here that adaptive behaviour may be identical with the behaviour of a system in stable equilibrium, and that this latter concept may, with advantage, be substituted for the former. The advantages of this latter concept are that (1) it is purely objective, (2) it avoids all metaphysical complications of purpose, (3) it is precise in its definition, and (4) it lends itself immediately to quantitative studies. (Ashby 1940, p. 483) Beer (1997) Domain of viability
  • Slide 8
  • Definition of cybernetics According to Norbert Wiener (1948), cybernetics was born in the confluence of three lines of work: 1. Information theory 2. Neural networks 3. Connections between negative feedback and purposive behavior The first two were closely related, for example in the work of McCulloch and Pitts (1943): a logical calculus of the ideas immanent in nervous activity. Firing/no-firing = on/off = 1/0
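
A small sketch of the McCulloch and Pitts idea that firing/no-firing can be treated as 1/0: a binary threshold unit whose output is 1 when the weighted sum of its binary inputs reaches a threshold. The specific weights and thresholds below are illustrative choices, not taken from the 1943 paper.

```python
# A McCulloch-Pitts style threshold unit: inputs and output are binary
# (firing = 1, not firing = 0); the unit fires when the weighted sum of
# its inputs reaches the threshold.

def mcp_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs meets the threshold, else 0."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# Logical AND: fires only if both inputs fire.
AND = lambda a, b: mcp_neuron([a, b], weights=[1, 1], threshold=2)
# Logical OR: fires if at least one input fires.
OR = lambda a, b: mcp_neuron([a, b], weights=[1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(f"inputs ({a}, {b})  AND: {AND(a, b)}  OR: {OR(a, b)}")
```
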
  • Slide 9
  • Types of feedback Differences in stability?
  • Slide 10
  • Negative feedback
  • Slide 11
  • Negative feedback control Centrifugal governor
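
A toy sketch of negative feedback control in the spirit of the centrifugal governor, not a physical model of it: the controller measures the deviation of a speed variable from its set point and applies a correction that opposes the deviation, so a sudden disturbance is progressively cancelled. The gain and disturbance values are illustrative.

```python
# Negative feedback control of a speed variable: the correction always
# opposes the measured deviation from the set point, so the effect of a
# sudden disturbance is driven back toward zero over time.

set_point = 100.0   # desired speed
speed = 100.0
gain = 0.5          # proportional feedback gain

for t in range(20):
    disturbance = -10.0 if t == 5 else 0.0   # sudden extra load at t = 5
    error = set_point - speed                # measured deviation
    correction = gain * error                # negative feedback: oppose the error
    speed += correction + disturbance
    if t in (4, 5, 6, 10, 19):
        print(f"t={t:2d}  speed={speed:6.2f}")
```
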
  • Slide 12
  • Negative feedback and teleology
  • Slide 13
  • although a gun may be used for a definite purpose, the attainment of a goal is not intrinsic to the performance of the gun; random shooting can be made, deliberately purposeless. Some machines, on the other hand, are intrinsically purposeful. A torpedo with a target-seeking mechanism is an example. All purposeful behavior may be considered to require negative feed-back. (Rosenblueth, Wiener and Bigelow 1943, p. 19)
  • Slide 14
  • Categories of behavior Rosenblueth, Wiener and Bigelow (1943)
  • Slide 15
  • Arturo Rosenblueth Rosenblueth was born in 1900 in Ciudad Guerrero, Chihuahua. He began his studies in Mexico City, then traveled to Berlin and Paris where he obtained his medical degree. Returning to Mexico City in 1927, he engaged in teaching and research in physiology. In 1930 he obtained a Guggenheim Scholarship and moved to Harvard University, to the department of Physiology, then directed by Walter Cannon. With Cannon he explored the chemical mediation of homeostasis. Rosenblueth co-wrote research papers with both Cannon and Norbert Wiener, pioneer of cybernetics. Rosenblueth was an influential member of the core group at the Macy Conferences. In 1944 Rosenblueth became professor of physiology at the National Autonomous University of Mexico and, in 1961, director of the Center for Scientific Research and Advanced Studies (Cinvestav) of the National Polytechnic Institute. Arturo Rosenblueth died September 20, 1970 in Mexico City.
  • Slide 16
  • Grey Walter's tortoises
  • Slide 17
  • From feedback to complexity The concept of 'feedback', so simple and natural in certain elementary cases, becomes artificial and of little use when the interconnections between the parts become more complex [...]. Such complex systems cannot be treated as an interlaced set of more or less independent feedback circuits, but only as a whole. For understanding the general principles of dynamic systems, therefore, the concept of feedback is inadequate in itself. What is important is that complex systems, richly cross-connected internally, have complex behaviors, and that these behaviors can be goal-seeking in complex patterns. Ashby (1957, p. 54)
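
A small sketch, not from Ashby, of why richly cross-connected systems must be treated as a whole: in a linear system dx/dt = Ax, each variable below has negative self-feedback (a stable loop taken on its own), yet the cross-connections make the whole system unstable, which only the eigenvalues of the full matrix reveal.

```python
# In a cross-connected linear system dx/dt = A x, stability depends on the
# eigenvalues of the whole matrix A, not on each variable's own feedback
# loop (the diagonal of A) considered in isolation.
import numpy as np

# Every variable has negative self-feedback (diagonal entries of -1),
# but strong cross-connections can still make the whole system unstable.
A = np.array([
    [-1.0,  2.0,  0.0,  0.0],
    [ 0.0, -1.0,  2.0,  0.0],
    [ 0.0,  0.0, -1.0,  2.0],
    [ 2.0,  0.0,  0.0, -1.0],
])

eigenvalues = np.linalg.eigvals(A)
print("diagonal self-feedback terms:", np.diag(A))        # all negative
print("eigenvalues of the whole system:", eigenvalues)
print("stable as a whole?", bool(np.all(eigenvalues.real < 0)))
```
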
  • Slide 18
  • Science and consciousness Throughout the book, consciousness and its related subjective elements are not used for the simple reason that at no point have I found their introduction necessary. This is not surprising, for the book deals with only one of the properties of the brain. [...] Such an observation, showing that consciousness is sometimes not necessary, gives us no right to deduce that consciousness does not exist. The truth is quite otherwise, for the fact of the existence of consciousness is prior to all other facts. Ashby (1960, p. 11)
  • Slide 19
  • Adaptation v2.0 Ashby devotes his first book, Design for a Brain, to systematically answering the question: why does the burnt kitten avoid the fire?
  • Slide 20
  • The Homeostat
  • Slide 21
  • Why does the burnt kitten avoid the fire?
  • Slide 22
  • The Ultrastable System The arrows to and from R represent, of course, the sensory and motor channels. The part R belongs to the organism [...]. R is defined as the system that acts when the kitten reacts to the fire: the part responsible for the overt behaviour. Ashby (1960, p. 80)
  • Slide 23
  • The Ultrastable System the kitten has a variety of possible reactions, some wrong, some right. This variety of reactions implies [...] that some parameters, call them S, have a variety of values [...] their primary action is to affect the kitten's behavior. Ashby (1960, p. 80)
  • Slide 24
  • The Ultrastable System The essential variables must now be introduced. The essential variables have been represented collectively by a dial with a pointer, and with two limit-marks, to emphasize that what matters about the essential variables is whether or not the value is within physiological limits. Ashby (1960, p. 81)
  • Slide 25
  • The Ultrastable System In the case we are considering, the reacting part R is not specially related or adjusted to what is in the environment and how it is joined to the essential variables. Thus the reacting part R can be thought of as an organism trying to control the output of a Black Box (the environment), the contents of which are unknown to it. Ashby (1960, p. 82)
  • Slide 26
  • The Ultrastable System the essential variables are to have an effect on which behavior the kitten will produce; and this is equivalent to saying that in the diagram of immediate effects there must be a channel from the essential variables to the parameters. The organism that can adapt thus has a motor output to the environment and two feedback loops. Ashby (1960, p. 82)
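
A minimal sketch of the ultrastable, double-feedback principle described in Slides 22-26, with illustrative names and numbers rather than Ashby's homeostat circuitry: the first loop runs between the reacting part R and the environment (a black box to R); the second, slower loop changes the parameters S at random whenever the essential variable leaves its physiological limits, and stops changing them once the behaviour keeps it within limits.

```python
# Ultrastability sketch: loop 1 couples the reacting part R to the environment;
# loop 2 resets the step-function parameters S at random whenever the essential
# variable goes outside its physiological limits.
import random

def environment(action):
    """Black box: R does not know how its action maps onto the essential variable."""
    return 3.0 * action - 2.0

def reacting_part(stimulus, S):
    """R's overt behaviour, determined by the current parameter setting S."""
    return S * stimulus

LIMITS = (-1.0, 1.0)            # physiological limits of the essential variable
settings = [-2, -1, 0, 1, 2]    # possible values of the parameters S
S = random.choice(settings)
stimulus = 1.0
trial = 0

while True:
    trial += 1
    essential = environment(reacting_part(stimulus, S))   # loop 1: R <-> environment
    if LIMITS[0] <= essential <= LIMITS[1]:
        print(f"trial {trial}: S={S:+d} keeps the essential variable within limits; adapted")
        break
    print(f"trial {trial}: S={S:+d} drives the essential variable to {essential:+.1f}, "
          f"outside its limits, so S is changed at random")
    S = random.choice(settings)                            # loop 2: random step-change of S
```
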
  • Slide 27
  • Distinct paradigms of cybernetics:
    Rosenblueth et al. (1943): single feedback loop; guided adaptation; environmental state is internally represented; appropriate response is internally computed; symbolic AI.
    Ashby (1960): double feedback loop; random adaptation; environmental state is internally unknown; appropriate response is interactively emergent; embodied AI.
  • Slide 28
  • Adaptation to inverting goggles Erismann (1930s), Kohler (1950s and 60s). Demonstrated the plasticity of perceptual systems: technologically inverted senses would adapt part by part over time and return to normal. Which paradigm can explain this better?
  • Slide 29
  • Significance of the homeostat I can't actually think of any prior example of a real machine that would randomly, open-endedly, as I would say, reconfigure itself in response to its inputs. It seems reasonable, then, to speak of the homeostat as having a kind of agency: it did things in the world that sprang, as it were, from inside itself, rather than having to be fully specified from outside in advance. Pickering (2002, p. 417) But is the homeostat really capable of open-ended self-transformation? Can the principle of ultrastability do any real work?
  • Slide 30
  • Gordon Pask
  • Slide 31
  • Stafford Beer
  • Slide 32
  • Homeostasis in mobile robots Di Paolo (2003)
  • Slide 33
  • Homeostasis in robots Di Paolo (2003)
  • Slide 34
  • The collapse of cybernetics Shannon, C. E. (1948), A Mathematical Theory of Communication. Information theory is the study of the encoding, decoding, storage and transmission of information.
  • Slide 35
  • Entropy as information
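
A small sketch of Shannon's (1948) entropy formula, H = -Σ p_i log2 p_i, which measures the average information, in bits, carried by a symbol drawn from a probability distribution; the example distributions are illustrative.

```python
# Shannon entropy H = -sum(p * log2(p)): the average information, in bits,
# per symbol drawn from a discrete probability distribution.
from math import log2

def entropy(probabilities):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))    # fair coin: 1.0 bit per toss
print(entropy([0.9, 0.1]))    # biased coin: about 0.47 bits per toss
print(entropy([0.25] * 4))    # four equally likely symbols: 2.0 bits each
```
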
  • Slide 36
  • The rise of computer science
  • Slide 37
  • History of cognitive science Froese (2010)
  • Slide 38
  • History of alternative paradigms
  • Slide 39
  • Homework Please read the whole article if possible: Di Paolo, E. A. (2010). Robotics inspired in the organism. Intellectica, 1-2(53-54): 129-162. Optional: Rosenblueth, A., Wiener, N. & Bigelow, J. (1943). Behavior, purpose and teleology. Philosophy of Science, 10(1): 18-24.