Human-robot interaction
Michal de Vries
Humanoid robots as cooperative partners for people
Breazeal, Brooks, Gray, Hoffman, Kidd, Lee, Lieberman, Lockerd and Mulanda
International Journal of Humanoid Robotics, submitted 2003, published 2004
Overview
1) Introduction
2) Understanding others
3) Social robotics
4) Meet Leonardo
5) Task learning
6) Discussion
1) Introduction
The goal of the paper is to develop robots with social abilities. Such robots can understand natural human instruction (natural language, gestures, emotional expressions), should learn new skills quickly, and should play a role in the daily lives of ordinary people.
2) Understanding others
Theory of Mind: people attribute mental states (beliefs, desires, goals) to others in order to understand and predict behavior.
This happens even with non-living things of sufficient complexity (Braitenberg).
Although this stance is far from scientific, it is surprisingly useful (Dennett, 1987).
Mirror neurons are a possible neural mechanism for the Theory of Mind (Gallese & Goldman, 1998).
3) Social robots
Human-robot collaboration: not a master-slave relation between human and robot, but cooperating partners.
Joint Intention Theory: doing something together as a team in which the teammates share the same goal and the same plan of execution.
Robots understanding others
Most robots either treat people as objects or interact with them like socially impaired people.
Social robots must be capable of understanding the intentions, beliefs, goals, and desires of people. They must also understand people's social cues, and produce cues that people can understand in turn.
Such robots must be able to take multiple points of view, distinguishing common from partial knowledge.
How should social robots learn?
It is a trend in machine learning to eschew built-in structure or a priori knowledge of the environment; the main focus is on statistical learning techniques.
Such techniques need hundreds or thousands of examples to learn something successfully.
How should robots learn?
Learning without built-in structure is a problem: a robot needs to learn quickly, and learning in biology is robust and fast.
Furthermore, humans are born with innate cognitive and behavioral machinery that develops within an environment.
The authors therefore use a combination of bottom-up and top-down processing.
4) Meet Leonardo
Leonardo: a robot with 65 degrees of freedom
Leonardo's computational architecture
Understanding speech
Leo cannot speak, but has a natural language understanding system called Nautilus.
Nautilus supports, for instance, a basic vocabulary, simple contexts, and spatial relations.
The vision system
Leo perceives the environment with three camera systems:
A camera behind the robot tracks people and objects in Leo's environment (peripheral information).
An overhead camera mounted in the ceiling, facing vertically down, tracks gestures and objects (color, position, shape, size).
The third camera system, in Leo's eyes, handles face recognition and facial features.
Attention
Leo's attentional system computes the level of saliency (interest) of objects and events.
Three factors determine saliency:
perceptual properties
internal states (belief system)
socially directed reference
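The three factors can be pictured as a weighted combination yielding one saliency score per object. A minimal sketch, assuming a simple weighted sum; the function name, weights, and feature values are hypothetical, not taken from the paper:

```python
# Hypothetical sketch: saliency as a weighted combination of the three
# factors named on the slide. Weights and feature values are illustrative.
def saliency(perceptual, internal, social, weights=(0.3, 0.3, 0.4)):
    """Combine perceptual properties, internal state (belief system),
    and socially directed reference into a single saliency score."""
    w_p, w_i, w_s = weights
    return w_p * perceptual + w_i * internal + w_s * social

# An object someone points at scores high even if visually unremarkable:
score = saliency(perceptual=0.2, internal=0.1, social=1.0)
```

The social term lets a human's pointing or gaze raise an otherwise unremarkable object to the top of Leo's attention.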
Beliefs
Seeing reflects the state of the world as it is directly perceived.
Beliefs are representational and are held even if they do not agree with immediate perceptual experience.
Leo's belief system takes input (visual and tactile information, and speech) and merges it into a coherent set of beliefs.
Beliefs
Beliefs must be processed and updated correctly. Leo can compare his beliefs with the beliefs of others: he must distinguish his own beliefs from those of others, but also know which beliefs are common knowledge.
Leo represents the beliefs of others by monitoring, over time, where people look, their gestures, and what they say.
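The separation between Leo's own beliefs, his model of another person's beliefs, and the common ground between them can be sketched as separate stores. This structure is illustrative only, assuming the simplification that anyone who witnesses a perception shares the resulting belief:

```python
# Hypothetical sketch of separate belief stores: Leo's own beliefs,
# per-person modeled beliefs, and the common ground derived from them.
class BeliefStore:
    def __init__(self):
        self.own = {}        # Leo's beliefs: fact -> value
        self.others = {}     # person -> {fact: value}

    def observe(self, fact, value, witnesses=()):
        """Leo perceives something; anyone watching is assumed to share it."""
        self.own[fact] = value
        for person in witnesses:
            self.others.setdefault(person, {})[fact] = value

    def common_ground(self, person):
        """Facts Leo believes and models the person as also believing."""
        theirs = self.others.get(person, {})
        return {f: v for f, v in self.own.items() if theirs.get(f) == v}

store = BeliefStore()
store.observe("button1", "on", witnesses=["instructor"])
store.observe("button2", "off")   # the instructor did not see this one
```

Here `common_ground("instructor")` contains only `button1`: Leo knows about `button2` but does not attribute that belief to the instructor.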
5) Task learning
Leo can learn from natural human instructions, e.g. "Leo, this is a hammer." Hearing its own name and the word "this", combined with a pointing gesture toward a hammer, the speech understanding system passes this knowledge to the spatial reasoning and belief systems.
Leo can show whether he understands the instructions: "Leo, show me the hammer."
Leo can also evaluate its own capabilities.
Push the button
Leo learns to push buttons.
How Leo learns to push buttons
Task: "Buttons-On-and-Off". Leo indicates that he does not know this task and goes into learning mode.
Subtask: "Button-On". The same reaction; a person teaches Leo the subtask by demonstrating it, saying "Press button 1" and turning the button on.
Leo encodes the goal state associated with an action performed in the tutorial setting by comparing the world state before and after its execution.
The same holds for the subtask "Button-Off".
A schematic overview
All subtasks must be learned in order to master the overall task.
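The schematic can be read as a simple rule: the overall task counts as mastered only once every subtask has an encoded goal. A minimal sketch; the names and the progress structure are hypothetical:

```python
# Hypothetical sketch: a task such as "Buttons-On-and-Off" is mastered
# only when every subtask has a learned (non-None) goal state.
def task_mastered(subtask_goals):
    """True once each subtask maps to an encoded goal."""
    return all(goal is not None for goal in subtask_goals.values())

progress = {"Button-On": {"button1": "on"}, "Button-Off": None}
task_mastered(progress)                       # False: one subtask missing
progress["Button-Off"] = {"button1": "off"}
task_mastered(progress)                       # True: all subtasks learned
```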
6) Discussion
Social skills (both performing and recognizing them) are important for robots that interact with humans. No master-slave relation, but collaboration. Knowing what matters: attention.
An action can be learned in more than one way:
Reinforcement learning (large state-action spaces -> a large number of trials)
Learning by imitation (much faster, but requires innate knowledge)
Some remarks
Leo cannot speak, but speech is very important in social interaction.
The three camera systems, especially the overhead camera, are practically and biologically implausible.
Innate knowledge is biologically implausible: of course we are biased towards some behaviour, but we are not born with an a priori vocabulary.
No role for mirror neurons?
Extra Info
More information about Leo can be found at: http://robotic.media.mit.edu/projects/robots/leonardo/overview/overview.html
Questions?