
Wearable Computing Editor: Thad E. Starner ■ Georgia Institute of Technology ■ [email protected]

PERVASIVE computing ■ Published by the IEEE CS and IEEE ComSoc ■ 1536-1268/06/$20.00 © 2006 IEEE

Wearables and Robots: A Shared View

Charles C. Kemp

Until recently, robots were mostly relegated to factories’ fenced-off areas, hobbyists’ homes, and researchers’ laboratories. Robots like the Roomba are now entering our homes on a large scale and performing useful tasks. They’re the physical embodiment of computation, representing a new stage of the computer revolution. With a robotic body, a computer is no longer confined to modifying bits and conveying information; it can enter and act upon our world.

As robots get increasingly sophisticated, we’ll ask them to help us with our daily lives in increasingly complicated ways. We’ll expect them to work closely with us and the objects in our environments. To do so, they’ll need to perceive our world and understand us well enough to perform the tasks we desire.

Achieving this level of sophistication presents challenging research problems for roboticists. Even the most advanced robots have limited mobility in human environments. Social constraints, safety issues, limited physical ability, and high expenses restrict robots to a small part of our world. How can robots learn to perceive and act in the diverse environments we inhabit from this limited vantage point? How can researchers test their algorithms outside the lab?

Roboticists can use wearables as a development platform for the sophisticated robots of the future. Wearables provide a way for computers to share our experiences. Instead of using a robotic body to explore our world, a computer can ride along with us.

Through wearables, computers can observe the world and our activities from a privileged, first-person perspective that’s analogous to the perspective they would have from a robotic body.

YOU’RE THE ROBOT

For wearables to serve as a robotics platform, their sensing ability must be comparable to a robot’s. Fortunately, because wearables and robots operate with similar goals and constraints, they already use many of the same types of sensors, including cameras, microphones, and accelerometers. High-end wearables and robots benefit from better sensing and greater mobility, so their designs tend to use the latest small, low-power sensors.

At MIT’s Humanoid Robotics Lab (http://people.csail.mit.edu/cckemp), we have used wearables to help us develop humanoid robots that manipulate objects within human environments. Our wearable Duo emulates much of the sensing performed by the humanoid robot Domo. It includes a wide-angle head-mounted camera that coarsely captures the wearer’s field of view and points down slightly to better observe the workspace of the wearer’s hands (see figure 1). It also has body-mounted orientation sensors. The backpack contains batteries and a laptop for real-time processing, data capture, and wireless communication.

KNOW YOUR BODY

To act in the world, a robot must account for its body. A kinematic model that describes a robot’s body in terms of the lengths of body parts, how they’re connected, and the angles at their joints is a powerful representation that helps robots perceive and control their bodies.

To better interpret the wearer’s actions and apply these observations to robots, we designed Duo to sense the pose of the wearer’s body and construct a kinematic model. Orientation sensors mounted to the wearer’s head, torso, upper arm, and lower arm estimate each body part’s orientation with respect to Earth’s magnetic field and gravity. With a robot, we need to carefully tune the kinematic model only once because the sensors remain rigidly fixed to the robot’s body, which varies little over time. Unfortunately, a wearable is more complex: each time a person puts it on, the sensors will likely be in different positions with respect to the wearer’s body. Moreover, the wearer’s body can change over time, and different wearers will have different bodies. To account for these variations, we developed algorithms that enable the wearable to efficiently learn a kinematic model of the wearer’s body. The wearable automatically determines the sensors’ placement on the body, the pose of the camera on the wearer’s head, and the lengths of the wearer’s body parts. One way it does this is by finding the fastest-moving object visible with the camera, which typically corresponds with the hand.
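The column doesn’t detail how the fastest-moving object is found. A minimal sketch of that idea, using dense optical flow (here OpenCV’s Farneback estimator) to pick the fastest-moving pixel in the head camera, might look like the following; the video path, blur size, and other parameters are illustrative, not Duo’s actual values.

```python
import cv2
import numpy as np

def fastest_moving_pixel(prev_gray, curr_gray, blur_ksize=15):
    """Return the pixel (x, y) with the largest smoothed optical-flow magnitude.

    A crude stand-in for "find the fastest-moving object in view," which for
    a head-mounted camera during manipulation is usually the wearer's hand.
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    speed = np.linalg.norm(flow, axis=2).astype(np.float32)      # pixels/frame
    speed = cv2.GaussianBlur(speed, (blur_ksize, blur_ksize), 0)
    y, x = np.unravel_index(np.argmax(speed), speed.shape)
    return (int(x), int(y)), float(speed[y, x])

# Illustrative use with a recorded head-camera video (the path is made up):
cap = cv2.VideoCapture("duo_head_camera.avi")
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    (x, y), px_per_frame = fastest_moving_pixel(prev_gray, gray)
    prev_gray = gray
    # (x, y) can now be compared against the kinematically predicted hand.
```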

With this kinematic model, Duo can predict the location of the wearer’s hand in images from the head-mounted camera, which lets it closely observe manipulation events.



Figures 2a and 2b show two images that Duo’s camera captured. Duo has marked the kinematically predicted hand location with a white circle. Aaron Edsinger’s humanoid robot Domo (http://people.csail.mit.edu/edsinger/domo.htm) captured the bottom two images (figures 2c and 2d) from its perspective and marked predicted hand locations with solid green circles. By sensing their body pose, both the wearable and the robot can predict where the hand will appear. The systems use similar methods to automatically discover the relationship between their body and the hand’s location.
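The article doesn’t spell out how the predicted hand location is computed. A minimal sketch, assuming a simple chain from torso to shoulder to elbow to hand and a pinhole camera rigidly mounted to the head, is shown below; the segment conventions, offsets, and intrinsics are illustrative placeholders rather than Duo’s calibrated model.

```python
import numpy as np

def predict_hand_pixel(R_head, R_torso, R_upper, R_lower,
                       shoulder_offset, l_upper, l_lower,
                       R_cam_in_head, cam_offset, K):
    """Project the kinematically predicted hand position into the head camera.

    Each R_* is a 3x3 rotation from a body segment's frame to a shared world
    frame (what the orientation sensors estimate). Positions are expressed
    relative to the base of the neck, where the head and torso meet. The arm
    segments are assumed to extend along each segment's -z axis.
    """
    p_shoulder = R_torso @ shoulder_offset
    p_elbow = p_shoulder + R_upper @ np.array([0.0, 0.0, -l_upper])
    p_hand = p_elbow + R_lower @ np.array([0.0, 0.0, -l_lower])

    # Camera pose relative to the same neck-based origin.
    R_cam = R_head @ R_cam_in_head        # camera frame in the world
    p_cam = R_head @ cam_offset           # camera center in the world
    hand_in_cam = R_cam.T @ (p_hand - p_cam)

    # Pinhole projection; K holds the focal lengths and principal point.
    u, v, w = K @ hand_in_cam
    return np.array([u / w, v / w])
```

The fixed quantities here (shoulder_offset, l_upper, l_lower, R_cam_in_head, cam_offset) are the kinds of parameters Duo estimates automatically, for example by comparing predictions like this one against the visually detected hand.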

The kinematic model also lets Duo automatically segment the wearer’s activity on the basis of the speed of the wearer’s hand (see figure 3). Because the hand tends to move rapidly between destinations and slow down as it approaches a destination, this is a useful way to divide the hand’s motion into significant actions.

Figure 1. The main components of Duo, a wearable system that captures first-person video as well as the posture of the wearer’s head, torso, and arm: a wide-angle camera focused on the hand’s workspace, orientation sensors, and a backpack with batteries and a laptop.

Figure 3. At the bottom, a graph of the estimated speed of the wearer’s hand (in arm lengths per second, over roughly eight seconds) indicates where to split the wearer’s manipulation activities. Red spikes mark the slowest moments. Duo’s camera captured images, shown at the top, of the wearer’s activities at these slowest moments. These moments naturally divide the wearer’s movements.

Figure 2. Images captured by the wearable Duo and the humanoid robot Domo: (a, b) Duo marked the predicted hand locations with white circles; (c, d) Domo marked the predicted hand locations with solid green circles.




By collecting the locations of these destinations with respect to the body, Duo can find common hand positions such as reaching out into the world, hanging at the wearer’s side, and manipulating objects just in front of the center of the chest.
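The segmentation itself can be very simple. As a minimal sketch (not Duo’s published algorithm), the recording can be cut at local minima of the estimated hand speed that fall below a threshold; the threshold, spacing, and synthetic trace below are illustrative.

```python
import numpy as np

def segment_by_hand_speed(speed, max_speed_at_cut=0.1, min_gap=20):
    """Return indices where a hand-speed series (arm lengths/sec) should be cut.

    Cuts are placed at local minima that fall below `max_speed_at_cut`, with
    consecutive cuts kept at least `min_gap` samples apart.
    """
    cuts = []
    for t in range(1, len(speed) - 1):
        local_min = speed[t] <= speed[t - 1] and speed[t] <= speed[t + 1]
        if local_min and speed[t] < max_speed_at_cut:
            if not cuts or t - cuts[-1] >= min_gap:
                cuts.append(t)
    return cuts

# Synthetic example: two fast reaches separated by a pause around t = 4 s.
t = np.linspace(0.0, 8.0, 400)
speed = 0.6 * np.exp(-((t - 2.0) ** 2)) + 0.7 * np.exp(-((t - 6.0) ** 2))
cuts = segment_by_hand_speed(speed)
actions = np.split(np.arange(len(t)), cuts)   # frame indices of each action
```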

The kinematic model also facilitates annotating and browsing captured activity. For example, it provides a clearly interpretable visualization of the wearer’s body, lets the user jump among actions, and enables search over activities based on the hand’s location (figures 4a and 4b).
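As a hedged illustration of search by hand location: once each action is tagged with a destination hand position in the body frame, a query can simply rank actions by distance to the requested position. The coordinates below are invented, not taken from Duo’s data.

```python
import numpy as np

def actions_near(query_xyz, destinations, k=3):
    """Return indices of the k actions whose destination hand position
    (body frame, in arm lengths) lies closest to the query position."""
    dests = np.asarray(destinations, dtype=float)        # shape (n_actions, 3)
    dists = np.linalg.norm(dests - np.asarray(query_xyz, dtype=float), axis=1)
    return np.argsort(dists)[:k]

# e.g., "find actions where the hand was held out in front of the chest"
hits = actions_near([0.8, 0.0, 0.0],
                    [[0.8, 0.1, -0.1],    # reaching out into the world
                     [0.1, 0.0, -0.9],    # hanging at the wearer's side
                     [0.5, 0.0, -0.2]],   # manipulating in front of the chest
                    k=2)
```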

PERCEPTUAL TRANSFER

Duo has succeeded as a platform for developing perceptual algorithms for robots. Domo (see figure 5) uses much of the same perceptual code as Duo, especially in the modality of visual motion. Moreover, the methods that proved successful on Duo in everyday human environments heavily influenced the methods we developed that let Domo autonomously learn about its body and manipulate handheld tools. Using Duo, we were able to develop algorithms that were robust to the challenges of perception in real human environments from the perspective of a fully capable walking humanoid. Duo also inspired us by letting us view everyday manipulation activities performed by people in a home environment.

MOTOR TRANSFER

We have yet to transfer the wearer’s motor skills to the robot. We could treat the wearable as a mobile motion-capture system and retarget human-hand trajectories to the robot (see the sidebar “Intelligent Rooms”), but we’re skeptical of these methods’ value in the context of manipulation in human environments. Robot manipulation fundamentally involves contact between the robot and the world, and human environments introduce uncertainties and disturbances that we must consider when developing robot control algorithms.

Figure 4. (a) Using the browsing software, we can quickly annotate the wearer’s activities by labeling objects. The kinematic model displays simultaneously on the right. (b) We can also browse efficiently over the wearer’s activities using activity segmentations, which are based on the kinematically estimated hand speed.


Figure 5. Domo uses visual feedback to move a brush to bring it into contact with the black tube in order to clean the tube. (image courtesy of Aaron Edsinger)




For robots to operate robustly under these conditions, open-loop control is unlikely to be sufficient. Instead, we’re focusing on tight closed-loop control that couples perception and action, thereby letting the robot reject unmodeled disturbances and overcome perceptual uncertainty.

For example, to pour a bottle’s contents, we don’t move the bottle through a motion-captured 3D trajectory. Instead, Domo first moves the bottle to actively detect the bottle’s opening using visual motion, then uses visual feedback to move the bottle’s opening to a location above the receptacle, and finally tips the bottle to move the contents into the receptacle. Similarly, to brush a surface, Domo uses visual feedback to move the brush until it’s in contact with the surface and then uses force control to maintain contact with the surface while moving the brush over it (see figure 5).
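The column doesn’t publish Domo’s controllers, but the flavor of this tight perception-action coupling can be sketched with a generic image-based proportional servo: the feature is re-detected every cycle and only a small correction is commanded, so disturbances are rejected rather than accumulated. The functions detect_feature_px and command_velocity below stand in for the robot’s perception and motor interfaces and are assumptions, not an existing API.

```python
import numpy as np

def servo_step(target_px, observed_px, gain=0.002):
    """Map the pixel error of a tracked feature (e.g., the bottle opening)
    to a small Cartesian hand velocity. Axis mapping and gain are illustrative."""
    err = np.asarray(target_px, float) - np.asarray(observed_px, float)
    return np.array([gain * err[0],       # image x error -> sideways motion
                     -gain * err[1],      # image y grows downward -> vertical
                     0.0])

def servo_until_aligned(detect_feature_px, command_velocity,
                        target_px, tol_px=5.0, max_steps=200):
    """Closed loop: perceive, act, repeat until the feature reaches the goal."""
    for _ in range(max_steps):
        observed = detect_feature_px()                      # perception, every step
        if np.linalg.norm(np.subtract(target_px, observed)) < tol_px:
            return True
        command_velocity(servo_step(target_px, observed))   # action, small correction
    return False
```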

Learning controllers such as these from the experiences captured from a wearable system presents a challenging direction for future research on the application of wearables to robots, which is distinct from traditional forms of context recognition. For this approach, a wearable would try to learn what perceptual features are important to a task and infer how to control these perceptual features in a closed-loop fashion. For example, the wearable would learn that a container should remain upright to avoid prematurely pouring out the contents. Likewise, it would ideally learn that the ultimate goal of pouring is to move the contents from the container into the receptacle.
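This remains future work, but one hedged illustration of “learning which perceptual features matter” is to mine recorded demonstrations for a feature that stays nearly constant until the goal is reached, such as the container’s tilt staying near upright before pouring begins. The data layout and quantile below are purely illustrative.

```python
import numpy as np

def learn_upright_constraint(demonstrations, pour_starts, quantile=0.95):
    """Estimate how upright a container is kept before pouring begins.

    demonstrations: list of (T,) arrays of container tilt angles (radians)
                    recorded by the wearable during pouring demonstrations.
    pour_starts:    list of indices marking when pouring begins in each demo.
    Returns a tilt bound that held in nearly all pre-pour frames, which a
    robot could later treat as a constraint ("keep tilt below this until
    the opening is above the receptacle").
    """
    pre_pour_tilts = np.concatenate(
        [tilt[:start] for tilt, start in zip(demonstrations, pour_starts)])
    return float(np.quantile(pre_pour_tilts, quantile))
```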

This research direction highlights additional challenges for wearables that learn motor control from human behavior. A wearable will typically be learning from an expert. Unlike a robot that autonomously explores the world, the wearer will rarely fail. This will make learning more difficult, because without positive and negative examples, the wearable won’t be able to directly assess the boundaries that distinguish success from failure nor the costs associated with various forms of errors. Also, when making contact with the world, force, torque, and tactile sensing become critical to control. Designing a wearable that reliably captures these sensory parameters would be challenging.

A potentially easier direction for research into transferring motor skills from wearables to robots would be a hybrid approach, where the wearable provides coarse strategies that robots can use as hints while exploring the world. This approach could exploit the advantages of both platforms (see figure 6) and might have analogies in human learning, where people interleave watching an expert and attempting the task for themselves. The wearable could increase the efficiency of the robot’s task learning by getting the robot in the ballpark before it explores the space of possible behaviors on its own.
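One hedged sketch of such a hybrid: treat a wearable-captured hand trajectory as the starting point for the robot’s own trial-and-error refinement, in the spirit of a cross-entropy search. Here rollout_score stands in for executing a candidate motion on the robot or in simulation; it is an assumption, not part of any system described above.

```python
import numpy as np

def refine_from_demonstration(demo_trajectory, rollout_score,
                              n_iterations=10, n_samples=20, noise=0.05):
    """Start the robot's exploration "in the ballpark" of a demonstration.

    demo_trajectory: (T, D) array of hand poses captured by the wearable.
    rollout_score:   callable(candidate) -> scalar reward from executing it.
    """
    mean = np.array(demo_trajectory, dtype=float)
    for _ in range(n_iterations):
        candidates = [mean + noise * np.random.randn(*mean.shape)
                      for _ in range(n_samples)]
        scores = np.array([rollout_score(c) for c in candidates])
        best = np.argsort(scores)[::-1][:max(1, n_samples // 4)]
        mean = np.mean([candidates[i] for i in best], axis=0)  # keep what worked
        noise *= 0.9          # explore less as the refined motion improves
    return mean
```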

Vannevar Bush in the 1940s recognized the opportunity for machines to augment mental abilities or serve as cognitive prosthetics, and wearable researchers have developed devices that deliver on this promise.

INTELLIGENT ROOMS

Sensors for wearables and robots don’t have to be mounted on the body. For example, traditional motion-capture systems, like those Hollywood studios use, consist of cameras mounted around a room that can directly perceive small reflective markers’ 3D positions. Like Hollywood’s computer graphics specialists, robotics researchers such as Chad Jenkins at Brown University (www.cs.brown.edu/~cjenkins) have used such systems to record people’s movements in order to help robots learn how to move. These systems have enabled robots to perform motions, such as dance moves or karate routines, that don’t require perception of the world or involve contact.

Instrumented rooms such as these will continue to play a useful role in training robots and helping robots act within the world. There are, however, two drawbacks to this approach that wearables help overcome. First, not all rooms will be instrumented. Robots can only be trained with experiences that occur within the instrumented rooms, which limits and biases the training data and makes robust operation difficult to test. In the long run, room-based sensor services might become ubiquitous and standardized so that both wearables and robots can query a room and use the services it offers. Rooms could know about the things they contain and provide suggestions for their use, potentially helpful to both people and robots. Even coarse location-based services might answer the question, “Where can I find a power outlet?” Despite this idealized future, environments will offer varying levels of service. Not all objects will have RFID tags that simplify their identification and tracking, and not all rooms will be equipped with servoed, high-resolution cameras that support a robot’s fine-motor skills. By bringing along their own sensors, a wearable or robot can ensure that it has the sensing capabilities it requires.

Second, room-mounted sensors might not be well-suited to the robot’s needs. First-person sensing provides a detailed view of activity from a consistent perspective with respect to the body. When a person manipulates an object, the relevant visual characteristics of the object might not be visible from a wall-mounted camera, owing to camera resolution and occlusion from the person’s body and the object itself. In contrast, a person will tend to manipulate an object such that he or she can view the relevant visual features. Also, a person can hold an object so that it has a standard orientation with respect to his or her eyes, simplifying perception and control. For intuition about the challenges of room-mounted sensors, imagine manipulating an object if your eyes were attached to the room’s walls.




Similarly, robotics researchers are striving to create robots that can effectively augment people’s physical abilities. At one end of the spectrum, the person and the robot have distinct bodies and work together to achieve some goal. At the other end of the spectrum, the robot is directly integrated with the person’s body in the form of an intelligent prosthetic or exoskeleton. These wearable robots are exciting examples of research that blurs the lines. For wearable robots such as the HAL-5 full-body exoskeleton developed at the University of Tsukuba in Japan (see figure 7; http://sanlab.kz.tsukuba.ac.jp/HAL/indexE.html) and the intelligent prosthetic ankles of Hugh Herr’s group at MIT (see figure 8; http://biomech.media.mit.edu/index.html), the robot and the person share the same body.

In the future, to seamlessly help the wearer, the wearer’s new body parts might think for themselves. In situations where the user has only a low-bandwidth interface to the wearable robot, the robotic portion of the person’s body might use high-level perception to autonomously help the person. For example, artificial legs might use wearable cameras or laser scanners to autonomously step over obstacles while traversing complex terrain. Similarly, an artificial hand might use wearable sensors to autonomously anticipate a task and properly grasp an object.

Given their complementary nature, wearables and robots will continue to work together to help us in our daily lives. The core perceptual techniques they use will likely evolve in parallel and benefit from one another’s advances, leading to exciting new capabilities and a world of pervasive robotics.

Figure 7. The HAL-5 full-body exoskeleton (image courtesy of Yoshiyuki Sankai, University of Tsukuba/Cyberdyne)

Charles C. Kemp is a postdoctoral researcher in Rod Brooks’s Humanoid Robotics Group at the MIT Computer Science and Artificial Intelligence Laboratory. Contact him at [email protected].

Figure 6. Wearables and humanoid robots share many similarities but also differ in significant ways. Wearables: body variation, varied sensor placement, constraints on sensor placement, no direct control, learns by observing an expert with few failures, fully capable body. Humanoid robots: stable body, stable sensors, easy to add sensors, direct control, learns from failure and exploration, limited capabilities.

Figure 8. (a) Hugh Herr and an intelligent prosthetic knee developed by MIT’s Biomechatronics Group (image courtesy of Hugh Herr and Peter Menzel); (b) a new actuated ankle developed by the same group (image courtesy of Hugh Herr).