TASK-BASED PREDICTION OF UPPER BODY MOTION
by
Zan Mi
A thesis submitted in partial fulfillment of the requirements for the Doctor of Philosophy degree
in Mechanical Engineering in the Graduate College of The University of Iowa
May 2004
Thesis Supervisor: Associate Professor Karim Abdel-Malek
Graduate College The University of Iowa
Iowa City, Iowa
CERTIFICATE OF APPROVAL
_______________________
PH.D. THESIS
_______________
This is to certify that the Ph.D. thesis of
Zan Mi
has been approved by the Examining Committee for the thesis requirement for the Doctor of Philosophy degree in Mechanical Engineering at the May 2004 graduation.
Thesis Committee: ________________________________________ Karim Abdel-Malek, Thesis Supervisor
________________________________________ Jasbir Arora
________________________________________ Kendall Atkinson
________________________________________ James Cremer
________________________________________ Jia Lu
________________________________________ Geb Thomas
To my parents, my sister and
my husband
ACKNOWLEDGMENTS
I first wish to express my sincere gratitude to my thesis supervisor Professor
Karim Abdel-Malek, for his expert guidance, continued encouragement and valuable
suggestions throughout the research work. His incredible creativity and invaluable
advice have deeply influenced me.
I am very grateful to Professor Jasbir Arora, from the Department of Civil and
Environmental Engineering and a member of my thesis committee, for his valuable
suggestions and kind consultation on the optimization part of my research work. I am
also grateful to my other thesis committee members: Professor James Cremer, from the
Department of Computer Science; Professor Kendall Atkinson, from the Department of
Mathematics; and Professor Jia Lu and Professor Geb Thomas, from the Department of
Mechanical and Industrial Engineering, for serving on my committee and providing
their valuable suggestions.
I would like to extend my appreciation to Professor Laurent Jay, from the Department
of Mathematics, for his help and valuable discussions on numerical analysis; Professor
James Schmiedeler, who served on my comprehensive exam committee and has since
left the University of Iowa, for his valuable advice; and Professor K.K. Choi for serving
on my comprehensive exam committee.
I also would like to thank my colleagues in the Digital Humans Lab, Jingzhou Yang
and Joo Hyun Kim, for their encouragement and help in all aspects.
Finally, special thanks to my husband, Yonghang, who always encouraged and
supported me throughout the thesis work.
ABSTRACT
The proposed research deals with digital human modeling and simulation. Digital
humans are avatars that are digitally created, have the appearance of human-like motion
and behavior, and are used to simulate human motion and performance. Digital humans
have become a fundamental cornerstone of engineering analysis towards achieving a
higher level of digital prototyping. Our proposed work deals with predicting static
postures and kinematic motions of digital humans, in the most realistic manner possible, to simulate
their existence in a virtual world and to enable them to test and experience products that
are only defined in the digital world, thus reducing the time and cost associated with
prototyping. To achieve this goal, we have started with a model that has significantly
more degrees of freedom than those typically used by researchers (15 DOF’s from the
waist to the hand) and have created a formulation that takes into consideration joint ranges of
motion. We have then introduced a unique task-based approach to posture prediction as a
postulate for why people assume specific postures. This postulate led to an optimization-
based approach, where human performance measures are quantified as functions of
variables that evaluate to real numbers and thus can be implemented in a multi-objective
optimization algorithm for arriving at “the best” posture. Of course, real-time
implementation of such an algorithm is no trivial matter; therefore, we have investigated
several optimization methods, including gradient-based and genetic algorithms, to arrive at a near
real-time solution. Results of predicting postures for a digital human model were then
compared (at a simplified number of DOF’s) with existing codes and with experimental
data for verification purposes. Path trajectories followed by humans in space to execute a
task were also addressed. The concept of admissible kinematically-smooth trajectories
was created to characterize a path that does not admit switching of inverse kinematic
solutions during motion and therefore produces realistic, smooth motion, a concept that
holds for humans but not for robotic motions. Because of the underlying formulation, we
are able to design (or predict) such paths for digital humans. We have also investigated
the prediction of joint variables as vector functions of time to predict how human upper
body (including the upper extremities) behaves as the hand moves between any two
points in space. The end result is an optimization-based method using human
performance measures such as discomfort and smoothness in combination with a
minimum-jerk model for calculating joint path trajectories that look and feel most natural. An
optimization-based methodology for layout design using our human model and posture
prediction algorithm was also presented. Long term goals of this research are to enable
autonomous behavior and realistic motion of digital humans, with the ultimate goal of
reducing or eliminating the use of prototypes in the design cycle.
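The minimum-jerk model mentioned in the abstract has a well-known closed-form profile for straight point-to-point movements. The following is an illustrative sketch only, not the thesis's full joint-space formulation; the start value, end value, and duration passed to the function are assumed example inputs:

```python
# Illustrative sketch of the classic minimum-jerk profile for a
# point-to-point movement. Boundary velocity and acceleration are zero,
# which is what makes the resulting motion look smooth and natural.
import numpy as np

def minimum_jerk(x0, xf, tf, t):
    """Position at time t of the minimum-jerk trajectory from x0 to xf
    over duration tf (t is clamped to [0, tf])."""
    tau = np.clip(t / tf, 0.0, 1.0)
    # Fifth-order blend that rises smoothly from 0 to 1.
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return x0 + (xf - x0) * s

# Example: halfway through a unit-duration move, the profile is at the
# midpoint, reflecting the symmetric bell-shaped velocity of this model.
mid = minimum_jerk(0.0, 1.0, 1.0, 0.5)
```

The same scalar profile can be applied independently to each coordinate of a hand path, which is one common way such a model is combined with other performance measures.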
TABLE OF CONTENTS
Page
LIST OF TABLES........................................................................................................... viii
LIST OF FIGURES ........................................................................................................... ix
CHAPTER
1 INTRODUCTION .................................................................................................1
1.1 Motivation....................................................................................................1
1.2 Objectives ....................................................................................................4
1.3 Literature Review.........................................................................................8
1.3.1 Human Modeling .............................................................................8
1.3.2 Posture Prediction ..........................................................................13
1.3.3 Control Barriers .............................................................................20
1.3.4 Trajectory Planning........................................................................20
1.3.5 Layout Design................................................................................25
2 MODELING OPEN LOOP KINEMATIC STRUCTURES ...............................27
2.1 A 15-Degree-of-Freedom Model of Torso and Arm .................................27
2.2 Denavit-Hartenberg Representation Method .............................................34
2.3 Conclusions................................................................................................40
3 TASK-BASED POSTURE PREDICTION .........................................................41
3.1 Task-Based Behavior .................................................................................42
3.2 Cost Functions and Constraints .................................................................44
3.2.1 Discomfort .....................................................................................44
3.2.2 Effort ..............................................................................................45
3.2.3 Potential Energy.............................................................................45
3.2.4 Dexterity ........................................................................................46
3.2.5 Torque ............................................................................................48
3.2.6 Constraints .....................................................................................51
3.3 Optimization Formulation..........................................................................52
3.4 Predicting Postures and Validation............................................................57
3.4.1 Comparison with IKAN.................................................................63
3.4.2 Validation against Experimental Data ...........................................65
3.5 Multi-Objective Optimization....................................................................73
3.6 Real-Time Algorithm.................................................................................75
3.7 Conclusions................................................................................................84
4 HUMAN UPPER EXTREMITY PATH TRAJECTORY DESIGN ...................86
4.1 Non-crossable Surfaces..............................................................................87
4.2 Problem Definition.....................................................................................92
4.3 Problem Formulation .................................................................................94
4.4 Runge-Kutta Method for DAE of Index 2 .................................................95
4.5 Iteration Formulation .................................................................................97
4.6 Optimization ..............................................................................................99
4.7 Examples..................................................................................................100
4.7.1 A Planar 3-DOF Human Arm Model...........................................100
4.7.2 A Spatial 4-DOF Manipulator .....................................................113
4.8 Conclusions..................................................................................................121
5 UPPER BODY MOTION PREDICTION.........................................................122
5.1 Path in Cartesian Space............................................................................123
5.1.1 Unconstrained Point-to-Point Movements...................................123
5.1.2 Curved Point-to-Point Movements ..............................................125
5.2 B-Spline Functions for Joint Variables....................................................133
5.2.1 Definition of B-Spline Curves .....................................................133
5.2.2 Joint B-Spline Functions..............................................................135
5.3 Illustration of Motion Prediction Method................................................136
5.4 Optimization ............................................................................................138
5.5 Results and Discussion ............................................................................141
5.6 Conclusions..............................................................................................156
6 OPTIMIZATION-BASED LAYOUT DESIGN ...............................................157
6.1 Problem Definition...................................................................................158
6.2 Human Model ..........................................................................................159
6.3 Layout Design..........................................................................................161
6.3.1 Cost Functions and Constraints ...................................................161
6.3.2 Optimization Scheme...................................................................164
6.3.3 Comparison of GA and SA..........................................................166
6.4 An Example .............................................................................................168
6.5 Conclusions..............................................................................................173
7 COMPUTER INTERFACE DESIGN ...............................................................174
7.1 Modeling..................................................................................................175
7.2 Posture Prediction ....................................................................................179
7.3 Motion Prediction ....................................................................................186
7.4 Visualization ............................................................................................193
7.5 Layout ......................................................................................................195
8 CONCLUSIONS AND RECOMMENDATIONS ............................................197
8.1 Conclusions..............................................................................................197
8.2 Recommendations....................................................................................199
REFERENCES ................................................................................................................202
LIST OF TABLES
Table
2.1 Joint limits..........................................................................................................35
2.2 The DH Table for the 15-DOF human model....................................................39
3.1 Joint weights used in cost function ....................................................................56
3.2 DH Table of the 15-DOF model ........................................................................58
3.3 Distance and Discomfort obtained from GA-GBA ..........................................81
3.4 Distance and Discomfort obtained from the four faster methods ......................81
3.5 CPU time of computations on a HP-UX workstation........................................82
4.1 Traced results for one unsuccessful trial..........................................................106
4.2 Traced results for the successful trial ..............................................................106
4.3 Traced results for the successful trial without correction...................................108
4.4 Summarized final result without optimization.................................................109
4.5 Traced optimized result with optimization ......................................................110
4.6 Comparison of results without and with optimization.....................................111
4.7 Comparison of results for different integration step sizes ...............................119
4.8 Comparison of CPU time on a 1.8GHz processor with 512MB memory .......120
6.1 DH Table for upper body.................................................................................160
LIST OF FIGURES
Figure
1.1 One DOF elbow...................................................................................................9
1.2 The shoulder joint (1. Clavicle. 2. Body of scapula. 3. Surgical neck of humerus. 4. Anatomical neck of humerus. 5. Coracoid process. 6. Acromion) ............................................................................................................9
1.3 A model of the shoulder complex......................................................................10
1.4 Modeling of the shoulder complex as three revolute and two prismatic DOF’s.................................................................................................................11
2.1 Modeling of a human using a series of rigid links connected by joints.............28
2.2 Human skeletal system ......................................................................................29
2.3 Anatomy of the spine.........................................................................................29
2.4 Anatomy of the shoulder....................................................................................30
2.5 Anatomy of the elbow........................................................................................31
2.6 Anatomy of the wrist .........................................................................................32
2.7 Modeling of the torso-shoulder-arm..................................................................33
2.8 Joint coordinate system convention and its parameters.....................................37
2.9 A 15-DOF model of the torso, spine, shoulder, arm, and wrist.........................38
3.1 The task-based approach to selecting cost functions .........................................43
3.2 Illustrating the potential energy of the forearm .................................................46
3.3 GA-GBA Algorithm for predicting a posture....................................................53
3.4 Neutral position..................................................................................................56
3.5 Modeling of the torso, shoulder, and arm as a 15-DOF system ........................57
3.6 Target Point 1 (41.2, -57, 31.5), Discomfort = 2.2022, q = [.0847, -.0007, .0407, .0091, .0567, .0820, -.0019, .0075, .0110, .3465, .5328, -.4244, -1.4772, -.1081, .1557]T ...............59
3.7 Target Point 2 (40, 0, 36), Discomfort = 7.38, q = [.1022, -.1310, -.0235, .0198, .0014, .0072, -.0112, .0444, -.7829, -.1346, 1.3475, -1.2451, -1.4099, -.1625, -.3101]T .............59
3.8 Target Point 3 (20, 35, 50), Discomfort = 12.8254, q = [.3087, -.2618, .0510, .0843, .0416, .0020, .0022, .2022, -.2543, -1.4352, .3640, -1.1986, -1.2240, -.3469, .3487]T .............60
3.9 Target Point 4 (-30, 10, 20), Discomfort = 2.0873, q = [-.0713, .0075, .1673, .0884, .1153, .0497, .0097, .0760, -.6950, 1.9193, -.2518, .0000, -2.3357, .4159, -.2897]T .................60
3.10 Target Point 5 (-40, 0, 36), Discomfort = 1.5824, q = [.0135, .0206, .1265, .0720, .1204, .0805, .0021, -.0005, -.8226, 1.9184, -.5517, -.0003, -1.7336, .1164, -.1126]T ...............60
3.11 Target Point 6 (-50, -20, 20), Discomfort = 1.0783, q = [-.0871, -.0053, .0546, .0661, .0340, .0676, .0074, .0200, -.7906, 1.5271, -.4086, -.1063, -1.5175, .0095, .2451]T ................61
3.12 Target Point 7 (0, -60, 5), Discomfort = 0.7253, q = [-.0120, -.0348, .0052, .0096, .0344, .0022, .0016, .0265, -.0643, .9742, -.1020, -.5219, -1.7381, -.0602, .1526]T ...............61
3.13 Target Point 8 (30, -40, 60), Discomfort = 3.8352, q = [.1634, -.2485, .0292, .0468, .0703, .0418, -.0048, .0400, -.1602, .3555, .1484, -1.0230, -1.0510, .0806, -.0517]T ................61
3.14 Target Point 9 (30, -40, 0), Discomfort = 0.4966, q = [.0568, -.0618, .0135, .0030, -.0030, -.0056, -.0103, .0795, -.0314, 1.2111, .3822, -.3199, -1.7188, -.0346, -.0813]T .............62
3.15 Target Point 10 (60, 0, 0), Discomfort = 3.3709, q = [.2311, .0104, .2571, .0849, .1081, -.0150, .0203, -.3032, -.0402, 1.9117, 1.2721, -.0113, -1.6577, .0791, .3176]T .................62
3.16 Marker placement on front of subject................................................................65
3.17 C7 to suprasternal notch ....................................................................................66
3.18 1H and 2H measurements.................................................................................66
3.19 15-DOF model with Michigan measurements...................................................68
3.20 Our 15-DOF model (left) and Michigan model (right)......................................69
3.21 g003h .................................................................................................................69
3.22 g004h .................................................................................................................69
3.23 g007h .................................................................................................................69
3.24 g097l ..................................................................................................................70
3.25 g122h .................................................................................................................70
3.26 g170h .................................................................................................................70
3.27 g218l ..................................................................................................................70
3.28 g244h .................................................................................................................71
3.29 g340l ..................................................................................................................71
3.30 g349l ..................................................................................................................71
3.31 g363h .................................................................................................................71
3.32 g458l ..................................................................................................................72
3.33 Posture prediction for (41.2, -57, 31.5) ...........................................................73
3.34 Posture prediction for (40, 0, 36) .......................................................................73
3.35 Prediction postures based on two different initial postures ...............................74
3.36 A Partitioned Reach Envelope...........................................................................76
3.37 BFGS-BFGS method .........................................................................................77
3.38 DIS-CONS method ............................................................................................78
3.39 MOO method .....................................................................................................79
3.40 CONS-SQP method ...........................................................................................80
4.1 A planar 3-DOF model of human arm.............................................................100
4.2 Singular curves ................................................................................................102
4.3 A non-crossable singular curve and a path ......................................................102
4.4 Movement of the arm for the unsuccessful trial ..............................................107
4.5 Movement of the arm for the successful trial ..................................................107
4.6 Singular configuration and neutral configuration............................................111
4.7 Movement of the arm obtained with optimization...........................................112
4.8 Configurations obtained without (left) and with optimization (right) .............112
4.9 A spatial 4-DOF RPRP manipulator................................................................113
4.10 Singular surfaces..............................................................................................114
4.11 Crossable and non-crossable surfaces..............................................................115
4.12 Path AB and a non-crossable surface S ...........................................................115
4.13 Singular configuration at intersection point.....................................................117
4.14 Snapshots of the spatial manipulator of an unsuccessful planning..................118
4.15 Snapshots of the spatial manipulator during movement of a successful planning............................................................................................................119
5.1 A B-spline ........................................................................................................134
5.2 Modeling of the torso, shoulder, and arm as a 15-DOF system ......................136
5.3 Motion prediction illustration ..........................................................................137
5.4 Refined motion prediction module ..................................................................137
5.5 Path design with control points prediction ......................................................141
5.6 Predicted motion 1 at time 0 ............................................................................142
5.7 Predicted motion 1 at time 0.25tf ...................................................................143
5.8 Predicted motion 1 at time 0.5tf .....................................................................143
5.9 Predicted motion 1 at time 0.75tf ...................................................................144
5.10 Predicted motion 1 at time tf ..........................................................................144
5.11 Predicted joint splines for motion 1.................................................................145
5.12 Predicted motion 2 at time 0 ............................................................................146
5.13 Predicted motion 2 at time 0.35tf ...................................................................146
5.14 Predicted motion 2 at time 0.5tf .....................................................................147
5.15 Predicted motion 2 at time 0.65tf ...................................................................147
5.16 Predicted motion 2 at time tf ..........................................................................148
5.17 Predicted joint splines for motion 2.................................................................148
5.18 Predicted motion 3 with a via point at time 0 ..................................................149
5.19 Predicted motion 3 with a via point at time 0.25tf .........................................150
5.20 Predicted motion 3 with a via point at time 0.5tf ...........................................150
5.21 Predicted motion 3 with a via point at time 0.75tf .........................................151
5.22 Predicted motion 3 with a via point at time tf ................................................151
5.23 Predicted joint splines for motion 3 with a via point .......................................152
5.24 Predicted motion 4 with a via point at time 0 ..................................................153
5.25 Predicted motion 4 with a via point at time 0.3tf ...........................................153
5.26 Predicted motion 4 with a via point at time 0.5tf ...........................................154
5.27 Predicted motion 4 with a via point at time 0.7tf ...........................................154
5.28 Predicted motion 4 with a via point at time tf ................................................155
5.29 Predicted joint splines for motion 4 with a via point .......................................155
6.1 A layout problem .............................................................................................158
6.2 15-DOF model for upper body from waist up to hand ....................................159
6.3 Optimization scheme .......................................................................................164
6.4 A manufacturing cell .......................................................................................168
6.5 Designed layout ...............................................................................................171
6.6 Posture reaching tuff bin..................................................................................172
6.7 Posture pressing button....................................................................................173
7.1 Posture and motion prediction computer interface in 3D Studio MAX ..........175
7.2 15-DOF model of the torso, shoulder, and arm ...............................................176
7.3 15-DOF model in 3D Max...............................................................................177
7.4 Hierarchy of the bone structure .......................................................................178
7.5 Posture prediction interface .............................................................................181
7.6 Cost function interface.....................................................................................182
7.7 Posture prediction interface .............................................................................183
7.8 Real-time posture prediction algorithm ...........................................................184
7.9 Unreachable target point ..................................................................................185
7.10 Prediction for left arm......................................................................................186
7.11 Motion prediction interface .............................................................................188
7.12 Motion prediction interface flowchart .............................................................189
7.13 Motion prediction algorithm............................................................................190
7.14 Predicted curved motion of upper body with left arm .....................................191
7.15 Predicted joint profiles for a curved motion ....................................................192
7.16 Visualization interface .....................................................................................194
7.17 Layout interface ...............................................................................................195
CHAPTER 1
INTRODUCTION
1.1 Motivation
One of the main industrial applications of virtual human modeling and simulation
is to make an ergonomic evaluation of the man-machine interface of a product in a
computer-aided design (CAD) environment at a very early stage of design (also called
digital prototyping). The cost of developing physical prototypes is typically reduced or
eliminated altogether when digital prototyping is implemented. While digital mockups have already
made a significant impact on manufacturing, digital humans have not been used
extensively to evaluate designs. Today, the development of internet video games,
interactive training systems, computer-aided movie production, and especially virtual
reality applications also raises the need for tools that will
facilitate the creation and animation of autonomous virtual characters in 3-D worlds.
Motion planning techniques will be used to direct digital human characters at the task
level and to create highly interactive systems. This motivates us to create fast planners
capable of using physics-based models to generate realistic-looking motions. The long-
term vision for this research is to create and develop methodologies and formulations for
intelligent digital humans that can be launched into a digital environment. These digital
humans will be queried for fundamental questions pertaining to ergonomics,
functionality, and any other aspect that is needed.
Towards this objective, it is necessary to simulate human postures and movements
under different task and environmental conditions. Man models have been used to
simulate human anthropometry and postures in the context of the product or workspace
being evaluated (Dooley, 1982). However, a lack of ease of use has been a problem
common to many three-dimensional man models, since ergonomists cannot be sure of
the operator’s posture and behavior without a mock-up or prototype of the product or
workplace. Moreover, these humans have appeared as animated avatars that can be
manipulated but have lacked the mathematical formulation to render them intelligent
and able to perform tasks unaided by the user.
Difficulties in this respect have been encountered because of the challenge of
biomechanically modeling the large number of degrees of freedom (DOF) associated
with the various joints. Other difficulties often arise in determining the various joint
angles when a working posture is to be modified. Models involving many joints are difficult to handle,
especially if accuracy and intelligence are an issue. Further constraints have been
inherently imposed because of the lack of rigor in the field of ergonomics, where rules of
thumb and empirical results have traditionally been used. Due to these limitations,
ergonomic evaluations through the three-dimensional man models have only been
accomplished at an elementary level (Kuusisto and Mattila, 1990).
Simulating human postures is a very difficult and complex problem owing to the
redundancy of the human musculoskeletal system. Inverse kinematics (IK) has
traditionally been used, either through geometric closed-form methods or numerical
methods, to predict the joint values of a kinematic model whose output resembles that of
a human. These methods have met with success in the robotics field but have not been
able to address human motion prediction. One way of representing realistic human
motions is the method of rotoscopy (Thalmann and Thalmann, 1990), sometimes called
the brute-force method: the exact motion of real persons is recorded off-line and then
played back on demand on-line. In future animation systems,
based on synthetic actors, it is expected that motion control will be automatically
performed using artificial intelligence and robotics techniques, drawing upon cognitive
science and behavior. Our objective is to give our virtual humans naturalistic motion
with an ability to autonomously predict their own motion and behavior. Indeed, motion
will be planned at a task level and computed using physical laws (in our case using
kinematics, dynamics, and optimization), which is the focus of this study.
Realistic motion and natural-looking simulations require a thorough
understanding of human movement control strategies. A simulation must take into
account not only the extrinsic geometric constraints imposed by the task (e.g., the
position and orientation of the hand) and the intrinsic geometric constraints such as joint
limits, but also non-geometric aspects of the task (e.g., force level). The arm reach
posture for simply touching a point in space with the index finger is certainly different
from the posture assumed for pressing a button in the same location with the same finger.
The purpose of this research is to obtain a better understanding of the
mathematical modeling of human motion using well-established kinematic theories.
Particularly from the field of robotics, we believe there are significant applicable theories
and numerical algorithms that, where appropriate, can be tailored to address long-standing
problems in human modeling and simulation. We list below some of these long-standing
problems that will be addressed in this work.
Prediction of human motions and postures is particularly difficult for two main
reasons: (i) the large number of degrees of freedom required to model realistic motion,
and (ii) the inverse kinematic solution (i.e., predicting a posture) is not as straightforward
as in the case of robots, because many solutions that are mathematically admissible are
nonetheless unrealistic. This has been a long-standing problem in human modeling,
simulation, and ergonomics. Indeed, traditional algebraic and geometric IK methods are
difficult to implement and yield an infinite number of solutions, one of which must be
selected. Some numerical IK methods have been used to solve low degree-of-freedom
human models. For human models, a realistic solution must be determined, one that
resembles the actual motion.
Ergonomic design has traditionally been associated with rules of thumb and
empirical data based on thousands of experiments. In this research, we propose,
formulate, and demonstrate a number of ergonomic design methods that are
mathematically based and yield an optimized result. Most human motion prediction
methods have taken into consideration the anthropometry, but have not taken into
account the task at hand. Our motivation for introducing a task-based approach, one that
simulates how humans react and perform according to different tasks, stems from the fact
that humans perform differently in response to the task at hand.
Robot trajectory planning has been widely studied. However, for humans,
trajectory planning is complex and requires careful analysis and attention. While barriers
in the reachable workspace surrounding a human have been delineated, crossing these
barriers under various conditions has not been addressed. It is our supposition that
humans determine a posture at the onset of motion such that a trajectory is followed in
space, uninterrupted. This initial configuration is chosen based on criteria that we also
introduce and that include crossability, comfort, dexterity, etc. In order to realistically
simulate human motion, it is important to study how humans psychologically determine
an initial posture (from an infinite number of possible postures) to cross a barrier. By
analogy, the same formulation also applies to robot manipulators.
Predicting joint motion variables as a function of time while the hand moves in
space poses a significant number of problems because of the infinite domain available for
a computer to choose. Many researchers have used the concept of minimum jerk alone to
design a path, mostly to study disabilities and human cognitive behavior dealing with
reach; however, predicting joint variables as functions of time has been a long-standing
problem, particularly when coupled with joint limits and more than seven degrees of freedom.
1.2 Objectives
(1) To develop methods and algorithms for kinematic modeling of realistic human
anatomy.
a. To enable a more realistic model of the human upper body, including the waist,
the shoulder complex, and the upper extremities. Limitations of existing human
models can be summarized as follows: (i) Low number of degrees of freedom,
particularly focused on using six or a maximum of seven DOF’s to represent a
biomechanical chain. This limitation is inherent in the fact that most inverse
kinematic solutions are only able to handle a maximum number of seven DOF’s.
Indeed, the four commercial software systems that perform digital human
modeling and simulation are also limited. (ii) Accurate biomechanical models
exist but cannot be executed in real time. We will endeavor to develop a general
mathematical method for representing human segmental motion.
b. Develop a mathematical formulation towards using a systematic method for
representing constrained large degree-of-freedom models of humans. Perhaps
the most difficult element of representing human segmental motion is the
incorporation of unilateral constraints representing joint ranges of motion into
the formulation. We will investigate methods for augmenting our modeling
technique and subsequent numerical methods for human motion prediction to
incorporate joint ranges of motion. Similarly, any such model must have the
ability to include the kinematics and dynamics of motion.
(2) To introduce the concept of task-based posture prediction.
c. Investigate and better understand how humans assume postures in space. Given
our postulate that humans must be represented with a larger number of degrees
of freedom than currently used, the issue of assuming a realistic posture and
subsequently a realistic motion can now be addressed. Because human upper
extremities and upper bodies in general are highly redundant, predicting a
realistic (naturalistic) posture is typically difficult, if not impossible. The
objective of this work is to understand what is real and what is not, especially
when compared with robotic motions.
d. Introduce the concept of task-based driven behavior. We introduce the concept
of task-based posture prediction as a viable approach to predicting naturalistic
final postures of humans represented by a relatively large number of degrees of
freedom. This postulate is as follows: Given a task to be executed, a human will
inherently select one or more human performance measures (which we call cost
functions) that will be minimized or maximized. This postulate gives rise to a
mathematical formulation for a computational algorithm, whereby human
performance measures must first be determined as functions that evaluate to real
numbers and that can be optimized. We will develop such functions and will
demonstrate the concept of task-based posture prediction.
e. Investigate various approaches and numerical algorithms for implementing task-based
posture prediction using optimization algorithms. While the classical
theory of optimization is readily applicable, methods for implementing the
theory for a 15-DOF model (which we have used to represent the upper body)
are not direct. Indeed, a task in our postulate above comprises one or more cost
functions, which renders the problem a multi-objective optimization algorithm.
Predicting the design variables (in our case the set of 15 joint variables) subject
to a multitude of constraints will be investigated.
f. Investigate real-time (or near real-time) methods for implementing fast posture
prediction formulations. A key element of posture prediction is the ability to
calculate the vector of joint variables on-line, which requires a real- or near real-
time implementation. We will investigate various implementations of our
approach to posture prediction using both gradient based and genetic algorithms
towards a real-time implementation.
g. Validate our task-based posture prediction model against other software
systems and against experimental data. In order to validate our results, we shall
compare a simplified model (i.e., reduce the number of DOF’s to that of an
existing system). We shall also compare our results with experimental data
obtained through motion capture.
h. Implement a graphical interface for visualizing digital humans and predicted
motions. Visualization of postures and motions predicted by our numerical
algorithms cannot be performed in existing commercial software systems
because of the limitations in number of degrees of freedom. Therefore, a unique
graphical interface must be developed.
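The optimization-based posture prediction outlined in items d through f above can be sketched in miniature. The sketch below is for illustration only, and nearly everything in it is an assumption rather than the thesis's formulation: a planar 3-DOF arm stands in for the 15-DOF upper body model; the link lengths, joint limits, and weights are arbitrary; two stand-in performance measures (displacement from a neutral posture and proximity to joint limits) play the role of human performance measures; and a crude penalty-plus-gradient-descent loop stands in for a proper constrained optimizer.

```python
import numpy as np

# Hypothetical 3-link planar arm; link lengths, limits, and weights are
# illustrative assumptions, not values from the thesis.
L = np.array([0.3, 0.3, 0.2])                     # link lengths (m)
q_neutral = np.zeros(3)                           # assumed neutral posture
q_lo, q_hi = np.radians([-90, 0, 0]), np.radians([90, 150, 120])

def end_effector(q):
    """Planar forward kinematics: position of the fingertip."""
    angles = np.cumsum(q)
    return np.array([np.sum(L * np.cos(angles)),
                     np.sum(L * np.sin(angles))])

def cost(q, target, w=(1.0, 0.5), penalty=200.0):
    """Weighted sum of two stand-in performance measures plus a task penalty."""
    displacement = np.sum((q - q_neutral) ** 2)            # measure 1
    mid = 0.5 * (q_lo + q_hi)                              # measure 2:
    limit_prox = np.sum(((q - mid) / (q_hi - q_lo)) ** 2)  # joint-limit proximity
    task_err = np.sum((end_effector(q) - target) ** 2)     # reach the target
    return w[0] * displacement + w[1] * limit_prox + penalty * task_err

def predict_posture(target, steps=8000, lr=0.002, h=1e-6):
    """Numerical-gradient descent; a crude stand-in for a real NLP solver."""
    q = np.array([0.1, 0.5, 0.3])
    for _ in range(steps):
        g = np.zeros(3)
        for i in range(3):
            e = np.zeros(3); e[i] = h
            g[i] = (cost(q + e, target) - cost(q - e, target)) / (2 * h)
        q = np.clip(q - lr * g, q_lo, q_hi)                # respect joint limits
    return q

target = np.array([0.5, 0.3])
q_star = predict_posture(target)
print(end_effector(q_star), target)   # predicted fingertip vs. target
```

Changing the weights w corresponds to the postulate that a different task emphasizes different performance measures, and therefore yields a different predicted posture for the same target.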
(3) To obtain a better understanding of human motion, particularly realistic prediction of
path trajectories for large DOF models.
i. Introduce the concept of kinematically-smooth trajectories as a method for
predicting initial starting configurations. For a large number of degrees of
freedom, redundancy may present impediments to smooth motion, particularly
when motion of the hand is executed along a path, but when the computational
algorithm yields solutions of the joint variables that would require switching of
the solution during the motion. As a result, we introduce a new concept called
kinematically-smooth motion. We then use this concept to design (hence
predict) path trajectories that would enable completion of the motion
uninterrupted. Because of the complexity of this problem, we shall investigate
converting the problem into a Runge-Kutta index-2 formulation and implement
it within our optimization algorithm.
j. To obtain a better understanding of joint-time variation (motion prediction) of
the upper body while the hand moves along a specified trajectory using the
concept of minimum jerk. While predicting static postures as stated above is a
significant problem in itself, predicting time-dependent joint variables for a
large DOF model of a human is a considerable problem. We will address the
determination of joint functions (parametrically time-dependent) that
simultaneously define a prescribed trajectory while taking into account the well-
established concept of minimum jerk.
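The minimum-jerk concept invoked above has a well-known closed form for point-to-point motion between rest states: a fifth-order polynomial in normalized time (Flash and Hogan). A minimal scalar sketch follows; the endpoint and duration values are illustrative assumptions.

```python
import numpy as np

def min_jerk(x0, xf, T, n=101):
    """Minimum-jerk profile between rest states:
    x(t) = x0 + (xf - x0) * (10 tau^3 - 15 tau^4 + 6 tau^5), tau = t/T.
    Velocity and acceleration vanish at both endpoints."""
    t = np.linspace(0.0, T, n)
    tau = t / T
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return t, x0 + (xf - x0) * s

t, x = min_jerk(0.0, 0.4, 1.0)
print(x[0], x[-1], x[50])   # → 0.0 0.4 0.2  (s(0.5) = 0.5)
```

The same profile can serve as a time-scaling law for a prescribed hand path, with the joint variables then recovered at each sample by an inverse kinematic or optimization step.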
1.3 Literature Review
Because of the multidisciplinary nature of this research, the literature review
section is partitioned in five sub-sections to review work in human modeling, posture
prediction, control barriers, trajectory planning and layout design.
1.3.1 Human Modeling
To establish a systematic method for biomechanically modeling human anatomy,
researchers have implemented conventions for representing segmental links and joints.
Human anatomy can be represented as a sequence of rigid bodies (links) connected by
joints. Of course, this serial linkage could be an arm, a leg, a finger, a wrist, or any other
functional mechanism. Joints in the human body vary in shape, function, and form. The
complexity offered by each joint must also be modeled, to the extent possible, to enable a
correct simulation of the motion. The degree by which a model replicates the actual
physical model is called the level of fidelity.
Perhaps the most important element of a joint is its function, which may vary
according to the joint’s location and physiology. The physiology becomes important
when we discuss the loading conditions of a joint. In terms of kinematics, we shall
address the function in terms of the number of degrees of freedom associated with its
overall movement. Muscle, ligament, and tendon actions at a joint are also important and
contribute to the function.
For example, consider the elbow joint, which is considered a hinge or one degree-
of-freedom (DOF) rotational joint (e.g., the hinge of a door) because it allows for flexion
and extension in the sagittal plane (Figure 1.1) as the radius and ulna rotate about the
humerus. We shall represent this joint by a cylinder that rotates about one axis and has
no other motions (i.e., 1 DOF). Therefore, we can now say that the elbow is
characterized by one DOF and is represented as a cylindrical rotational joint also shown
in Figure 1.1.
Figure 1.1 One DOF elbow
Figure 1.2 The shoulder joint (1. Clavicle. 2. Body of scapula. 3. Surgical neck of humerus. 4. Anatomical neck of humerus. 5. Coracoid process. 6. Acromion)
On the other hand, consider the shoulder complex (Figure 1.2). The
glenohumeral joint (shoulder joint) is a multi-axial (ball and socket) synovial joint
between the head of the humerus (5) and the glenoid cavity (6). There is a 4 to 1
incongruency between the large round head of the humerus and the shallow glenoid
cavity. A ring of fibrocartilage attaches to the margin of the glenoid cavity forming the
glenoid labrum. This serves to form a slightly deeper glenoid fossa for articulation with
the head of the humerus.
There are a number of methods that can be used to model this complex joint
(Figure 1.3). One such method (Maurel, 1999) is to consider the shoulder girdle
(considering bones in pairs) as four joints that can be distinguished as: the sterno-
clavicular joint, which articulates the clavicle by its proximal end onto the sternum, the
acromio-clavicular joint, which articulates the scapula by its acromion onto the distal end
of the clavicle, the scapulo-thoracic joint, which allows the scapula to glide on the thorax,
and the gleno-humeral joint, which allows the humeral head to rotate in the glenoid fossa
of the scapula.
Figure 1.3 A model of the shoulder complex (humerus, scapula, clavicle, thorax)
Another method takes into consideration the final gross movement of the joint
(Abdel-Malek et al., 2001), as abduction/adduction (about the anteroposterior axis of the
shoulder joint), flexion/extension and transverse flexion/extension (about the
mediolateral axis of the shoulder joint). Note that these motions provide for three
rotational degrees of freedom having their axes intersecting at one point. This gives rise
to the effect of a spherical joint typically associated with the shoulder joint (Figure 1.4).
In addition, the upward/downward rotation of the scapula gives rise to two substantial
translational degrees of freedom, for a total of 5 DOF's in the shoulder complex. This model
allows for consideration of the coupling between some of the joints, as is the case in the
shoulder where muscles extend over more than one segment. When muscles are used to
lift the arm in a rotational motion, unwittingly, a translational motion of the shoulder
occurs.
Figure 1.4 Modeling of the shoulder complex as three revolute and two prismatic DOF's (joint variables q1 through q5)
Hogfors et al. (1987) introduced a rigid body shoulder model of twelve degrees of
freedom: three orientations for each bone and the position of the center of the humeral
head. Hogfors et al. reported that the twelve descriptive kinematic degrees of freedom
are functionally interrelated due to the constraints among them. The loop conformation
of the trunk, the clavicle and the scapula induces interdependencies between these
parameters, thus reducing the number of true DOF’s. Groot and Brand (2001) developed
a three-dimensional regression model of the shoulder rhythm that showed that the
orientation of the clavicle and the scapula is dependent on the humerus orientation.
Lepoutre (1993) modeled the human as a 10-DOF system, with 6 DOF's for the trunk, 3
for the arm, and 1 for the leg, all rotational joints. Jung and Choe (1996) modeled the
upper body with seven degrees of freedom, consisting of hip flexion, hip lateral bending,
shoulder flexion, shoulder abduction-adduction, shoulder rotation, elbow flexion, and
wrist flexion-extension. Maurel (1999) developed a 10-DOF shoulder-arm model for the
CHARM project, of which 8 DOF's are for the shoulder, 1 for the elbow, and 1 for the wrist.
To describe the translational and rotational relationships between adjacent links of
the open kinematic chain, Denavit and Hartenberg (1955) notation (DH notation) has
been used because of its strength in handling large numbers of degrees of freedom and
because of its ability to systematically enable kinematic and dynamic analyses. DH
notation is used to systematically establish a coordinate system (body-attached frame) on
each link of an articulated chain in robotics (Asada and Slotine, 1986). The DH notation
uses a minimal set of parameters to completely describe the kinematic relationship: the
relative location of two adjacent frames is determined by four parameters. Indeed, the
Denavit-Hartenberg representation method has been
demonstrated to yield an effective method for modeling humans (Jung et al., 1995;
Abdel-Malek et al., 2001).
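As an illustration of the four-parameter convention described above, the sketch below builds the standard DH homogeneous transform between adjacent frames and chains it along a hypothetical two-link planar arm; the link lengths are arbitrary assumptions.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between adjacent DH frames: rotate theta about z,
    translate d along z, translate a along x, rotate alpha about x."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(dh_rows):
    """Chain the per-joint transforms; returns the base-to-end-effector frame."""
    T = np.eye(4)
    for row in dh_rows:
        T = T @ dh_transform(*row)
    return T

# Illustrative 2-DOF planar chain (alpha = d = 0): with theta1 = theta2 = 0
# the end-effector lies at x = a1 + a2.
T = forward_kinematics([(0.0, 0.0, 0.3, 0.0), (0.0, 0.0, 0.25, 0.0)])
print(T[:3, 3])   # end-effector position; x component is 0.55
```

A human segmental model in this convention is simply a longer table of (theta, d, a, alpha) rows, one per degree of freedom, which is what makes the notation scale to large-DOF chains.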
In another modeling method by Wang et al. (1998), a model was developed
expressing the upper arm axial motion limits as a function of elbow position. It has been
shown that the axial motion range of the upper arm depends strongly on the position of
the upper arm in the shoulder sinus cone and varies on average from 94° to 157°. The
elbow joint motion range is characterized by the simple inequality from 0 to the elbow
maximum flexion angle, which is on average around 142° (Chaffin and Anderson, 1991).
1.3.2 Posture Prediction
Posture prediction can be considered as an inverse kinematic problem in robotics:
Given a kinematic chain, and given a point in space that must be reached (and sometimes
given the trajectory of the hand and its orientation), it is required to calculate the set of
joint angles that achieve a desired posture. It is evident that this is an ill-posed problem
because of the high level of redundancy inherent in the musculoskeletal system. A
solution to this ill-posed problem in terms of movement control requires, in addition to
biophysical and anatomical constraints, other constraints that reduce the number of
degrees of freedom (Gielen et al., 1995). It is hypothesized that human body control
utilizes a cost function attached to each joint, which defines a cost value for each joint
angle, and a posture configuration is chosen based on the minimum total cost (Cruse et
al., 1990; Jung et al., 1994). Posture prediction models developed under this hypothesis
used cost functions such as joint torque and L5/S1 pressure from the biomechanical
perspective, and joint discomfort from the psychophysical perspective. Among these
functions, L5/S1 pressure and energy consumption have been used mainly to predict
whole body postures and postures under heavy loads, such as in lifting tasks (Jung
and Choe, 1996). One method to calculate discomfort is to conduct a series of
experiments, where the perceived discomfort of human subjects is measured and a model
to calculate the discomfort from the joint angles is obtained through regression analysis
(Jung and Choe, 1996).
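The regression approach just described (fitting a discomfort model to measured joint angles) can be sketched with ordinary least squares. Everything below is a synthetic stand-in: the ratings are generated data, and the linear form, two-joint input, and coefficients are assumptions for illustration, not values from Jung and Choe.

```python
import numpy as np

# Synthetic stand-in for rated-discomfort data: subjects' ratings modeled
# as a (hypothetical) linear function of two joint angles plus noise.
rng = np.random.default_rng(0)
angles = rng.uniform(0.0, np.pi / 2, size=(50, 2))   # e.g., shoulder, elbow (rad)
true_w = np.array([2.0, 1.2])                        # assumed "true" sensitivities
true_b = 0.5
ratings = angles @ true_w + true_b + rng.normal(0.0, 0.05, 50)

# Ordinary least squares fit: discomfort ≈ w1*q1 + w2*q2 + b.
X = np.column_stack([angles, np.ones(len(angles))])
coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)
print(coef)   # close to [2.0, 1.2, 0.5]

def predicted_discomfort(q):
    """Evaluate the fitted regression model at a joint-angle vector q."""
    return np.array([q[0], q[1], 1.0]) @ coef
```

Once fitted, such a model can be evaluated at any candidate posture, which is what allows discomfort to act as a cost function inside a posture prediction scheme.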
There have been two schools of thought regarding posture prediction. The first,
perhaps the more traditional, uses anthropometric data collected from thousands of
experiments with human subjects, or from simulations using three-dimensional
computer-aided human-modeling software [see, for instance, Porter et al. (1990) and Das
and Sengupta (1995)], which are statistically analyzed to form a predictive model of
posture, such as a regression model. This school of thought is referred to as
empirical-statistical modeling. These models have been implemented in various simulation
software systems with some variations as to the method for selecting the most probable
posture. Among the empirical-statistical modelers are Beck and Chaffin (1992), Zhang
and Chaffin (1996), Das and Behara (1998), and Faraway et al. (1999).
The second school of thought uses biomechanics and kinematics as a
predictive tool (often referred to as inverse kinematic solutions), applied to a posture that
has not been observed but is estimated to be a likely posture for the task (Tracy, 1990). This
approach mathematically models the motion of a limb with the goal of formulating a set
of equations that can be solved for the joint variables. Among the researchers who
belong to this school of modeling are Jung et al. (1992; 1995), Kee et al. (1994), Jung and
Choe (1996), Kee and Kim (1997), and Wang (1999).
Researchers belonging to the first school (in particular Beck and
Chaffin, 1992) cautioned that inverse kinematics algorithms are not necessarily correct
for posture prediction, because of their theoretical foundation, the difficulty of evaluating
the Jacobian, the difficulty of determining a closed-form equation for the posture, and the
difficulty of modeling large numbers of degrees of freedom. On the other hand, others
(Abdel-Malek et al., 2001) have stated that statistical models alone do not provide
avenues for rigorous ergonomic design and do not reflect a task-based approach. An
impracticable number of experiments involving human subjects must be conducted for
each specific task, for every gender, age, and anthropometric measure.
The following summarizes the methods that have been proposed for posture
prediction. Generally speaking, methods used in posture prediction can be divided into
four categories: experimental, algebraic, geometric, and iterative IK solutions.
The experimental approach is based primarily on statistical regression equations
developed from a large number of measured postures (Beck and Chaffin, 1992; Verriest
et al., 1994). The advantage of statistical regression-based methods is that no numerical
iterations are needed and joint limits are automatically satisfied. However, their
application is limited by the database; moreover, accurate prediction requires many
experiments covering people of different sizes. Faraway et al. (1999) used the
pseudo-inverse to rectify postures obtained from an experimental database so as to meet
position constraints on the end-effector. This method is limited to situations in which a
database of human postures has been generated for the specific desired task, and it
becomes very inconvenient or impossible for general task-based posture prediction.
Algebraic solutions are significantly faster than iterative IK solutions. The
problem with the algebraic method is that not all kinematic chains have a closed-form
solution, especially for kinematic chains with more than 6 DOF’s (McKerrow, 1991).
Furthermore, the derivation of the closed-form equations is a lengthy process, and it is
impossible to apply the algebraic method to general cases.
A geometric algorithm of inverse kinematics was proposed to predict the arm
reach posture (Wang and Verriest, 1998; Wang, 1999). The main advantage of the
geometric method compared with the algebraic one is that the non-linear nature of the
shoulder joint limit can be handled in a direct and easy way and that matrix inverse
calculation is avoided. However, this geometric algorithm can only be applied to the
specific arm model in which only the rotational movement of the shoulder is considered,
since it uses the geometric relation between the shoulder, elbow, and wrist. Thus, it is not
suitable for general cases in which a more complex shoulder model is involved or a
larger number of degrees of freedom is used.
Generalized iterative IK methods enable any structure with an arbitrary number of
degrees of freedom to be animated with minimal human intervention. Because of this
advantage, iterative IK methods have become the main approach applied in posture
prediction. As a result, numerous reports have appeared that use this approach but are
limited in breadth and generality. For example, Goldenberg et al. (1985) reported a
generalized solution to the IK problem for robotic manipulators. A modified
Newton-Raphson method was used in which the Jacobian matrix was partitioned according to
six joint correction variables and (n-6) free joint correction variables. At each iteration,
the free joint correction variables were obtained by optimizing some cost function. This
method is very time-consuming for systems with many redundant DOF's because of the
optimization needed at each iteration step.
The so-called Distributed Positioning (DP) concept was originally developed for
problems in which massive robots are involved in fast manipulation (Potkonjak, 1990).
The same author (Potkonjak et al., 1998) later applied this concept to the motion analysis
of a redundant anthropomorphic arm/hand with 8 DOF's during writing. The modeling is
based on separating the prescribed movement into two motions: smooth global motion
and fast local motion. The former is distributed to the subsystem with the greatest
inertia, called the basic configuration, while the latter is distributed to the redundancy
subsystem. A numerical integration method is then used to solve for the basic
configuration, and the redundancy subsystem is determined in a second step with
knowledge of the basic configuration. Although this method avoids the redundancy in
the first step, it still requires a pseudo-inverse in the second step. Moreover, the global
and local motions for each specific task must be analyzed precisely before problem
solving can begin.
The most popular iterative approach to posture prediction is based on the pseudo-
inverse method used for redundant manipulator control in robotics (e.g., Nakamura,
1991; Chiaverini and Siciliano, 1991). The general formulation of the pseudo-inverse
method is based on the following equation:

Δθ = J⁺Δx + (I − J⁺J)Δz     (1.3.1)

where Δθ is the vector of joint variable increments, Δx describes the main task as a
variation of the end-effector position and orientation in Cartesian space, J is the Jacobian
matrix, J⁺ is the pseudo-inverse of J, and Δz describes a secondary task in joint space.
(I − J⁺J) is a projection operator onto the null space of the linear transformation J,
which means that no value of Δz can disturb the achievement of the main task.
Consequently, Eq. (1.3.1) provides a set of solutions; the solution given by the first term
alone has the minimum norm among them. Usually Δz is calculated so as to minimize a
cost function. For example, Boulic and Thalmann (1992) proposed an approach for the
animation of articulated figures whose fundamental idea is to treat any desired joint-space
motion as a reference model inserted into the secondary task of an inverse kinematic
control scheme; in other words, Δz is calculated to minimize the difference between the
resolved motion and the reference motion.
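Eq. (1.3.1) can be sketched directly with a numerical pseudo-inverse. The Jacobian and task increments below are arbitrary illustrative values for a redundant case with 2 task coordinates and 4 joints; the point of the sketch is the null-space property, namely that the secondary term cannot disturb the main task.

```python
import numpy as np

def resolved_rate_step(J, dx, dz):
    """One increment of Eq. (1.3.1): dtheta = J+ dx + (I - J+ J) dz."""
    J_pinv = np.linalg.pinv(J)
    n = J.shape[1]
    null_proj = np.eye(n) - J_pinv @ J        # projector onto the null space of J
    return J_pinv @ dx + null_proj @ dz

# Illustrative redundant case: 2 task coordinates, 4 joints (values arbitrary).
J = np.array([[1.0, 0.5, 0.2, 0.1],
              [0.0, 1.0, 0.5, 0.2]])
dx = np.array([0.01, -0.02])                  # main (end-effector) task increment
dz = np.array([0.1, 0.0, 0.0, -0.1])          # secondary task in joint space

dtheta = resolved_rate_step(J, dx, dz)
# The secondary term must not disturb the main task: J dtheta equals dx.
print(np.allclose(J @ dtheta, dx))            # → True
```

Setting dz to zero recovers the minimum-norm solution given by the first term alone, which is the often-criticized default behavior discussed below.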
Lepoutre (1993) used the pseudo-inverse technique to predict upper body postures of
a two-dimensional model and compared different optimization criteria, such as
minimization of articular torques and remoteness from articular limits (a dexterity
criterion); the posture given by the dexterity criterion generally corresponded best to the
geometrical organization of the workspace. Jung et al. (1995) applied the same
method with a dexterity criterion (joint range availability) to predict upper body postures
of a three-dimensional model, using the DH notation to represent human motion. The
same group reportedly demonstrated that humans adopt postures of minimum discomfort
among all feasible body configurations (a result we shall expound upon in our work).
Similar results were reported by Dysart and Woldstad (1996), who used three separate
models and objective functions to predict the postures of humans performing static
sagittal lifting tasks. The models used a common inverse kinematics characterization to
represent mathematically feasible postures, but explored different
criteria functions for selecting a final posture; the minimum total torque criterion was
shown to be the more accurate. Their models were limited to planar motion (stick models
were used) with a relatively small number of DOF's. Jung and Choe (1996) predicted the
arm reach posture using an algorithm that works in several steps: first, it predicts
multiple sets of joint angles that position the hand at a specific target location using the
inverse kinematics technique; second, it applies joint range-of-motion criteria to find the
kinematically feasible posture set among the predicted body postures; finally, it applies
the discomfort prediction model to the feasible postures and selects the most
favorable upper body posture that has a minimum discomfort value. They showed that
hip lateral bending and wrist flexion are the most sensitive joint movements among the
seven joints they modeled. Boulic et al. (1996) extended the range of inverse kinematics
based on pseudo-inverse by integrating mass distribution information to embody the
position control of the center of gravity of any articulated figure in single support (open
tree structure). Zhang et al. (1998) used a weighted pseudo-inverse method for posture
prediction in seated reaching movements. The weights were obtained through
minimizing the posture difference between the predicted and experimental data.
There are some drawbacks to pseudo-inverse methods. First, the second term
in Eq. (1.3.1) is normally omitted because of its computational complexity; the
minimum-norm nature of the first-term solution then sometimes yields unnatural motion.
If the second term is retained, an optimization must be solved at every step along the path
to calculate Δz, which greatly increases the computational cost. Second, the local nature
of this approach provides a local solution based on the current configuration of the
articulated structure; as a consequence, a different initial configuration leads to a different
final configuration. Although this is true under some circumstances, in many other cases
the posture depends mainly on discomfort or some other criterion, regardless of the
initial posture assumed. Third, the pseudo-inverse approach naturally computes a path
for the end-effector from an initial position to a final position; this is computationally
wasteful, since for most ergonomic evaluations only the final posture is needed. Because
of these limitations, the pseudo-inverse method has not been used for the large-DOF
systems that realistic human modeling is likely to require.
Zhao and Badler (1989, 1994) used a non-linear optimization method that can
take into account multiple geometric constraints (e.g. position, orientation, etc.) on the
hand or reference points. These constraints were formulated as a goal function to be
minimized, providing a very solid method especially when the body is overconstrained.
The minimization algorithm of Broyden, Fletcher, Goldfarb, and Shanno (BFGS)
(Fletcher, 1987) is shown to converge quickly using a small number of objective function
evaluations, making it the best method for solving IK systems of those considered in the
studies by Chin et al. (1997) and Zhao and Badler (1994). However, Zhao and Badler
did not consider human factors in their optimization method, making their solutions less
realistic. Moreover, the gradient-based optimization method they used can terminate at
a local minimum.
Some other approaches have been presented. Badler et al. (1987) developed a
reach tree-balancing algorithm for multiple reach constraints where different weights
were assigned to different constraints. The reach problem was solved using a greedy
algorithm based on the triangle inequality. Consecutive links are aligned until there is
sufficient length to solve the reach as a triangle or the chain is fully straightened. This
algorithm did not consider joint limits and could yield undesired results. Artificial
neural network models have also emerged; they provide more accurate predictions than
standard statistical models (Hestenes, 1994; Jung and Park, 1994; Eksioglu et al.,
1996).
1.3.3 Control Barriers
The problem of determining kinematically-smooth path trajectories has only
marginally been addressed in the literature. However, delineation of singular behavior
where the manipulator may or may not be able to cross a barrier was addressed by many
researchers and is typically based on a null-space criterion of the manipulator’s Jacobian
(Spanos and Kohli, 1985; Lai and Yang, 1986; Shu et al., 1986; Soylu and Duffy, 1988;
Burdick, 1991; Lipkin and Pohl, 1991; Pai and Leu, 1992; Tourassis and Ang, 1992).
While specific crossability criteria on barriers in the workspace were reported by
Nielsen et al. (1991), the fundamental concept of crossable and noncrossable surfaces
inside a manipulator’s workspace was addressed early on by Oblak and Kohli (1988).
Although most of these works have touched upon the concept of crossable surfaces using a
singularity criterion of the Jacobian, a unified methodology for identifying position
control problems across a barrier has not been presented. One criterion to define possible
motion (so-called feasible trajectory) from a singularity was presented by Chevallereau
and Daya (1994) and Chevallereau (1996).
Haug et al. (1995) presented a numerical algorithm for identifying and analyzing
barriers to output control of manipulators using first- and second-order Taylor
approximations of the output in selected directions. Haug and colleagues showed that the
output velocity in the direction normal to such curves and surfaces must be zero (Haug et
al., 1996) and manipulator boundaries were consequently mapped.
An elegant method was presented more recently in which singular surfaces in manipulator
workspaces were delineated and acceleration-based crossability criteria were defined
(Yeh, 1996; Abdel-Malek and Yeh, 1997, 2000; Abdel-Malek et al., 1997, 2001).
1.3.4 Trajectory Planning
First, we will review the approaches used for trajectory generation in robotics.
Two common approaches are used to plan manipulator trajectories. The first approach
requires the user to explicitly specify a set of constraints (e.g., continuity and
smoothness) on position, velocity, and acceleration of the manipulator’s generalized
coordinates at selected locations (called knot points or interpolation points) along the
trajectory. The trajectory planner then selects a parameterized trajectory from a class of
functions (usually the class of polynomial functions of degree n or less, for some n), in
the time interval [t0, tf] that “interpolates” and satisfies the constraints at the
interpolation points. In the second approach, the user explicitly specifies the path that the
manipulator must traverse by an analytical function, such as a straight-line path in
Cartesian coordinates, and the trajectory planner determines a desired trajectory either in
joint coordinates or Cartesian coordinates that approximates the desired path.
By using a quaternion representation of rotation, Taylor (1979) proposed an approach
called the bounded deviation joint path. This approach requires a motion planning phase
that selects enough knot points that the manipulator can be controlled by linear
interpolation of joint values while keeping the Cartesian deviation bounded.
Lin et al. (1983) proposed an approach where a set of joint spline functions are
used to fit the segments among the selected knot points along the given Cartesian path.
This approach involves the conversion of the desired Cartesian path into its functional
representation of n joint trajectories, one for each joint. Since cubic polynomial
trajectories are smooth and have small overshoot of angular displacement between two
adjacent knot points, Lin et al. adopted the idea of using cubic spline polynomials to fit
the segment between two adjacent knots. The total traveling time of the manipulator
was minimized by adjusting the time intervals between adjacent knot points subject
to velocity, acceleration, jerk, and torque constraints.
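The knot-fitting idea of Lin et al. can be sketched with an off-the-shelf clamped cubic spline; the knot times and joint values below are invented for illustration, and the traveling-time optimization they describe is omitted.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Knot times (s) and a sampled joint angle (rad) at each knot, e.g. from
# inverse kinematics along a Cartesian path; the values here are made up.
t_knots = np.array([0.0, 0.5, 1.2, 2.0])
q_knots = np.array([0.0, 0.4, 0.9, 1.0])

# Clamped cubic spline: zero joint velocity at the start and end of the motion.
spline = CubicSpline(t_knots, q_knots, bc_type=((1, 0.0), (1, 0.0)))

t = np.linspace(0.0, 2.0, 201)
q, qd, qdd = spline(t), spline(t, 1), spline(t, 2)
# The spline interpolates the knots and respects the boundary velocities.
print(abs(spline(0.5) - 0.4) < 1e-9, abs(spline(0.0, 1)) < 1e-9)
```

One such spline would be built per joint; the knot times themselves are the variables that Lin et al. adjust to minimize total traveling time.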
Bobrow (1988) presented a path planning technique, which makes use of
approximations of an initial feasible trajectory in conjunction with an iterative, nonlinear
parameter optimization algorithm to produce time-optimal motions for a manipulator
with 3 DOF’s in a workspace containing obstacles. The Cartesian path of the
manipulator was represented with B-spline polynomials, and the shape of this path was
varied in a manner that minimized the traversal time. Obstacle avoidance constraints
were included in the problem through the use of distance functions. His method did not
prevent the arm from colliding with the obstacle at points other than the tip.
Yun and Xi (1996) used genetic algorithms for optimal motion planning of robots in joint
space, where intermediate knots were selected and their parameters, together with the
traveling time of each trajectory segment, were coded and optimized. Similarly,
Constantinescu and Croft (2000) proposed a smooth, time-optimal trajectory planning
method that minimizes time under path constraints, torque limits, and torque-rate
limits. The optimization variables are the end-effector pseudo-velocities at pre-selected
knot points along the path and the slopes of the trajectory in the s–ṡ phase plane at the
path end-points, where s is the path parameter, e.g., the arc length. The path itself
is pre-imposed as a constraint.
Significant research has also been done on collision free motion planning. For
example, in the early 1980s, Lozano-Perez (1983) introduced the concept of a robot’s
configuration space, in which the robot is represented as a point–called a configuration–
in a parameter space encoding the robot’s DOF’s–the configuration space. Path planning
for a dimensioned robot is thus “reduced” to the problem of planning a path for a point in
a space that has as many dimensions as the robot has DOF’s. Two popular approaches
were introduced in the 1980s: approximate cell decomposition, where the free space is
represented by a collection of simple cells (Brooks and Lozano-Perez, 1983), and
potential field (Khatib, 1986). Potential fields are used in path planning to create regions
with numeric values that indicate a measure of the safety of that region. But neither of
these approaches extends well to robots with more than 4 or 5 DOF’s: either the
number of cells becomes too large, or the potential field has local minima.
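A minimal numerical sketch of the potential-field idea follows (all parameter values are assumptions for the example; Khatib's operational-space formulation is more elaborate): an attractive quadratic well at the goal, repulsive terms active within a distance d0 of each obstacle, and descent along the negated gradient, which can stall in a local minimum exactly as noted above.

```python
import numpy as np

def potential(q, goal, obstacles, k_att=1.0, k_rep=0.5, d0=1.0):
    """Attractive quadratic well plus repulsive terms active within d0."""
    U = 0.5 * k_att * np.sum((q - goal) ** 2)
    for obs in obstacles:
        d = np.linalg.norm(q - obs)
        if d < d0:
            U += 0.5 * k_rep * (1.0 / d - 1.0 / d0) ** 2
    return U

def descend(q, goal, obstacles, step=1e-2, iters=5000):
    """Negated-gradient descent; may stall in a local minimum."""
    for _ in range(iters):
        g = np.zeros_like(q)
        for i in range(len(q)):            # finite-difference gradient
            e = np.zeros_like(q)
            e[i] = 1e-6
            g[i] = (potential(q + e, goal, obstacles) -
                    potential(q - e, goal, obstacles)) / 2e-6
        q = q - step * g
    return q

# Obstacle placed off the straight line from start to goal, so descent
# converges; moving it onto the line can create a local-minimum stall.
q = descend(np.array([0.0, 0.0]), np.array([3.0, 0.5]),
            obstacles=[np.array([1.5, 1.5])])
print(np.linalg.norm(q - np.array([3.0, 0.5])) < 0.05)
```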
A randomized planner was introduced (Barraquand and Latombe, 1991), which
was able to solve complex path-planning problems for many-DOF robots by alternating
“down motions” to track the negated gradient of a potential field and “random motions”
to escape local minima. Later, a probabilistic roadmap (PRM) planner (Kavraki et al.,
1996) was developed. By sampling the configuration space and connecting the samples by
“local” paths (typically straight paths), a PRM can be created. Samples and local paths
are checked for collision using a fast collision checker, which avoids the prohibitive
computation of an explicit representation of the free space.
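A toy PRM in a 2-D configuration space can illustrate the idea. All parameters below, such as the sample count and connection radius, are invented for the example, and real planners use fast hierarchical collision checkers rather than this naive segment sampling.

```python
import heapq
import itertools
import numpy as np

rng = np.random.default_rng(0)

def collides(p, discs):
    return any(np.linalg.norm(p - c) < r for c, r in discs)

def edge_free(a, b, discs, n=20):
    """'Local path' check: sample the straight segment for collisions."""
    return all(not collides(a + t * (b - a), discs)
               for t in np.linspace(0.0, 1.0, n))

def prm(start, goal, discs, n_samples=200, radius=0.35):
    # 1) Sample the (here 2-D) configuration space, keeping free samples.
    nodes = [start, goal] + [p for p in rng.random((n_samples, 2))
                             if not collides(p, discs)]
    # 2) Connect nearby samples with collision-checked straight local paths.
    adj = {i: [] for i in range(len(nodes))}
    for i, j in itertools.combinations(range(len(nodes)), 2):
        d = np.linalg.norm(nodes[i] - nodes[j])
        if d < radius and edge_free(nodes[i], nodes[j], discs):
            adj[i].append((j, d))
            adj[j].append((i, d))
    # 3) Dijkstra search on the roadmap from start (node 0) to goal (node 1).
    dist, heap = {0: 0.0}, [(0.0, 0)]
    while heap:
        c, u = heapq.heappop(heap)
        if u == 1:
            return c
        for v, w in adj[u]:
            if c + w < dist.get(v, np.inf):
                dist[v] = c + w
                heapq.heappush(heap, (c + w, v))
    return None  # roadmap did not connect start and goal

discs = [(np.array([0.5, 0.5]), 0.2)]  # one circular obstacle
length = prm(np.array([0.05, 0.05]), np.array([0.95, 0.95]), discs)
print(length is not None)
```

Because the obstacle blocks the straight start-goal segment, any returned path length exceeds the Euclidean start-goal distance; that detour is exactly what the roadmap's free-space connectivity buys.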
Quinlan and Khatib (1993) proposed the elastic band concept. The free space
around the path was represented as a series of hyperspheres, called bubbles. A bubble
represents a region of configuration space that is free of collision. Covering the path with
those bubbles, a channel of free space was formed through which the robot’s trajectory
could be executed. Later, Khatib et al. (1999) used an elastic strip method for the
collision-free path modification behaviors of robots. An elastic strip represents the workspace
volume swept by a robot along a preplanned trajectory. This representation was
incrementally modified by external repulsive forces originating from obstacles to
maintain a collision-free path.
The above approaches are applied to the trajectory planning of manipulators, which
normally have only 2 or 3 DOF’s and at most 6. In contrast, for realistic motion
generation, human models normally have more than 10 DOF’s. Moreover, the criteria
used for motion planning are quite different. For example, time optimality is usually
chosen for manipulator trajectory planning in practice. For human motion this is not
always important; instead, humans tend to adopt the motion with the least discomfort
and effort and the greatest smoothness. This leads to a different research area where
different strategies are used in human motion planning.
Barring particular overriding circumstances, natural movements–and, more
markedly, hand movements–tend to be smooth and graceful. One can then postulate that
this characteristic feature corresponds to a design principle, or, in other words, that
maximum smoothness is a criterion to which the motor system abides in the planning of
end-point movements. Point-to-point movements performed under a wide variety of
conditions using a wide variety of limb segments exhibit the same velocity pattern (Flash
and Hogan, 1985; Hogan and Flash, 1987): a smooth, bell-shaped time course, typically
symmetrical (or nearly so) about the mid-point of the movement, starting from zero,
growing to a single peak and declining again to zero. Many researchers have also
reported that the velocity profiles of rapid-aimed movements have a global asymmetric
bell-shape, which is invariant over a wide range of movement sizes and speeds, and
asymmetry increased with higher accuracy demands (Plamondon, 1995, Part I, Part II;
Plamondon, 1998).
Because the common invariant features of these movements were only evident in
the extracorporal coordinates of the hand, there is a strong indication that planning takes
place in terms of hand trajectories rather than joint rotations. Flash and Hogan (1985)
presented a mathematical model which was shown to predict both the qualitative features
and the quantitative details observed experimentally in planar, multi-joint arm
movements. The objective function is the square of the magnitude of jerk (rate of change
of acceleration) of the hand integrated over the entire movement. This is equivalent to
assuming that a major goal of motor coordination is the production of the smoothest
possible movement of the hand.
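The minimum-jerk model admits a closed-form solution for rest-to-rest movements; the sketch below evaluates it and exhibits the bell-shaped, symmetric velocity profile described above (the movement amplitude and duration are arbitrary choices for the example).

```python
import numpy as np

def min_jerk(x0, xf, T, t):
    """Minimum-jerk rest-to-rest profile (Flash and Hogan, 1985):
    x(t) = x0 + (xf - x0) * (10 s^3 - 15 s^4 + 6 s^5), with s = t / T."""
    s = t / T
    x = x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)
    v = (xf - x0) / T * (30 * s**2 - 60 * s**3 + 30 * s**4)
    return x, v

t = np.linspace(0.0, 1.0, 101)
x, v = min_jerk(0.0, 0.3, 1.0, t)   # a 0.3 m reach in 1 s
# Bell-shaped speed profile: zero at the ends, single peak at mid-motion,
# symmetric about t = T/2 (note v = 30 (xf - x0) s^2 (1 - s)^2 / T).
print(v[0] == 0.0, v[-1] == 0.0, np.argmax(v) == 50)
```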
The observation that unconstrained, unperturbed arm movements are coordinated
in terms of hand motion shows that motor control is organized in a hierarchy of
increasing levels of abstraction (Hogan et al., 1987). These arm motions are organized as
though a disembodied hand could be moved in space; the details of how this is achieved
must then be supplied by a different level in the hierarchy.
Other models have also been proposed and studied. The comparison of Nelson
(1983) showed the remarkable similarity of movements predicted by the linear-spring
model and the minimum-jerk model. Uno et al. (1989) proposed a mathematical model
formulated by defining an objective function: the square of the rate of change of
torque, integrated over the entire movement.
Wolpert et al. (1995) have studied the effects of artificial visual feedback on
planar two-joint arm movements to distinguish between the two main groups of human
trajectory planning models–those specified in kinematic coordinates and those specified
in dynamic coordinates. Their results suggested that trajectories are planned in visually
based kinematic coordinates, and the desired trajectory is straight in visual space, which
is incompatible with purely dynamics-based models such as the minimum-torque-change
model.
Kawato et al. (1988) studied the problems of coordinates transformation from the
desired trajectory to the body coordinates and motor command generation. They
proposed an iterative learning control as an algorithm for simultaneously solving these
two problems. This approach appears to be very attractive, but it lacks the capability
to generalize.
1.3.5 Layout Design
The layout design problem is defined as the method whereby positions of target
points are specified in the environment surrounding a human. Given a person’s
dimensions and ranges of motion, it is required to locate a number of objects in the
environment such that a specified cost function is optimized. The problem is of interest
to ergonomists, automobile packaging engineers, and designers interested in locating
targets (e.g., lever, buttons, control knobs, switches, etc.) in the reachable space of a
person. Because there are an infinite number of solutions to this problem, the field of
optimization presents a viable venue for formulating the problem.
Implementing a systematic optimization scheme in ergonomics has been, to a
certain extent, addressed by some researchers (Fisher, 1993; Pham and Onder, 1992).
There have also been many studies, the majority of which are experimental, that delimit
“comfort” and “convenient” reach zones where objects may be placed for operators, to
reduce effort and minimize potential injuries (Lim and Hoffmann, 1997; Das and
Sengupta, 1996).
CHAPTER 2
MODELING OPEN LOOP KINEMATIC STRUCTURES
A 15-degree-of-freedom (DOF) model of a human torso and arm is developed in
this chapter. This model is used throughout the study in later chapters. This chapter also
presents the general human modeling method used in our development and adapted from
the field of kinematics. The method is used to characterize joints of a mechanism in the
study of motion, such that a position vector describing the location of a given point in
terms of all joint displacements is determined.
2.1 A 15-Degree-of-Freedom Model of Torso and Arm
The human body is indeed arranged in series where each independent anatomical
structure is connected to another via a joint. Consider, for example, that there exists a
main coordinate system located at the waist. From that coordinate system, one may
draw a branch by identifying a rigid link, connected through a joint to another rigid
link, and so on, until reaching the hand. Each finger also comprises a number of
segmental links connected via joints. Similarly, also starting from
the waist, one may follow the connection to reach the head, the other hand, the left foot,
and the right foot. We shall refer to one such chain as a branch. For example, Figure 2.1
depicts the modeling of a human into a number of kinematic branches.
It is important to distinguish the difference between a rigid body and a flexible
body. A rigid body is one that cannot deform (we typically consider bone as non-
deforming). A flexible body (or deformable object) is one that undergoes relatively large
strains when subjected to a load (e.g., soft tissue). For the approach presented in this
chapter, only rigid body motion is assumed at all times. Indeed, for ergonomic design
considerations, rigid body motion is adequate to address most problems. However,
although the effect of muscle interaction and deformation is considered only in
aggregate in our discomfort cost function (presented in Chapter 3), these effects would
need to be modeled explicitly in the development of more elaborate discomfort cost
functions.
Figure 2.1 Modeling of a human using a series of rigid links connected by joints
The human skeleton comprises 206 bones that are strong, light tubes, rods, and plates
(Figure 2.2). Bones are linked by joints–some fixed and fibrous, others mobile–with
ligaments uniting the bone ends, which are buffered by shock-absorbent cartilage. The
movements of each joint of the torso, shoulder, and arm are analyzed and modeled in
the following sections.
The normal anatomy of the spine is usually described by dividing up the spine
into 3 major sections: the cervical, the thoracic, and the lumbar spine (Figure 2.3). Below
the lumbar spine is a bone called the sacrum, which is part of the pelvis. Each section is
made up of individual bones called vertebrae. There are 7 cervical vertebrae, 12 thoracic
vertebrae, and 5 lumbar vertebrae.
The movements permitted in the vertebral column are: flexion, extension, lateral
movement, circumduction, and rotation. Flexion, or movement forward, is the most
extensive of all the movements of the vertebral column and is freest in the lumbar region.
Extension, or movement backward, is limited by the anterior longitudinal ligament. It is
freest in the cervical region. The extent of lateral movement is limited by the resistance
offered by the surrounding ligaments. This movement may take place in any part of the
column, but is freest in the cervical and lumbar regions. Circumduction is very limited,
and is merely a succession of the preceding movements. Rotation is produced by the
twisting of the intervertebral fibrocartilages. Although only slight between any two
vertebrae, this twisting allows a considerable extent of movement when it takes place
over the whole length of the column, the front of the upper part of the column being
turned to one or the other side. Rotation occurs to a slight extent in the cervical region,
is freer in the upper part of the thoracic region, and is absent in the lumbar region.
Figure 2.2 Human skeletal system

Figure 2.3 Anatomy of the spine
Since the reach movement of the hand is not related to the position of the head, the
cervical part of the spine (neck) is not included in our spine model. The other parts,
the thoracic and lumbar regions, are modeled as 6 rotational DOF’s as shown in
Figures 2.7 and 2.9. The DOF’s about axes z2, z3, z4, and z5 (Figure 2.9) represent the
flexion and extension movements. Rotation and lateral movement are represented by
the DOF’s about axes z0 and z1, respectively.
Figure 2.4 Anatomy of the shoulder
The two main bones of the shoulder (Figure 2.4) are the humerus and the scapula
(shoulder blade). The scapula extends up and around the shoulder joint at the rear to
form a roof called the acromion. The end of the scapula, called the glenoid, meets the
head of the humerus to form the glenohumeral articulation, which acts as a flexible
ball-and-socket joint. The sternoclavicular and the acromioclavicular joints are regarded
as accessory
structures to the shoulder-joint. The scapula is capable of being moved upward and
downward, forward and backward. The sternoclavicular joint forms the center from
which all movements of the supporting arch of the shoulder originate and is the only
point of articulation of the shoulder girdle with the trunk. The acromioclavicular joint is
where the collarbone (clavicle) meets the shoulder. When the whole arch formed by the
clavicle and scapula rises and falls (in elevation or depression of the shoulder), the joint
between these two bones enables the scapula still to maintain its lower part in contact
with the ribs. The movements of both the sternoclavicular and acromioclavicular joints
are passive motions caused by scapular rotation and movement. The mobility of the
scapula is very considerable and greatly assists the movements of the arm at the shoulder-
joint.
The shoulder is modeled as a 5-DOF joint (Figures 2.7 and 2.9) following Abdel-Malek
et al. (2001), in which two DOF’s are translational and three are rotational. As shown
in Figure 2.9, translations along axes z6 and z7 represent the upward-downward and
forward-backward movements of the scapula. Rotations about axes z8, z9, and z10
reflect the movement of the ball-and-socket joint of the shoulder.
The elbow (Figure 2.5) is a hinge joint made up of the humerus, ulna and radius.
The unique positioning and interaction of the bones in the joint allows for a small amount
of rotation as well as hinge action. This rotation is easily noticed during activities such as
hand-to-mouth eating motions.
Figure 2.5 Anatomy of the elbow
The hinge action and the rotation of the elbow are modeled as two DOF’s as shown in
Figures 2.7 and 2.9. The rotations about axes z11 and z12 (Figure 2.9) represent the
hinge action and the forearm rotation, respectively.
The wrist is a collection of many joints and bones with one main purpose, to
allow a human to use the hands. The wrist has to be extremely mobile. At the same time,
it has to provide the strength for gripping. The wrist (Figure 2.6) is comprised of eight
separate small bones called the carpal bones. These bones connect the two bones of the
forearm, the radius and the ulna, to the bones of the hand and fingers. The movements
permitted in the wrist joint are flexion, extension, abduction and adduction. The wrist-
joint is a condyloid articulation. The parts forming it are the lower end of the radius and
under surface of the articular disk above and the navicular, lunate, and triangular bones
below. The articular surface of the radius and the under surface of the articular disk form
together a transversely elliptical concave surface, the receiving cavity. The superior
articular surfaces of the navicular, lunate, and triangular form a smooth convex surface,
the condyle, which is received into the concavity.
Figure 2.6 Anatomy of the wrist
The wrist is modeled as a joint with 2 DOF’s as shown in Figures 2.7 and 2.9. The
movements about axes z13 and z14 (Figure 2.9) represent the flexion-extension and
abduction-adduction of the wrist, respectively.
The torso, shoulder and arm are modeled using 15 DOF’s in total (Figure 2.7) as
described above. The joint limits based on the experiments on three human subjects are
listed in Table 2.1. For reach posture, the movement of the joints among phalanges,
metacarpals and carpals of the hand (Figure 2.6) is not important and is not included in
our current model. However, some other activities of the hand like gripping, pinching
and typing are highly related to the hand gesture itself. Carpal Tunnel Syndrome (CTS),
a pinching of the median nerve within the wrist, is seen frequently in people who do
forceful, repetitive types of work, such as grocery store checkers, assembly line
workers, meat packers, typists, accountants, and writers. In order to simulate these
kinds of movements, so as to help in the avoidance and treatment of this disease, the
hand would have to be modeled in detail.
Figure 2.7 Modeling of the torso-shoulder-arm
2.2 Denavit-Hartenberg Representation Method
In order to obtain a systematic method for describing the configuration (position
and orientation) of each pair of consecutive segmental links, a method was proposed by
Denavit and Hartenberg (1955). We shall utilize the method of Denavit and Hartenberg
to address human kinematics.
The Denavit-Hartenberg (DH) method was created in the 1950’s to systematically
represent the relation between two coordinate systems, but it came into extensive use
only in the early 1980’s with the appearance of computational methods and hardware
that enabled the necessary calculations. The method is currently used to a great extent
in the analysis
and control of robotic manipulators. This method has also been successful in addressing
human motion, in particular towards a better understanding of the mechanics of human
motion.
The method, now referred to as the DH method, is based upon characterizing the
configuration of link i with respect to link (i-1) by a (4×4) homogeneous transformation
matrix representing each link’s coordinate system. If each pair of consecutive links
represented by their associated coordinate system is related via a matrix, then using the
matrix chain-rule multiplication, it is possible to relate any of the segmental links (e.g.,
the hand) with respect to any other segmental link (e.g., the shoulder).
For an n-DOF model, the position vector of a point of interest on the end-effector
of a human articulated model (e.g., a point on the thumb with respect to the torso
coordinate system) can be written in terms of joint variables as
x = x(q)     (2.2.1)

where q ∈ Rⁿ is the vector of n generalized coordinates, and x(q) can be obtained from
the multiplication of the homogeneous transformation matrices defined by the DH
method as
\[
{}^{0}\mathbf{T}_{n} = {}^{0}\mathbf{T}_{1}\,{}^{1}\mathbf{T}_{2}\cdots{}^{n-1}\mathbf{T}_{n}
= \begin{bmatrix} {}^{0}\mathbf{R}_{n} & \mathbf{x}(\mathbf{q}) \\ \mathbf{0}^{T} & 1 \end{bmatrix}
\qquad (2.2.2)
\]
where \( {}^{i}\mathbf{R}_{j} \) is the rotation matrix relating coordinate frames i and j. The vector function
x(q) characterizes the set of all points touched by the fingertip.
        Min.       Max.
q1      −π/6       π/6
q2      −π/12      π/12
q3      −π/18      π/6
q4      −π/18      π/6
q5      −π/18      π/6
q6      −π/18      π/6
q7      −3.81 cm   3.81 cm
q8      −3.81 cm   3.81 cm
q9      −π/2       π/2
q10     −2π/3      11π/18
q11     −π/3       2π/3
q12     −5π/6      0
q13     −π         0
q14     −π/3       π/3
q15     −π/9       π/9

Table 2.1 Joint limits
In order to obtain a systematic method for generating the (4×4) homogeneous
transformation matrix between any two links, it is necessary to follow a convention in
establishing coordinate systems on each link. This can be accomplished by implementing
the following rules. It should be emphasized that a suitable home configuration must first
be established before applying these rules. A home configuration denotes the start
configuration of the serial chain (segmental links). It is customary to start from a well-
known position where the user indicates that this posture is the home configuration. The
procedure for establishing coordinate frames at each link is as follows:
(1) Name each joint starting with 1, 2, ..., up to n degrees of freedom.
(2) Embed the z_{i-1} axis along the axis of motion of the ith joint.
(3) Embed the x_i axis normal to the z_{i-1} axis (and of course normal to the z_i axis),
with direction from joint i to joint (i+1).
(4) Embed the y_i axis so that it is perpendicular to x_i and z_i subject to the right-hand
rule. However, since the y_i axis is not needed for determining the DH parameters, it
is customary not to show the y_i axis on the kinematic skeleton so as not to clutter the
drawing.
The location of the origin of the first coordinate frame (frame 0) can be chosen
anywhere along the z_0 axis. In addition, the (n+1)th coordinate system (frame n) can
be embedded anywhere in the nth link subject to the above four rules. In order to
generate the matrix relating any two links, four parameters are needed. The four
parameters (depicted in Figure 2.8) are:
(1) θ_i is the joint angle, measured from the x_{i-1} axis to the x_i axis about the z_{i-1}
axis (right-hand rule applies). For a prismatic joint, θ_i is a constant. It is basically the
angle of rotation of one link with respect to another about the z_{i-1} axis.
(2) d_i is the distance from the origin of coordinate frame (i-1) to the intersection of the
z_{i-1} axis with the x_i axis, measured along the z_{i-1} axis. For a revolute joint, d_i
is a constant. It is basically the distance translated by one link with respect to another
along the z_{i-1} axis.
(3) a_i is the offset distance from the intersection of the z_{i-1} axis with the x_i axis to
the origin of frame i, measured along the x_i axis (the shortest distance between the
z_{i-1} and z_i axes).
(4) α_i is the offset angle from the z_{i-1} axis to the z_i axis, measured about the x_i
axis (right-hand rule).
Figure 2.8 Joint coordinate system convention and its parameters
Careful attention must be given when the following cases occur:
(1) When two consecutive axes are parallel, the common normal between them is not
uniquely defined; i.e., the direction of x_i must be perpendicular to both axes, but the
position of x_i is arbitrary.
(2) When two consecutive axes intersect, the direction of x_i is arbitrary.
The (4×4) transformation matrix describing a transformation from link (i-1) to link i
for joint i is
\[
{}^{i-1}\mathbf{T}_{i} =
\begin{bmatrix}
\cos\theta_i & -\cos\alpha_i \sin\theta_i & \sin\alpha_i \sin\theta_i & a_i \cos\theta_i \\
\sin\theta_i & \cos\alpha_i \cos\theta_i & -\sin\alpha_i \cos\theta_i & a_i \sin\theta_i \\
0 & \sin\alpha_i & \cos\alpha_i & d_i \\
0 & 0 & 0 & 1
\end{bmatrix}
\qquad (2.2.3)
\]
The four values of the DH parameters θ_i, d_i, α_i, a_i are typically entered into a
table known as the DH Table. Each row is used to generate one homogeneous
transformation matrix. This data set provides complete information about the model in
terms of kinematic functionality as well as dimensions.
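For illustration, Eq. (2.2.3) and the chain multiplication of Eq. (2.2.2) can be coded directly. The sketch below (not the software developed for this thesis) builds one transformation per DH row and multiplies them; the two-link example rows are invented, not taken from Table 2.2.

```python
import numpy as np

def dh_matrix(theta, d, alpha, a):
    """Homogeneous transform from frame (i-1) to frame i, per Eq. (2.2.3)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ ct, -ca * st,  sa * st, a * ct],
                     [ st,  ca * ct, -sa * ct, a * st],
                     [0.0,       sa,       ca,      d],
                     [0.0,      0.0,      0.0,    1.0]])

def forward_kinematics(dh_rows):
    """Chain-multiply the row transforms; returns the (4, 4) matrix T0n.

    dh_rows: iterable of (theta, d, alpha, a) tuples, one per joint,
    with the joint variables already substituted in.
    """
    T = np.eye(4)
    for row in dh_rows:
        T = T @ dh_matrix(*row)
    return T

# Toy 2-link planar arm (link lengths 2 and 1, both joints at 90 deg).
T = forward_kinematics([(np.pi / 2, 0.0, 0.0, 2.0),
                        (np.pi / 2, 0.0, 0.0, 1.0)])
x = T[:3, 3]   # end-effector position, the x(q) of Eq. (2.2.1)
print(np.allclose(x, [-1.0, 2.0, 0.0]))
```

Substituting the fifteen rows of the DH Table (with their q-values) into `forward_kinematics` would yield the global hand position of the 15-DOF model in the same way.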
Consider our model of the human body comprising the torso, spine, shoulder,
arm, and wrist with a total of 15 DOF’s as shown in Figure 2.9.
Figure 2.9 A 15-DOF model of the torso, spine, shoulder, arm, and wrist
The coordinate systems are located along each joint, and the DH Table with the
dimensions of a human subject is shown in Table 2.2.
 i    θ_i           d_i               α_i      a_i
 1    π/2 + q1      0                 π/2      0
 2    π/2 + q2      0                 π/2      0
 3    q3            0                 π/2      L1 = 9
 4    q4            0                 0        L2 = 9
 5    q5            0                 0        L3 = 9
 6    π/2 + q6      −L4 = −18         π/2      0
 7    −π/2          L5 + q7 (L5 = 9)  π/2      0
 8    π/2           q8                π/2      0
 9    q9            0                 −π/2     0
10    −π/2 + q10    0                 −π/2     0
11    q11           0                 −π/2     L6 = 20
12    −π/2 + q12    0                 −π/2     0
13    −π/2 + q13    L7 = 25           −π/2     0
14    −π/2 + q14    0                 −π/2     0
15    q15           0                 0        0

Table 2.2 The DH Table for the 15-DOF human model
2.3 Conclusions
This chapter has presented human modeling in terms of kinematics. It is evident
that the human body is a complex system, one whose study requires a truly
multi-disciplinary approach among researchers from the medical field and from
engineering. Detailed physics-based modeling of human joints may require knowledge
well beyond what is currently available. However, for all practical purposes, it has
been shown that approximate modeling of gross human motion, for the purpose of
human motion simulation or ergonomic analysis, is possible.
The DH approach presents a systematic method for locating coordinate systems
on each moving part and for establishing a mathematical relation between any two
coordinate systems. The DH method is easy to implement and provides the basis for
introducing human motion as will be presented in later chapters.
CHAPTER 3
TASK-BASED POSTURE PREDICTION
A general methodology and associated computational algorithm for predicting
realistic postures of digital humans is presented. The basic postulate for this is a task-
based approach, where we believe that humans assume different postures for different
tasks. The underlying problem is characterized by the calculation (or prediction) of the
joint displacements of the human body in such a way as to accomplish a specified task. In
this work, we have not limited the number of degrees of freedom associated with the
model. Each task has been defined by a number of human performance measures that are
mathematically represented by cost functions that evaluate to real numbers. Cost
functions are then optimized, i.e., minimized or maximized, subject to a number of
constraints including joint limits. The problem is formulated as a multi-objective
optimization problem in which one or more cost functions are considered as objective
functions that drive the model to a solution set. The formulation is demonstrated and
validated against both another posture prediction module and experimental data. We
present this computational formulation as a broadly applicable algorithm for predicting
postures using one or more human performance measures.
The optimization method was based on genetic algorithms because of their ability to
find global solutions and to search a relatively large domain. A disadvantage of this
method is its computational time, which is unsuitable for an on-line posture prediction
algorithm, where digital humans are typically used to evaluate digital mockups in a
computer-aided engineering environment. A fast and efficient scheme is therefore
introduced for predicting postures in an on-line algorithm suitable for real-time
implementation.
Before proceeding, it is important to note that although our exposition has focused
on the torso, shoulder and arm, it is applicable to any serial chain representing the human
body.
3.1 Task-Based Behavior
Researchers have become increasingly interested in posture prediction
algorithms because of the recent appearance of digital human codes that enable a user to
model a human mannequin, manipulate the model, and perform analyses on products and
vehicles that would not otherwise be possible. Instead of
building a complete prototype of a vehicle, a virtual model is built from CAD data and
the digital human is deployed into the vehicle to report various characteristics such as
reach and field of view.
While there have been several posture prediction algorithms, a few have focused
on calculating a posture for a given target point in space. Because of the apparent
mathematical complexity of the problem and because the human body has many more
than six DOF’s, most of this research has focused on calculating as many inverse
kinematic solutions (postures) as possible with a post-processing technique to select those
that are most realistic.
We propose a new framework and associated algorithm for predicting postures
that are based on an individual task, where each task is comprised of one or more cost
functions. In order to better understand the motivation behind cost functions, consider
the case of a driver in a vehicle who is about to reach a radio control button on the
dashboard. It is believed that the driver will reach directly to the button while exerting
minimum effort and perhaps expending minimum energy. However, the same driver
when negotiating a curve will have to place his/her hand on the steering wheel in such a
way to be able to exert the necessary force needed to turn the wheel. As a result,
involuntarily, the driver will select a posture that maximizes force at the hand, minimizes
the torque at each joint, minimizes energy, and minimizes effort needed to accomplish
this task (as illustrated in Figure 3.1). Therefore, our underlying scheme is that each task
is driven by the optimization of one or more cost functions. Note that simple logic has
been implemented in the processor module to correlate between the task and the cost
functions.
(Tasks such as turn steering wheel, turn headlights on/off, shift gear, and adjust mirror feed a processor that selects the cost functions to be minimized: effort, energy, dexterity, reach, discomfort, force at fingertip, torque at joint, stress.)
Figure 3.1 The task-based approach to selecting cost functions
What we conclude from this simple example is that humans assume different
postures for different tasks. Indeed, the objective leading to a given posture is one or
more performance measures, which we denote by cost functions. The designation comes
from the field of optimization where the formulation contains three main ingredients:
(1) A cost function to be minimized or maximized. In our case, many cost functions,
thus forming a multi-objective optimization problem.
(2) Design variables: the variables that will be calculated from the algorithm. In our
case, the joint variables that define the position and orientation of each link
connecting two joints.
(3) Constraints, which are mathematical expressions that bound the problem. In our
case, joint-limit and distance constraints.
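These three ingredients can be made concrete with a toy example. The sketch below poses a minimal weighted-sum posture optimization for an assumed two-link planar arm; the link lengths, weights, neutral posture, target, and the use of SciPy's SLSQP solver are all illustrative assumptions, not the thesis's 15-DOF implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative two-link planar arm (lengths are assumed values, in m).
L1, L2 = 0.30, 0.25

def end_effector(q):
    """Forward kinematics: fingertip position for joint angles q."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

q_neutral = np.array([0.0, 0.5])          # assumed neutral posture
weights = np.array([1.0, 2.0])            # assumed joint weights

def discomfort(q):
    # (1) Cost function: weighted deviation from the neutral posture.
    return np.sum(weights * np.abs(q - q_neutral))

target = np.array([0.35, 0.25])           # assumed target point

# (3) Constraints: reach the target exactly; joint limits as bounds.
cons = ({'type': 'eq', 'fun': lambda q: end_effector(q) - target},)
bounds = [(-np.pi / 2, np.pi), (0.0, np.pi)]

# (2) Design variables: the joint angles q, found by the optimizer.
res = minimize(discomfort, x0=np.array([0.3, 0.8]),
               constraints=cons, bounds=bounds, method='SLSQP')
```

Because the two equality constraints pin the two design variables down to the (joint-limit-feasible) inverse kinematic solution, the optimizer here mostly resolves the constraints; with more DOFs than task constraints, as in the thesis model, the cost function selects among infinitely many feasible postures.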
While many cost functions are currently being developed, we will attempt to
demonstrate the use of a number of such functions for posture prediction. It is evident
that the motion subtended by the human upper extremity to reach a specific target is
directly dependent upon the arm’s initial posture (i.e., initial conditions). A person will
usually reach towards a target using the least motion of the joints possible. A person will
also usually avoid exerting unnecessary energy against gravity (i.e., humans do not like to
maintain their arms up in the air). In addition, we believe that the displacement of each
joint from its most neutral position is a major factor towards comfort and plays an
important role towards assuming a posture.
These functions are subject to kinematic constraints (developed in the following
section). While these are perhaps the most basic functions that will be used to illustrate
our formulation, other cost functions have been developed characterizing dexterity,
stress, torque and force. We believe that many more exist such as strength of a person,
balance capability, shoulder mobility, orientation and weight of objects being moved, and
will be the subject of our future investigations. This chapter will focus on introducing a
general formulation and demonstrating that realistic postures are predicted.
3.2 Cost Functions and Constraints
In this section, we address the development of simple human performance
measures that enable the mathematical evaluation of a cost function. The basic scheme is
based upon obtaining a real number that evaluates the task, where each task comprises
several cost functions. Each cost function must evaluate to a number and must be
mathematically defined. Once this is achieved, it is then possible to formulate an
optimization algorithm that iteratively evaluates the task.
3.2.1 Discomfort
Consider a cost function that measures the level of discomfort from the most
neutral position of a given joint. Let $q_i^N$ be the neutral position of a joint measured from
the starting home configuration (i.e., from the position and orientation specified in the
DH Table). Then the displacement from the neutral position is given by $\left| q_i - q_i^N \right|$.
Because discomfort is usually felt more strongly in some joints than in others, we also introduce a weight
$w_i$ to stress the importance of one joint versus another. The total discomfort of
all joints is then characterized by the function

$$f_{discomfort}(\mathbf{q}) = \sum_{i=1}^{n} w_i \left| q_i - q_i^N \right| \qquad (3.2.1)$$

where $w_i$ is a weight assigned to each joint for the purpose of giving importance
to joints that are typically more affected than others.
3.2.2 Effort
Effort is measured as the displacement of a joint from its original position. Effort
will greatly depend on the initial configuration of the limb prior to moving to another
location. For an initial set of joint variables $q_i^{initial}$ and a final set of joint variables $q_i$,
a simple measure of the effort is expressed by

$$f_{effort}(\mathbf{q}) = \sum_{i=1}^{n} w_i \left| q_i - q_i^{initial} \right| \qquad (3.2.2)$$

Note that $f_{effort}$ depends on the initial configuration of each joint.
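Equations (3.2.1) and (3.2.2) share the same weighted-deviation form and differ only in the reference posture. A minimal sketch, with made-up joint values and weights for a three-joint chain:

```python
import numpy as np

def weighted_deviation(q, q_ref, w):
    """Shared form of Eqs. (3.2.1) and (3.2.2): sum_i w_i |q_i - q_ref_i|."""
    return float(np.sum(w * np.abs(np.asarray(q) - np.asarray(q_ref))))

q         = np.array([0.10, -0.25, 1.30])   # predicted posture (rad)
q_neutral = np.array([0.00,  0.00, 1.57])   # neutral positions q_i^N
q_initial = np.array([0.05, -0.10, 1.10])   # initial configuration
w         = np.array([10.0, 10.0, 50.0])    # joint weights w_i

f_discomfort = weighted_deviation(q, q_neutral, w)   # Eq. (3.2.1)
f_effort     = weighted_deviation(q, q_initial, w)   # Eq. (3.2.2)
```

The large weight on the third joint makes its deviation dominate both measures, which is exactly how the joint weights of Table 3.1 steer the prediction toward postures that spare the heavily weighted joints.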
3.2.3 Potential Energy
Now consider the potential energy of a limb. Each link (e.g., the
forearm) has a specified center of mass. The vector from the origin of the link's
coordinate system to the center of mass is given by $^i\mathbf{r}_i$, where the matching superscript and
subscript indicate that the vector is resolved in the link's own coordinate system, as illustrated
in Figure 3.2.

Figure 3.2 Illustrating the potential energy of the forearm

The total potential energy $f_{potential}$ is the sum of all individual potential energies $P_i$.
In order to determine the position and orientation of any one part of the arm, we use
the transformation matrices $^{i-1}\mathbf{A}_i$, each a $(4 \times 4)$ homogeneous
transformation matrix relating one part to the next. Let $\mathbf{g}$ be the gravity vector; then for the first body part in the
chain, the potential energy is $P_1 = -m_1 \mathbf{g}^T \, {}^0\mathbf{A}_1 \, {}^1\mathbf{r}_1$. The energy contribution of the second
body part in the chain is $P_2 = -m_2 \mathbf{g}^T \, {}^0\mathbf{A}_1 \, {}^1\mathbf{A}_2 \, {}^2\mathbf{r}_2$. For a complete chain, the total potential
energy is given by

$$f_{potential}(\mathbf{q}) = \sum_{i=1}^{n} P_i = -\sum_{i=1}^{n} m_i \mathbf{g}^T \left( {}^0\mathbf{A}_i \right) {}^i\mathbf{r}_i \qquad (3.2.3)$$

where $\mathbf{g} = \begin{bmatrix} 0 & 0 & -g \end{bmatrix}^T$ is the gravity vector.
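Eq. (3.2.3) can be sketched numerically for an assumed two-link planar chain, accumulating the transformations and dotting the gravity vector with each transformed center of mass; the masses, lengths, and center-of-mass offsets are illustrative numbers.

```python
import numpy as np

def link_transform(theta, length):
    """Assumed planar link: rotate about z by theta, then place the
    next frame a distance `length` along the rotated x-axis (4x4)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0, length * c],
                     [s,  c, 0.0, length * s],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

g = np.array([0.0, -9.81, 0.0, 0.0])      # gravity vector (homogeneous)
masses = [2.0, 1.5]                       # assumed link masses (kg)
lengths = [0.30, 0.25]                    # assumed link lengths (m)
# Center-of-mass vectors resolved in each link frame (homogeneous);
# each COM sits at the midpoint of its link, behind the frame origin.
r_local = [np.array([-0.15, 0.0, 0.0, 1.0]),
           np.array([-0.125, 0.0, 0.0, 1.0])]

def potential_energy(q):
    """Eq. (3.2.3): f = -sum_i m_i g^T (0A_i) (i r_i)."""
    A = np.eye(4)                         # accumulates 0A_i
    f = 0.0
    for theta, L, m, r in zip(q, lengths, masses, r_local):
        A = A @ link_transform(theta, L)
        f -= m * (g @ (A @ r))            # one term P_i per link
    return f
```

Raising the arm increases the stored energy, so minimizing this cost discourages postures that hold the arm up against gravity, as the text argues.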
3.2.4 Dexterity
It is believed that humans also configure their extremities around an object in
such a way to have the maximum accessibility to that object. We define a cost function
that is based on maximizing the dexterity at specified target points. Indeed, to
mathematically formulate this problem, it is necessary to use a dexterity measure at
specific target points. Such a measure must account for the ranges of motion for each
joint. Because of the need for an analytical expression that can be used in the proposed
optimization method, we define a new dexterity measure.
Because human joints are constrained, we must characterize each joint limit by an
inequality constraint of the form $q_i^L \leq q_i \leq q_i^U$. In order to include ranges of motion in
the formulation, a transformation is introduced as

$$q_i = a_i + b_i \sin \lambda_i \qquad (3.2.4)$$

where $a_i = \left( q_i^{max} + q_i^{min} \right)/2$ and $b_i = \left( q_i^{max} - q_i^{min} \right)/2$ are the midpoint and half range of the
inequality constraint and $\lambda_i$ is a slack variable (i.e., we have converted the inequality to
an equality). The position constraint function is then written in terms of the extended
vector $\boldsymbol{\lambda} = \left[ \lambda_1, \lambda_2, \ldots, \lambda_n \right]^T \in \mathbb{R}^n$ as $\mathbf{q} = \mathbf{q}(\boldsymbol{\lambda})$.
For any admissible configuration (i.e., for the hand at a specific position that can be
reached), the following $(n+3)$ augmented constraint equations must be satisfied:

$$\mathbf{H}(\mathbf{q}^*) = \begin{bmatrix} \boldsymbol{\Phi}(\mathbf{q}) - \mathbf{x} \\ \mathbf{q} - \mathbf{q}(\boldsymbol{\lambda}) \end{bmatrix} = \mathbf{0}_{(n+3) \times 1} \qquad (3.2.5)$$

where the augmented vector of generalized coordinates is $\mathbf{q}^* = \left[ \mathbf{q}^T, \boldsymbol{\lambda}^T \right]^T$.
The set defined by $\mathbf{H}(\mathbf{q}^*)$ is the totality of points in the reach envelope that can be
touched by the hand. The so-called extended Jacobian of $\mathbf{H}(\mathbf{q}^*)$ is obtained by
differentiating $\mathbf{H}$ with respect to $\mathbf{q}^*$ as

$$\mathbf{H}_{\mathbf{q}^*} = \begin{bmatrix} \boldsymbol{\Phi}_{\mathbf{q}} & \mathbf{0} \\ \mathbf{I} & -\mathbf{q}_{\boldsymbol{\lambda}} \end{bmatrix} \qquad (3.2.6)$$

which is an $(n+3) \times (2n)$ matrix, where $\boldsymbol{\Phi}_{\mathbf{q}}$ is a $(3 \times n)$ matrix, $\mathbf{I}$ is the $(n \times n)$ identity
matrix, and $\mathbf{q}_{\boldsymbol{\lambda}} = \partial \mathbf{q} / \partial \boldsymbol{\lambda}$ is an $(n \times n)$ diagonal matrix with diagonal elements
$\left( \mathbf{q}_{\boldsymbol{\lambda}} \right)_{ii} = b_i \cos \lambda_i$.
Since the extended Jacobian $\mathbf{H}_{\mathbf{q}^*}$ inherently combines information about the
position, orientation, and ranges of motion of the hand, it is a viable measure of dexterity.
Furthermore, because of the simplicity in determining an analytical expression of $\mathbf{H}_{\mathbf{q}^*}$, it
is well suited as a cost function for an optimization problem. We define the dexterity
measure as

$$f_{dexterity}(\mathbf{q}) = \sqrt{ \det \left( \mathbf{H}_{\mathbf{q}^*}(\mathbf{q}^*) \, \mathbf{H}_{\mathbf{q}^*}^T(\mathbf{q}^*) \right) } \qquad (3.2.7)$$
Note that the measure characterized by Eq. (3.2.7) takes into consideration all ranges of
motion and singular orientations for a given kinematic chain. The proposed dexterity
measure is more accurate in describing the manipulability of robot manipulators than that
proposed by Yoshikawa (1985), because it considers all singularities as well as joint
limits.
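Equations (3.2.4) to (3.2.7) can be sketched for an assumed planar two-link arm, where the position constraint is 2-dimensional rather than 3-dimensional, so the extended Jacobian happens to be square. The joint limits and link lengths are made-up numbers, and the sqrt-of-determinant reading of Eq. (3.2.7) is an assumption.

```python
import numpy as np

L1, L2 = 0.30, 0.25                       # assumed link lengths
q_lo = np.array([-np.pi / 2, 0.0])        # assumed joint limits q^L
q_hi = np.array([np.pi, np.pi])           # assumed joint limits q^U
a = (q_hi + q_lo) / 2                     # midpoints a_i of Eq. (3.2.4)
b = (q_hi - q_lo) / 2                     # half-ranges b_i

def position_jacobian(q):
    """Phi_q for the planar arm (2 x n here; 3 x n in the thesis)."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [L1 * c1 + L2 * c12, L2 * c12]])

def dexterity(lam):
    """Dexterity built from the extended Jacobian of Eq. (3.2.6);
    the sqrt(det(H H^T)) form of Eq. (3.2.7) is an assumed reading."""
    q = a + b * np.sin(lam)               # Eq. (3.2.4): limits as equality
    n = len(q)
    q_lam = np.diag(b * np.cos(lam))      # dq/dlambda, diagonal
    H = np.block([[position_jacobian(q), np.zeros((2, n))],
                  [np.eye(n), -q_lam]])
    return np.sqrt(max(np.linalg.det(H @ H.T), 0.0))
```

At a joint limit (a slack variable at plus or minus pi/2) the corresponding diagonal entry of dq/dlambda vanishes, and at a kinematic singularity the position Jacobian loses rank; in both cases the determinant collapses and the measure drops to zero, which is the behavior a joint-limit-aware manipulability measure should exhibit.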
3.2.5 Torque
Stress induced at a joint is a function of torque imposed at that joint due to the
biomechanical interaction. A person will generate torque at a given joint to overcome
a load by exerting muscle forces; this torque is also a function of the position and orientation of
the joint during loading. In order to account for all of the elements that enter into
calculating the torque at a given joint, we must employ a systematic formulation. To
develop a mathematical expression for the torque, we first introduce a few preliminary
concepts. The velocity of a point on the hand is obtained by differentiating the position
vector as

$$\dot{\mathbf{x}} = \mathbf{J}_x \dot{\mathbf{q}} \qquad (3.2.8)$$

where the position Jacobian $\mathbf{J}_x(\mathbf{q}) = \left[ \partial \mathbf{x} / \partial \mathbf{q} \right]$ is a $(3 \times n)$ matrix and $\dot{\mathbf{q}}$ is the vector of
joint velocities. Note that the reach envelope can be determined by analytically
stratifying the Jacobian (Abdel-Malek et al., 2001). Similarly, the angular velocity can
be obtained as

$$\boldsymbol{\omega} = \mathbf{J}_\omega \dot{\mathbf{q}} \qquad (3.2.9)$$

where the orientation Jacobian $\mathbf{J}_\omega$ is a $(3 \times n)$ matrix. Combining Eqs. (3.2.8) and
(3.2.9) into one vector yields

$$\mathbf{v} = \begin{bmatrix} \dot{\mathbf{x}} \\ \boldsymbol{\omega} \end{bmatrix} = \mathbf{J}(\mathbf{q}) \dot{\mathbf{q}} \qquad (3.2.10)$$

where $\mathbf{J}(\mathbf{q})$ is the Jacobian of the limb or kinematic structure defined by

$$\mathbf{J}(\mathbf{q}) = \begin{bmatrix} \mathbf{J}_x \\ \mathbf{J}_\omega \end{bmatrix} \qquad (3.2.11)$$
The goal in this section is to determine the relationship between the generalized
forces applied to the hand (e.g., carrying a load) and generalized forces applied to the
joints. Let $\boldsymbol{\tau}$ denote the $(n \times 1)$ vector of joint torques and $\mathbf{F}$ the $(m \times 1)$ vector of hand
forces applied at $\mathbf{p}$, where $m$ is the dimension of the operational space of interest
(typically six).
Using the principle of virtual work, we can determine a relationship between the joint
torques and the forces at the hand. Since the upper extremity is a kinematic system with
time-invariant, holonomic constraints, its configuration depends only on the joint
variables $\mathbf{q}$ (not explicitly on time). Consider the virtual work performed by the two
force systems. For the joint torques, the associated virtual work is

$$dW_\tau = \boldsymbol{\tau}^T d\mathbf{q} \qquad (3.2.12)$$

For the hand forces $\mathbf{F} = \left[ \mathbf{f}^T \; \mathbf{m}^T \right]^T$, comprised of a force vector $\mathbf{f}$ and moment
vector $\mathbf{m}$, the virtual work performed is

$$dW_F = \mathbf{f}^T d\mathbf{x} + \mathbf{m}^T \boldsymbol{\omega} \, dt \qquad (3.2.13)$$

where $d\mathbf{x}$ is the linear displacement and $\boldsymbol{\omega} \, dt$ is the angular displacement. Substituting
Eqs. (3.2.8) and (3.2.9) into Eq. (3.2.13) yields

$$dW_F = \mathbf{f}^T \mathbf{J}_x \, d\mathbf{q} + \mathbf{m}^T \mathbf{J}_\omega \, d\mathbf{q} = \mathbf{F}^T \mathbf{J} \, d\mathbf{q} \qquad (3.2.14)$$

Since virtual and elementary displacements coincide, the virtual works associated with the
two systems are

$$\delta W_\tau = \boldsymbol{\tau}^T \delta \mathbf{q}, \qquad \delta W_F = \mathbf{F}^T \mathbf{J} \, \delta \mathbf{q} \qquad (3.2.15)$$

where $\delta$ denotes a virtual quantity. The system is under static equilibrium if and only if

$$\delta W_F = \delta W_\tau \quad \forall \, \delta \mathbf{q} \qquad (3.2.16)$$
which means that the difference between the virtual work of the joint torques and the
virtual work of the hand forces must be null for all joint displacements. Substituting Eq.
(3.2.15) into Eq. (3.2.16) yields

$$\boldsymbol{\tau}^T \delta \mathbf{q} = \mathbf{F}^T \mathbf{J}(\mathbf{q}) \, \delta \mathbf{q} \quad \forall \, \delta \mathbf{q} \qquad (3.2.17)$$

Therefore, the relationship between the joint torques and the forces on the hand is given by

$$\boldsymbol{\tau} = \mathbf{J}^T \mathbf{F} \qquad (3.2.18)$$

where the torque vector is $\boldsymbol{\tau} = \left[ \tau_1, \tau_2, \ldots, \tau_n \right]^T$.
We now develop the cost function that is fundamental to our formulation. The
objective function to be minimized is the weighted summation of all joint torques

$$f_{torque} = \sum_{i=1}^{n} w_i \left| \tau_i \right| \qquad (3.2.19)$$

where $w_i$ is a weight function used to distribute the importance of the cost function
among all joints.
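The chain from hand force to joint-torque cost, Eqs. (3.2.18) and (3.2.19), can be sketched for an assumed planar two-link arm holding a downward load; the lengths, load, and weights are illustrative numbers.

```python
import numpy as np

L1, L2 = 0.30, 0.25                       # assumed link lengths (m)

def jacobian(q):
    """Position Jacobian J_x of the planar arm (the thesis stacks
    J_x and J_omega into J; only J_x is needed for a pure force)."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [L1 * c1 + L2 * c12, L2 * c12]])

def torque_cost(q, F, w):
    """Eqs. (3.2.18)-(3.2.19): tau = J^T F, then f = sum_i w_i |tau_i|."""
    tau = jacobian(q).T @ F
    return float(np.sum(w * np.abs(tau)))

# Assumed numbers: a 10 N downward load held at the hand.
q = np.array([0.0, np.pi / 2])
F = np.array([0.0, -10.0])
cost = torque_cost(q, F, np.array([1.0, 1.0]))
```

With the hand at (0.30, 0.25), the shoulder carries 10 N times a 0.30 m horizontal arm, i.e., 3 N m, while the elbow carries essentially nothing because the load line passes through it; minimizing this cost therefore favors postures that keep the load close to the joints.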
3.2.6 Constraints
For the point on the end-effector characterized by $\mathbf{x} = \left[ x \; y \; z \right]^T$ as a function of
all joint variables to reach a target point $\mathbf{p}$, it is necessary that $\mathbf{x}(\mathbf{q}) - \mathbf{p} = \mathbf{0}$. This criterion
is a constraint that must be driven by one or more cost functions, i.e., a driving function
should be imposed to move the human linkage while enforcing the zero-distance
constraint. Therefore, we implement this equation as a constraint to be imposed within a
specified tolerance $\varepsilon$ (e.g., 0.001), such that
$$\left\| \mathbf{x}(\mathbf{q}) - \mathbf{p} \right\| \leq \varepsilon \qquad (3.2.20)$$

Furthermore, each degree of freedom has unilateral constraints imposed in the form of

$$q_i^L \leq q_i \leq q_i^U, \quad i = 1, \ldots, n \qquad (3.2.21)$$
where n is the number of DOF’s used in the model. Note that for a 15-DOF model as
implemented in our code, a total of 31 constraints must be imposed (two for each
unilateral constraint and Eq.(3.2.20)).
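The resulting feasibility test, checking Eq. (3.2.20) together with the unilateral joint-limit constraints of Eq. (3.2.21), can be sketched as follows; the function name and the sample numbers are illustrative.

```python
import numpy as np

def constraints_satisfied(x_q, p, q, q_lo, q_hi, eps=0.001):
    """Eq. (3.2.20): ||x(q) - p|| <= eps, plus the 2n unilateral
    joint-limit constraints of Eq. (3.2.21).
    For the 15-DOF model this amounts to 2 * 15 + 1 = 31 constraints."""
    distance_ok = bool(np.linalg.norm(x_q - p) <= eps)
    limits_ok = bool(np.all(q_lo <= q) and np.all(q <= q_hi))
    return distance_ok and limits_ok
```

A posture is feasible only when both tests pass; the optimizer's job is to pick, among all feasible postures, the one that minimizes the chosen cost functions.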
3.3 Optimization Formulation
Genetic algorithms (GAs) are based on an evolution of random trials by
"individuals" rather than on the deterministic logic of conventional algorithms (Goldberg, 1989);
they are, in effect, a computer simulation of Darwinian selection. Though the whole process is built on randomness, the
effect is not: it marches towards a solution. The driver in a GA is the cost function,
which determines the fitness of each individual. Only the fitter individuals survive to
breed. Children are bred by combining the traits of two parents, where traits are encodings of
the domain knowledge, i.e., the genes. Each cycle of evaluation, selection of
survivors, and breeding is called a generation. The GA iterates over
generations until the goal is found or some other end criterion is met.
Mutation helps prevent a degenerate population, which otherwise tends to settle prematurely at a
non-optimum solution. Strictly speaking, a GA does not guarantee the optimum solution; it returns the
best solution found at the time the program is ended. However, GAs have two
important characteristics that make them applicable to calculating postures:
(1) Genetic algorithms tend toward a global minimum rather than only a local solution.
(2) Genetic algorithms are usually used when the search space is very large.
On the other hand, GAs are known to be computationally expensive; therefore,
a real-time solution for a posture of a 15-DOF model is difficult to obtain.
(Flowchart: input $q_i^L$, $q_i^U$, $q_i^N$ for $i = 1, \ldots, 15$; count the total number of chromosomes (bits) required; initialize all individuals of the population randomly in binary; then, in the main generational processing loop, convert parameters from binary to decimal; a gradient-based algorithm minimizes the distance to the target point, $\min d = \left\| \mathbf{x}(\mathbf{q}) - \mathbf{p} \right\|$; if $d < 0.001$, assign the fitness as the negative of the weighted discomfort over the 15 joints and establish the best individual, otherwise assign $f = -10000$; perform selection, crossover, and mutation; write the child array back into the parent array for the new generation; output the optimized parameters.)
Figure 3.3 GA-GBA Algorithm for predicting a posture
An adaptation of a GA algorithm to the problem described herein is shown in
Figure 3.3. First, lower and upper limits for each joint variable are inputted along with
other information, such as position of the target point, population size, crossover
probability, mutation probability, and number of possibilities per parameter. In our
implementation, the latter is specified as 1024. Then, the total number of chromosomes
(bits) required is counted. For example, there are $2^{10} = 1024$ possibilities for each
parameter, which means that each parameter needs 10 bits of binary to characterize its
description, i.e., 10 chromosomes. Subsequently, for a total of 15
joint variables, 150 chromosomes are required. Then the main generational processing
loop begins: each individual of the population is evaluated, i.e., binary values are
converted to decimal and a function evaluator is executed to calculate the fitness. At this
time, a commercial gradient-based optimization algorithm (GBA) called Design
Optimization Toolkit (DOT) is called inside the function evaluator to perform a
minimization of the distance between a specified point on the end-effector and the target
point (i.e., to enforce the distance constraint). Gradient-based algorithms typically include,
for unconstrained optimization, the Broyden-Fletcher-Goldfarb-Shanno variable
metric method (BFGS) and the Fletcher-Reeves Conjugate Gradient method (FRCG); for
constrained optimization, the Modified Method of Feasible Directions (MMFD),
the Sequential Linear Programming method (SLP), and the Sequential Quadratic Programming
method (SQP).
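The binary encoding just described (10 bits per joint, 150 bits per individual) can be sketched as follows; the joint limits here are made-up values.

```python
import numpy as np

BITS = 10          # 2**10 = 1024 possibilities per parameter
N_JOINTS = 15      # so 10 * 15 = 150 chromosomes (bits) per individual

def decode(bitstring, q_lo, q_hi):
    """Map each 10-bit slice of a chromosome to a joint value on a
    uniform 1024-point grid over [q_lo_i, q_hi_i]."""
    q = np.empty(N_JOINTS)
    for i in range(N_JOINTS):
        bits = bitstring[i * BITS:(i + 1) * BITS]
        k = int(''.join(str(int(x)) for x in bits), 2)   # 0 .. 1023
        q[i] = q_lo[i] + (q_hi[i] - q_lo[i]) * k / (2 ** BITS - 1)
    return q

q_lo = np.full(N_JOINTS, -np.pi)          # assumed joint limits
q_hi = np.full(N_JOINTS, np.pi)
rng = np.random.default_rng(0)
individual = rng.integers(0, 2, size=BITS * N_JOINTS)    # random init
q = decode(individual, q_lo, q_hi)
```

The all-zeros chromosome decodes to the lower limits and the all-ones chromosome to the upper limits, so every representable individual is automatically within the joint ranges.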
The overall procedure is as follows: GA passes the parameters to the function
evaluator while the gradient-based algorithm (GBA) inside the function uses these
parameters as initial values to minimize the distance. If the distance is less than a small
tolerance, the weighted cost function (e.g., discomfort) for the 15 joints is calculated and
its negative is passed as the function value; otherwise a very large negative value is
passed. Since the GA performs only maximization, the negative of the cost function is
passed for minimization purposes. This value is assigned as the fitness in the GA, where the
individual with best fitness is obtained. A tournament selection is implemented in GA.
Before each selection cycle for a new generation, a shuffle is first performed on the
parent population. Then each tournament selects one parent with better fitness from two
possible parents for mating. After all the parent pairs for the creation of the child
population have been selected, a uniform crossover is performed between each randomly
selected pair, where each bit in the chromosome of one parent is exchanged with the bit
from the other parent under some random probability.
Following the mating process, two kinds of mutations are performed. One is
called jump mutation, in which some randomly selected bits jump from 1 to 0 or from 0
to 1. Jump mutation is performed on binary code directly while the other type of
mutation, creep mutation is performed on the decimal number, where each randomly
selected parameter is moved one discrete position away within the imposed limits. After
all the children have been generated, a child array is written back into the parent array for
a new generation. Finally, a check is performed to verify whether the best parent was
replicated. If not, GA reproduces the best parent into a random slot. After the main loop
has been executed for a specified number (we have used 2000) of generations, the
optimized parameters are obtained.
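The overall GA-GBA procedure can be sketched on a toy two-link reach problem. The population size, generation count, crossover and mutation rates, and the substitution of SciPy's gradient-based minimizer for the commercial DOT solver are all illustrative assumptions, and real-valued individuals stand in for the binary chromosomes.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
L1, L2 = 0.30, 0.25                       # assumed link lengths
target = np.array([0.35, 0.25])           # assumed target point p
q_lo = np.array([-np.pi / 2, 0.0])        # assumed joint limits
q_hi = np.array([np.pi, np.pi])
w, q_neutral = np.array([1.0, 2.0]), np.zeros(2)

def hand(q):
    """Forward kinematics x(q) for the toy arm."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def fitness(q0):
    """Inner GBA step: minimize squared distance to the target; if the
    distance constraint is met, fitness = -discomfort, else -10000."""
    res = minimize(lambda q: np.sum((hand(q) - target) ** 2), q0,
                   bounds=list(zip(q_lo, q_hi)))
    if res.fun < 1e-6:                    # i.e., distance < 0.001
        return -np.sum(w * np.abs(res.x - q_neutral)), res.x
    return -10000.0, res.x

# Outer GA: tournament selection, uniform crossover, mutation.
pop = rng.uniform(q_lo, q_hi, size=(20, 2))
for gen in range(30):
    scores = np.array([fitness(ind)[0] for ind in pop])
    children = []
    for _ in range(len(pop)):
        i, j = rng.integers(0, len(pop), 2)      # tournament of two
        p1 = pop[i] if scores[i] > scores[j] else pop[j]
        i, j = rng.integers(0, len(pop), 2)
        p2 = pop[i] if scores[i] > scores[j] else pop[j]
        mask = rng.random(2) < 0.5               # uniform crossover
        child = np.where(mask, p1, p2)
        if rng.random() < 0.1:                   # mutation
            child = rng.uniform(q_lo, q_hi)
        children.append(child)
    pop = np.array(children)
best = max(pop, key=lambda ind: fitness(ind)[0])
```

The design mirrors the text: the GA explores the large search space globally, while each fitness evaluation runs a local gradient-based minimization to enforce the distance constraint before the discomfort-based fitness is assigned.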
For the purpose of our computer code, the Input File to the system (initial
configuration) contains the following information.
a. The DH Table: The DH Table provides all necessary information to model the
human body including joints and dimensions.
b. Neutral Positions: These are set constants (constant joint angles) that characterize the
most comfortable position for each joint. Each is measured from the home
configuration (i.e., from where the posture has started). For this human model, the
neutral positions (Figure 3.4) are as follows:
$q_i^N = 0$ for $i = 1, \ldots, 9, 11, 12, 14, 15$; $q_{10}^N = \pi/2$; and $q_{13}^N = -\pi/2$.
c. Initial Positions: This is the last known configuration of the model prior to executing
the command. The initial positions are used to evaluate the effort function.
d. Joint Weights: It is natural that some human joints tend to be activated more than
others (passive versus active). To reflect this in the model, we have assigned a scalar
weight to each joint, thereby placing more importance on certain joints (Table 3.1).
Figure 3.4 Neutral position
Joint variable | Joint weight | Comments
$q_1, q_2$ | 10 | for both negative and positive values of $q_i - q_i^N$
$q_3 \ldots q_6$ | 10 | when $q_i - q_i^N > 0$
$q_3 \ldots q_6$ | 100 | when $q_i - q_i^N < 0$
$q_7$ | 50 | for both negative and positive values of $q_i - q_i^N$
$q_9$ | 50 | when $q_i - q_i^N > 0$
(each weight multiplies that joint's term in the cost function)
Table 3.1 Joint weights used in cost function
3.4 Predicting Postures and Validation
To demonstrate our formulation and computer implementation, the 15-DOF
model shown and dimensioned in Figure 3.5 and associated DH Table shown in Table 3.2
have been used. Note that the model shown is in home configuration.
Figure 3.5 Modeling of the torso, shoulder, and arm as a 15-DOF system
Table 3.2 DH Table of the 15-DOF model
In order to obtain a better understanding of our results, we have compared the
calculated postures of our 15-DOF model with those produced by another posture
prediction module called IKAN (Inverse Kinematics using ANalytical Methods) (Tolani
et al., 2000), which is based on a 7-DOF biomechanical model. For a given target
position defined below, we have optimized a posture for a minimum discomfort function
and have asked a person to reach the same target (with thumb touching the target point).
We have also verified that the calculated postures are close (but not identical) to those
assumed by a human subject.
For each set of figures below, figure (a) is the result of our computational
algorithm; (b) and (c) are obtained from IKAN, where (b) is the solution when the goal is
position only (some are missing because no solutions can be obtained) and (c) is the
solution when the goal includes position and orientation, (d) is a photograph of a human
subject with a natural posture. The vector of joint variables q for the 15-DOF model and
a value for the discomfort function are also listed.
(a) (c) (d)
Figure 3.6 Target Point 1 (41.2, -57, 31.5), Discomfort = 2.2022,
q = [.0847, -.0007, .0407, .0091, .0567, .0820, -.0019, .0075, .0110, .3465, .5328, -.4244, -1.4772, -.1081, .1557]^T
(a) (b) (c) (d)
Figure 3.7 Target Point 2 (40, 0, 36), Discomfort = 7.38,
q = [.1022, -.1310, -.0235, .0198, .0014, .0072, -.0112, .0444, -.7829, -.1346, 1.3475, -1.2451, -1.4099, -.1625, -.3101]^T
(a) (c) (d)
Figure 3.8 Target Point 3 (20, 35, 50), Discomfort = 12.8254,
q = [.3087, -.2618, .0510, .0843, .0416, .0020, .0022, .2022, -.2543, -1.4352, .3640, -1.1986, -1.2240, -.3469, .3487]^T
(a) (b) (c) (d)
Figure 3.9 Target Point 4 (-30, 10, 20), Discomfort = 2.0873,
q = [-.0713, .0075, .1673, .0884, .1153, .0497, .0097, .0760, -.6950, 1.9193, -.2518, .0000, -2.3357, .4159, -.2897]^T
(a) (b) (c) (d)
Figure 3.10 Target Point 5 (-40, 0, 36), Discomfort = 1.5824,
q = [.0135, .0206, .1265, .0720, .1204, .0805, .0021, -.0005, -.8226, 1.9184, -.5517, -.0003, -1.7336, .1164, -.1126]^T
(a) (b) (c) (d)
Figure 3.11 Target Point 6 (-50, -20, 20), Discomfort = 1.0783,
q = [-.0871, -.0053, .0546, .0661, .0340, .0676, .0074, .0200, -.7906, 1.5271, -.4086, -.1063, -1.5175, .0095, .2451]^T
(a) (b) (c) (d)
Figure 3.12 Target Point 7 (0, -60, 5), Discomfort = 0.7253,
q = [-.0120, -.0348, .0052, .0096, .0344, .0022, .0016, .0265, -.0643, .9742, -.1020, -.5219, -1.7381, -.0602, .1526]^T
(a) (b) (c) (d)
Figure 3.13 Target Point 8 (30, -40, 60), Discomfort = 3.8352,
q = [.1634, -.2485, .0292, .0468, .0703, .0418, -.0048, .0400, -.1602, .3555, .1484, -1.0230, -1.0510, .0806, -.0517]^T
(a) (b) (c) (d)
Figure 3.14 Target Point 9 (30, -40, 0), Discomfort = 0.4966,
q = [.0568, -.0618, .0135, .0030, -.0030, -.0056, -.0103, .0795, -.0314, 1.2111, .3822, -.3199, -1.7188, -.0346, -.0813]^T
(a) (c) (d)
Figure 3.15 Target Point 10 (60, 0, 0), Discomfort = 3.3709,
q = [.2311, .0104, .2571, .0849, .1081, -.0150, .0203, -.3032, -.0402, 1.9117, 1.2721, -.0113, -1.6577, .0791, .3176]^T
For the first posture in Figure 3.6, the calculated cost function is 2.2022, a
relatively low discomfort. This is due to the fact that the calculated posture has only a
very small deviation from the neutral position (recall that the neutral position is perceived
to yield a comfortable posture). Therefore, the arm fully extended provides for most of
the angles being close to zero. Indeed, a similar situation occurs for the postures assumed in
Figures 3.12 and 3.14, where the calculated discomfort values are also relatively small,
0.7253 and 0.4966, respectively. Difficult postures (most uncomfortable) are observed in
Figure 3.8 (discomfort = 12.8254) and Figure 3.7 (discomfort = 7.38). Note that these
postures contain several joint angles that are far from neutral, therefore adding to the cost
function.
Because we have opted to use genetic algorithms for optimization, the stopping
criteria are not well defined (an inherent characteristic of genetic algorithms). The
selection, mutation, and crossover processes continue to run indefinitely over the
population. Therefore, time (in terms of computational complexity) must be defined as a
criterion for stopping the calculations coupled with an error estimate of the distance from
the tip of the finger to the target point. Of course, any posture that has zero distance error
and that satisfies the constraints is indeed a solution, but not necessarily the optimum solution.
Therefore, although the distance is zero between the tip of finger and target point, the
program must continue to minimize the discomfort cost function until an acceptable low
value is achieved.
3.4.1 Comparison with IKAN
As seen in panel (b) of Figures 3.6 to 3.15, 3 out of 10 target points are
not reachable by IKAN: no solutions were found for target points 1, 3, and 10, which is
why panel (b) is absent from Figures 3.6, 3.8, and 3.15. This is because IKAN uses only
a 7-DOF model of the arm with no upper-body motion. The movement of the upper body
is clearly important during a reaching task, and our approach gave more realistic
solutions than IKAN. Moreover, the two translational joints at the shoulder also play a
very important role. IKAN is only suitable for a 7-DOF model of the arm, and for the
position task only 4 DOF's are considered by IKAN (the hand is an extension of the
wrist). This limits the application of IKAN to general reaching problems because, in
most cases, a target point is presumably reached by the hand, not the wrist.
In order to compare the results specifically for a 7-DOF model from the two
approaches, the transformation matrix of the final wrist frame relative to the shoulder
frame after the two translations of the shoulder was extracted from our solution for the
15-DOF model and provided as a goal matrix to IKAN. With the goal set to the position
and orientation of the wrist, the 7 joint variables were computed by IKAN where the
swivel angle was selected as the midpoint of the largest valid intervals derived from the
joint limits. The solutions are shown in panel (c) of Figures 3.6 to 3.15. The solutions for
the 7 joint variables are very close to what we have obtained using our approach. This is
not surprising since for this comparison, we have used a simple representation of the
discomfort function in the optimization where the discomfort is computed as the
deviation of the joint variables from the neutral position. Minimizing the discomfort in
our approach coincides with IKAN's strategy of selecting the swivel angle to maximize
the clearance of the joint variables from the joint limits. Although we provided the target
position and orientation of the wrist to IKAN for this comparison, doing so is usually not
possible in practice for a reach task, where only the position of a target point to be
reached by the hand is specified.
Generally speaking, IKAN uses a 7-DOF model for position-and-orientation
problems and effectively uses a 4-DOF model (with the wrist fixed) for position,
position-and-partial-orientation, and aiming problems. However, a model with a larger
number of DOF's is necessary to obtain a realistic posture. Solutions obtained by IKAN
are consistent with our solutions when the simple discomfort cost function is applied;
IKAN may not give consistent solutions when a more complex form of the discomfort
cost function is implemented. For position and position-and-orientation problems, it is
possible to select the swivel angle by optimizing the same cost function used by our
algorithm. In this case, the optimization problem must be divided into several
sub-optimization problems depending on how many valid intervals of the swivel angle
exist; still, this applies only to a 4- or 7-DOF model. In position-and-partial-orientation and
aiming problems, IKAN already uses unconstrained optimization to enforce the joint
limits without considering any discomfort cost function, and only for a 4-DOF
model. On the other hand, our approach yields successful results for a 15-DOF model
and can be easily extended to models with an even larger number of DOF's. The issue of
computational complexity typically associated with optimization problems will be
addressed in subsequent sections.
3.4.2 Validation against Experimental Data
Figure 3.16 Marker placement on front of subject
Figure 3.17 C7 to suprasternal notch          Figure 3.18 H1 and H2 measurements
We have also validated our results with experimental data obtained through
motion capture by HUMOSIM (HUman MOtion SIMulation at the Center for
Ergonomics of the University of Michigan). From their anthropometry data file, the
following data were used to determine the dimension of the digital human to be input to
our optimization program: Height To Electromagnetic (EM) Marker at Suprasternale
(Figure 3.16), C7 to L5, Vertical Distance from C7 to Suprasternal Notch (Figure 3.17),
Suprasternal Notch to Right Acromion Process, Upper Arm Length, Forearm Length,
Tower Block H1 (from the radial stylion to a point in line with the center of EM marker
on the right hand), and H2 (from the level of the radial stylion to the center of EM marker
in the direction of the fingers). The derived data from the measurements of subject 19,
as used in our program, are $L_1 = L_2 = L_3 = L_5 = 12.125$, $L_4 = 18.6$, $L_6 = 32.7$,
$L_7 = 26.2$, $H_1 = 2.7$, and $H_2 = 3.5$ (Figure 3.18).
Michigan data files are of the following five types: .evt (event records), .gan
(global angles), .lan (local angles), .loc (location of joint centers and landmarks), and .uvt
(unit vector for rotation matrix). Event records tell us the status of each data entry in the
other tracking data files. There are two types of reach, Go To and Return. For our
purpose of posture validation, we only need to extract the last row of Go To type and the
first row of Return type from the data where the subject was in stable contact with the
target. To know the position and orientation of the target, the data from files with .loc
and .uvt extensions were extracted. We chose the location of the electromagnetic marker
on the hand back as our end-effector position.
All the tracking data are given for subject 19, and there are 480 different targets
obtained by counting all the combinations of the following: Left 45 Degrees, Forward,
Right 45 Degrees or Right 90 Degrees, Push or Pull, Top Pod, Middle Pod or Lowest
Pod, Center Surface, Left Surface, Top Surface, Right Surface or Bottom Surface of the
tower block, Thumb on Top, Thumb on Right, Thumb on Bottom or Thumb on Left of
each surface. We took twelve files for validation (all of reach type Go To; the
Return-type files give the same target information):
(1) O19g003h: Left 45 Degrees, Push, Top Pod, Center, Thumb on Bottom
(2) O19g004h: Left 45 Degrees, Push, Top Pod, Center, Thumb on Left
(3) O19g007h: Left 45 Degrees, Push, Top Pod, Left, Thumb on Bottom
(4) O19g097l: Left 45 Degrees, Pull, Middle Pod, Bottom, Thumb on Top
(5) O19g122h: Forward, Push, Top Pod, Center, Thumb on Right
(6) O19g170h: Forward, Push, Lowest Pod, Top, Thumb on Right
(7) O19g218l: Forward, Pull, Middle Pod, Bottom, Thumb on Right
(8) O19g244h: Right 45 Degrees, Push, Top Pod, Center, Thumb on Left
(9) O19g340l: Right 45 Degrees, Pull, Middle Pod, Bottom, Thumb on Left
(10) O19g349l: Right 45 Degrees, Pull, Lowest Pod, Top, Thumb on Top
(11) O19g363h: Right 90 Degrees, Push, Top Pod, Center, Thumb on Bottom
(12) O19g458l: Right 90 Degrees, Pull, Middle Pod, Bottom, Thumb on Right
The extracted location files (.loc) have 119 columns, of which columns 73-75
correspond to the right hand, i.e., the location of the electromagnetic marker on the
back of the hand. The unit-vector data were themselves derived from the tracked
location data; among the 155 columns of the extracted unit-vector file, columns 46-54
give the right-hand orientation. The position and orientation of the right hand were
obtained by re-extracting these columns from the extracted location and unit-vector
files.
The coordinates used in the Michigan data differ from those in our model
(shown in Figure 3.19). To validate and compare, transformations were made within our
program to allow a direct comparison of the two equivalent models (Figure 3.20). The
Michigan human measurements, hand position, and orientation were given to our
optimization algorithm as input, and joint variables were obtained for the above 12
targets, with both position and orientation specified, using discomfort as the cost
function.
Figure 3.19 15-DOF model with Michigan measurements
Figure 3.20 Our 15-DOF model (left) and Michigan model (right)
Results are shown in Figures 3.21 through 3.32. These show posture predictions
in our digital human modeling environment: the predicted postures of our 15-DOF
model appear together with a wireframe model representing the Michigan data. The
figure names correspond to the Michigan data file names.
Figure 3.21 g003h
Figure 3.22 g004h
Figure 3.23 g007h
Figure 3.24 g097l
Figure 3.25 g122h
Figure 3.26 g170h
Figure 3.27 g218l
Figure 3.28 g244h
Figure 3.29 g340l
Figure 3.30 g349l
Figure 3.31 g363h
Figure 3.32 g458l
It can be seen from the figures that, overall, our optimization algorithm gives good
results using a simple discomfort cost function. The postures obtained from our
program match those assumed by a real human subject in the Michigan experiments
(see Figures 3.21, 3.23, 3.25, 3.26, and 3.28 to 3.31). In some cases (Figures 3.22, 3.24,
and 3.27), our results predicted outstretched arms while the experimental data did not.
This stems from our simple discomfort cost function, which only measures the deviation
of the arm from the neutral joint angles (e.g., a straight elbow), and it suggests that
potential energy should be included in the cost function. The same limitation produces
the incorrect posture shown in Figure 3.32. Another influential factor causing some
joints to be actuated before others is the set of weighted scale factors assigned to the
different joints. The figures also show that vision plays a very important role in the final
posture; this factor is being incorporated into the discomfort cost function currently
under development in the Digital Humans Lab (DHL). A more accurate and complex
discomfort cost function can be developed by considering all of the factors above and by
careful evaluation of the weight factors used in calculating discomfort. Prediction
accuracy can also be improved by providing more accurate joint limits as constraints
and by using human models with more DOFs.
3.5 Multi-Objective Optimization
Consider now a task that requires the simultaneous evaluation of several cost
functions, posed as a multi-objective optimization problem. For a target point of
(41.2, -57, 31.5), the discomfort function is optimized to f_discomfort = 3.3345, the
effort function to f_effort = 1.8088, and the potential function to f_potential = 8.4358.
The posture is predicted as
q = [0.1129, 0.1866, 0.0797, -0.0006, 0.0474, -0.0077, -0.0217, -0.0122, -0.0625, 0.4389, 0.6292, -0.6812, -0.8163, -0.1358, -0.1039]^T
and shown in Figure 3.33 (a). Note that, in comparison with the previous posture
obtained with a single cost function (Figure 3.33 (b)), the joints have changed slightly.
(a) Multi-objective optimization (b) Single cost function
Figure 3.33 Posture prediction for (41.2, -57, 31.5)
(a) Multi-objective optimization (b) Single cost function
Figure 3.34 Posture prediction for (40, 0, 36)
Similarly, compare the posture predicted and shown in Figure 3.34 (b) with that in
Figure 3.34 (a), for which the target point is (40, 0, 36); the joint variables evaluate to a
similar posture but with slightly different joint values,
q = [0.1869, 0.0032, -0.0208, -0.0127, -0.0002, 0.0305, 0.0072, -0.0341, -0.8887, -0.2311, 1.3968, -1.2101, -0.8163, -0.0260, -0.0839]^T.
Note that the discomfort, effort, and potential-energy values
(f_discomfort = 8.4510, f_effort = 4.9700, f_potential = 9.4109) all evaluate to non-zero
numbers, indicating that every cost function plays an important role in predicting this
posture.
To demonstrate the applicability of various cost functions, consider two different
initial configurations, i.e., initial postures from which the arm and torso will move to
reach a desired target point specified by (41.2, -57, 31.5). We shall use the effort
function, which depends upon the initial configuration (more effort is needed if each
joint travels a longer distance).
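The dependence of effort on the initial posture can be illustrated with a weighted squared-travel form of the measure; the quadratic formula and the default unit weights below are assumptions for this sketch, not necessarily the exact expression used in the thesis.

```python
import numpy as np

# Illustrative effort measure: weighted squared joint travel from the
# initial posture q0 to the final posture q. The quadratic form and the
# unit default weights are assumptions for this sketch.
def effort(q, q0, w=None):
    q, q0 = np.asarray(q, dtype=float), np.asarray(q0, dtype=float)
    w = np.ones_like(q) if w is None else np.asarray(w, dtype=float)
    return float(np.sum(w * (q - q0) ** 2))

q_final = [0.2, -0.1, 0.4]               # a 3-joint toy posture
print(effort(q_final, [0.0, 0.0, 0.0]))  # about 0.21: travel from zero posture
print(effort(q_final, q_final))          # 0.0: no travel, no effort
```

The same final posture thus scores differently depending on where the motion starts, which is the behavior exploited in the two cases below.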
(a) Initial configuration is neutral position (b) Initial configuration has joints at zero
Figure 3.35 Predicted postures based on two different initial postures
Case I will start from an initial posture that has all joints in their defined neutral
configuration (i.e., the most relaxed position), given by
q_0 = [0, 0, 0, 0, 0, 0, 0, 0, 0, π/2, 0, 0, -π/2, 0, 0]^T. The calculated final posture is
q = [-0.0653, 0.0743, 0.1385, -0.0459, 0.0225, -0.1745, -0.0175, 0.0263, 0.5866, 0.9365, 0.5199, -0.3124, -1.5222, -0.1161, 0.1194]^T,
shown in Figure 3.35 (a), where the effort is calculated as f_effort = 1.2075.
Case II will start from an initial posture that has all joints at zero (i.e., the home
posture from which we start our simulations), q_0 = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]^T.
The calculated effort is f_effort = 0.3541 and the calculated posture is
q = [0.2650, -0.0571, 0.0717, 0.0635, 0.1327, 0.2367, -0.0022, 0.0049, -0.1905, 0.1457, 0.3418, -0.0426, -0.0003, -0.0500, -0.1384]^T,
shown in Figure 3.35 (b).
The important result from the above two postures is that they are different, which
is a behavior observed in humans.
3.6 Real-Time Algorithm
Previously, the GA-GBA method was used to calculate the posture for a given target
point using task-driven cost functions. The genetic algorithm (GA) is a global
optimization method but requires substantial computation time; a gradient-based
algorithm (GBA) is very fast but only searches for an optimum locally. GA-GBA
combines the two: GA searches globally with a given cost function and, at each search
step, passes its result to the GBA as an initial point; the GBA then refines the search
locally to find the best point with minimal distance to the target. This combination
yields good results and reduces the computation time from several hours (for GA alone)
to about 15 minutes; however, that is still too expensive for real-time prediction, so a
much faster yet still accurate method is needed. To exploit the speed of the GBA, the
workspace of our 15-DOF human model is pre-calculated and divided into 16 sections
(Figure 3.36). A middle point is chosen within each section and given to GA-GBA as a
target point for preprocessing; the resulting posture is then used as the starting point for
any target that falls in that section. On the basis of this information, four faster methods
were developed and tested. Each of the four methods performs a check at the start of the
algorithm and terminates immediately if the target point is outside the workspace. The
methods differ mainly in their optimization strategies; their flowcharts are shown in
Figures 3.37, 3.38, 3.39, and 3.40, respectively.
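The warm-start lookup that precedes each fast solve can be sketched as a simple table of precomputed postures indexed by workspace section; the octant-plus-radius partition and the radius threshold below are illustrative stand-ins, not the thesis's actual 16-section division of the reach envelope.

```python
import numpy as np

# Hypothetical preprocessing table: one GA-GBA posture per workspace
# section, reused as the warm start for every target in that section.
precomputed_starts = {i: np.zeros(15) for i in range(16)}  # placeholder postures

def section_index(target):
    """Map a target point to one of 16 sections by octant (signs of
    x, y, z) and a near/far radius split -- an assumed scheme, not the
    thesis's exact partition of the reach envelope."""
    x, y, z = target
    octant = int(x > 0) + 2 * int(y > 0) + 4 * int(z > 0)
    far = int(np.linalg.norm(target) > 50.0)  # assumed radius threshold
    return octant + 8 * far

def warm_start(target):
    return precomputed_starts[section_index(target)]

print(section_index(np.array([41.2, -57.0, 31.5])))       # section 13
print(warm_start(np.array([41.2, -57.0, 31.5])).shape)    # (15,)
```

Since the lookup is constant time, essentially all of the online cost goes into the single local solve that follows.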
Figure 3.36 A Partitioned Reach Envelope
Figure 3.37 BFGS-BFGS method (flowchart: check whether the target point is inside the reach envelope; if not, write a message and end; otherwise decide which region the target belongs to, set the initial values accordingly, call an outer BFGS with discomfort as cost function and an inner BFGS with distance as cost function, and write the result to a data file once the distance and discomfort tolerances are met)
Figure 3.38 DIS-CONS method (flowchart: check whether the target point is inside the reach envelope; if not, write a message and end; otherwise decide which region the target belongs to, set the initial values accordingly, call SQP with discomfort as cost function and distance as constraint, and write the result to a data file)
Figure 3.39 MOO method (flowchart: check whether the target point is inside the reach envelope; if not, write a message and end; otherwise decide which region the target belongs to, set the initial values accordingly, call BFGS with combined discomfort and distance as the cost function, and write the result to a data file)
Figure 3.40 CONS-SQP method (flowchart: check whether the target point is inside the reach envelope; if not, write a message and end; otherwise decide which region the target belongs to, set the initial values accordingly, call BFGS with distance as cost function, write the results to the initial values, then call SQP with discomfort as cost function and distance as constraint, and write the result to a data file)
Table 3.3 Distance and discomfort obtained from GA-GBA

Point  Distance  Discomfort
1      0.0000    2.2022
2      0.0002    7.3800
3      0.0003    12.8254
4      0.0002    2.0873
5      0.0005    1.5824
6      0.0001    1.0783
7      0.0003    0.7253
8      0.0007    3.8352
9      0.0005    0.4966
10     0.0008    3.3709

Table 3.4 Distance and discomfort obtained from the four faster methods

       Method 1 (BFGS-BFGS)    Method 2 (DIS-CONS)    Method 3 (MOO)         Method 4 (CONS-SQP)
Point  Distance  Discomfort    Distance  Discomfort   Distance  Discomfort   Distance  Discomfort
1      0.0000    3.2911        146.2477  1.1662       0.0001    3.6126       0.0001    3.6507
2      0.0003    8.2012        0.0096    4.6063       0.0026    8.6291       0.0002    8.5723
3      0.0010    15.1879       54.8682   7.9918       0.3558    15.3929      0.0026    8.0009
4      0.0014    3.2147        0.0234    1.8227       0.0001    4.2680       0.0009    4.3472
5      0.0006    1.6103        0.0041    1.5178       0.0008    1.6217       0.0004    1.6103
6      0.0005    3.2093        54.0299   0.7429       0.0009    2.9490       0.0001    3.2093
7      0.0000    0.6639        0.0117    0.6097       0.0030    0.6954       0.0031    0.7002
8      0.0000    4.9564        176.6827  2.2751       0.0000    4.5921       27.9220   2.6904
9      0.0002    0.5771        7.3989    0.3014       0.0000    0.9554       0.0000    0.9576
10     0.0006    8.3686        221.7487  0.8704       0.0003    7.0304       0.0290    8.4179

Table 3.5 CPU time of computations on an HP-UX workstation
(seconds, on a 400 MHz PA-RISC 8500 CPU with 1.5 GB RAM)

Point  Method 1 (BFGS-BFGS)  Method 2 (DIS-CONS)  Method 3 (MOO)  Method 4 (CONS-SQP)
1      0.74                  3.05E-07             2.21E-11        2.21E-11
2      0.74                  4.03E-05             2.05E-13        1.24E-12
3      0.82                  2.08E-05             3.05E-07        2.73E-03
4      102.4                 6.71E-06             3.52E-14        3.52E-14
5      8.96E-07              1.75E-06             3.57E-12        3.57E-12
6      8.96E-07              2.21E-06             2.05E-13        2.05E-13
7      3.38E-02              3.43E-06             3.52E-14        2.05E-13
8      3.62E-02              5.24E-11             3.57E-12        3.22E-08
9      0.21                  1.75E-06             3.52E-14        2.05E-13
10     2.10E-04              1.96E-07             2.05E-13        3.57E-12
Mean   10.49802118           7.75E-06             3.05E-08        2.73E-04
The BFGS-BFGS method uses two layers of optimization. The inner BFGS acts as
a distance constraint: it finds the point with minimal distance to the target and passes
the result to the outer BFGS, which minimizes a cost function such as discomfort. The
true discomfort is taken as the cost-function value only if the distance satisfies a
tolerance; otherwise a large penalty is assigned. In this way the search is driven toward
a low-cost point while guaranteeing that the end-effector reaches the given target. The
DIS-CONS method uses a traditional constrained optimization approach that minimizes
discomfort subject to a constraint on the distance to the target. The MOO method is an
unconstrained method that performs multi-objective optimization: its cost function
combines discomfort and distance with weights, so that minimizing it finds a point with
minimal discomfort and acceptable distance. The motivation for the CONS-SQP method
comes from a limitation of standard gradient-based optimization on our particular
problem. For a given target point, the starting point rarely satisfies the distance
constraint, i.e., the end-effector is not on the target at the beginning of the optimization,
so there is always a violated constraint at the first iteration. Since the problem is highly
nonlinear with 15 variables, the search for a feasible design becomes extremely
difficult: the GBA terminates if some number of iterations (for example, 20 in DOT)
pass without resolving the constraint violations. Providing a starting point with no
violated constraint therefore strongly influences the efficiency and reliability of the
result. The CONS-SQP method first calls BFGS, minimizing distance only, to generate
a new starting point that satisfies the distance constraint; SQP then optimizes the cost
subject to the distance constraint from this new starting point.
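As a concrete illustration of the MOO strategy, the sketch below folds the distance term into the cost with a weight and runs unconstrained BFGS via SciPy; the forward-kinematics map, neutral posture, and weight value are placeholders, not the thesis's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def forward_kinematics(q):
    # Placeholder end-effector map for the sketch (a real model would
    # evaluate the 15-DOF kinematic chain here).
    return q[:3]

def discomfort(q, q_neutral):
    # Simple discomfort: squared deviation from the neutral joint angles.
    return np.sum((q - q_neutral) ** 2)

def moo_cost(q, target, q_neutral, w_dist=100.0):
    # Combined cost: discomfort plus a weighted squared distance term,
    # so one unconstrained solve drives the hand onto the target.
    d = np.linalg.norm(forward_kinematics(q) - target)
    return discomfort(q, q_neutral) + w_dist * d ** 2

q_neutral = np.zeros(15)
target = np.array([0.4, -0.5, 0.3])
res = minimize(moo_cost, x0=np.zeros(15), args=(target, q_neutral),
               method="BFGS")
print(np.linalg.norm(forward_kinematics(res.x) - target) < 1e-2)  # True
```

Because no constraint machinery is involved, each solve is a single unconstrained descent, which is consistent with the speed advantage reported for MOO below.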
Results from the previous global optimization method, GA-GBA, are listed in
Table 3.3 for comparison. Results obtained with the four methods and the best
combination of parameters are listed in Table 3.4, and the CPU times required by the
four methods in Table 3.5. From Tables 3.4 and 3.5 we see that BFGS-BFGS gives the
most accurate results but, as expected, is the slowest: the inner BFGS guarantees that
the end-effector is exactly on the target point, so the outer BFGS can search for an
optimum among all postures that reach the target, but this two-layer search is too
expensive computationally. DIS-CONS gives the worst results, mainly because it
frequently fails to resolve the constraint violations within 20 iterations and terminates
the optimization prematurely. CONS-SQP improves the reliability of the algorithm and
confirms the importance of providing an initially feasible design: it found good results
for every point except point 8, and it is fast. Although its average time exceeds that of
DIS-CONS, it generally takes much less time, because the new starting point already
satisfies the constraint and SQP saves the time otherwise spent searching for a direction
back toward the feasible region. Nevertheless, this method is not robust enough and can
give a bad result for certain target points. MOO gives results of acceptable accuracy at
every point and is the fastest, since it only minimizes a cost function without handling
any constraint and thus avoids the constraint-related iterations. In practice, because
MOO runs very fast and provides reliable results, it was selected and implemented as a
posture prediction plug-in to 3D Studio MAX.
3.7 Conclusions
A general task-based formulation for predicting human postures has been
proposed, demonstrated and validated. Each task is comprised of one or more human
performance measures. These measures are rigorously defined and are used within an
optimization algorithm to iteratively calculate the joint variables that would be assumed
by a human in forming a posture. The discomfort measure was used as a cost function in
a real-time optimization algorithm to be used on-line. It was shown that a combination of
genetic algorithm (used to calculate a global solution but computationally intensive) and
gradient-based methods (local but efficient) yields a fast method of prediction. Results of
our studies of various optimization schemes have also been reported. The modeling
method is not restricted to any particular number of degrees of freedom.
Perhaps the most important aspect of this method is that it does not employ the
Jacobian in the numerical evaluation of a posture, typically associated with the inverse
kinematics algorithms. This fact has enabled us to surpass the traditional limitation of the
6 degrees of freedom. Indeed, the biomechanical model used in this work is a 15-DOF
human model from the torso to the hand (seated reach).
Benefits of this method are also evident in its ability to represent tasks in terms of
one or more cost functions. As demonstrated, more realistic posture prediction of human
models is possible, one that depends on the initial configuration, on the range of motion,
and on the exact dimensions.
Validation of the method against the well-known inverse kinematics algorithm
IKAN and confirmation against data from human-subject testing were presented. It is
evident that the proposed method yields postures that minimize the specified cost
function and render a realistic posture. Our method can predict more realistic postures
and can be applied to more general situations than IKAN (which is limited to 7 DOFs).
However, it is also evident that many more cost functions, and more elaborate
mathematical descriptions of human performance measures, are required for various
tasks. Nevertheless, this method provides a robust approach to realistic posture
prediction that can handle a biomechanically accurate model.
CHAPTER 4
HUMAN UPPER EXTREMITY PATH TRAJECTORY DESIGN
In many robotic applications, it is important to plan smooth path trajectories that
would enable the end-effector to complete the motion uninterrupted. Similarly, in reality
humans move smoothly and naturally. We define a kinematically-smooth trajectory as
one that does not admit changes in inverse kinematic solutions during its motion.
Although robots are controlled by programming, humans psychologically determine how
they move. Moreover, it is our supposition that humans determine a posture at the onset
of motion such that the path can be followed in space, uninterrupted. This initial
configuration is chosen based on task-specific criteria that include crossability, comfort,
dexterity, etc. In simulating path trajectories for human motion, it is important to
rigorously identify which starting configuration will yield a kinematically-smooth path,
one that would not be interrupted during its motion.
This chapter presents a general methodology and accompanying formulation for
simulating kinematically-smooth path trajectories for humans. Starting from an initial
point on a given path, it is required to traverse a path trajectory without halting the
motion (typically due to switching from one inverse kinematic solution to another). The
study focuses on determining a starting configuration at the initial point on the path. The
problem is formulated in terms of a constraint function and characterized by a differential
algebraic system of equations (DAE) of index 2. An iterative numerical algorithm is
implemented using the Runge-Kutta method and a cost-function driven optimization
method. The formulation is demonstrated using a 3-degree-of-freedom (DOF) model of
the human arm and a 4-DOF spatial manipulator.
The method presented in this chapter is based on recent results by Yeh (1996),
Abdel-Malek and Yeh (1997, 2000) and Abdel-Malek et al. (1997, 2001) where singular
surfaces in human workspaces were delineated and acceleration-based crossability
criteria were defined.
4.1 Non-crossable Surfaces
In this section we describe the analytical method for crossability analysis first
introduced by Abdel-Malek and Yeh (2000) and used to find all non-crossable surfaces
within the manipulator's workspace. For a position vector x = Φ(q), where q ∈ R^n is
the vector of generalized coordinates of an n-DOF manipulator, the singular surfaces
resulting from rank deficiency of the Jacobian or from joint limits are parameterized by
Ψ(u), where u = [u, v]^T. Consider the end-effector at a point on a singular surface with
radius of curvature ρ_0, normal acceleration a_n, and tangential velocity v_t. The
manipulator will admit motion in one normal direction or the other subject to

    a_n - |v_t|^2 / ρ_0                                  (4.1.1)

where 1/ρ_0 is the normal curvature of the singular surface with respect to the tangent
direction of v_t. The sign of Eq. (4.1.1) establishes the admissible direction of motion.
On a singular surface due to a Jacobian singularity without joint limits, the normal
acceleration is derived as

    a_n = n^T ẍ = q̇^T [n^T Φ]_qq q̇                       (4.1.2)

where n is the normal vector to the singular surface and

    Φ_q^T n = 0                                          (4.1.3)
From the theory of differential geometry (Farin, 1993), the normal curvature K_0 of a
parametric singular surface at a configuration q_0 can be defined as the ratio

    K_0 = 1/ρ_0 = Π_p / Ι_p                              (4.1.4)

where the First and Second Fundamental Forms (denoted Ι_p and Π_p, respectively) of
a parametric geometric entity Ψ(u), with u = [u, v]^T, are defined as

    Ι_p = δu^T Ψ_u^T Ψ_u δu
    Π_p = δu^T [n^T Ψ]_uu δu                             (4.1.5)

Define the Modified First and Second Fundamental Forms as

    Ι'_p = u̇^T Ψ_u^T Ψ_u u̇
    Π'_p = u̇^T [n^T Ψ]_uu u̇                              (4.1.6)

Since the tangent velocity in terms of Ψ or Φ can be written as

    Ψ_u u̇ = Φ_q q̇                                        (4.1.7)

Eq. (4.1.1) can be rewritten as

    a_n - |v_t|^2/ρ_0 = a_n - |v_t|^2 Π_p/Ι_p = a_n - |v_t|^2 Π'_p/Ι'_p = a_n - Π'_p    (4.1.8)

It was shown (Yeh, 1996) that the velocity vector on a singular surface can be written as

    u̇ = [E Ψ_u]^{-1} E Φ_q q̇                             (4.1.9)

where

    E = [1 0 0; 0 1 0]                                   (4.1.10)

if the first and second rows of Ψ_u are independent,

    E = [1 0 0; 0 0 1]                                   (4.1.11)

if the first and third rows of Ψ_u are independent, and

    E = [0 1 0; 0 0 1]                                   (4.1.12)

otherwise. For an end-effector on a singular surface, the crossability criterion was then
expanded into a quadratic form written as

    a_n - |v_t|^2/ρ_0 = a_n - Π'_p = q̇^T Q q̇             (4.1.13)

where

    Q = [n^T Φ]_qq - Φ_q^T B^T [n^T Ψ]_uu B Φ_q          (4.1.14)

and B is a generalized inverse of Ψ_u defined by

    B = [E Ψ_u]^{-1} E                                   (4.1.15)

The definiteness of the quadratic form in Eq. (4.1.13) indicates the crossability of the
singular surface.
The above formulation can be extended to the crossability analysis of
singularities with imposed joint limits. Joint limits imposed as inequality constraints of
the form q_i^min ≤ q_i ≤ q_i^max are parameterized into equalities by introducing new
generalized coordinates λ_i such that

    q_i = a_i + b_i sin λ_i                              (4.1.16)

where a_i = (q_i^max + q_i^min)/2 and b_i = (q_i^max - q_i^min)/2. The constraint
function is then written in terms of the extended vector λ = [λ_1 λ_2 ... λ_n]^T such that

    x = Φ(λ) = Φ(q(λ))                                   (4.1.17)

Similarly to the formulation without joint limits, Eq. (4.1.1) can be written as

    a_n - Π'_p = λ̇^T Q* λ̇                                (4.1.18)

where

    Q* = H* - q_λ^T Φ_q^T B^T [n^T Ψ]_uu B Φ_q q_λ
    H* = q_λ^T [n^T Φ]_qq q_λ + Σ_{i=1}^{n} (d(n^T Φ)/dq_i) [q_i]_λλ    (4.1.19)

and n is the vector normal to the singular surface, determined by

    [Φ_q q_λ]^T n = 0                                    (4.1.20)

The definiteness of the quadratic form in Eq. (4.1.18) defines the crossability of the
singular surface.
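The sine substitution of Eq. (4.1.16) can be checked numerically; the joint-limit values below are illustrative, and the point of the sketch is that every real value of the new coordinate maps to a feasible joint value.

```python
import numpy as np

# q = a + b*sin(lambda), with a = (q_max + q_min)/2 and
# b = (q_max - q_min)/2: the inequality q_min <= q <= q_max becomes an
# unconstrained parameterization over all real lambda.
def parameterize(q_min, q_max, lam):
    a = 0.5 * (q_max + q_min)
    b = 0.5 * (q_max - q_min)
    return a + b * np.sin(lam)

lam = np.linspace(-10.0, 10.0, 1001)
q = parameterize(-0.5, 1.5, lam)                  # illustrative joint limits
print(bool(q.min() >= -0.5 and q.max() <= 1.5))   # True: limits respected
```

A joint sitting exactly at a bound corresponds to sin λ_i = ±1, which is where the joint-limit singular surfaces of this section arise.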
For a singular surface due to singularities where joints are at the upper or lower
bounds of an inequality constraint, the surface is crossable if Q* has both positive and
negative eigenvalues. However, when Q* is either positive semi-definite or negative
semi-definite, the singular surface may still be crossable, because joint velocities are
zero on those surfaces. In this case, the projection onto the normal direction n of a
variational movement δx = x_{q_i} δq_i due to δq_i, with normal component

    σ = n^T x_{q_i} δq_i                                 (4.1.21)

determines the admissible normal movement, where δq_i is given a magnitude of ±1 as
follows:

    δ_i = +1 if q_i is at its lower bound
    δ_i = -1 if q_i is at its upper bound                (4.1.22)

Positive values of σ in Eq. (4.1.21) indicate that the end-effector can admit movement
in the positive direction of n.
Another situation arises when the normal vector n is perpendicular to x_{q_i}, so
that σ in Eq. (4.1.21) evaluates to zero. In that case the normal curvature K_1 of the
singular surface with respect to the tangent direction of x_{q_i} δq_i is compared with
the normal curvature K_2 of the trajectory curve obtained when only q_i varies, and the
difference K_2 - K_1 is derived as

    K_2 - K_1 = K δq_i^2 / (δq_i x_{q_i}^T x_{q_i} δq_i)         (4.1.23)

where

    K = x_{q_i}^T B^T [n^T Ψ]_uu B x_{q_i} - n^T x_{q_i q_i}     (4.1.24)

Therefore, if K > 0 the end-effector can admit movement in the positive direction of n,
and if K < 0 it admits movement in the negative direction of n. The crossability criteria
are summarized as follows:

(1) If Q* is indefinite, i.e., has both positive and negative eigenvalues, the end-effector
admits normal movement along either the positive or the negative direction of n. The
surface is crossable.

(2) If Q* is semi-definite, i.e., all nonzero eigenvalues have the same sign, the following
additional criteria must be evaluated for each joint variable that is at its lower or upper
bound:

    a. If n^T x_{q_i} ≠ 0, then σ in Eq. (4.1.21) is calculated.
    b. If n^T x_{q_i} = 0, then K in Eq. (4.1.24) is calculated.

If any of the values σ or K has a sign different from that of the nonzero eigenvalues of
Q*, the singular surface is crossable. If all values of σ or K for joints at their limits have
the same sign as the nonzero eigenvalues of Q*, the singular surface is non-crossable,
and the admissible normal direction is determined by this common sign.
By using the above method, non-crossable surfaces can be readily identified and
the singular configurations on those surfaces obtained. We can now find a start
configuration that admits a kinematically-smooth trajectory for a human arm by
utilizing the information about non-crossable surfaces.
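The first branch of these criteria is a standard matrix-definiteness test; a minimal numerical sketch, assuming Q* is available as a matrix:

```python
import numpy as np

def crossability(Q_star, tol=1e-10):
    """Classify the quadratic form of the crossability criterion by the
    eigenvalues of Q*. Mixed signs mean the surface is crossable;
    otherwise the per-joint sigma/K tests described above still apply."""
    # The quadratic form depends only on the symmetric part of Q*.
    eig = np.linalg.eigvalsh(0.5 * (Q_star + Q_star.T))
    if np.any(eig > tol) and np.any(eig < -tol):
        return "crossable (indefinite)"
    return "semi-definite: evaluate sigma and K for each bounded joint"

print(crossability(np.diag([1.0, -2.0, 0.5])))  # crossable (indefinite)
print(crossability(np.diag([1.0, 2.0, 0.0])))   # semi-definite case
```

The tolerance guards against eigenvalues that are zero only up to rounding, which is the situation where the additional σ and K tests become necessary.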
4.2 Problem Definition
The objective of this work is to identify starting configurations of the human arm
in such a way as to allow a kinematically-smooth path trajectory (a solution is called
admissible if complete execution of the task is guaranteed). It will be assumed
that the complete characteristics of the human arm, its joint limits, and a desired path are
given.
From the Denavit-Hartenberg (DH) representation method, we obtain the position
vector of a point on the end-effector (hand) as

    p(t) = Φ(q(t))                                       (4.2.1)

where p represents the coordinates of the path as a function of time t, Φ stands for the
position coordinates of the point on the hand, and q is the vector of joint variables.
Differentiating Eq. (4.2.1) with respect to time yields

    ṗ = Φ_q q̇                                            (4.2.2)

where Φ_q is the (3 × n) Jacobian of Φ with respect to q, and n is the number of DOFs
of the arm.
By non-crossable surfaces we mean those surfaces within the human arm's
workspace that cannot be crossed in at least one direction, though they may be crossable
in some particular direction. For points on such surfaces, the arm can assume many
different configurations that place the hand exactly on the surface; the arm is unable to
cross the surface only when it assumes the same configuration as the singular
configuration associated with the non-crossable surface.
We can now better define the notion of a kinematically-smooth trajectory: the goal
is to determine an appropriate initial configuration that enables the end-effector to move
continuously from that configuration along the path, uninterrupted. We define an
interruption as a singular configuration on a singular surface arising from the
intersection of the path trajectory with a barrier, where that surface is non-crossable. If
these conditions are met, the trajectory is kinematically smooth (i.e., the end-effector
can move smoothly past any non-crossable barrier). We then use optimization methods
to calculate the optimal starting configuration that allows the path trajectory to be
completed.
4.3 Problem Formulation
We first develop the constraint equations necessary to ensure that the inverse
kinematic solution constrains the end-effector to remain on the path. From Eqs. (4.2.1)
and (4.2.2), we employ the pseudoinverse Φ_q^# such that

    q̇ = Φ_q^# ṗ                                          (4.3.1)

where

    Φ_q^# = Φ_q^T (Φ_q Φ_q^T)^{-1}                       (4.3.2)

Since Φ_q is an m × n matrix with m < n, and provided q is not a singular
configuration, Rank(Φ_q) = m and thus

    q̇ = Φ_q^T (Φ_q Φ_q^T)^{-1} ṗ                         (4.3.3)

Introducing a vector variable z, the above equation can be converted to the equivalent
system of equations

    q̇ = Φ_q^T(q) z = f(q, z)
    0 = Φ(q) - p(t) = g(q, t)                            (4.3.4)

Differentiating the second equation of Eq. (4.3.4) with respect to t and substituting the
first equation for q̇, we obtain

    ġ(q, t) = Φ_q(q) q̇ - ṗ(t) = Φ_q(q) Φ_q^T(q) z - ṗ(t) = 0    (4.3.5)

Equation (4.3.5) is a constraint to be satisfied while planning the motion. The equations
in Eq. (4.3.4) are in fact DAEs of index 2 and can be addressed using Runge-Kutta
methods.
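Eq. (4.3.3) is straightforward to evaluate numerically; the sketch below uses a hypothetical 3 × 4 Jacobian of full row rank to verify that the pseudoinverse joint rates reproduce the task velocity.

```python
import numpy as np

def pseudoinverse_rate(J, p_dot):
    """Joint rates q' = J^T (J J^T)^{-1} p', the right pseudoinverse of
    an m x n task Jacobian with m < n and full row rank."""
    J = np.asarray(J, dtype=float)
    return J.T @ np.linalg.solve(J @ J.T, p_dot)

# Hypothetical Jacobian of a redundant 4-DOF arm and a hand velocity.
J = np.array([[1.0, 0.5, 0.0, 0.2],
              [0.0, 1.0, 0.3, 0.1],
              [0.2, 0.0, 1.0, 0.4]])
p_dot = np.array([0.1, -0.2, 0.05])
q_dot = pseudoinverse_rate(J, p_dot)
print(np.allclose(J @ q_dot, p_dot))  # True: the rates track the path velocity
```

Among all joint rates satisfying J q̇ = ṗ, this choice has minimum norm, which is why introducing the algebraic variable z in Eq. (4.3.4) reproduces it exactly.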
4.4 Runge-Kutta Method for DAEs of Index 2
We now present the methodology used to solve the problem characterized by
Eq. (4.3.4). Consider the DAE system defined by

    y' = f(y, z, t)
    0 = g(y, t)                                          (4.4.1)

where f and g are sufficiently differentiable and ||(g_y(y, t) f_z(y, z, t))^{-1}|| ≤ M in a
neighborhood of the exact solution. Assume the initial values y_0, z_0 are consistent
with Eq. (4.4.1), i.e.,

    g(y_0, t_0) = 0
    g_y(y_0, t_0) f(y_0, z_0, t_0) + g_t(y_0, t_0) = 0   (4.4.2)
The Runge-Kutta (RK) method (Hairer et al., 1989) applied to Eq. (4.4.1) yields

$$\mathbf{y}_{n+1} = \mathbf{y}_n + h\sum_{i=1}^{s} b_i\,\mathbf{Y}'_{ni}$$
$$\mathbf{z}_{n+1} = \mathbf{z}_n + h\sum_{i=1}^{s} b_i\,\mathbf{Z}'_{ni} \tag{4.4.3}$$

where

$$\mathbf{Y}'_{ni} = \mathbf{f}(t_n + c_i h,\, \mathbf{Y}_{ni},\, \mathbf{Z}_{ni})$$
$$\mathbf{0} = \mathbf{g}(t_n + c_i h,\, \mathbf{Y}_{ni}) \tag{4.4.4}$$
and the $s$ internal stages are given by

$$\mathbf{Y}_{ni} = \mathbf{y}_n + h\sum_{j=1}^{s} a_{ij}\,\mathbf{Y}'_{nj}$$
$$\mathbf{Z}_{ni} = \mathbf{z}_n + h\sum_{j=1}^{s} a_{ij}\,\mathbf{Z}'_{nj} \tag{4.4.5}$$

RK methods are characterized by the set of RK coefficients $(a_{ij})_{i,j=1}^{s}$ and are
based on a quadrature formula (for example, Gauss, Radau, or Lobatto) given by the
coefficients $(b_j, c_j)_{j=1}^{s}$, arranged in the Butcher tableau

$$\begin{array}{c|c} c_i & a_{ij} \\ \hline & b_j \end{array}$$
The Lobatto IIIA method is used here since the nature of its coefficients enables some
simplification in the computation, as will be shown later. The order of convergence of a
Runge-Kutta method is $p$ if the error, i.e., the difference between the exact and the
numerical solution, is bounded by $\mathrm{Const}\cdot h^{p}$ uniformly on bounded intervals for
sufficiently small step size $h$. The $s$-stage Lobatto IIIA method is of order $2s - 2$ (Hairer
et al., 1989; Hairer and Wanner, 1996) for the $y$-component. Thus, if convergence of order 2
in the $y$-component is wanted, two internal stages are needed, i.e., $s = 2$. The coefficients of
the Lobatto IIIA method of order 2 are listed below as

$$\begin{array}{c|cc} 0 & 0 & 0 \\ 1 & \tfrac{1}{2} & \tfrac{1}{2} \\ \hline & \tfrac{1}{2} & \tfrac{1}{2} \end{array}$$
For the Lobatto IIIA method the Runge-Kutta matrix $A = (a_{ij})$ is singular. The method
can be applied to the problem defined by the DAE system of Eq. (4.4.1) by setting
$\mathbf{Y}_{n1} = \mathbf{y}_n$ and $\mathbf{Z}_{n1} = \mathbf{z}_n$. We then use Eq. (4.4.4) and the first formula of Eq. (4.4.5) to
compute $\mathbf{Y}_{ni}$ and $\mathbf{Z}_{ni}$ for $i = 2, \ldots, s$, since the submatrix $(a_{ij})_{i,j=2}^{s}$ is non-singular. Instead of
computing $\mathbf{Y}'_{ni}$ and $\mathbf{Z}'_{ni}$ from Eq. (4.4.5) and substituting into Eq. (4.4.3), we directly set
$\mathbf{y}_{n+1} = \mathbf{Y}_{ns}$ and $\mathbf{z}_{n+1} = \mathbf{Z}_{ns}$, because $a_{si} = b_i$ for $i = 1, \ldots, s$, and this enables the
simplification.
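The structural properties just described can be checked directly from the order-2 tableau. The following NumPy snippet is only a sanity check of the coefficients, not part of the solver itself:

```python
import numpy as np

# Lobatto IIIA coefficients for s = 2 (equivalent to the trapezoidal rule).
c = np.array([0.0, 1.0])
A = np.array([[0.0, 0.0],
              [0.5, 0.5]])
b = np.array([0.5, 0.5])

# First row of A is zero: Y_n1 = y_n, so stage 1 needs no implicit solve.
print(np.all(A[0] == 0.0))              # True
# Last row of A equals b (stiff accuracy): y_{n+1} = Y_ns directly.
print(np.allclose(A[-1], b))            # True
# The submatrix (a_ij), i,j >= 2, is non-singular, so the remaining
# stage can be solved for.
print(np.linalg.det(A[1:, 1:]) != 0.0)  # True
```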
4.5 Iteration Formulation
For Eq. (4.3.4), we use the Lobatto IIIA coefficients of order 2 shown in section
4.4, and we have

$$\mathbf{Q}_1 = \mathbf{q}_0 + h\left(a_{11}\mathbf{f}(\mathbf{Q}_1,\mathbf{Z}_1) + a_{12}\mathbf{f}(\mathbf{Q}_2,\mathbf{Z}_2)\right) = \mathbf{q}_0$$
$$\mathbf{Q}_2 = \mathbf{q}_0 + h\left(a_{21}\mathbf{f}(\mathbf{Q}_1,\mathbf{Z}_1) + a_{22}\mathbf{f}(\mathbf{Q}_2,\mathbf{Z}_2)\right) = \mathbf{q}_0 + \tfrac{h}{2}\left(\mathbf{f}(\mathbf{Q}_1,\mathbf{Z}_1) + \mathbf{f}(\mathbf{Q}_2,\mathbf{Z}_2)\right)$$
$$\mathbf{q}_1 = \mathbf{q}_0 + h\left(b_1\mathbf{f}(\mathbf{Q}_1,\mathbf{Z}_1) + b_2\mathbf{f}(\mathbf{Q}_2,\mathbf{Z}_2)\right) = \mathbf{q}_0 + \tfrac{h}{2}\left(\mathbf{f}(\mathbf{Q}_1,\mathbf{Z}_1) + \mathbf{f}(\mathbf{Q}_2,\mathbf{Z}_2)\right) = \mathbf{Q}_2 \tag{4.5.1}$$
subject to the following constraints

$$\mathbf{g}(\mathbf{Q}_1, t_0) = \mathbf{0}$$
$$\mathbf{g}(\mathbf{Q}_2, t_0 + h) = \mathbf{0} \tag{4.5.2}$$

where $\mathbf{Q}_1$ and $\mathbf{Q}_2$ are internal stages, $\mathbf{q}_0$ is the solution for time $t_0$, and $\mathbf{q}_1$ is the solution
for the step $t_1 = t_0 + h$.
In the first step, assume the initial values of $\mathbf{q}_0$ and $\mathbf{z}_0$ are consistent with Eq.
(4.3.4), i.e.,

$$\mathbf{g}(\mathbf{q}_0, t_0) = \mathbf{0} \quad\text{and}\quad \mathbf{z}_0 = \left(\boldsymbol{\Phi}_{q_0}\boldsymbol{\Phi}_{q_0}^T\right)^{-1}\dot{\mathbf{p}}_0$$
For each step that follows, we set

$$\mathbf{Q}_1 = \mathbf{q}_0 \quad\text{and}\quad \mathbf{Z}_1 = \mathbf{z}_0 \tag{4.5.3}$$

of which the first formula has been justified by Eq. (4.5.1). Thus, the first equation in Eq.
(4.5.2) is automatically satisfied in each step. It is only necessary to solve for $\mathbf{Q}_2$ and
$\mathbf{Z}_2$ in each iteration by using the second equations in Eqs. (4.5.1) and (4.5.2). Next, we set

$$\mathbf{q}_0 = \mathbf{q}_1 = \mathbf{Q}_2, \qquad \mathbf{z}_0 = \mathbf{z}_1 = \mathbf{Z}_2 \tag{4.5.4}$$

and proceed to the subsequent step. To solve for $\mathbf{Q}_2$ and $\mathbf{Z}_2$, the following nonlinear
equations need to be solved simultaneously.
$$\mathbf{Q}_2 - \mathbf{q}_0 - \tfrac{h}{2}\left(\boldsymbol{\Phi}_q^T(\mathbf{Q}_1)\mathbf{Z}_1 + \boldsymbol{\Phi}_q^T(\mathbf{Q}_2)\mathbf{Z}_2\right) = \mathbf{0}$$
$$\boldsymbol{\Phi}(\mathbf{Q}_2) - \mathbf{p}(t_0 + h) = \mathbf{0} \tag{4.5.5}$$
The Newton-Raphson iterative method is implemented at each step to solve
Eq. (4.5.5). For convenience, we rewrite Eq. (4.5.5) as

$$\mathbf{f}(\mathbf{x}) = \mathbf{0} \tag{4.5.6}$$

where $\mathbf{x} = \{\mathbf{Q}_2, \mathbf{Z}_2\}^T$. The iterative formula is

$$\mathbf{x}^{(k+1)} = \mathbf{x}^{(k)} - \mathbf{f}_x(\mathbf{x}^{(k)})^{-1}\,\mathbf{f}(\mathbf{x}^{(k)}) \tag{4.5.7}$$

where $\mathbf{f}_x$ is the Jacobian matrix

$$\mathbf{f}_x = \begin{bmatrix} \mathbf{I} - \tfrac{h}{2}\boldsymbol{\Phi}_{qq}^T(\mathbf{Q}_2)\,\mathbf{Z}_2' & -\tfrac{h}{2}\boldsymbol{\Phi}_q^T(\mathbf{Q}_2) \\ \boldsymbol{\Phi}_q(\mathbf{Q}_2) & \mathbf{0} \end{bmatrix} \tag{4.5.8}$$
and an initial guess $\mathbf{x}^{(0)} = \{\mathbf{q}_0, \mathbf{z}_0\}$. In order to avoid the calculation of a matrix inverse,
we rewrite Eq. (4.5.7) as

$$\mathbf{f}_x(\mathbf{x}^{(k)})\,\boldsymbol{\delta} = \mathbf{f}(\mathbf{x}^{(k)})$$
$$\mathbf{x}^{(k+1)} = \mathbf{x}^{(k)} - \boldsymbol{\delta} \tag{4.5.9}$$

With this approach, Gaussian elimination can be used to solve the linear system of
equations (the first equation in Eq. (4.5.9)), which greatly improves the computational
efficiency.
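A generic sketch of this inverse-free Newton iteration follows; the circle-and-line system below is a toy stand-in for Eq. (4.5.5), not a thesis example:

```python
import numpy as np

def newton_solve(f, jac, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson per Eq. (4.5.9): solve f_x(x^k) delta = f(x^k)
    by Gaussian elimination, then x^{k+1} = x^k - delta.
    No matrix inverse is ever formed."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        delta = np.linalg.solve(jac(x), fx)  # LU-based linear solve
        x = x - delta
    return x

# Toy 2x2 nonlinear system: unit circle intersected with the line x = y.
f = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
jac = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])
root = newton_solve(f, jac, [1.0, 0.0])
print(np.round(root, 6))  # [0.707107 0.707107]
```

`np.linalg.solve` performs exactly the Gaussian-elimination step described above, so the cost per iteration is one LU factorization rather than a full matrix inversion.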
4.6 Optimization
In general, given a desired position of a redundant system, inverse kinematics
yields infinitely many configurations, and one of these solutions is typically selected.
Given a path and the barrier information of the system, we first use the method described
in section 4.3 to transform the problem into a DAE system of equations, then use the
Runge-Kutta method to find an initial configuration that will admit a kinematically-smooth
trajectory. However, due to the redundancy of the human arm, there will still be many
feasible initial configurations. Choosing the best initial configuration is itself an
optimization problem. Our purpose thus becomes to find the best start configuration, not only to
simulate smooth motion, but also to minimize a cost function during the movement of the
arm. The concept of optimizing a given cost function towards calculating an appropriate
solution in a computationally effective manner was addressed by Abdel-Malek et al.
(2001).
To simplify the optimization process, a number of points on the path are chosen at which
the cost function is evaluated. Among the feasible initial configurations that admit
kinematically-smooth motion, the solution with the minimum cost is readily
identified.
4.7 Examples
In this section, an introductory example of a planar 3-DOF human arm model is
presented followed by an example of a spatial 4-DOF manipulator to demonstrate the
concepts of determining a solution for a kinematically-smooth trajectory.
4.7.1 A Planar 3-DOF Human Arm Model
Figure 4.1 A planar 3-DOF model of human arm
Consider the planar 3-DOF model of a human arm shown in Figure 4.1, which is
comprised of three links and three revolute joints. Inequality constraints are imposed on
the three joints as follows:

$$-\frac{\pi}{3} \le q_i \le \frac{\pi}{3}, \quad i = 1, 2, 3$$

We will follow the position of a point on the hand with the following position
vector
$$\boldsymbol{\Phi}(\mathbf{q}) = \begin{bmatrix} 4\cos(q_1) + 2\cos(q_1+q_2) + \cos(q_1+q_2+q_3) \\ 4\sin(q_1) + 2\sin(q_1+q_2) + \sin(q_1+q_2+q_3) \end{bmatrix} \tag{4.7.1}$$
where $\mathbf{q} = [q_1, q_2, q_3]^T$. The Jacobian matrix $\boldsymbol{\Phi}_q$ is

$$\boldsymbol{\Phi}_q(\mathbf{q}) = \begin{bmatrix} -4\sin q_1 - 2\sin(q_1+q_2) - \sin(q_1+q_2+q_3) & -2\sin(q_1+q_2) - \sin(q_1+q_2+q_3) & -\sin(q_1+q_2+q_3) \\ 4\cos q_1 + 2\cos(q_1+q_2) + \cos(q_1+q_2+q_3) & 2\cos(q_1+q_2) + \cos(q_1+q_2+q_3) & \cos(q_1+q_2+q_3) \end{bmatrix} \tag{4.7.2}$$
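The kinematics of Eqs. (4.7.1) and (4.7.2) are straightforward to encode; the finite-difference comparison below is an independent sanity check of the analytical Jacobian, not part of the thesis procedure:

```python
import numpy as np

def phi(q):
    """End-effector position of the planar 3-DOF arm, Eq. (4.7.1)."""
    q1, q12, q123 = q[0], q[0] + q[1], q[0] + q[1] + q[2]
    return np.array([4*np.cos(q1) + 2*np.cos(q12) + np.cos(q123),
                     4*np.sin(q1) + 2*np.sin(q12) + np.sin(q123)])

def phi_q(q):
    """Analytical Jacobian, Eq. (4.7.2)."""
    q1, q12, q123 = q[0], q[0] + q[1], q[0] + q[1] + q[2]
    s1, s12, s123 = np.sin(q1), np.sin(q12), np.sin(q123)
    c1, c12, c123 = np.cos(q1), np.cos(q12), np.cos(q123)
    return np.array([[-4*s1 - 2*s12 - s123, -2*s12 - s123, -s123],
                     [ 4*c1 + 2*c12 + c123,  2*c12 + c123,  c123]])

# Spot-check the Jacobian against central finite differences at the
# initial configuration reported later in section 4.7.1.
q = np.array([0.8030, 0.7816, -0.5946])
h, J_fd = 1e-6, np.zeros((2, 3))
for j in range(3):
    e = np.zeros(3); e[j] = h
    J_fd[:, j] = (phi(q + e) - phi(q - e)) / (2*h)
print(np.allclose(phi_q(q), J_fd, atol=1e-6))  # True
```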
As first identified by Yeh (1996), there are three types of singular behavior used
to delineate the barriers to motion of such a kinematic system: (1) Jacobian singularities;
(2) Joint limit singularities; (3) Coupled singular behavior. The singularity sets for this
example were obtained by Yeh (1996) using closed form methods (introduced in section
4.1) and by Haug et al. (1995) independently, using numerical marching methods, and are
shown in Figure 4.2. From these sets, non-crossable surfaces (curves in this case) are
identified (Yeh, 1996; Abdel-Malek and Yeh, 1997) and one of them is shown in Figure
4.3.
Indeed, singular curves can be divided into crossable and non-crossable curves.
A crossable curve indicates that the end-effector in the corresponding singular
configuration can still admit motion across the curve, i.e., there is no need to go back and
change the starting configuration. On the contrary, when the end-effector reaches a
non-crossable curve in the specific singular configuration, the motion will be interrupted
and the system will have to switch to another configuration and attempt the crossing
again. Figure 4.3 shows a non-crossable curve and a path (from A to B) which the
end-effector must follow. As shown by the arrow, the non-crossable curve $C_s$ can only be
traversed in one direction. Therefore, when the arm starts from point A following the
path to point B, it will be unable to cross this curve if it assumes the singular
configuration at the intersection point C, which renders the solution inadmissible.
To calculate an admissible initial configuration at point A, a procedure will be
developed below. More generally, for any given path trajectory, the path must first be
evaluated to determine if it intersects with any non-crossable singular curve. If such an
intersection is encountered, the non-crossable singular configuration at the intersection
point will be determined. For the 3-DOF example, the path trajectory is specified as

$$\mathbf{p}(t) = \begin{bmatrix} t \\ \sqrt{3}\,t \end{bmatrix} \tag{4.7.3}$$
Figure 4.2 Singular curves

Figure 4.3 A non-crossable singular curve and a path
The coordinates of the initial point A and end point B on the path are specified as
(3.3, 5.7) and (2.9, 5.0), respectively. The path intersects with a non-crossable curve

$$C_s: \; q_2 = \pi/3, \; q_3 = -\pi/3, \; -\pi/3 \le q_1 \le \pi/3$$

The singular configuration at the intersection point C is readily determined as

$$\mathbf{q}^s = [0.7662, \; \pi/3, \; -\pi/3]^T$$
At the initial point, the optimization algorithm is used to find an inverse kinematic
solution for the initial configuration $\mathbf{q}_0$. Subsequently, the Runge-Kutta method for DAEs of
index 2 described in section 4.4 is applied to perform the constrained integration and
calculate the configuration $\mathbf{q}^c$ of the arm at point C. This configuration is compared with
the singular configuration $\mathbf{q}^s$ to check whether

$$\left\|\mathbf{q}^c - \mathbf{q}^s\right\|_2 < \varepsilon \tag{4.7.4}$$

where $\varepsilon$ is a small positive number. If inequality (4.7.4) holds, we return to the
initial point A to calculate a different initial configuration, and repeat the procedure until
a satisfactory initial configuration is found. During the integration, if any
configuration on the path violates the joint limit constraints, the algorithm is interrupted
immediately and the loop is repeated to re-integrate.
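The trial-and-repeat logic just described can be sketched as follows. Here `integrate_to_c` and the candidate list are placeholders standing in for the constrained Runge-Kutta integration and the inverse-kinematics sampler; this is an outline of the control flow, not code from the thesis:

```python
import numpy as np

def find_smooth_start(q_s, candidates, integrate_to_c, eps=1e-3):
    """Return the first candidate start configuration whose constrained
    integration stays within the joint limits and does NOT reach the
    singular configuration q_s at the intersection point (Eq. (4.7.4))."""
    for q0 in candidates:
        q_c, limits_ok = integrate_to_c(q0)  # configuration at point C
        if limits_ok and np.linalg.norm(q_c - q_s) >= eps:
            return q0
    return None

# Demo with a stub integrator: candidate 0 violates a joint limit,
# candidate 1 lands on the singular configuration, candidate 2 succeeds.
q_s = np.array([0.7662, np.pi / 3, -np.pi / 3])
outcomes = {0: (np.zeros(3), False),
            1: (q_s.copy(), True),
            2: (np.array([0.7556, 0.4152, 1.0037]), True)}
print(find_smooth_start(q_s, [0, 1, 2], lambda i: outcomes[i]))  # 2
```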
The second-order Jacobian required in the iteration procedure of Eq. (4.5.8) is
obtained by stacking the partial derivatives of $\boldsymbol{\Phi}_q$ with respect to each joint variable.
Writing $s_1 = \sin q_1$, $s_{12} = \sin(q_1+q_2)$, $s_{123} = \sin(q_1+q_2+q_3)$, and similarly for cosines,

$$\frac{\partial \boldsymbol{\Phi}_q}{\partial q_1} = \begin{bmatrix} -4c_1 - 2c_{12} - c_{123} & -2c_{12} - c_{123} & -c_{123} \\ -4s_1 - 2s_{12} - s_{123} & -2s_{12} - s_{123} & -s_{123} \end{bmatrix}$$

$$\frac{\partial \boldsymbol{\Phi}_q}{\partial q_2} = \begin{bmatrix} -2c_{12} - c_{123} & -2c_{12} - c_{123} & -c_{123} \\ -2s_{12} - s_{123} & -2s_{12} - s_{123} & -s_{123} \end{bmatrix}$$

$$\frac{\partial \boldsymbol{\Phi}_q}{\partial q_3} = \begin{bmatrix} -c_{123} & -c_{123} & -c_{123} \\ -s_{123} & -s_{123} & -s_{123} \end{bmatrix} \tag{4.7.5}$$
For $\mathbf{Z}_2 = [a, b]^T$, $\mathbf{Z}_2'$ in Eq. (4.5.8) is

$$\mathbf{Z}_2' = \begin{bmatrix} a & b & 0 & 0 & 0 & 0 \\ 0 & 0 & a & b & 0 & 0 \\ 0 & 0 & 0 & 0 & a & b \end{bmatrix}^T$$
At the initial point A, $t = 3.3$ and $\dot{\mathbf{p}} = [1, \sqrt{3}]^T$; $\mathbf{q}_0$ is calculated to be
$[0.8030, 0.7816, -0.5946]^T$, which, when substituted into Eq. (4.3.5), yields
$\mathbf{z}_0 = [1.1276, 2.1970]^T$. The step length $h$ is selected to be $-0.0005$. At the following step,
we calculate $t_1 = t_0 + h = 3.3 - 0.0005 = 3.2995$, and the integration yields
$\mathbf{q}_1 = \mathbf{Q}_2 = [0.8036, 0.7803, -0.5945]^T$ and $\mathbf{Z}_2 = [-3.9613, -7.7424]^T$. We can set
$\mathbf{z}_1 = \mathbf{Z}_2$, which may yield larger errors in $\mathbf{z}$ but has reduced influence on the accuracy
of $\mathbf{q}$, as will be shown in the final results.
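The consistency computation for $\mathbf{z}_0$ can be reproduced numerically from the Jacobian of Eq. (4.7.2); the small differences from the quoted $[1.1276, 2.1970]^T$ come from $\mathbf{q}_0$ being rounded to four decimals:

```python
import numpy as np

q0 = np.array([0.8030, 0.7816, -0.5946])
p_dot = np.array([1.0, np.sqrt(3.0)])  # derivative of the path p(t) = [t, sqrt(3) t]

q1, q12, q123 = q0[0], q0[0] + q0[1], q0[0] + q0[1] + q0[2]
Phi_q = np.array([
    [-4*np.sin(q1) - 2*np.sin(q12) - np.sin(q123), -2*np.sin(q12) - np.sin(q123), -np.sin(q123)],
    [ 4*np.cos(q1) + 2*np.cos(q12) + np.cos(q123),  2*np.cos(q12) + np.cos(q123),  np.cos(q123)]])

# From Eq. (4.3.5): z_0 solves (Phi_q Phi_q^T) z_0 = p_dot.
z0 = np.linalg.solve(Phi_q @ Phi_q.T, p_dot)
print(np.round(z0, 3))  # [1.128 2.197]
```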
To make $\mathbf{z}$ more accurate, Eq. (4.3.5) is used to solve for $\mathbf{z}_1$ at each step after $\mathbf{q}_1$ has
been calculated; we call this the correction of $\mathbf{z}$. Substituting $\mathbf{q}_1$ into Eq. (4.3.5), we
calculate $\mathbf{z}_1 = [1.1321, 2.2054]^T$. Letting $t_0 = t_1$, $\mathbf{q}_0 = \mathbf{q}_1$, $\mathbf{z}_0 = \mathbf{z}_1$, we proceed to the next
step, where $t_1 = t_0 + h = 3.2995 - 0.0005 = 3.2990$. Upon the end of this second step, we
have $\mathbf{q}_1 = \mathbf{Q}_2 = [0.8032, 0.7814, -0.5946]^T$ and $\mathbf{Z}_2 = [1.1284, 2.1987]^T$. The procedure
is repeated through each step until the iteration comes to step 314, when
$\mathbf{q}_1 = \mathbf{Q}_2 = [0.6968, 1.0478, -0.6088]^T$, where $q_2$ has violated the joint limit of $\pi/3$;
therefore, the integration returns and is initiated with a different initial configuration.
The second trial assumes $\mathbf{q}_0 = [0.8516, 0.2157, 0.8078]^T$ and integrates until the
intersection point C with $\mathbf{q}^c = [0.7556, 0.4152, 1.0037]^T$. Since $\mathbf{q}^c$ is far from the
singular configuration $\mathbf{q}^s$, the initial configuration that admits a kinematically-smooth
trajectory is calculated to be $[0.8516, 0.2157, 0.8078]^T$.
Having solved the problem by the above method, an optimization-based method is
used to find the best initial configuration that will admit a kinematically-smooth motion
for the arm with minimum discomfort. For this example, we define a simple discomfort
cost function that evaluates the displacement of each joint away from its neutral position,
such that

$$\text{discomfort} = \left\|\mathbf{q} - \mathbf{q}^N\right\|^2 \tag{4.6.6}$$

where $\mathbf{q}^N$ represents the vector of neutral positions, chosen here to be $[0, 0, 0]^T$.
To simplify the calculations, only the initial configuration is considered
in calculating the discomfort. This is a natural choice since the arm moves
continuously from the initial to the end configuration, which means the value of the cost
function at the initial configuration roughly reflects the values along the entire path.
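Assuming the discomfort measure is the squared Euclidean distance from the neutral posture (this assumption reproduces the values later reported in Table 4.6), its evaluation is a one-liner:

```python
import numpy as np

q_N = np.zeros(3)  # neutral posture [0, 0, 0]^T

def discomfort(q):
    """Squared Euclidean distance of q from the neutral posture q_N."""
    return float(np.sum((np.asarray(q) - q_N) ** 2))

print(round(discomfort([0.8516, 0.2157, 0.8078]), 4))   # 1.4243 (start, no optimization)
print(round(discomfort([0.7662, 1.0472, -1.0472]), 4))  # 2.7803 (singular configuration)
```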
Tables 4.1 and 4.2 present the configuration $\mathbf{q}$, $\mathbf{Z}_2$, and $\mathbf{z}$ as the arm moves
through one unsuccessful trial and the final successful trial of the example of Figure 4.3,
without optimization. Figures 4.4 and 4.5 illustrate the corresponding movements of the arm.

For both cases, it can be observed that $\mathbf{q}$ changes continuously, which means
that the arm admits kinematically-smooth motion. Moreover, comparing the real
position with the point on the path shows that the movement follows the given path exactly.
However, during the unsuccessful trial, when the process reaches step 314, the joint variable
$q_2$ violates its joint limit at 1.0472. This halts the integration process and
causes the loop to iterate back in search of a feasible initial posture. In the successful
trial, the joint variables change slowly and stay within the joint limits, and the process
continues until an appropriate initial posture is calculated.
Step | t | q (Q2) | Z2 | z | Real Position | Point on Path
0 | 3.3000 | 0.8030 0.7816 -0.5946 | -- | 1.1276 2.1970 | (3.2990,5.7137) | (3.3000,5.7158)
1 | 3.2995 | 0.8036 0.7803 -0.5945 | -3.9613 -7.7424 | 1.1321 2.2054 | (3.2995,5.7149) | (3.2995,5.7149)
2 | 3.2990 | 0.8032 0.7814 -0.5946 | 1.1284 2.1987 | 1.1284 2.1987 | (3.2990,5.7140) | (3.2990,5.7140)
3 | 3.2985 | 0.8028 0.7824 -0.5947 | 1.1247 2.1920 | 1.1247 2.1920 | (3.2985,5.7132) | (3.2985,5.7132)
50 | 3.2750 | 0.7844 0.8290 -0.6000 | 0.9727 1.9163 | 0.9727 1.9163 | (3.2750,5.6725) | (3.2750,5.6725)
100 | 3.2500 | 0.7660 0.8754 -0.6039 | 0.8475 1.6883 | 0.8475 1.6883 | (3.2500,5.6292) | (3.2500,5.6292)
150 | 3.2250 | 0.7485 0.9192 -0.6065 | 0.7487 1.5076 | 0.7487 1.5076 | (3.2250,5.5859) | (3.2250,5.5859)
200 | 3.2000 | 0.7320 0.9606 -0.6081 | 0.6691 1.3613 | 0.6691 1.3613 | (3.2000,5.5426) | (3.2000,5.5426)
250 | 3.1750 | 0.7161 1.0000 -0.6088 | 0.6036 1.2406 | 0.6036 1.2406 | (3.1750,5.4993) | (3.1750,5.4993)
300 | 3.1500 | 0.7009 1.0376 -0.6089 | 0.5489 1.1395 | 0.5489 1.1395 | (3.1500,5.4560) | (3.1500,5.4560)
313 | 3.1435 | 0.6971 1.0471 -0.6088 | 0.5361 1.1159 | 0.5361 1.1159 | (3.1435,5.4447) | (3.1435,5.4447)
314 | 3.1430 | 0.6968 1.0478 -0.6088 | 0.5352 1.1141 | 0.5352 1.1141 | (3.1430,5.4438) | (3.1430,5.4438)

Table 4.1 Traced results for one unsuccessful trial
Step | t | q (Q2) | Z2 | z | Real Position | Point on Path
0 | 3.3000 | 0.8516 0.2157 0.8078 | -- | 0.8982 1.7536 | (3.3007,5.7151) | (3.3000,5.7158)
1 | 3.2995 | 0.8515 0.2162 0.8083 | 0.4802 0.8513 | 0.8961 1.7502 | (3.2995,5.7149) | (3.2995,5.7149)
2 | 3.2990 | 0.8511 0.2168 0.8090 | 0.8937 1.7457 | 0.8937 1.7457 | (3.2990,5.7140) | (3.2990,5.7140)
3 | 3.2985 | 0.8508 0.2174 0.8097 | 0.8912 1.7412 | 0.8912 1.7412 | (3.2985,5.7132) | (3.2985,5.7132)
50 | 3.2750 | 0.8360 0.2465 0.8408 | 0.7898 1.5555 | 0.7898 1.5555 | (3.2750,5.6725) | (3.2750,5.6725)
100 | 3.2500 | 0.8210 0.2763 0.8717 | 0.7048 1.3991 | 0.7048 1.3991 | (3.2500,5.6292) | (3.2500,5.6292)
150 | 3.2250 | 0.8069 0.3051 0.9006 | 0.6366 1.2731 | 0.6366 1.2731 | (3.2250,5.5859) | (3.2250,5.5859)
200 | 3.2000 | 0.7935 0.3330 0.9278 | 0.5808 1.1696 | 0.5808 1.1696 | (3.2000,5.5426) | (3.2000,5.5426)
250 | 3.1750 | 0.7807 0.3602 0.9535 | 0.5343 1.0830 | 0.5343 1.0830 | (3.1750,5.4993) | (3.1750,5.4993)
300 | 3.1500 | 0.7685 0.3867 0.9780 | 0.4951 1.0096 | 0.4951 1.0096 | (3.1500,5.4560) | (3.1500,5.4560)
350 | 3.1250 | 0.7568 0.4126 1.0014 | 0.4615 0.9466 | 0.4615 0.9466 | (3.1250,5.4127) | (3.1250,5.4127)
355 | 3.1225 | 0.7556 0.4152 1.0037 | 0.4585 0.9408 | 0.4585 0.9408 | (3.1225,5.4083) | (3.1225,5.4083)

Table 4.2 Traced results for the successful trial
Figure 4.4 Movement of the arm for the unsuccessful trial
Figure 4.5 Movement of the arm for the successful trial
By correcting $\mathbf{z}$ at each step, $\mathbf{Z}_2$ converges to $\mathbf{z}$ very quickly; indeed, both
Tables 4.1 and 4.2 show that $\mathbf{Z}_2$ begins to converge to $\mathbf{z}$ from the second step and remains
converged thereafter. In fact, the numerical results suggest that the
correction of $\mathbf{z}$ has no influence on the accuracy of the final result of $\mathbf{q}$. Table 4.3 lists
the intermediate results for the successful trial when no correction of $\mathbf{z}$ is made at each
step; $\mathbf{z}$ is simply set equal to $\mathbf{Z}_2$. Although the values of $\mathbf{z}$ in Tables 4.2 and 4.3 differ
substantially during the integration process, $\mathbf{q}$ from both methods is exactly the
same. Therefore, it is possible to solve the problem without correcting $\mathbf{z}$ if
computational time is an issue. Table 4.4 summarizes the final results. The
calculated start configuration is $[0.8516, 0.2157, 0.8078]^T$. When the arm comes to the
intersection point, it has the configuration $[0.7556, 0.4152, 1.0037]^T$, which is different
from the singular configuration $[0.7662, 1.0472, -1.0472]^T$, so the start configuration is
accepted and the procedure terminates.
Step | t | q (Q2) | Z2 (z) | Real Position | Point on Path
0 | 3.3000 | 0.8516 0.2157 0.8078 | 0.8982 1.7536 | (3.3007,5.7151) | (3.3000,5.7158)
1 | 3.2995 | 0.8515 0.2162 0.8083 | 0.4802 0.8513 | (3.2995,5.7149) | (3.2995,5.7149)
2 | 3.2990 | 0.8511 0.2168 0.8090 | 1.3089 2.6434 | (3.2990,5.7140) | (3.2990,5.7140)
3 | 3.2985 | 0.8508 0.2174 0.8097 | 0.4767 0.8446 | (3.2985,5.7132) | (3.2985,5.7132)
50 | 3.2750 | 0.8360 0.2465 0.8408 | 1.1755 2.4042 | (3.2750,5.6725) | (3.2750,5.6725)
100 | 3.2500 | 0.8210 0.2763 0.8717 | 1.0645 2.2052 | (3.2500,5.6292) | (3.2500,5.6292)
150 | 3.2250 | 0.8069 0.3051 0.9006 | 0.9743 2.0432 | (3.2250,5.5859) | (3.2250,5.5859)
200 | 3.2000 | 0.7935 0.3330 0.9278 | 0.8993 1.9086 | (3.2000,5.5426) | (3.2000,5.5426)
250 | 3.1750 | 0.7807 0.3602 0.9535 | 0.8360 1.7951 | (3.1750,5.4993) | (3.1750,5.4993)
300 | 3.1500 | 0.7685 0.3867 0.9780 | 0.7819 1.6979 | (3.1500,5.4560) | (3.1500,5.4560)
350 | 3.1250 | 0.7568 0.4126 1.0014 | 0.7349 1.6139 | (3.1250,5.4127) | (3.1250,5.4127)
355 | 3.1225 | 0.7556 0.4152 1.0037 | 0.1863 0.2755 | (3.1225,5.4083) | (3.1225,5.4083)

Table 4.3 Traced results for the successful trial without z correction
Table 4.5 presents results obtained from the optimization algorithm, where it is
shown that the properties of the original method are maintained, i.e., smooth movement,
convergence of $\mathbf{Z}_2$ to $\mathbf{z}$, and strict adherence to the path. Table 4.6 lists the results
obtained from the first method without optimization and from the modified method with
optimization. The comparison shows that the optimization algorithm calculates better
initial configurations with less deviation from the point on the path.
Moreover, the discomfort cost function evaluated at both the initial and intersection
positions is much smaller with optimization than without. It can therefore
be concluded that the cost function for the motion along the entire path, starting from the
configuration calculated by optimization, would be less than that starting from the
configuration calculated using the first method. Optimization indeed helps determine an
optimum solution that admits a kinematically-smooth motion while minimizing the specified
cost function. From Table 4.6 it is also observed that the singular configuration has the
largest value of the cost function, as expected, since this singular configuration
actually has two joint variables at their limits, which contributes significantly to the value
of the cost function. Figure 4.6 shows the singular configuration at the intersection point
and the neutral configuration used for calculating the cost function. Figure 4.7 shows the
movement of the arm from the results obtained with optimization. Figure 4.8 illustrates
the configurations calculated from the original method without optimization and from the
modified method with optimization.
 | t | q | Real Position | Point on Path
Start | 3.3000 | 0.8516 0.2157 0.8078 | (3.3007,5.7151) | (3.3000,5.7158)
Intersection Point | 3.1225 | 0.7556 0.4152 1.0037 | (3.1225,5.4083) | (3.1225,5.4083)
Singular Configuration | 3.1225 | 0.7662 1.0472 -1.0472 | (3.1225,5.4083) | (3.1225,5.4083)

Table 4.4 Summarized final result without optimization
During the optimization process, only the initial configuration is considered in
calculating the cost function, and this has a positive implication on the value of the
cost function along the entire path trajectory. However, this is only true for very short
path trajectories. Generally, the configurations at several critical points on the
path, at least, need to be used to measure the overall cost function. Efficiency has not been
a major consideration here, since the problem involves a very simple model with only 3
variables. For both cases, with and without optimization, the computational time is
limited to a few seconds. However, for a system with many degrees of freedom, computational
complexity becomes a critical problem that must be addressed. Fortunately, the potential for
improving the efficiency of this method is high. While we have used the standard
Newton method to solve Eq. (4.5.5), it would have been possible to use the simplified
Newton method for most of the nonlinear equations. It has been demonstrated that
the simplified Newton method (Stewart, 1996) can solve this kind of nonlinear equations
and yield acceptable results with rather good convergence. In the simplified Newton
method, $\mathbf{x}^{(k)}$ in $\mathbf{f}_x(\mathbf{x}^{(k)})^{-1}$ of the iteration formula (4.5.7) is replaced by $\mathbf{x}^{(0)}$, which
means that the Jacobian matrix needs to be calculated only once, and consequently the
computational time can be reduced.
Step | t | q (Q2) | Z2 | z | Real Position | Point on Path
0 | 3.3000 | 0.7615 0.5989 0.2234 | -- | 0.6838 1.3966 | (3.3005,5.7155) | (3.3000,5.7158)
1 | 3.2995 | 0.7611 0.5996 0.2238 | 0.6986 1.4172 | 0.6820 1.3933 | (3.2995,5.7149) | (3.2995,5.7149)
2 | 3.2990 | 0.7608 0.6004 0.2241 | 0.6802 1.3867 | 0.6802 1.3900 | (3.2990,5.7140) | (3.2990,5.7140)
3 | 3.2985 | 0.7604 0.6011 0.2245 | 0.6784 1.7412 | 0.6784 1.3867 | (3.2985,5.7132) | (3.2985,5.7132)
50 | 3.2750 | 0.7445 0.6343 0.2405 | 0.6049 1.2490 | 0.6049 1.2490 | (3.2750,5.6725) | (3.2750,5.6725)
100 | 3.2500 | 0.7285 0.6678 0.2567 | 0.5421 1.1308 | 0.5421 1.1308 | (3.2500,5.6292) | (3.2500,5.6292)
150 | 3.2250 | 0.7133 0.6997 0.2722 | 0.4909 1.0342 | 0.4909 1.0342 | (3.2250,5.5859) | (3.2250,5.5859)
200 | 3.2000 | 0.6988 0.7304 0.2870 | 0.4484 0.9536 | 0.4484 0.9536 | (3.2000,5.5426) | (3.2000,5.5426)
250 | 3.1750 | 0.6849 0.7598 0.3013 | 0.4125 0.8855 | 0.4125 0.8855 | (3.1750,5.4993) | (3.1750,5.4993)
300 | 3.1500 | 0.6716 0.7882 0.3151 | 0.3819 0.8271 | 0.3819 0.8271 | (3.1500,5.4560) | (3.1500,5.4560)
350 | 3.1250 | 0.6588 0.8157 0.3284 | 0.3554 0.7764 | 0.3554 0.7764 | (3.1250,5.4127) | (3.1250,5.4127)
355 | 3.1225 | 0.6576 0.8184 0.3297 | 0.3530 0.7717 | 0.3530 0.7717 | (3.1225,5.4083) | (3.1225,5.4083)

Table 4.5 Traced results with optimization
 | t | q | Real Position | Point on Path | Discomfort
Start (without optimization) | 3.3000 | 0.8516 0.2157 0.8078 | (3.3007,5.7151) | (3.3000,5.7158) | 1.4243
Start (with optimization) | 3.3000 | 0.7614 0.5982 0.2245 | (3.3005,5.7155) | (3.3000,5.7158) | 0.9883
Intersection Point (without optimization) | 3.1225 | 0.7556 0.4152 1.0037 | (3.1225,5.4083) | (3.1225,5.4083) | 1.7507
Intersection Point (with optimization) | 3.1225 | 0.6576 0.8184 0.3297 | (3.1225,5.4083) | (3.1225,5.4083) | 1.2109
Singular Configuration | 3.1225 | 0.7662 1.0472 -1.0472 | (3.1225,5.4083) | (3.1225,5.4083) | 2.7803

Table 4.6 Comparison of results without and with optimization
Figure 4.6 Singular configuration and neutral configuration
Figure 4.7 Movement of the arm obtained with optimization
Figure 4.8 Configurations obtained without (left) and with optimization (right)
4.7.2 A Spatial 4-DOF Manipulator
The proposed method can also be applied to trajectory planning for robots,
especially welding or surgical robots that require smooth motion. A spatial 4-DOF
manipulator, depicted in Figure 4.9, will be used to demonstrate the proposed
concept of a kinematically-smooth trajectory in 3D space. The manipulator, whose DH
table is shown in the figure, has two revolute and two prismatic joints. The imposed joint limits are

$$0 \le q_1 \le 2\pi$$
$$20 \le q_2 \le 50$$
$$-\pi/4 \le q_3 \le \pi$$
$$10 \le q_4 \le 20$$
Figure 4.9 A spatial 4-DOF RPRP manipulator
For the joint frames selected in Figure 4.9, the position vector of the end-effector
is formulated as

$$\mathbf{x} = \boldsymbol{\Phi}(\mathbf{q}) = \begin{bmatrix} q_4\cos q_1\cos q_3 + 30\cos q_1 \\ q_4\sin q_1\cos q_3 + 30\sin q_1 \\ q_4\sin q_3 + q_2 \end{bmatrix} \tag{4.7.6}$$
The Jacobian matrix $\boldsymbol{\Phi}_q$ is derived as

$$\boldsymbol{\Phi}_q(\mathbf{q}) = \begin{bmatrix} -q_4\sin q_1\cos q_3 - 30\sin q_1 & 0 & -q_4\cos q_1\sin q_3 & \cos q_1\cos q_3 \\ q_4\cos q_1\cos q_3 + 30\cos q_1 & 0 & -q_4\sin q_1\sin q_3 & \sin q_1\cos q_3 \\ 0 & 1 & q_4\cos q_3 & \sin q_3 \end{bmatrix} \tag{4.7.7}$$
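As with the planar example, Eqs. (4.7.6) and (4.7.7) are easy to encode and cross-check numerically; the snippet below also verifies that the singular configuration reported later in this section maps onto the intersection point of the path:

```python
import numpy as np

def phi(q):
    """End-effector position of the 4-DOF RPRP manipulator, Eq. (4.7.6)."""
    q1, q2, q3, q4 = q
    return np.array([q4*np.cos(q1)*np.cos(q3) + 30*np.cos(q1),
                     q4*np.sin(q1)*np.cos(q3) + 30*np.sin(q1),
                     q4*np.sin(q3) + q2])

def phi_q(q):
    """Analytical Jacobian, Eq. (4.7.7)."""
    q1, q2, q3, q4 = q
    s1, c1, s3, c3 = np.sin(q1), np.cos(q1), np.sin(q3), np.cos(q3)
    return np.array([[-q4*s1*c3 - 30*s1, 0, -q4*c1*s3, c1*c3],
                     [ q4*c1*c3 + 30*c1, 0, -q4*s1*s3, s1*c3],
                     [ 0,                1,  q4*c3,    s3  ]])

# The singular configuration at point C lands on the path point (35.7, 10, 35).
q_s = np.array([0.2731, 42.071, -np.pi/4, 10.0])
print(np.round(phi(q_s), 1))  # approximately [35.7 10. 35.]
```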
Figure 4.10 Singular surfaces

Figure 4.11 Crossable and non-crossable surfaces
Figure 4.12 Path AB and a non-crossable surface S
The singular surfaces and non-crossable surfaces were already identified by Yeh
(1996). Figure 4.10 (a) shows a view of the workspace, and Figure 4.10 (b) shows a half
section of the workspace cut by an imaginary plane. Figure 4.11 (a) depicts the
crossable (dotted) and non-crossable (solid) surfaces. Admissible normal movements of
the non-crossable surfaces are indicated in Figure 4.11 (b).
For the purpose of demonstration, a path that intersects a non-crossable
surface is chosen. The path trajectory AB (Figure 4.12) within the workspace of the
manipulator is defined by

$$\mathbf{P}(t) = \begin{bmatrix} 45 - t \\ 10 \\ 35 \end{bmatrix} \tag{4.7.8}$$
where $t = 0$ at the initial point A and $t = 15$ at the end point B. The coordinates of A and B are
specified as (45, 10, 35) and (30, 10, 35), respectively. The path intersects a non-crossable
surface (Figure 4.12)

$$S: \; q_3 = -\pi/4, \; q_4 = 10, \; 0 \le q_1 \le 2\pi, \; 20 \le q_2 \le 50$$

at the point (35.7, 10, 35), and the singular configuration at the intersection point C (Figure
4.13) is found to be

$$\mathbf{q}^s = [0.2731, \; 42.071, \; -\pi/4, \; 10]^T$$
The second-order Jacobian is obtained by stacking the partial derivatives of
$\boldsymbol{\Phi}_q$ with respect to each joint variable. Writing $s_i = \sin q_i$ and $c_i = \cos q_i$,

$$\frac{\partial \boldsymbol{\Phi}_q}{\partial q_1} = \begin{bmatrix} -q_4 c_1 c_3 - 30 c_1 & 0 & q_4 s_1 s_3 & -s_1 c_3 \\ -q_4 s_1 c_3 - 30 s_1 & 0 & -q_4 c_1 s_3 & c_1 c_3 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \qquad \frac{\partial \boldsymbol{\Phi}_q}{\partial q_2} = \mathbf{0}$$

$$\frac{\partial \boldsymbol{\Phi}_q}{\partial q_3} = \begin{bmatrix} q_4 s_1 s_3 & 0 & -q_4 c_1 c_3 & -c_1 s_3 \\ -q_4 c_1 s_3 & 0 & -q_4 s_1 c_3 & -s_1 s_3 \\ 0 & 0 & -q_4 s_3 & c_3 \end{bmatrix}, \qquad \frac{\partial \boldsymbol{\Phi}_q}{\partial q_4} = \begin{bmatrix} -s_1 c_3 & 0 & -c_1 s_3 & 0 \\ c_1 c_3 & 0 & -s_1 s_3 & 0 \\ 0 & 0 & c_3 & 0 \end{bmatrix} \tag{4.7.9}$$
For $\mathbf{Z}_2 = [a, b, c]^T$ in Eq. (4.5.1), $\mathbf{Z}_2'$ in Eq. (4.5.8) is

$$\mathbf{Z}_2' = \begin{bmatrix} a & b & c & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & a & b & c & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & a & b & c & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & a & b & c \end{bmatrix}^T$$
Figure 4.13 Singular configuration at intersection point
The problem was solved using the Runge-Kutta method, with the standard
Newton method used to solve Eq. (4.5.5); i.e., the Jacobian matrix required for the
numerical iteration was obtained from Eq. (4.5.8), and the step size $h$ was chosen to be 0.005.
Before a successful plan was found, several unsuccessful attempts were made.
Figure 4.14 shows snapshots of the spatial manipulator during an
unsuccessful attempt. The configuration at time $t = 7.79$ was planned to be
$[0.2625, 43.5343, -0.7856, 12.0665]^T$, where joint 3 violates its joint limit $[-0.7854,
3.1416]$ (Figure 4.14 (c)).
Figure 4.14 Snapshots of the spatial manipulator during an unsuccessful planning
The start configuration that allows a smooth trajectory was found to be
$[0.2187, 28.2049, 0.3994, 17.4686]^T$ and is shown in Figure 4.15 (a). Figures 4.15 (b)-(d)
give the intermediate configurations during the motion. Figure 4.15 (e) shows the
configuration when the manipulator reaches the intersection point, found to be
$[0.2731, 25.2098, 0.9448, 12.0814]^T$, which is far from the singular configuration
$[0.2731, 42.071, -0.7854, 10]^T$ shown in Figure 4.13.
Figure 4.15 Snapshots of the spatial manipulator during movement of a successful planning
In order to study the effect of the step size on the final result, numerical experiments
with step sizes $h = 0.0005$, $h = 0.05$, $h = 0.1$, and $h = 0.5$ were conducted; the results
are shown in Table 4.7. The same results were obtained for $h = 0.0005$, $h = 0.005$, $h = 0.05$,
and $h = 0.1$. A different result was obtained for $h = 0.5$, mainly because the large time
interval causes the integration to land on a different intersection time.
Step Size h | Time at Intersection Point t | Configuration at Intersection Point (q) | Real Position
0.0005 | 9.3 | 0.2731 25.2084 0.9451 12.0797 | (35.7,10,35)
0.005 | 9.3 | 0.2731 25.2084 0.9451 12.0797 | (35.7,10,35)
0.05 | 9.3 | 0.2731 25.2084 0.9451 12.0797 | (35.7,10,35)
0.1 | 9.3 | 0.2731 25.2084 0.9451 12.0796 | (35.7,10,35)
0.5 | 9.5 | 0.2746 25.1547 0.9608 12.0119 | (35.5,10,35)

Table 4.7 Comparison of results for different integration step sizes
The correction of $\mathbf{z}$ at each step has the same effect on the final results as in the
planar example; i.e., it affects only the accuracy of $\mathbf{z}$, and the same results for $\mathbf{q}$ can be
obtained even without the correction. It is also found that using the simplified Jacobian,
i.e., replacing $\mathbf{I} - \tfrac{h}{2}\boldsymbol{\Phi}_{qq}^T(\mathbf{Q}_2)\mathbf{Z}_2'$ in Eq. (4.5.8) with $\mathbf{I}$, does not affect the accuracy
of the final results. Furthermore, numerical experiments show that with the modified
Newton method, i.e., replacing $\mathbf{f}_x(\mathbf{x}^{(k)})^{-1}$ in the iteration formula of Eq.
(4.5.7) with $\mathbf{f}_x(\mathbf{x}^{(0)})^{-1}$, the iteration converges to the same results.

The CPU time used for each method was also compared with respect to different
step sizes and is shown in Table 4.8. In Table 4.8, RKCS_Y2_Z denotes the method
with $\mathbf{z}$ correction, RKCS_Y2 the method without $\mathbf{z}$ correction, SN_RKCS_Y2 the
method using the simplified Jacobian without $\mathbf{z}$ correction, and SSN_RKCS_Y2 the
method using both the simplified Jacobian and the modified Newton method without $\mathbf{z}$ correction.
The results suggest that the CPU time decreases as the step size increases, but not
linearly; the rate of decrease drops once the step size exceeds a certain value
(e.g., 0.05 in this example). Both the simplified Jacobian and the modified Newton method
speed up the calculation.
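A minimal sketch of the modified (simplified) Newton idea on a toy system; the circle-and-line system is a stand-in for Eq. (4.5.5), and the point of the sketch is only that freezing the Jacobian at $\mathbf{x}^{(0)}$ still converges, just linearly rather than quadratically:

```python
import numpy as np

def modified_newton(f, jac, x0, tol=1e-10, max_iter=100):
    """Modified Newton: the Jacobian is evaluated (and could be LU-factorized)
    once at x0 and reused every iteration, trading convergence rate
    for much cheaper steps."""
    x = np.asarray(x0, dtype=float)
    J0 = jac(x)  # computed only once, per the simplified Newton method
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        x = x - np.linalg.solve(J0, fx)
    return x

# Toy system: unit circle intersected with the line x = y.
f = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
jac = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])
root = modified_newton(f, jac, [1.0, 1.0])
print(np.round(root, 6))  # [0.707107 0.707107]
```

In practice one would factorize `J0` once (e.g., with an LU decomposition) and reuse the factors in every back-substitution, which is where the CPU-time savings reported in Table 4.8 come from.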
Method | Step Size (h) | Elapsed Total Time (s) | User Mode Time (s)
RKCS_Y2_Z | 0.0005 | 8.2318 | 6.53E-04
RKCS_Y2_Z | 0.005 | 0.8913 | 6.20E-10
RKCS_Y2_Z | 0.05 | 0.1402 | 2.07E-13
RKCS_Y2_Z | 0.1 | 0.1102 | 4.91E-15
RKCS_Y2_Z | 0.5 | 0.0501 | 1.39E-16
RKCS_Y2 | 0.1 | 0.1102 | 4.91E-15
SN_RKCS_Y2 | 0.1 | 0.1001 | 4.91E-15
SSN_RKCS_Y2 | 0.1 | 0.0801 | 1.39E-16

Table 4.8 Comparison of CPU time on a 1.8 GHz processor with 512 MB memory
4.8 Conclusions
A general method and accompanying formulation for designing kinematically-
smooth path trajectories of the human arm have been presented. The rigorous formulation
is implemented in code to calculate an initial configuration (an inverse
kinematic solution) of the arm that admits a smooth motion throughout the path,
without the interruption caused by switching inverse kinematic solutions
when traversing a singular barrier. This work was possible because of the closed-form
nature of the crossable barriers.

It was shown that the problem can be formulated as a set of differential-algebraic
equations, and that a well-established numerical method, the Runge-Kutta
algorithm for DAEs of index 2, can be used to solve it. Optimization using a cost
function that drives the arm towards an inverse solution was implemented, and it was
shown that an initial configuration for the starting point of the path can be readily
calculated. Observations regarding the correction terms in the index-2 DAE solutions
were also made, and the numerical algorithm was demonstrated in detail for a planar
3-DOF human arm model and a spatial 4-DOF manipulator.
CHAPTER 5
UPPER BODY MOTION PREDICTION
This chapter presents a methodology to predict and simulate the path generated by
humans in a natural motion of the torso and upper extremity. While this work has been
limited to 15 degrees of freedom of the upper body, the theory presented herein is
expandable to any part of the body that can be represented as segmental links of a
kinematic chain. The work is based on a mathematical postulate that allows for the
prediction of naturalistic human motion using an optimization-based approach.
The problem is defined as follows: given the anthropometry of a human model and
two points in space between which a path (a curve) is to be traversed by the
human's hand, predict the human joint variables as a function of time. While
this is a typical problem in the field of robotics, it becomes more challenging
in the case of humans for two reasons: (a) natural human motions differ from
robotic motions, and (b) human models have a significantly larger number of
degrees of freedom than those typically used for robotic manipulators.
Our optimization-based approach has been met with success in predicting human
static postures (ref. Chapter 3), particularly when validated with experimental
data. Given a cost function to be optimized (minimized or maximized), the
objective is to determine the joint displacements (joint variables) as a
function of time that ensure that the hand remains on the given path, yet
executes the motion in the most natural way.
Since the minimum jerk model for predicting point-to-point motion trajectories
is well accepted and experimentally verified (Flash and Hogan, 1985), we will
use it as a cost function to quantify, measure, and subsequently optimize human
performance in combination with a 15-degree-of-freedom human model. We will
first adopt the minimum jerk mathematical model to obtain a desired Cartesian
path and then convert it to joint coordinates in order to address the problem
in joint space. Furthermore,
because joint displacements as a function of time are non-uniform (free-form)
curves, we will represent them with B-spline curves (Tiller, 1996) because of
their many robust properties, such as differentiability, local control, and the
convex hull property. We will then implement a numerical optimization algorithm
to compute the control points characterizing the B-spline curves, using
discomfort and smoothness as cost functions and distances to the desired path
at selected points as a set of constraints. The end result is an
optimization-based method that uses human performance measures to calculate
joint path trajectories that look and feel natural.
5.1 Path in Cartesian Space
First we will use the concept of minimum jerk introduced by Flash and Hogan
(1985) to predict unconstrained or curved point-to-point movements of the hand in
Cartesian space.
5.1.1 Unconstrained Point-to-Point Movements
The first objective of this section is to define an expression for quantifying
jerk along a path trajectory. While jerk has been defined in the literature,
there is no evidence that it has been implemented in predicting
three-dimensional motion of human models with a large number of DOF. Given a
path trajectory as a parametric curve in space such as

\mathbf{x}(t) = [x(t)\ \ y(t)\ \ z(t)]^T

the first derivative is the velocity and the second derivative is the
acceleration. The third derivative was introduced by Flash and Hogan (1985) as
the jerk along a path and is best measured by an integration over the time of
motion along the path, such that
C = \frac{1}{2}\int_0^{t_f}\left[\left(\frac{d^3x}{dt^3}\right)^2 + \left(\frac{d^3y}{dt^3}\right)^2 + \left(\frac{d^3z}{dt^3}\right)^2\right]dt   (5.1.1)
In order to include the concept of minimum jerk as a driving function in the design (or
prediction) of a path trajectory, we will adapt some mathematics to allow for the
calculation of minima and maxima. Generally, for any function x(t) that is
sufficiently differentiable in the interval 0 \le t \le t_f, and for any
performance index L[t, x, \dot{x}, \ddot{x}, \ldots, d^n x/dt^n] that is
integrable over the same interval, the unconstrained cost function

C(x(t)) = \int_0^{t_f} L\left[t, x, \dot{x}, \ddot{x}, \ldots, \frac{d^n x}{dt^n}\right] dt   (5.1.2)

assumes an extremum when x(t) is the solution of the Euler–Poisson equation

\frac{\partial L}{\partial x} - \frac{d}{dt}\left(\frac{\partial L}{\partial \dot{x}}\right) + \cdots + (-1)^n \frac{d^n}{dt^n}\left(\frac{\partial L}{\partial x^{(n)}}\right) = 0   (5.1.3)
Since in our case,
L = \frac{1}{2}\left(\dddot{x}^2 + \dddot{y}^2 + \dddot{z}^2\right)   (5.1.4)
and the Euler–Poisson equation reduces to

\frac{d^3}{dt^3}\left(\frac{\partial L}{\partial \dddot{x}}\right) = \frac{d^3}{dt^3}\left(\frac{\partial L}{\partial \dddot{y}}\right) = \frac{d^3}{dt^3}\left(\frac{\partial L}{\partial \dddot{z}}\right) = 0   (5.1.5)
we can get
\frac{d^6x}{dt^6} = \frac{d^6y}{dt^6} = \frac{d^6z}{dt^6} = 0   (5.1.6)
Assuming the movement starts and ends with zero velocity and acceleration, we have

x(t) = x_0 + (x_0 - x_f)(15\tau^4 - 6\tau^5 - 10\tau^3)
y(t) = y_0 + (y_0 - y_f)(15\tau^4 - 6\tau^5 - 10\tau^3)
z(t) = z_0 + (z_0 - z_f)(15\tau^4 - 6\tau^5 - 10\tau^3)   (5.1.7)

where \tau = t/t_f, (x_0, y_0, z_0) are the initial hand position coordinates
at t = 0, and (x_f, y_f, z_f) are the final hand position coordinates at
t = t_f. We shall use the above formulation in
the optimization implementation, which is applied to find joint movements in terms of B-
splines allowing for the unconstrained point-to-point motion.
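As a concrete illustration, the closed-form solution of Eq. (5.1.7) can be evaluated directly. The sketch below (plain Python, written for this section rather than taken from the thesis code; the endpoint coordinates and travel time are invented for illustration) computes the hand position along an unconstrained minimum-jerk path:

```python
# Minimum-jerk point-to-point trajectory, Eq. (5.1.7):
#   x(t) = x0 + (x0 - xf) * (15*tau**4 - 6*tau**5 - 10*tau**3),  tau = t/tf,
# applied component-wise to the 3-D hand position.

def min_jerk_position(p0, pf, t, tf):
    """Hand position at time t on a minimum-jerk path from p0 to pf."""
    tau = t / tf
    s = 15.0 * tau**4 - 6.0 * tau**5 - 10.0 * tau**3  # shape function, 0 at tau=0, -1 at tau=1
    return [a + (a - b) * s for a, b in zip(p0, pf)]

p0, pf, tf = [0.0, 0.0, 0.0], [0.4, 0.2, 0.6], 2.0
print(min_jerk_position(p0, pf, 0.0, tf))       # start point p0
print(min_jerk_position(p0, pf, tf / 2.0, tf))  # midpoint of the straight segment
print(min_jerk_position(p0, pf, tf, tf))        # end point pf
```

By symmetry of the quintic, the hand passes through the geometric midpoint at t = t_f/2, which is one way to see the bell-shaped velocity profile discussed later in this chapter.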
5.1.2 Curved Point-to-Point Movements
In this section, we consider motion along a curve, where the hand has to traverse a
specified point (called a via point) during its motion. Studying this kind of
movement provides a way to model obstacle-avoidance motions. For example, if
there is an obstacle in the path between two end points, an artificial
intelligence engine can examine the largest diameter of the obstacle and
introduce a via point to pass through in order to go around it. Our objective
becomes to generate the smoothest motion that brings the hand from the initial
position to the final position in a given time, while the hand must pass
through a via point at an unspecified time. The requirement that the hand move
through a specified via point defines equality constraints on the hand position
coordinates \mathbf{x}(t) = [x(t), y(t), z(t)]^T; i.e., if the location of the
via point with respect to a Cartesian coordinate system is given by
\mathbf{x}_1 = [x_1, y_1, z_1]^T, the equality constraints are

\mathbf{x}(t_1) = \mathbf{x}_1   (5.1.8)
where the time t_1 at which the hand has to pass through the via point is not
specified a priori but is derived from the optimization procedure to minimize
the jerk function. Problems of this kind are known as dynamic optimization
problems with interior point equality constraints, and techniques have been
established for their solution (Bryson and Ho, 1975).
Now we will introduce the dynamic optimization method. Generally, optimization
problems similar to the one solved here involve a system that can be described
by a set of nonlinear differential equations

\dot{\mathbf{s}} = \mathbf{f}[\mathbf{s}(t), \mathbf{u}(t), t]   (5.1.9)

where s(t) is an n-vector of state variables and u(t) is an m-vector control
function. The problem is to find the control u(t) that carries the system from
an initial state s(0) to a final state s(t_f) while optimizing the cost
function C(t), defined as

C(t) = \int_0^{t_f} L[\mathbf{s}(t), \mathbf{u}(t), t]\, dt   (5.1.10)

where L[\mathbf{s}(t), \mathbf{u}(t), t] is the performance index. This problem
can be solved using the Pontryagin method (Pontryagin et al., 1962). One
defines an n-component co-state (Lagrange multiplier) vector
\boldsymbol{\lambda}(t) and a scalar Hamiltonian

H[\mathbf{s}(t), \mathbf{u}(t), t] = L[\mathbf{s}(t), \mathbf{u}(t), t] + \boldsymbol{\lambda}(t)^T \mathbf{f}[\mathbf{s}(t), \mathbf{u}(t), t]   (5.1.11)
The following differential equations define the necessary conditions for a
minimum to exist:

\dot{\mathbf{s}} = \mathbf{f}[\mathbf{s}(t), \mathbf{u}(t), t]   (5.1.12)

\dot{\boldsymbol{\lambda}}(t) = -\frac{\partial H}{\partial \mathbf{s}}   (5.1.13)

\frac{\partial H}{\partial \mathbf{u}} = \mathbf{0}   (5.1.14)
For optimal control problems with interior point equality constraints, there is
a set of constraints at some time t_1

\mathbf{N}[\mathbf{s}(t_1), t_1] = \mathbf{0}   (5.1.15)

where N is a p-component vector function. These interior point constraints can
be adjoined to the cost function by a Lagrange multiplier vector
\boldsymbol{\pi}, so that the new cost function is

C = \boldsymbol{\pi}^T \mathbf{N} + \int_0^{t_f} L[\mathbf{s}(t), \mathbf{u}(t), t]\, dt   (5.1.16)
The solution is obtained by allowing discontinuities in the co-state variables
(Lagrange coefficients) \boldsymbol{\lambda}(t) and in the Hamiltonian
H[\mathbf{s}(t), \mathbf{u}(t), t]. One can define a vector of Lagrange
coefficients \boldsymbol{\lambda}^+(t) and a Hamiltonian H^+(t) for t \ge t_1,
and a vector \boldsymbol{\lambda}^-(t) and a Hamiltonian H^-(t) for t \le t_1.
At time t_1 these variables satisfy the equations

\boldsymbol{\lambda}^-(t_1) = \boldsymbol{\lambda}^+(t_1) + \left(\frac{\partial \mathbf{N}}{\partial \mathbf{s}}\right)^T \boldsymbol{\pi}   (5.1.17)

and

H^-(t_1) = H^+(t_1) - \boldsymbol{\pi}^T \frac{\partial \mathbf{N}}{\partial t}   (5.1.18)
The p components of \boldsymbol{\pi} are determined by the constraint equations
(5.1.15), while time t_1 is fully determined by Eq. (5.1.18).
For our problem we define a state vector
\mathbf{s}(t) = [x, y, z, u, v, w, a, b, c]^T and a control vector
\mathbf{u}(t) = [\delta, \gamma, \eta]^T, and the components of these vectors
are defined by the system equations

\dot{x} = u,\quad \dot{y} = v,\quad \dot{z} = w
\dot{u} = \ddot{x} = a,\quad \dot{v} = \ddot{y} = b,\quad \dot{w} = \ddot{z} = c
\dot{a} = \dddot{x} = \mathrm{jerk}_x = \delta,\quad \dot{b} = \dddot{y} = \mathrm{jerk}_y = \gamma,\quad \dot{c} = \dddot{z} = \mathrm{jerk}_z = \eta   (5.1.19)

and the Hamiltonian is

H = \lambda_x u + \lambda_y v + \lambda_z w + \lambda_u a + \lambda_v b + \lambda_w c + \lambda_a \delta + \lambda_b \gamma + \lambda_c \eta + \tfrac{1}{2}(\delta^2 + \gamma^2 + \eta^2)   (5.1.20)
The necessary conditions for a minimum to exist are
\frac{d\lambda_x}{dt} = 0,\quad \frac{d\lambda_y}{dt} = 0,\quad \frac{d\lambda_z}{dt} = 0

\frac{d\lambda_u}{dt} = -\lambda_x,\quad \frac{d\lambda_v}{dt} = -\lambda_y,\quad \frac{d\lambda_w}{dt} = -\lambda_z

\frac{d\lambda_a}{dt} = -\lambda_u,\quad \frac{d\lambda_b}{dt} = -\lambda_v,\quad \frac{d\lambda_c}{dt} = -\lambda_w   (5.1.21)

The necessary conditions on the control variables are
The necessary conditions on the control variables are
\frac{\partial H}{\partial \delta} = \delta + \lambda_a = 0,\quad \frac{\partial H}{\partial \gamma} = \gamma + \lambda_b = 0,\quad \frac{\partial H}{\partial \eta} = \eta + \lambda_c = 0   (5.1.22)
For our specific problem, the constraints are on the hand position at time t_1:

x(t_1) = x_1,\quad y(t_1) = y_1,\quad z(t_1) = z_1   (5.1.23)
(5.1.23)
The Hamiltonian H^- for all times t \le t_1 is

H^- = \lambda_x^- u^- + \lambda_y^- v^- + \lambda_z^- w^- + \lambda_u^- a^- + \lambda_v^- b^- + \lambda_w^- c^- + \lambda_a^- \delta^- + \lambda_b^- \gamma^- + \lambda_c^- \eta^- + \tfrac{1}{2}\left((\delta^-)^2 + (\gamma^-)^2 + (\eta^-)^2\right)   (5.1.24)

and the Hamiltonian H^+ for all times t \ge t_1 is

H^+ = \lambda_x^+ u^+ + \lambda_y^+ v^+ + \lambda_z^+ w^+ + \lambda_u^+ a^+ + \lambda_v^+ b^+ + \lambda_w^+ c^+ + \lambda_a^+ \delta^+ + \lambda_b^+ \gamma^+ + \lambda_c^+ \eta^+ + \tfrac{1}{2}\left((\delta^+)^2 + (\gamma^+)^2 + (\eta^+)^2\right)   (5.1.25)
Since the constraint equations involve only position, the only discontinuities
are in \lambda_x, \lambda_y and \lambda_z; therefore, according to
Eq. (5.1.17), we get

\lambda_x^- = \lambda_x^+ + \pi_1,\quad \lambda_y^- = \lambda_y^+ + \pi_2,\quad \lambda_z^- = \lambda_z^+ + \pi_3   (5.1.26)
while all the other Lagrange coefficients are continuous at t = t_1:

\lambda_u^- = \lambda_u^+,\quad \lambda_v^- = \lambda_v^+,\quad \lambda_w^- = \lambda_w^+,\quad \lambda_a^- = \lambda_a^+,\quad \lambda_b^- = \lambda_b^+,\quad \lambda_c^- = \lambda_c^+   (5.1.27)
Since time t_1 is not explicitly specified, the Hamiltonian must be continuous
at t_1 according to Eq. (5.1.18):

H^+(t_1) = H^-(t_1)   (5.1.28)
Now we can derive the necessary conditions for the existence of a minimum
separately for t \le t_1 and t \ge t_1, as shown in Eqs. (5.1.21) and
(5.1.22). In addition, we require continuity of velocities and accelerations at
t_1, so that

u(t_1^+) = u(t_1^-),\quad v(t_1^+) = v(t_1^-),\quad w(t_1^+) = w(t_1^-)

a(t_1^+) = a(t_1^-),\quad b(t_1^+) = b(t_1^-),\quad c(t_1^+) = c(t_1^-)   (5.1.29)

Applying the following boundary conditions
Applying the following boundary conditions
x(0) = x_0,\quad x(t_f) = x_f;\qquad y(0) = y_0,\quad y(t_f) = y_f;\qquad z(0) = z_0,\quad z(t_f) = z_f

u(0) = 0,\quad u(t_f) = 0;\qquad v(0) = 0,\quad v(t_f) = 0;\qquad w(0) = 0,\quad w(t_f) = 0

a(0) = 0,\quad a(t_f) = 0;\qquad b(0) = 0,\quad b(t_f) = 0;\qquad c(0) = 0,\quad c(t_f) = 0   (5.1.30)
we can obtain an expression for \mathbf{x}(t) for times t \le t_1:

\mathbf{x}^-(t) = \frac{t_f^5}{720}\Big(\boldsymbol{\pi}\left(\tau_1^4(15\tau^4 - 30\tau^3) + \tau_1^3(80\tau^3 - 30\tau^4) - 60\tau_1^2\tau^3 + 30\tau_1\tau^4 - 6\tau^5\right) + \mathbf{c}\left(15\tau^4 - 10\tau^3 - 6\tau^5\right)\Big) + \mathbf{x}_0   (5.1.31)
and for times t \ge t_1 the expression is

\mathbf{x}^+(t) = \mathbf{x}^-(t) + \frac{t_f^5(\tau - \tau_1)^5}{120}\,\boldsymbol{\pi}   (5.1.32)
where \mathbf{x}(t) = [x(t), y(t), z(t)]^T,
\boldsymbol{\pi} = [\pi_1, \pi_2, \pi_3]^T and
\mathbf{c} = [c_1, c_2, c_3]^T are vectors of constants, \tau = t/t_f and
\tau_1 = t_1/t_f. From Eq. (5.1.23), we have

\mathbf{x}^+(t_1) = \mathbf{x}^-(t_1) = \mathbf{x}_1   (5.1.33)

where \mathbf{x}_1 = [x_1, y_1, z_1]^T. Substituting Eqs. (5.1.31) and
(5.1.32) into (5.1.33), we obtain
the following
\boldsymbol{\pi} = \frac{1}{t_f^5\,\tau_1^5(1-\tau_1)^5}\left((\mathbf{x}_f - \mathbf{x}_0)(120\tau_1^5 - 300\tau_1^4 + 200\tau_1^3) - 20(\mathbf{x}_1 - \mathbf{x}_0)\right)   (5.1.34)

\mathbf{c} = \frac{1}{t_f^5\,\tau_1^2(1-\tau_1)^5}\Big((\mathbf{x}_f - \mathbf{x}_0)(300\tau_1^5 - 1200\tau_1^4 + 1600\tau_1^3) + \tau_1^2(-720\mathbf{x}_f + 120\mathbf{x}_1 + 600\mathbf{x}_0) + (\mathbf{x}_0 - \mathbf{x}_1)(300\tau_1 - 200)\Big)   (5.1.35)
where \mathbf{x}_0 = [x_0, y_0, z_0]^T and \mathbf{x}_f = [x_f, y_f, z_f]^T.
Next we substitute Eqs. (5.1.34) and (5.1.35) into Eq. (5.1.28), which reduces
to

\pi_1 u(t_1) + \pi_2 v(t_1) + \pi_3 w(t_1) = 0   (5.1.36)

and we can obtain a polynomial equation in \tau_1 = t_1/t_f:

(\boldsymbol{\pi}^* \cdot \boldsymbol{\pi}^*)(2\tau_1^3 - 7\tau_1^2 + 8\tau_1 - 3) + (\boldsymbol{\pi}^* \cdot \mathbf{c}^*)(-\tau_1^3 + 2\tau_1^2 - \tau_1) = 0   (5.1.37)

where

\boldsymbol{\pi}^* = (\mathbf{x}_f - \mathbf{x}_0)(6\tau_1^5 - 15\tau_1^4 + 10\tau_1^3) - (\mathbf{x}_1 - \mathbf{x}_0)   (5.1.38)

\mathbf{c}^* = (\mathbf{x}_f - \mathbf{x}_0)(15\tau_1^5 - 60\tau_1^4 + 80\tau_1^3) + \tau_1^2(-36\mathbf{x}_f + 6\mathbf{x}_1 + 30\mathbf{x}_0) + (\mathbf{x}_0 - \mathbf{x}_1)(15\tau_1 - 10)   (5.1.39)
We then find the real roots of this polynomial, which has only one acceptable
root lying between 0 and 1. Substituting this value of \tau_1 into the
expressions for \boldsymbol{\pi} and \mathbf{c} (Eqs. (5.1.34) and (5.1.35)),
we finally obtain the expressions for \mathbf{x}(t) during the entire motion.
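The root-finding step above can be sketched numerically. The helper below is a plain-Python illustration (not the thesis code): the actual coefficients would come from Eqs. (5.1.37)–(5.1.39), so here a generic polynomial p(τ) is passed in as a callable, and the single acceptable root in (0, 1) is located by a sign-change scan followed by bisection.

```python
# Locate the acceptable root tau1 in (0, 1) of the via-point timing
# polynomial, Eq. (5.1.37). The polynomial is supplied as a callable.

def root_in_unit_interval(p, n=1000, tol=1e-12):
    """Scan (0, 1) for a sign change of p and refine it by bisection."""
    lo = None
    for k in range(n):
        a, b = k / n, (k + 1) / n
        if p(a) == 0.0:          # landed exactly on a root
            return a
        if p(a) * p(b) < 0.0:    # sign change brackets a root
            lo, hi = a, b
            break
    if lo is None:
        raise ValueError("no sign change found in (0, 1)")
    while hi - lo > tol:         # standard bisection refinement
        mid = 0.5 * (lo + hi)
        if p(lo) * p(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Example: a cubic whose only root in (0, 1) is tau1 = 0.6
tau1 = root_in_unit_interval(lambda t: (t - 0.6) * (t - 2.0) * (t + 1.0))
print(tau1)
```

Once τ₁ is known, π and c follow from Eqs. (5.1.34)–(5.1.35), and the full trajectory is assembled from Eqs. (5.1.31)–(5.1.32).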
As we have already obtained the path in Cartesian space for unconstrained or
curved (for obstacle avoidance) trajectories by minimizing the jerk during the motion, we
are now able to find joint profiles in terms of B-splines in joint space that will allow for
the desired motion of the hand while simultaneously minimizing jerk in Cartesian space.
5.2 B-Spline Functions for Joint Variables
B-splines have many important properties, such as continuity and
differentiability, endpoint interpolation, local control, and the convex hull
property. These properties, especially differentiability and local control,
make B-splines well suited to representing joint trajectories, which require
smoothness and flexibility.
We will use B-splines to represent joint displacements as a function of time, one
for each joint. In the following subsections, we will first introduce basic concepts of B-
splines followed by expressions of joint B-spline functions used in our formulation.
5.2.1 Definition of B-Spline Curves
There are a number of ways to define the B-spline basis functions; the most
useful for computer implementation is the recurrence formula. We shall use the
recurrence formula to represent a B-spline, such that its control points are
calculated by the iterative numerical algorithm based on optimizing the
minimum jerk formula above. Let U = \{u_0, \ldots, u_m\} be a non-decreasing
sequence of real numbers, i.e., u_i \le u_{i+1}, i = 0, \ldots, m-1. The u_i
are called knots, and U is the knot vector. The ith B-spline basis function of
degree p (order p+1), denoted N_{i,p}(u), is defined as

N_{i,0}(u) = \begin{cases} 1 & \text{if } u_i \le u < u_{i+1} \\ 0 & \text{otherwise} \end{cases}

N_{i,p}(u) = \frac{u - u_i}{u_{i+p} - u_i}\, N_{i,p-1}(u) + \frac{u_{i+p+1} - u}{u_{i+p+1} - u_{i+1}}\, N_{i+1,p-1}(u)   (5.2.1)
A pth-degree B-spline curve is defined by
C(u) = \sum_{i=0}^{n} N_{i,p}(u)\, P_i, \quad a \le u \le b   (5.2.2)
where the P_i are the control points and the N_{i,p}(u) are the pth-degree
B-spline basis functions defined on the non-periodic knot vector

U = \{\underbrace{a, \ldots, a}_{p+1}, u_{p+1}, \ldots, u_{m-p-1}, \underbrace{b, \ldots, b}_{p+1}\} \quad (m+1 \text{ knots})
The polygon formed by the P_i is called the control polygon, and its
calculation is the objective of this work. Three steps are required to compute
a point on a B-spline curve at a fixed u value: (1) find the knot span in which
u lies; (2) compute the nonzero basis functions; (3) multiply the values of the
nonzero basis functions by the corresponding control points. A degree 3
B-spline with 7 control points is shown in Figure 5.1.
Figure 5.1 A B-spline
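The recurrence of Eq. (5.2.1) translates directly into code. The sketch below is a plain-Python illustration (not the thesis implementation) that uses the common 0/0 := 0 convention for terms with a zero knot difference, and checks the partition-of-unity property on the clamped cubic knot vector used later in Section 5.2.2:

```python
# Cox-de Boor recurrence, Eq. (5.2.1). Terms with a zero knot
# difference are taken as zero (the usual 0/0 := 0 convention).

def bspline_basis(i, p, u, U):
    if p == 0:
        return 1.0 if U[i] <= u < U[i + 1] else 0.0
    left = right = 0.0
    if U[i + p] > U[i]:
        left = (u - U[i]) / (U[i + p] - U[i]) * bspline_basis(i, p - 1, u, U)
    if U[i + p + 1] > U[i + 1]:
        right = (U[i + p + 1] - u) / (U[i + p + 1] - U[i + 1]) * bspline_basis(i + 1, p - 1, u, U)
    return left + right

# Clamped cubic knot vector of Section 5.2.2 (m = 16, p = 3, n = 12)
U = [0.0] * 4 + [0.1 * k for k in range(1, 10)] + [1.0] * 4
total = sum(bspline_basis(i, 3, 0.35, U) for i in range(13))
print(total)  # partition of unity: the 13 cubic basis functions sum to 1
```

This direct recursion is the simplest correct evaluator; production code would instead compute only the p+1 nonzero functions in the knot span containing u, as the three-step procedure above suggests.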
Since our formulation requires the first and second derivatives of the joint
profiles (in terms of B-splines), we now introduce the formula for the kth
derivative of the basis function N_{i,p}(u) in terms of the functions
N_{i,p-k}, \ldots, N_{i+k,p-k}:

N_{i,p}^{(k)} = \frac{p!}{(p-k)!} \sum_{j=0}^{k} a_{k,j}\, N_{i+j,p-k}   (5.2.3)
with
a_{0,0} = 1

a_{k,0} = \frac{a_{k-1,0}}{u_{i+p-k+1} - u_i}

a_{k,j} = \frac{a_{k-1,j} - a_{k-1,j-1}}{u_{i+p+j-k+1} - u_{i+j}}, \quad j = 1, \ldots, k-1

a_{k,k} = \frac{-a_{k-1,k-1}}{u_{i+p+1} - u_{i+k}}   (5.2.4)
then the kth derivative of a pth-degree B-spline curve is given by
C^{(k)}(u) = \sum_{i=0}^{n} N_{i,p}^{(k)}(u)\, P_i, \quad a \le u \le b   (5.2.5)
where k should not exceed p (all higher derivatives are zero).
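For k = 1, Eqs. (5.2.3)–(5.2.4) reduce to the familiar two-term derivative formula for B-spline basis functions. The sketch below (plain Python, written for illustration; it reuses a Cox–de Boor evaluator and checks the analytic derivative against a central finite difference) shows that special case:

```python
# First-derivative special case (k = 1) of Eq. (5.2.3):
#   N'_{i,p}(u) = p/(u_{i+p} - u_i) N_{i,p-1}(u)
#               - p/(u_{i+p+1} - u_{i+1}) N_{i+1,p-1}(u)

def N(i, p, u, U):
    if p == 0:
        return 1.0 if U[i] <= u < U[i + 1] else 0.0
    s = 0.0
    if U[i + p] > U[i]:
        s += (u - U[i]) / (U[i + p] - U[i]) * N(i, p - 1, u, U)
    if U[i + p + 1] > U[i + 1]:
        s += (U[i + p + 1] - u) / (U[i + p + 1] - U[i + 1]) * N(i + 1, p - 1, u, U)
    return s

def dN(i, p, u, U):
    d = 0.0
    if U[i + p] > U[i]:
        d += p / (U[i + p] - U[i]) * N(i, p - 1, u, U)
    if U[i + p + 1] > U[i + 1]:
        d -= p / (U[i + p + 1] - U[i + 1]) * N(i + 1, p - 1, u, U)
    return d

U = [0.0] * 4 + [0.1 * k for k in range(1, 10)] + [1.0] * 4
u, h = 0.35, 1e-6
analytic = dN(4, 3, u, U)
numeric = (N(4, 3, u + h, U) - N(4, 3, u - h, U)) / (2 * h)
print(abs(analytic - numeric))  # agreement to finite-difference accuracy
```

The second derivative needed for the smoothness cost of Section 5.4 follows by applying the same formula once more, or by the full a_{k,j} recurrence of Eq. (5.2.4).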
5.2.2 Joint B-Spline Functions
There are 15 joints in our digital human model (Figure 5.2), and each joint
will be represented by a B-spline function. A total of 11 distinct knots are
used: 0, 0.1, 0.2, \ldots, 1.0. If the final time t_f is not 1.0, each knot is
scaled by multiplying it by t_f. The degree of the B-spline curve determines
its continuity and differentiability: the curve is infinitely differentiable in
the interior of knot intervals, and it is at least p-k times continuously
differentiable at a knot of multiplicity k. Smooth joint motion requires
continuity of acceleration, which in turn requires the joint B-spline curve to
be at least of degree 3. In the case of degree 3, the knot vector is
\{0, 0, 0, 0, 0.1, 0.2, 0.3, \ldots, 0.9, 1.0, 1.0, 1.0, 1.0\}, where the knot
multiplicity at the start and the end enforces endpoint interpolation. The
degree p, the number of control points, n+1, and the number of knots, m+1, are
related by

m = n + p + 1   (5.2.6)
Figure 5.2 Modeling of the torso, shoulder, and arm as a 15-DOF system
For the knot vector above, m = 16 and p = 3; therefore n = 12, i.e., each
joint B-spline curve has 13 control points. The B-spline curve of each joint j
can then be obtained as

q_j(u) = \sum_{i=0}^{12} N_{i,3}(u)\, P_{i,j}, \quad 0 \le u \le t_f, \quad j = 1, 2, \ldots, 15   (5.2.7)
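The bookkeeping of Eqs. (5.2.6)–(5.2.7) is easy to verify in code. The sketch below (plain Python; the control-point values are invented for illustration) evaluates one joint curve on the clamped knot vector and confirms that the curve interpolates its first control point:

```python
# Joint B-spline, Eq. (5.2.7): q_j(u) = sum_i N_{i,3}(u) P_{i,j}.
# With 17 knots (m = 16) and p = 3, Eq. (5.2.6) gives n = 12,
# i.e. 13 control points per joint.

def N(i, p, u, U):
    if p == 0:
        return 1.0 if U[i] <= u < U[i + 1] else 0.0
    s = 0.0
    if U[i + p] > U[i]:
        s += (u - U[i]) / (U[i + p] - U[i]) * N(i, p - 1, u, U)
    if U[i + p + 1] > U[i + 1]:
        s += (U[i + p + 1] - u) / (U[i + p + 1] - U[i + 1]) * N(i + 1, p - 1, u, U)
    return s

U = [0.0] * 4 + [0.1 * k for k in range(1, 10)] + [1.0] * 4
m, p = len(U) - 1, 3
n = m - p - 1                  # Eq. (5.2.6): n = m - p - 1 = 12
# Illustrative control points for a single joint (radians, invented)
P = [0.2, 0.1, 0.3, 0.5, 0.55, 0.5, 0.45, 0.4, 0.35, 0.3, 0.2, 0.1, 0.05]

def q(u):
    return sum(N(i, p, u, U) * P[i] for i in range(n + 1))

print(n + 1, q(0.0))  # 13 control points; q(0) equals P[0] (endpoint interpolation)
```

Endpoint interpolation is exactly the property exploited in Section 5.4 to fix the first and last control points from the predicted start and end postures.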
5.3 Illustration of Motion Prediction Method
In this section, we shall illustrate our general method for motion prediction. The
overall procedure is presented in Figure 5.3, and the motion prediction module
is refined and shown in Figure 5.4. The inputs to the algorithm are the start
and end points of the motion, the position of the via point for a curved path
(in the case of obstacle avoidance), the DH parameters of the human model, and
the desired time to travel along the path. The absolute time is not very
important here; it is the relative time at each instant that determines the
shape of the velocity profile. The planning in Cartesian space finds a 3-D path
by minimizing jerk according to the methods introduced in Section 5.1; the
output of this sub-module is a path described in Cartesian space, as shown in
Eqs. (5.1.7), (5.1.31) and (5.1.32). The path is then forwarded to the
optimization module in joint space, which finds a set of control points for the
joint B-splines that minimize discomfort and maximize the smoothness of joint
movements while the hand moves along the path. This module performs the
transformation from Cartesian space to joint space.
[Figure: the inputs (end points A and B, via point C; DH parameters of the
human model; travel time t_f) feed the Motion Prediction Module, whose output
is the profile of joint motion, i.e., q(t) as a B-spline for each joint.]

Figure 5.3 Motion prediction illustration
[Figure: Planning in Cartesian Space produces a parametric path p(t) based on a
minimum jerk model, which is forwarded to Optimization in Joint Space.]

Figure 5.4 Refined motion prediction module
5.4 Optimization
Having obtained the desired path in Cartesian space and represented each joint
motion by a B-spline curve with unknown parameters (i.e., control points), optimization
is used to calculate control points of each joint spline curve.
First, we need to define a cost function. We used a discomfort cost function
that evaluates the displacement of each joint away from its neutral position,
with a weight w_j stressing the importance of one joint versus another. The
total discomfort of all joints has been defined in Chapter 3 and is
characterized by the function
f_{\text{discomfort}}(\mathbf{q}) = \sum_{j=1}^{n} w_j \left(q_j - q_j^N\right) = \sum_{j=1}^{n} w_j \left( \sum_{i=0}^{12} N_{i,3}(t)\, P_{i,j} - q_j^N \right)   (5.4.1)
where q_j^N is the neutral position of a joint measured from the starting home
configuration, w_j is a weight assigned to each joint for the purpose of giving
importance to joints that are typically more affected than others,
P_{i,j}, i = 0, \ldots, 12 are the control points of the jth joint spline
function (our design variables), and the N_{i,3}(t) are the 3rd-degree
B-spline basis functions. The above discomfort measure (Eq. (5.4.1)) only
considers the joint angles of a specific posture, so it is better suited to
evaluating the discomfort of a static posture. However, since our objective in
this section is to find joint trajectories while the hand traverses a specific
path, a simple cost function like Eq. (5.4.1) is inadequate. For example,
joint trajectories in which joints move back and forth will not predict
natural human motion, even if the hand follows the desired path exactly with
low discomfort. Moreover, for a smooth movement of each joint, the second
derivative of the joint trajectory needs to be minimized to avoid abrupt
changes in joint velocity. To address the first scenario, given the start and
end points, the posture prediction algorithm introduced in Chapter 3 is first
used to predict the natural postures at the end points. By comparing the two
postures, an overall changing trend (increasing or decreasing) of each joint
can be predicted. As a result, the consistency between the joint rate of
change (first derivative) and the predicted overall trend is evaluated and
added to the cost function. The detailed formulation of this consistency is as
follows:
\mathbf{x}_0 \to \mathbf{q}^0,\quad \mathbf{x}_f \to \mathbf{q}^f \quad \Rightarrow \quad trend_i = \begin{cases} 1 & \text{if } (q_i^f - q_i^0) \ge 0 \\ -1 & \text{if } (q_i^f - q_i^0) < 0 \end{cases}   (5.4.2)
and
f_{\text{inconsistency}} = \sum_{i=1}^{15} \left(-\,\mathrm{sign}(\dot{q}_i(t)) \cdot trend_i + 1\right) \left|\dot{q}_i(t)\right|   (5.4.3)
where
\mathrm{sign}(\dot{q}_i(t)) = \begin{cases} 1 & \text{if } \dot{q}_i(t) \ge 0 \\ -1 & \text{if } \dot{q}_i(t) < 0 \end{cases}   (5.4.4)
The (+1) in Eq. (5.4.3) ensures that the amplitude of the joint rate of change
still has an effect towards optimizing a smooth joint trajectory when the first
term within the parentheses evaluates to zero. The multiplication by the
amplitude of the joint rate of change enforces the underlying assumption that
the smaller the joint angle rate of change, the smoother the joint trajectory.
It also has a significant effect on the optimization process: by not only
qualifying the consistency but also quantifying it, it avoids a zero gradient
of this objective, which is characteristic of an ill-stated optimization
problem. The second derivative of the joint trajectory is considered in a
smoothness (or non-smoothness) cost function:
f_{\text{nonsmoothness}} = \sum_{i=1}^{15} \left(\ddot{q}_i(t)\right)^2   (5.4.5)
In order to emphasize smooth starting and ending conditions, the amplitudes of
the joint angle rates at the start and end points are added to the final cost
function with appropriate weights. The final cost function is formulated as

\text{Cost} = w_1 f_{\text{discomfort}} + w_2 f_{\text{inconsistency}} + w_3 f_{\text{nonsmoothness}} + w_4 \left( \sum_{i=1}^{15} \left|\dot{q}_i(0)\right| + \sum_{i=1}^{15} \left|\dot{q}_i(t_f)\right| \right)   (5.4.6)
where w_1, w_2, w_3 and w_4 are the weights assigned to each performance index.
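To make the roles of the four terms concrete, the sketch below assembles a discretized version of the cost of Eq. (5.4.6) in plain Python. The sampled joint values, rates, accelerations, weights, and neutral positions are all invented for illustration, and the continuous integrals are replaced by simple sums over the samples:

```python
# Discretized illustration of the cost terms in Eqs. (5.4.1)-(5.4.6).
# q, qdot, qddot: per-joint lists of sampled values along the path;
# trend[j] is +1/-1 from the predicted start/end postures, Eq. (5.4.2).

def sign(v):
    return 1.0 if v >= 0.0 else -1.0            # Eq. (5.4.4)

def cost(q, qdot, qddot, q_neutral, trend, w, wj):
    f_disc = sum(wj[j] * sum(qj - q_neutral[j] for qj in q[j])
                 for j in range(len(q)))         # Eq. (5.4.1), summed over samples
    f_incon = sum((-sign(v) * trend[j] + 1.0) * abs(v)   # Eq. (5.4.3)
                  for j in range(len(q)) for v in qdot[j])
    f_smooth = sum(a * a for j in range(len(q)) for a in qddot[j])  # Eq. (5.4.5)
    ends = sum(abs(qdot[j][0]) + abs(qdot[j][-1]) for j in range(len(q)))
    return w[0] * f_disc + w[1] * f_incon + w[2] * f_smooth + w[3] * ends

# Two joints, three samples each; joint 0 rises (trend +1), joint 1 falls (-1)
q     = [[0.0, 0.2, 0.4], [0.5, 0.3, 0.1]]
qdot  = [[0.0, 0.2, 0.0], [0.0, -0.2, 0.0]]
qddot = [[0.1, 0.0, -0.1], [-0.1, 0.0, 0.1]]
total = cost(q, qdot, qddot, [0.0, 0.0], [1.0, -1.0], [50, 100, 1, 1000], [1.0, 1.0])
print(total)
```

In this example both joints move consistently with their trends and start and end at rest, so the inconsistency and endpoint terms vanish and only the discomfort and smoothness terms contribute.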
With Eq. (5.4.6) as the cost function, the distance from the calculated path
to the desired path in Cartesian space is enforced as a constraint, i.e.,

\left\| \mathbf{x}(\mathbf{q}(t)) - \mathbf{p}(t) \right\| = \left\| \mathbf{x}\!\left( \sum_{i=0}^{12} N_{i,3}(t)\, \mathbf{P}_i \right) - \mathbf{p}(t) \right\| < \varepsilon   (5.4.7)

where 0 \le t \le t_f, \varepsilon is a small positive tolerance, and
\mathbf{p}(t) is the path obtained from the planning in Cartesian space phase.
Once the control points of the joint curves are selected by the iterative
optimization algorithm, the cost function of Eq. (5.4.6) can be integrated (we
integrate the first three terms and add the fourth term) to obtain the total
cost of the trajectory. The same principle applies to the distance, where the
total deviation along the path can be obtained by integrating the distance
between the calculated and desired paths from the start to the end point. In
our algorithm, for simplicity, the cost function and distance constraints are
evaluated at representative points on the path, with higher density close to
the ends (a total of 43 points have been selected). Since each joint curve has
13 control points, the total number of design variables is initially 195. In
our calculation, the joint values at the start and end are obtained directly
using the posture prediction algorithm; by the endpoint interpolation property
of the B-spline curve, these become the control points at the ends, so we only
need to calculate the remaining 11 control points for each joint, i.e., the
design variables for the optimization are reduced to 165. Each joint value
must be constrained within the specified joint limits. By the convex hull
property of the B-spline curve, each joint curve can be guaranteed to satisfy
the joint limit constraints by applying the joint limits directly to the
control points. The overall strategy of the optimization is shown in Figure
5.5.
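The design-variable bookkeeping described above is easy to verify. The sketch below (plain Python; the joint limits and control-point values are invented for illustration) reproduces the counts and shows the control-point clamping that the convex hull property justifies:

```python
# Design-variable count for the joint-space optimization:
# 15 joints x 13 control points = 195 variables; the two endpoint
# control points per joint are fixed by the predicted start/end
# postures (endpoint interpolation), leaving 11 free per joint.

joints, ctrl_pts_per_joint, fixed_per_joint = 15, 13, 2
total_vars = joints * ctrl_pts_per_joint
free_vars = joints * (ctrl_pts_per_joint - fixed_per_joint)
print(total_vars, free_vars)  # 195 165

# Convex hull property: since the basis functions are nonnegative and
# sum to one, clamping the control points to the joint limits bounds
# the entire joint curve within those limits.
lo, hi = -1.0, 2.0
P = [min(max(p, lo), hi) for p in [-1.5, 0.2, 2.7, 1.0]]
print(P)  # [-1.0, 0.2, 2.0, 1.0]
```

Imposing the joint limits on the control points rather than on sampled curve values keeps the limit constraints as simple bound constraints on the design variables, which is far cheaper for the optimizer.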
[Figure: the predicted minimum-jerk path in Cartesian space and its path
constraints feed the computation of 15 joint B-splines whose control points are
obtained by optimization.]

Figure 5.5 Path design with control points prediction
5.5 Results and Discussion
Based on simulation experiments, a set of weights (50, 100, 1, 1000) has been
selected for w_1, w_2, w_3 and w_4, and the modified feasible direction method
has been used for the optimization. The overall calculation takes about 17 to
18 seconds on a 1.8 GHz Pentium 4 CPU with 512 MB RAM, which makes real-time
use possible on a high-end dual-processor workstation such as the one the
Digital Humans Laboratory (DHL) currently has.
An interface has been implemented in 3D Studio Max that can interact with the
user, call the motion prediction algorithm for background calculation, show
results, and animate human motions in real time. The detailed implementation of
this computer interface will be presented in Chapter 7.
Figure 5.6 Predicted motion 1 at time 0
Figure 5.7 Predicted motion 1 at time 0.25 ft
Figure 5.8 Predicted motion 1 at time 0.5 ft
Figure 5.9 Predicted motion 1 at time 0.75 ft
Figure 5.10 Predicted motion 1 at time ft
Figure 5.11 Predicted joint splines for motion 1
Figures 5.6 to 5.10 show snapshots of a predicted point-to-point motion, where
Figure 5.6 shows the start position (all joint angles at zero) and Figure 5.10
shows the final position at the end point (the big red sphere indicates the
target/end point). The line shown in Figures 5.6 to 5.10 is the path predicted
in Cartesian space based on the minimum jerk model. The small spheres on the
path are where path constraints are enforced on the hand position when
predicting the joint B-splines. From the time stamps of the snapshots, it is
easy to observe that the hand moves more slowly at the start and end than in
the middle. This is the so-called bell-shaped velocity profile, a
characteristic of smooth and natural human arm movement (Flash and Hogan,
1985); the predictability of this profile is in fact the strength of the
minimum jerk model. The predicted joint profiles for the 15 joints are shown in
Figure 5.11, from which we can see that each joint moves smoothly towards its
final position.
Figure 5.12 Predicted motion 2 at time 0
Figure 5.13 Predicted motion 2 at time 0.35 ft
Figure 5.14 Predicted motion 2 at time 0.5 ft
Figure 5.15 Predicted motion 2 at time 0.65 ft
Figure 5.16 Predicted motion 2 at time ft
Figure 5.17 Predicted joint splines for motion 2
Figures 5.12 to 5.16 are snapshots of another predicted motion, where the
digital human starts from a different point and goes to a different target. The
predicted joint profiles are shown in Figure 5.17. As in Figure 5.11, each
joint moves smoothly, but the overall spline shapes are different from those
shown in Figure 5.11. This is because in motion 1 the digital human moves from
an initial position in which all the joints are at zero, while in motion 2 it
moves from a different posture with most of the joints at initial angles other
than zero.
Figure 5.18 Predicted motion 3 with a via point at time 0
Figure 5.19 Predicted motion 3 with a via point at time 0.25 ft
Figure 5.20 Predicted motion 3 with a via point at time 0.5 ft
Figure 5.21 Predicted motion 3 with a via point at time 0.75 ft
Figure 5.22 Predicted motion 3 with a via point at time ft
Figure 5.23 Predicted joint splines for motion 3 with a via point
For curved and obstacle-avoidance movements, it is assumed that the hand is
required, in the motion between the end points, to pass through a third
specified point (for example, an artificial intelligence engine can provide a
via point so as to go around an obstacle by examining its diameter). Given
start and end points and a third via point, a curved path in Cartesian space is
first generated by the method introduced in Section 5.1, which also solves for
the time at which the hand passes through the via point. Figures 5.18 to 5.22
give snapshots of such a movement, in which the digital human begins moving
from an initial posture with all joint angles at zero. The big green sphere is
the via point to be passed through, and the curve is the Cartesian path
predicted using the minimum jerk model. As before, the small spheres mark
where distance constraints are enforced during the optimization for the joint
splines. The straight line is shown only for easy comparison with the curved
path. As the figures show, the proposed method and algorithm can predict
smooth and graceful movements of the upper body even for a nonlinear (curved)
path. The joint profiles shown in Figure 5.23 also indicate the smooth
movement of each joint.
Figure 5.24 Predicted motion 4 with a via point at time 0
Figure 5.25 Predicted motion 4 with a via point at time 0.3 ft
Figure 5.26 Predicted motion 4 with a via point at time 0.5 ft
Figure 5.27 Predicted motion 4 with a via point at time 0.7 ft
Figure 5.28 Predicted motion 4 with a via point at time ft
Figure 5.29 Predicted joint splines for motion 4 with a via point
Another curved movement with different start, end, and via points is shown in
Figures 5.24 to 5.28. The required via point (big green sphere) is closer to
the end, and the predicted joint profiles clearly show this asymmetry (Figure
5.29). Overall, the proposed method successfully predicts upper body movement
in both Cartesian and joint spaces.
5.6 Conclusions
Our proposed method for predicting joint profiles is general and broadly
applicable to any type of path trajectory, linear (straight) or nonlinear
(curved). Nonlinear paths apply to obstacle avoidance problems, where
trajectories deviate from the typical linear point-to-point motion with minimum
jerk. A mathematical formulation applicable to any number of DOFs has been
developed and demonstrated, in which the joint profiles are predicted as
functions of time. Each joint profile is defined by a smooth B-spline whose
control points are calculated using a novel optimization-based algorithm.

Given any start or end points, or a via point (a predefined intermediary
point), our algorithm first checks whether these points fall within the
reachable workspace of the digital human model. Once deemed within reach, a
Cartesian path (including the time to traverse the via point) is predicted
based on a minimum jerk cost function, followed by the calculation of joint
profiles characterized by B-splines within an iterative optimization
algorithm, where the objective is to minimize a discomfort function and
maximize a smoothness function. The experimental code associated with this
formulation was implemented in a graphical real-time simulation interface, and
the algorithm has been shown to be robust and extensible to a real-time
environment.
CHAPTER 6
OPTIMIZATION-BASED LAYOUT DESIGN
The field of ergonomics has received considerable attention since the
appearance of digital human modeling software, in which digital mannequins are
manipulated to answer specific questions such as: "Is this target point
reachable? Is this load too heavy? Is this posture comfortable?" While this
type of software has provided ergonomists and designers a valuable tool, it has
not been able to provide a best design scenario. The objective of our series of
studies has been to add functional capability to digital human modeling code by
enabling a new type of answer to the above questions, such as: "No, I cannot
reach this target point, but here is the best design that would allow the most
comfortable reach." The layout problem is defined as the method whereby
positions of target points are specified in the environment surrounding a
human. The problem is of importance to ergonomists, vehicle/cockpit packaging
engineers, designers of manufacturing assembly lines, and designers concerned
with the placement of levers, knobs, controls, etc. in the reachable workspace
of a human, but also to users of digital human modeling code, where digital
prototyping has become a valuable tool.
In this chapter, we present a method and accompanying code to address the layout
problem from an optimization point of view. The general question is: where should
specific target points be located in the reachable space while optimizing one or more cost
functions subject to a number of constraints? A global optimization method,
simulated annealing, has been used to implement the algorithm and to yield global
solutions. Because layout design must take into consideration a relatively large number
of issues, some subjective while others more objective, we believe that our
proposed optimization-based solution introduces a new method for making more educated
design decisions. We have successfully implemented this design for a small
project at Oral-B Laboratories, where a manufacturing cell involves an operator who
handles three objects, some with the left hand and others with the right hand.
In this chapter, we will use the same human model introduced in Chapters 2 and
3. We then develop the formulation in terms of the cost functions and constraints necessary
to address the underlying problem. The constrained optimization problem is then
solved numerically using a hybrid optimization method (gradient-based optimization and
simulated annealing), and an example is illustrated.
6.1 Problem Definition
Figure 6.1 A layout problem
Given the dimensions and joint ranges of motion of a human situated in a pre-
specified position (e.g., seated in the workplace or in the cockpit of an aircraft), it is
required to determine (or calculate) the coordinates of a number of target points that will
be designed into the environment (Figure 6.1). These target points could be any
combination of levers, buttons, control knobs, switches, etc., placed while optimizing an
objective function. Such objective functions could be discomfort, energy, effort, torque,
dexterity, or any combination of the cost functions developed in Chapter 3.
6.2 Human Model
Figure 6.2 15-DOF model for upper body from waist up to hand
Table 6.1 DH Table for upper body
We will use the 15-DOF model (from the waist up to the right hand) developed in
Chapter 2. Since in layout design some target points are required to be reached by the
right hand and others by the left hand, we also need to model the left arm. Like the right
arm, the left arm (from shoulder to hand) is modeled with 9 DOF. With a model
symmetric to that shown in Figure 6.2 (with axes z9, z10, z11, z13, z14 having directions
opposite to those shown in Figure 6.2), we obtain a DH table for the left arm as shown in
Table 6.1(b), where joint variables q16 to q24 are symmetric to q7 to q15 of the right arm.
The DH table for the right arm is repeated in Table 6.1(a). In total, we thus have a
24-DOF model for the upper body including both arms. In actual calculations, however,
since we do not require the right hand and the left hand to reach their respective target
points at the same time, any single posture prediction uses only the 15-DOF model for
either the right or the left arm. When calculating a posture with the left arm, we can
either use the DH table in Table 6.1(b), or use the DH table for the right
arm and then map the right-arm joint variables to the left arm by exploiting the symmetry
between the two arms.
6.3 Layout Design
Because many parameters enter into the layout problem and because no single
solution is better than all others, the problem lends itself to optimization. We
shall develop a formulation suitable for implementation in numerical optimization
algorithms. To this end, an objective function, also called a cost function, must be
optimized (maximized or minimized) subject to imposed constraints.
6.3.1 Cost Functions and Constraints
The cost functions could be discomfort, energy, effort, torque, dexterity, or any
combination of the cost functions developed in Chapter 3. In particular, the discomfort
and potential energy cost functions are briefly repeated here.
The discomfort cost function measures the level of discomfort as the deviation
from the most neutral position of a given joint. Let q_i^N be the neutral position of a
joint measured from the starting home configuration (i.e., from the position and
orientation specified in the DH table). The displacement from the neutral position is then
given by q_i − q_i^N. Because discomfort is usually felt more in some joints than in
others, we also introduce a weight w_i to stress the importance of one joint versus
another. The total discomfort of all joints is then characterized by the function

f_discomfort(q) = Σ_{i=1}^{n} w_i (q_i − q_i^N)          (6.3.1)

where w_i is a weight assigned to each joint for the purpose of giving importance
to joints that are typically more affected than others.
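A direct evaluation of Eq. (6.3.1) can be sketched as follows. The `deviation` argument is an assumption added in this sketch: the displacement is passed through `abs` so that deviations in opposite directions cannot cancel; the exact form follows the Chapter 3 definition.

```python
def discomfort(q, q_neutral, w, deviation=abs):
    """Joint-displacement discomfort, Eq. (6.3.1): sum_i w_i * dev(q_i - q_i^N).

    q, q_neutral, w are equal-length sequences of joint values, neutral
    positions, and per-joint weights. `deviation` (here abs, an assumption)
    is applied to each displacement before weighting.
    """
    return sum(wi * deviation(qi - qn) for qi, qn, wi in zip(q, q_neutral, w))
```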
Now consider the potential energy exerted by a limb. Each link (e.g., the
forearm) has a specified center of mass. The vector from the origin of the link's
coordinate system to the center of mass is denoted ^i r_i, where the matching superscript
and subscript indicate that the vector is resolved in the link's own coordinate system.
The total potential energy f_potential is the sum of all individual potential energies P_i.
In order to determine the position and orientation of any one part of the arm, we use
the 4×4 homogeneous transformation matrices ^{i−1}A_i that relate one part to another.
Let g be the gravity vector; then for the first body part in the
chain, the potential energy is P_1 = −m_1 g^T ^0A_1 ^1r_1. The energy contribution of
the second body part in the chain is P_2 = −m_2 g^T ^0A_1 ^1A_2 ^2r_2. For a complete
chain, the total potential energy is given by

f_potential(q) = Σ_{i=1}^{n} P_i = Σ_{i=1}^{n} (−m_i g^T ^0A_i ^i r_i)          (6.3.2)

where g = [0  0  −g]^T is the gravity vector.
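The chained evaluation of Eq. (6.3.2) can be sketched for a simple planar revolute chain using pure-Python homogeneous transforms. This is an illustration under stated assumptions, not the thesis model: gravity acts along −y here, each joint is a z-rotation followed by a translation along the link, and the center-of-mass vectors are resolved in each link's own frame.

```python
import math

def rot_z(theta, a):
    """Homogeneous transform Rz(theta)*Tx(a): rotate about z, then move a along the new x."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0, a * c],
            [s,  c, 0.0, a * s],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(A, r):
    """Apply a 4x4 homogeneous transform to a 3D point r."""
    v = list(r) + [1.0]
    return [sum(A[i][k] * v[k] for k in range(4)) for i in range(3)]

def potential_energy(thetas, lengths, masses, coms, g=9.81):
    """Eq. (6.3.2): f = sum_i -m_i g^T (0A_i  i r_i) for a planar revolute chain.

    coms[i] is the center of mass resolved in link i's own frame
    (e.g., [-a/2, 0, 0] places it at the link midpoint).
    """
    grav = [0.0, -g, 0.0]                  # gravity vector, along -y (assumption)
    A = [[float(i == j) for j in range(4)] for i in range(4)]
    total = 0.0
    for theta, a, m, com in zip(thetas, lengths, masses, coms):
        A = mat_mul(A, rot_z(theta, a))    # 0A_i = 0A_{i-1} * (i-1)A_i
        p = apply(A, com)                  # center of mass in the base frame
        total += -m * sum(gi * pi for gi, pi in zip(grav, p))   # P_i = -m_i g.p
    return total
```

With a two-link chain raised vertically, the result reduces to the familiar sum of m g h over both centers of mass.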
In the layout problem, the positions of some target points are usually
constrained to a sub-region, e.g., a line, a surface, or some specified region, so we
introduce target region constraints that restrict each target point to a specified space.
Some target points are also required to be close to each other, which is formulated as
distance constraints. In the process of optimizing the positions of several target points,
we also need to constrain them not to overlap. Moreover, each target point needs to be
reachable with every joint within its limits. Typically, we have the following types
of constraints:
(1) Target Region Constraints

p_i ∈ R_i          (6.3.3)

where R_i is a sub-region of the Cartesian space.

(2) Distance Constraints

‖p_i − p_j‖ ≤ d_k,   i, j, k = 1, 2, 3, …;  i ≠ j          (6.3.4)

where d_k is some positive number.

(3) Overlapping Constraints

‖p_i − p_j‖ ≥ μ_k,   i, j, k = 1, 2, 3, …;  i ≠ j          (6.3.5)

where μ_k is some positive number.

(4) Reachability Constraints

‖p_i − x(q)‖ ≤ ε_i          (6.3.6)

where ε_i is some small positive number.

(5) Joint Ranges of Motion

q_i^L ≤ q_i ≤ q_i^U,   i = 1, …, n          (6.3.7)

where the superscripts L and U denote the lower and upper limits, respectively.
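Constraints (6.3.4), (6.3.5), and (6.3.7) reduce to simple numeric tests; a minimal sketch follows (function and argument names are illustrative, not the thesis code):

```python
import math

def dist(p, q):
    """Euclidean distance between two points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def layout_feasible(points, d_max, mu_min, joint_q, q_low, q_up):
    """Check distance (6.3.4), overlapping (6.3.5), and joint-limit (6.3.7) constraints.

    points: candidate target point positions; d_max / mu_min: the d_k and mu_k
    bounds; joint_q with q_low, q_up: joint values and their limits.
    """
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = dist(points[i], points[j])
            if d > d_max or d < mu_min:    # too far apart, or overlapping
                return False
    return all(lo <= q <= up for q, lo, up in zip(joint_q, q_low, q_up))
```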
The above constraints are enforced on different design variables during the
optimization. For example, the constraints in Eqs. (6.3.3) to (6.3.5) act on the
target point positions, while Eq. (6.3.6) can constrain both the positions of the target
points and the joint variables. Eq. (6.3.7) constrains only the joint variables.
In addition, some constraints may appear in the cost functions, depending on the
optimization scheme, which is introduced next.
6.3.2 Optimization Scheme
We use a combination of a gradient-based optimization method (BFGS) and a global
optimization method (simulated annealing) to obtain an optimized layout. The overall
optimization flowchart is shown in Figure 6.3, where the main loop is simulated
annealing, and posture prediction is done by BFGS.
[Flowchart: the simulated annealing main loop initializes its parameters, then repeatedly performs a cycle of random moves, each along one coordinate direction, accepting or rejecting each candidate point x′ according to the Metropolis criterion (if Δf ≤ 0, accept x_{i+1} = x′; otherwise accept with probability p(Δf) = exp(−Δf/T), where Δf = f(x′) − f(x_i) and T is the temperature) and recording the best point reached so far. For each target point, the posture prediction algorithm is called to predict a best posture and evaluate distance, discomfort, and energy. After N_s cycles the step vector v is adjusted and the cycle count reset; after N_T step adjustments the temperature is reduced and the current point is set to the optimum; the loop ends when the stopping criterion |f* − f*_{k−u}| ≤ ε, u = 1, …, N_ε, together with f* − f_opt ≤ ε, is satisfied.]

Figure 6.3 Optimization scheme
Simulated annealing (SA) is another popular global optimization method. Similar
to the genetic algorithm that we used in Chapter 3, it is an iterative random search
procedure with adaptive moves along coordinate directions (Corana et al., 1987). It
permits uphill moves under the control of a probabilistic criterion, thus tending to escape
local minima. The SA optimization algorithm can be considered analogous to the
physical process by which a material changes state while minimizing its energy: a slow,
careful cooling brings the material to a highly ordered, crystalline state of lowest energy,
whereas rapid cooling yields defects and glass-like intrusions inside the material. As
shown in Figure 6.3 (the main loop is simulated annealing), the algorithm proceeds
iteratively: starting from a given point x_0, it generates a succession of points
x_0, x_1, …, x_i, … tending toward the global minimum of the cost function. New
candidate points are generated around the current point x_i by applying random moves
along each coordinate direction in turn. The new coordinate values are uniformly
distributed in intervals centered on the corresponding coordinates of x_i; half the width of
these intervals along each coordinate is recorded in the step vector v. If a point falls
outside the definition domain of the cost function f(x), a new point is randomly generated
until a point belonging to the definition domain is found. A candidate point x′ is
accepted or rejected according to the Metropolis criterion:

If Δf ≤ 0, then accept the new point: x_{i+1} = x′;
else accept the new point with probability p(Δf) = exp(−Δf/T),

where Δf = f(x′) − f(x_i) and T is a parameter called temperature.

The SA algorithm starts at some "high" temperature T_0 given by the user. A
sequence of points is then generated until a sort of "equilibrium" is approached; that is, a
sequence of points x_i whose average value of f reaches a stable value as i increases.
During this phase the step vector v is periodically adjusted (every N_s cycles) to better
follow the function's behavior. The best point reached is recorded as x_opt. After
thermal equilibration, the temperature T is reduced (every N_T step adjustments) and a
new sequence of moves is made starting from x_opt, until thermal equilibrium is reached
again, and so on. The process is stopped, according to a stopping criterion, at a
temperature low enough that no further useful improvement can be expected.
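The loop described above can be sketched compactly as follows. This is a simplified illustration under stated assumptions, not the thesis implementation: the step vector is held fixed rather than adapted, and the cooling schedule is geometric.

```python
import math
import random

def simulated_annealing(f, x0, bounds, t0=10.0, cooling=0.85,
                        n_cycles=20, n_temps=50, seed=0):
    """Simplified SA in the spirit of Corana et al. (1987): random moves along
    one coordinate direction at a time, Metropolis acceptance, geometric cooling,
    and a restart from the best point after each temperature reduction."""
    rng = random.Random(seed)
    x = list(x0)
    fx = f(x)
    x_opt, f_opt = list(x), fx
    T = t0
    step = [(hi - lo) / 2.0 for lo, hi in bounds]      # step vector v (fixed here)
    for _ in range(n_temps):
        for _ in range(n_cycles):
            for i in range(len(x)):                    # move along one coordinate
                cand = list(x)
                cand[i] += rng.uniform(-step[i], step[i])
                lo, hi = bounds[i]
                if not (lo <= cand[i] <= hi):          # outside definition domain
                    continue
                df = f(cand) - fx
                if df <= 0 or rng.random() < math.exp(-df / T):  # Metropolis
                    x, fx = cand, fx + df
                    if fx < f_opt:
                        x_opt, f_opt = list(x), fx     # record best point so far
        T *= cooling                                   # reduce temperature
        x, fx = list(x_opt), f_opt                     # restart from the optimum
    return x_opt, f_opt
```

For a smooth bowl-shaped cost, the returned point lands close to the global minimum.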
The combination of SA and BFGS is as follows. In SA (the main loop), the design
variables are the positions of the target points. At each SA iteration, a set of target point
positions is generated within the target region (the ranges of the design variables); if
some of them do not satisfy the overlapping constraints, a large penalty is assigned as the
cost function value. If they do not overlap, the target points are given as input to the
posture prediction algorithm (the BFGS optimization method for fast prediction
developed in Chapter 3, combining discomfort and distance as the cost function, with
joint limits as constraints, the reach envelope divided into 16 sections, and an initial point
within each section pre-calculated using GA). If a target is reachable, a set of joint
variables is found, the corresponding discomfort and potential energy are calculated, and
a weighted sum of these measures, together with the distance to the target point, is
evaluated as the SA cost function. If the posture prediction algorithm finds that the target
is not reachable, another large penalty is assigned as the cost function value. Thus SA is
used to find an optimized layout of target points globally, while BFGS (the posture
prediction algorithm) is used to find a natural posture associated with reaching a specific
target. Although BFGS is itself a local optimization method, since we have already
picked an initial posture globally (Chapter 3), it can provide a fast and near-global
solution.
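The SA-level cost evaluation described above can be sketched as follows. The posture predictor is stubbed out as a callable argument, and the penalty value and the 1000× distance weight (the weight follows the example in Section 6.4) are illustrative assumptions of this sketch:

```python
import math

PENALTY = 1e6  # large penalty for infeasible layouts (value is an assumption)

def layout_cost(targets, mu_min, predict_posture):
    """SA cost for one candidate layout.

    targets: 3D target points proposed by the SA outer loop.
    predict_posture: callable standing in for the BFGS posture predictor;
    returns (discomfort, potential, distance) or None if unreachable.
    """
    # Overlapping constraints: any two targets closer than mu_min -> penalty.
    for i in range(len(targets)):
        for j in range(i + 1, len(targets)):
            if math.dist(targets[i], targets[j]) < mu_min:
                return PENALTY
    total = 0.0
    for p in targets:
        result = predict_posture(p)
        if result is None:                 # target not reachable -> penalty
            return PENALTY
        discomfort, potential, distance = result
        total += discomfort + potential + 1000.0 * distance
    return total
```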
6.3.3 Comparison of GA and SA
Since both genetic algorithm (GA) and simulated annealing (SA) have been used
to solve optimization problems (GA is used in Chapter 3), this section will briefly
compare the two popular global optimization methods.
Theoretically, SA and GA are quite close relatives, and much of their difference is
superficial. The two approaches are usually formulated in ways that look very different
and use very different terminology: with SA, one talks about solutions, their
costs, and neighbors and moves, while with GA, one talks about individuals (or
chromosomes), their fitness, and selection, crossover and mutation. And while in GA a
chromosome is thought of as a "genotype" that only indirectly encodes a solution, a
notion traditionally specific to GA, there is no reason why a similar approach could
not be applied in SA as well.
Basically, SA can be thought of as a GA whose population size is one: the
current solution is the only individual in the population. Since there is only one
individual, there is no crossover, only mutation. This is in fact the key difference
between SA and GA: while SA creates a new solution by modifying only one solution
with a local move, GA also creates solutions by combining two different solutions.
Whether this actually makes the algorithm better or worse is not straightforward, but
depends on the problem and the representation.
It should be noted that both SA and GA share the fundamental assumption that
good solutions are more likely to be found "near" already known good solutions than by
randomly sampling the whole solution space. If this were not the case for a
particular problem or representation, they would perform no better than random
sampling. What GA does differently is that it treats combinations of two existing
solutions as being "near", on the assumption that such combinations (children)
meaningfully share the properties of their parents, so that a child of two good solutions is
more likely to be good than a random solution. Again, if for a particular problem or
representation this is not the case, then GA provides no advantage over SA.
Meanwhile, it should be noted that the relative weight given to mutation and
recombination is a crucial parameter affecting what a GA actually does. If mutation is
the dominant way of creating new solutions, then GA in effect acts as a parallel version
of SA, in which several solutions are independently improved. From the practical
viewpoint, it should be noted that for some problems, evaluating solutions near
an existing solution may be very efficient, which may give SA a big performance
advantage over GA if evaluating recombined solutions is not as efficient.
For a more practical comparison, an empirical study should be performed by
solving the same problem with the same execution time; such a study is not conducted here.
6.4 An Example
Figure 6.4 A manufacturing cell
We now consider the layout of target points in the reachable workspace of a
human. This could be the placement of control knobs in an aircraft, levers in an assembly
line, or knobs in a vehicle. In this example, we have a manufacturing cell involving an
operator who handles three objects: a tuff bin, a button, and pliers. As shown in Figure 6.4,
the operator sits in the chair and handles the products passing by on the conveyor
belt on the table. During the operation, the operator needs to press the button, pick up
parts from the tuff bin, and take the pliers to assemble parts on the passing unfinished
products. The tuff bin is supposed to be reached by the left hand, while the button and
pliers are to be reached by the right hand. We shall calculate the spatial positions of these
three objects (shown on the table in Figure 6.4) in the reachable workspace of a human,
while minimizing a cost function comprising discomfort, potential energy, and
reachability.
The table is 152 cm long and 62 cm wide; its height is 107 cm from the
floor and 11 cm above the waist of the operator. The operator's waist is 15 cm away
(horizontally) from the table's edge. The origin of the coordinate system is located at the
center of the table with the directions shown in Figure 6.4. There are several target
region constraints. Since there is a conveyor belt on the table, the tuff bin has to be in the
space above the table, and because the tuff bin is to be reached by the left hand, it is
constrained to the left half-space. The button and pliers are required to be on the edge of
the table in front of the operator and to be reached by the right hand, so they are
constrained to the right half of the table edge. It is also required that the distance between
the button and the pliers be no less than 10 cm (an overlapping constraint). In total, we
have five design variables for SA, namely the x, y, z coordinates of the bin and the
y coordinates of the button and the pliers, with the following constraints:
0 ≤ x_Bin ≤ 31
0 ≤ y_Bin ≤ 76
28 ≤ z_Bin ≤ 48
−76 ≤ y_Button ≤ 0
−76 ≤ y_Pliers ≤ 0
‖P_Button − P_Pliers‖ ≥ 10          (6.4.1)
where P_Bin = (x_Bin, y_Bin, z_Bin)^T, P_Button = (−31, y_Button, 0)^T and
P_Pliers = (−31, y_Pliers, 0)^T are the positions of the bin, button, and pliers,
respectively. In each iteration, when the new design variables attempted by SA satisfy
the constraints of Eq. (6.4.1), the positions of these three objects are sent to the posture
prediction algorithm as target points, whose design variables are the 15 joint variables of
either the right arm or the left arm. Postures with minimized distance to the target and
minimized discomfort are predicted, and the associated values of distance to the target,
discomfort, and potential energy are returned to SA. SA is then able to evaluate its cost
function, a combination of distance, discomfort, and potential energy (the values returned
by the posture prediction algorithm), where the reachability constraints have been
enforced by converting them to distance measurements and including them in the cost
function in the following form:

f = Σ_{i=1}^{3} f_discomf^i + Σ_{i=1}^{3} f_potential^i + 1000 × Σ_{i=1}^{3} distance_i          (6.4.2)

where f_discomf^i, f_potential^i and distance_i are the discomfort, potential energy, and
distance to the specific target of the posture reaching the bin, button, and pliers,
respectively.
Results of this layout design are shown in Figure 6.5, where

P_Bin = [1.3940, 40.1072, 28.8529]^T,  f_discomf^Bin = 4.3448,  f_potential^Bin = 10.7160,  distance_Bin = 0.0000

P_Button = [−31.0, −58.4235, 0.0]^T,  f_discomf^Button = 1.1521,  f_potential^Button = 8.8921,  distance_Button = 0.0006

P_Pliers = [−31.0, −48.4231, 0.0]^T,  f_discomf^Pliers = 2.2602,  f_potential^Pliers = 9.4202,  distance_Pliers = 0.0003
and the total cost value for this layout is 37.68. The postures reaching the tuff bin and
pressing the button are shown in Figures 6.6 and 6.7, respectively. It is observed that each
object can be touched easily and comfortably, and a successful layout design has been
created by a numerical optimization algorithm.
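The reported total can be checked directly against Eq. (6.4.2): summing the three discomfort values, the three potential energies, and 1000 times the three distances reproduces the reported cost of 37.68 to within rounding of the published values.

```python
# Per-target values reported above (bin, button, pliers).
discomfort = [4.3448, 1.1521, 2.2602]
potential  = [10.7160, 8.8921, 9.4202]
distance   = [0.0000, 0.0006, 0.0003]

# Eq. (6.4.2): f = sum of discomforts + sum of potentials + 1000 * sum of distances.
total = sum(discomfort) + sum(potential) + 1000.0 * sum(distance)
# total is about 37.685, within 0.01 of the reported 37.68 (the published
# per-target values are themselves rounded).
```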
Figure 6.5 Designed layout
Figure 6.6 Posture reaching tuff bin
Figure 6.7 Posture pressing button
6.5 Conclusions
A rigorous formulation for optimization-based layout has been presented. The
layout problem occurs in the field of ergonomic design and encompasses many solution
sets, one of which is to be selected. We have presented a method for calculating a best
solution based on minimizing or maximizing a cost function or a combination of cost
functions.
It was shown that this formulation is implementable in computer code. Indeed,
we believe this type of formulation augments the capabilities offered by digital human
modeling software, facilitating digital prototyping, shortening lead times, and saving
costs.
CHAPTER 7
COMPUTER INTERFACE DESIGN
While not the central objective of our research, for the sole purpose of
visualizing our results and validating them against experimental data (as shown in
Chapters 3, 5 and 6), we have developed a computer interface for visualizing our digital
humans. Although a handful of commercial software systems (Jack, Safework,
Ramsis) have the ability to visualize and manipulate a human model, these systems are
incapable of predicting postures and creating motions automatically, since they all
require users to manipulate the model. The system that we have developed is a posture
and motion prediction plug-in to 3D Studio Max (3DS), where a 24-DOF model of a
seated operator (with two arms; 15 DOF for the upper body with one arm) can be loaded
into an environment to show a posture reaching a target and to simulate motion from one
target to another automatically, just by clicking a button. The 3D Studio Max software
system is well established and used by many industries, including automotive, Hollywood
motion pictures, and the military, to recreate photorealistic scenes. We have used this
software system to model our human and to allow real-time interactivity with our
Fortran-based code. A user of our system is able to predict and visualize postures and
motions of a human through masked calls to Fortran code. Through the interface, the
interaction with the posture and motion prediction algorithms running in the background
gives the digital operator the intelligence to create postures and motions automatically.
This chapter will present the details of the interface and its implementation. The
overall interface is shown in Figure 7.1. The interface includes 30 buttons in total, which
can be divided into four main modules: posture prediction, motion prediction,
visualization, and layout. In the following, we will first introduce the modeling, after
which the four main modules will be introduced in detail.
Figure 7.1 Posture and motion prediction computer interface in 3D Studio MAX
7.1 Modeling
Based on our 15-DOF human model developed in Chapter 2 (shown in Figure
7.2), the bones and joints for the upper body and right arm were first created. The 15-DOF
model created in 3D Studio Max is shown in Figure 7.3. The model creation procedure
in 3D Max is as follows. First, four body parts called pelvis, stomach, lower chest and
upper chest (the visible boxes inside the upper body shown in Figure 7.3) were created to
represent the spine of the upper body. The big box (pelvis) residing at the waist has 3
DOF, standing for joints 1 to 3 shown in Figure 7.2 with axes z0, z1 and z2. The
other three boxes (stomach, lower chest and upper chest) stand for joints 4 to 6 with
axes z3, z4 and z5. A link was created from the neck to the shoulder. Then a joint called
shoulder was created (the big sphere on the shoulder), which has 5 DOF, representing joints
7 to 11 with axes z6 to z10 (two translational and three rotational) shown in Figure 7.2.
Two more joints called elbow and wrist were created (the spheres at the elbow and wrist in
Figure 7.3), each with 2 DOF, representing joints 12 to 15 with axes z11 to z14 in
Figure 7.2. Two links representing the upper arm and lower arm were created to connect the
shoulder to the elbow and the elbow to the wrist. An end-effector was also created on the thumb
of the hand for better visualizing its position. Each joint and link is located such that the
model in 3D Max has exactly the same dimensions as those shown in Figure 7.2. At this
point, all the bones of the model (upper body with right arm) have been created. The same
procedure was applied to the creation of the left arm.
Figure 7.2 15-DOF model of the torso, shoulder, and arm
Figure 7.3 15-DOF model in 3D Max
Figure 7.4 Hierarchy of the bone structure
After the bones of both left and right arms were created, we linked the bones into a
forward kinematics hierarchy as shown in Figure 7.4, where the pelvis bone is at the root of
the hierarchy and the "L" at the end of a bone name denotes a bone of the left arm.
Therefore, a child object inherits the transforms of its parent, and the parent inherits the
transforms of its ancestors all the way up the hierarchy to the root object.
Skin was created on top of the bone structure. In the first step, objects mimicking
the shapes of the upper body, neck, head, left and right arms, and hands were created around
the bones. These objects were then merged into a single object, which was converted
to an editable mesh called the skin. Next, holes in the skin were checked and
repaired to make sure the 3D surface is continuous. Finally, a skin
modifier was applied to the skin, attaching all the bones to the skin so that the
skin can deform according to the motion of the bone structure.
Once we had created a model in 3D Studio Max to represent our digital human,
the four modules of posture prediction, motion prediction, visualization, and layout were
written to give this human the intelligence to interact with users and to predict and
show postures and motions. The interface was written in MAXScript, and it calls the
corresponding executable files (written in Fortran and precompiled) at
run time. Next, we introduce the implementation of each module.
7.2 Posture Prediction
The function of this module is to predict and show postures in real time. As
shown in Figure 7.5, all the buttons of the posture and motion prediction interface are
displayed under a tab called posture prediction. The buttons belonging to the posture
prediction module are Load, Load2Arms, Cost Function, Predict, LeftPredict, Animate,
Home, RightHome and LeftHome.

Load and Load2Arms load the model developed in the section above into the
environment: Load loads the 15-DOF model with the right arm, and Load2Arms loads the
24-DOF model with both arms.

Cost Function opens a floater (Figure 7.6) containing a dropdown list that
enables users to select a different cost function to use when predicting a posture.
Whenever a selection is made, a number representing the chosen cost function is
written into a file called choice.dat, which is read when the posture prediction algorithm
is executed.
Predict and LeftPredict are the main functions of this module. As shown in
Figure 7.5, there are two spheres in the scene representing the target points: the red
sphere stands for the target for the right hand and the blue one for the target for the left
hand. Users can move a target anywhere in 3D space, just by dragging it either
along the x, y or z direction or in any of the xy, yz or zx planes (see the Transform Gizmo
shown on the red sphere in Figure 7.5). When Predict is clicked, the interface writes the
current position of the target point into the target.inp file and then executes the posture
prediction algorithm developed in Chapter 3 for real-time prediction. The algorithm
reads the target position, along with the choice of cost function, and performs the
real-time calculation. The interface then polls the flag until it finds that the flag has been
updated, which means the calculation has terminated. If the flag shows that a solution has
been found, joint variables are read from the output of the posture prediction algorithm
and the posture is updated by translating or rotating the corresponding joints. If the target
point is outside the reachable workspace of the human, the flag indicates this and a
message such as "Unreachable target point!" is displayed to the user with the posture
unchanged. The flowchart of the graphics interface is shown in Figure 7.7 and that of the
posture prediction algorithm in Figure 7.8. A scenario with an unreachable target point is
shown in Figure 7.9. Predict predicts and shows the posture for the right hand to touch a
target, while LeftPredict does so for the left hand. A predicted posture for the left hand
reaching a target is illustrated in Figure 7.10.
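The file-based handshake between the interface and the solver can be sketched in Python (the real interface is MAXScript calling a Fortran executable; target.inp and the flag values −1, 2 and 4 follow the text and Figures 7.7–7.8, while the file names flag.dat and joints.out and the polling details are assumptions of this sketch):

```python
import os
import time

def request_posture(target, workdir=".", timeout=10.0):
    """Write the target point, reset the flag, and poll until the solver answers.

    Returns the list of joint angles, or None if the target is unreachable
    (flag > 3). The real interface launches the precompiled Fortran
    executable between resetting the flag and polling.
    """
    with open(os.path.join(workdir, "target.inp"), "w") as f:
        f.write(" ".join(str(c) for c in target))
    flag_path = os.path.join(workdir, "flag.dat")
    with open(flag_path, "w") as f:
        f.write("-1")                      # -1: calculation in progress
    # ... launch the posture prediction executable here ...
    deadline = time.time() + timeout
    while time.time() < deadline:
        flag = int(open(flag_path).read().strip() or "-1")
        if flag >= 1:                      # solver finished
            if flag > 3:                   # flag = 4: unreachable target
                return None
            joints = open(os.path.join(workdir, "joints.out")).read().split()
            return [float(q) for q in joints]
        time.sleep(0.05)
    raise TimeoutError("posture prediction did not respond")
```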
Animate animates the motion from the home configuration (all joints at 0,
shown in Figure 7.5) to the predicted posture by linearly interpolating the joint variables
at the two ends to create in-between keyframes.
Home, RightHome and LeftHome set the full posture, only the right arm, or only
the left arm, respectively, to the home configuration, and delete the corresponding
keyframes.
Figure 7.5 Posture prediction interface
Figure 7.6 Cost function interface
[Flowchart: on clicking Predict, the interface outputs the position of the target point, writes −1 to the flag, and runs the posture prediction algorithm; it then polls while flag < 1. When the flag is set, if flag > 3 the interface outputs the message "Unreachable target point!"; otherwise it writes 0 to the flag, reads the joint angles, and updates the posture; end.]

Figure 7.7 Posture prediction interface
[Flowchart: begin; read the position of the target point and the choice of cost function; if the target is inside the reach envelope, decide which region it belongs to, set the initial values accordingly, call BFGS with combined discomfort, or energy, or discomfort + energy, and distance as the cost function, write the result to the data file, and set flag = 2; otherwise set flag = 4; end.]

Figure 7.8 Real-time posture prediction algorithm
Figure 7.9 Unreachable target point
Figure 7.10 Prediction for left arm
7.3 Motion Prediction
Figure 7.11 shows the interface for motion prediction module, which includes
MotionPredict, LeftMotion, ViaPoint, CurveMotion, LeftCvMotion, JointSpline, Joint,
ShowHuman, ShowCurves, DeletePath, DelViaPoint, DeleteCurves and DeleteHuman.
The main function of this module is to predict a path between a start and an end point and
the corresponding joints profiles which will allow the hand to follow the path.
MotionPredict predicts a path with minimum jerk from any initial position to the
target and the joint profiles enabling the hand to follow the path for upper body with right
arm. Users just need to click this button to initiate the prediction. Once the button is
clicked, the current hand position (represented by end-effector) and target position is
187
written to a file called start_end.inp. The interface then calls the motion prediction
algorithm developed in Chapter 5 and checks the flag until it finds that the flag has been
set, indicating the calculation is done. If the target point is outside the reachable
workspace of the human, the flag signals this and the message "Unreachable target
point!" is displayed to the user. Otherwise a solution has been found: by reading the
output of the motion prediction algorithm, the predicted path from the initial point to
the target point is drawn, with the actually traveled path shown by small spheres. The
keyframes of each joint B-spline are also read, so that an animation of the upper body
motion is created while the hand follows the path. The motion prediction interface is
shown in Figure 7.12 and a brief flowchart of the motion prediction algorithm is shown in
Figure 7.13. The update flag used in the interface guarantees that the positions of the end
points have been output to the data file before the motion prediction algorithm opens
the file and reads from it. Like MotionPredict, LeftMotion predicts the motion
performed by the upper body with the left arm.
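The file-and-flag handshake between the interface and the algorithm can be sketched in-process. In the thesis the two sides are separate programs (the algorithm is compiled Fortran) sharing files; here both are plain functions. The flag convention follows the flowcharts (0 = request pending, 2 = solution found, 4 = target unreachable); the flag file name and the reach test are illustrative assumptions.

```python
import os
import tempfile
import time

workdir = tempfile.mkdtemp()
inp = os.path.join(workdir, "start_end.inp")
flag_file = os.path.join(workdir, "flag.dat")       # hypothetical flag file

def interface_request(start, target):
    with open(inp, "w") as f:                       # write hand and target positions
        f.write(" ".join(map(str, list(start) + list(target))))
    with open(flag_file, "w") as f:
        f.write("0")                                # signal: computation requested

def algorithm_step(reach_radius=2.0):
    values = [float(v) for v in open(inp).read().split()]
    target = values[3:]
    unreachable = sum(x * x for x in target) ** 0.5 > reach_radius
    with open(flag_file, "w") as f:
        f.write("4" if unreachable else "2")

def interface_poll():
    while True:                                     # the interface "checks the flag"
        flag = int(open(flag_file).read())
        if flag >= 2:
            return "Unreachable target point!" if flag > 3 else "solution ready"
        time.sleep(0.01)

interface_request([0.0, 0.0, 0.0], [0.5, 0.3, 0.2])
algorithm_step()
status = interface_poll()
```

The update flag described above plays the same role as writing the input file before resetting the flag here: the consumer never reads data that has not been fully written.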
ViaPoint creates a via point whose position can be changed later. The via point
is used to predict curved motion in which the hand must pass by the via point.
CurveMotion and LeftCvMotion predict a curved path through a via point, and
the corresponding joint profiles, for the upper body with the right or left arm. As
mentioned in Chapter 5, this type of motion is important when an obstacle is present.
The basic flowcharts for the interface and for curved motion prediction are similar to
those shown for MotionPredict in Figures 7.12 and 7.13, with some differences: the
position of the via point must be output and input, a curved path with minimum jerk is
predicted (see Chapter 5), and the curved path data are created, output, and drawn
(in MotionPredict the path is straight and is known directly from the end points). A
curved path predicted for the left hand passing by a via point (green sphere) is illustrated
in Figure 7.14, where the interface also draws a straight path from the start point to the
end point for comparison with the curved path.
Figure 7.11 Motion prediction interface
[Flowchart: on Click to predict, the interface sets the update flag to 1, outputs the positions of the end-effector and the target point, sets the flag to -1, and runs the motion prediction algorithm; it then polls the flag until it is set, and either reports "Unreachable target point!" (flag > 3) or writes 0 to the flag, reads the keyframes of the joint profiles and the predicted path data with the actual pass-by positions, creates the motion animation, and draws the predicted and traveled paths.]
Figure 7.12 Motion prediction interface flowchart
[Flowchart: Begin; wait until the update flag is set; read the positions of the end points; if the target is inside the reach envelope, predict a path with minimum jerk, optimize the control points of the 15 joint B-splines for maximum smoothness subject to the path constraints on the hand, calculate the keyframes on the joint B-splines from the control points, output the keyframes, the predicted path data, and the actual pass-by positions, and write flag = 2; otherwise write flag = 4; End.]
Figure 7.13 Motion prediction algorithm
Figure 7.14 Predicted curved motion of upper body with left arm
Figure 7.15 Predicted joint profiles for a curved motion
JointSpline shows the predicted joint profiles. It changes the viewport, reads
the keyframes of each joint profile, and draws continuous joint B-splines. The joint
splines for the predicted curved motion above (Figure 7.14) are shown in Figure 7.15.
A floater opens when users click Joint. Users can select all joints or a specific
joint (from 1 to 15) to see the corresponding joint profile or profiles (Figure 7.15).
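The reconstruction these buttons display can be sketched with a clamped B-spline. This is a hedged illustration using SciPy; the spline order, the clamped uniform knot layout, the control values, and the keyframe count are assumptions, not the thesis settings.

```python
import numpy as np
from scipy.interpolate import BSpline

def joint_profile(control_points, order=4, n_keyframes=50):
    # Rebuild one joint's B-spline from its control points and sample keyframes.
    k = order - 1                                  # spline degree
    n = len(control_points)
    # Clamped knot vector: the curve starts and ends at the end control points.
    knots = np.concatenate([np.zeros(k),
                            np.linspace(0.0, 1.0, n - k + 1),
                            np.ones(k)])
    spline = BSpline(knots, np.asarray(control_points, float), k)
    t = np.linspace(0.0, 1.0, n_keyframes)
    return t, spline(t)

t, theta = joint_profile([0.0, 0.1, 0.5, 0.9, 1.2, 1.0])
```

Sampling the spline densely like this is exactly what turns smooth joint curves into the discrete keyframes an animation system can play back.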
ShowHuman and ShowCurves toggle between the view of the human
environment and the joint splines. ShowCurves can also be used to update the profiles
when a new joint selection is made; the difference between ShowCurves and
JointSpline is that JointSpline reads the joint spline keyframes from the data file every
time, while ShowCurves updates the display from the user-selected joint splines
already in memory.
DeletePath, DelViaPoint, DeleteCurves and DeleteHuman delete the path, the via
point, the joint profiles, or the human, respectively.
7.4 Visualization
The posture prediction and motion prediction modules introduced above
are for real-time prediction. This module is for visualizing pre-calculated results. The
functions belonging to this group are ShowPosture, PathAnim, DrawPath and
DrawSpline.
As shown in Figure 7.16, clicking any of the buttons prompts a window for
users to pick a data file. ShowPosture visualizes any posture; the posture data
file should include the values of the 15 joint variables and/or the position of the target
in the x0y0z0 coordinate system shown in Figure 7.2.
PathAnim reads the name and location of a user-selected data file and calls an
executable file to calculate the corresponding joint B-splines, telling it where to load the
data file. The executable file (written in Fortran and compiled) reads the order of the B-
splines and all the control points of the 15 joint B-splines, then calculates the joint splines
and outputs the keyframes of each B-spline. The interface then reads the keyframes and
creates an animation of the motion by animating each joint. The data file name should
begin with "cpv", and the data should list the order of the B-splines as the first value, then
all control points for the first joint, followed by all the control points for the second joint,
and so on.
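A reader of that "cpv" layout could be sketched as follows. Only the field order (B-spline order first, then per-joint control points) comes from the text; the number of control points per joint, the function name, and the demo values are illustrative assumptions.

```python
import os
import tempfile

def read_cpv(path, n_joints=15, points_per_joint=8):
    # First value: order of the B-splines; remainder: control points per joint.
    values = [float(v) for v in open(path).read().split()]
    order = int(values[0])
    data = values[1:]
    assert len(data) == n_joints * points_per_joint, "unexpected file length"
    return order, [data[j * points_per_joint:(j + 1) * points_per_joint]
                   for j in range(n_joints)]

# Demo with synthetic data: order 4, then 15 x 8 control-point values.
demo = os.path.join(tempfile.mkdtemp(), "cpv_demo.dat")
with open(demo, "w") as f:
    f.write("4 " + " ".join(str(i / 10) for i in range(15 * 8)))
order, control_points = read_cpv(demo)
```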
DrawPath reads the user-selected path data file and draws a straight or a curved
path, with the actually traveled path shown as small spheres. The data file name should
begin with "v" to differentiate it from the data files of control points. For a straight path,
the first value is 0, meaning "straight", and the second value is the number of spheres
(actually passed points) to be drawn; these are followed by the positions of the start and
end points and then the positions of the passed points. For a curved path, the first value
should be 1, meaning "curved", followed by the positions of 99 intermediate points on
the curved path (for drawing it), the number of spheres, the positions of the end points,
and the passed points.
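A parser for that "v" file layout could look as follows. Only the field order comes from the text; the grouping of flat values into 3-D points and all names are illustrative assumptions.

```python
import os
import tempfile

def chunk3(flat):
    # Group a flat list of coordinates into 3-D points.
    return [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]

def read_path(path):
    v = [float(x) for x in open(path).read().split()]
    if v[0] == 0.0:                               # straight path
        n = int(v[1])                             # number of spheres to draw
        return {"kind": "straight",
                "endpoints": chunk3(v[2:8]),      # start point, end point
                "passed": chunk3(v[8:8 + 3 * n])}
    n_curve = 99                                  # fixed count of curve samples
    curve = chunk3(v[1:1 + 3 * n_curve])
    rest = v[1 + 3 * n_curve:]
    n = int(rest[0])
    return {"kind": "curved", "curve": curve,
            "endpoints": chunk3(rest[1:7]),
            "passed": chunk3(rest[7:7 + 3 * n])}

# Demo: a straight path from (0,0,0) to (1,0,0) with two passed points.
demo = os.path.join(tempfile.mkdtemp(), "v_demo.dat")
with open(demo, "w") as f:
    f.write("0 2  0 0 0  1 0 0  0.3 0 0  0.7 0 0")
path = read_path(demo)
```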
DrawSpline works the same way as PathAnim, except that instead of animating
the joints, it draws the joint B-splines for all joints or for a specific joint (specified by
Joint) on the display.
Figure 7.16 Visualization interface
7.5 Layout
Figure 7.17 Layout interface
The layout interface is shown in Figure 7.17. This module includes LayoutEnv,
ShowLayout, SetTarget and DelLayoutEnv. Its main function is to load a layout
environment and show the layout design results.
LayoutEnv prompts a window to let the user select a .max file containing the
layout environment to be loaded. As shown in Figure 7.17, a layout environment with
an assembly table and three objects (a bin, a button, and pliers) has been loaded.
ShowLayout illustrates the layout by reading the results from the layout design;
for the environment shown in Figure 7.17, the three objects are then positioned.
SetTarget enables users to visualize postures or motions reaching the target
points in a layout setting. As shown in Figure 7.17, users can set the target to be any
object in the layout environment, specify whether the left or the right hand reaches the
object, and then click the corresponding button (from the three modules introduced
earlier) to automatically create the posture or motion.
DelLayoutEnv simply deletes the previously loaded layout environment.
CHAPTER 8
CONCLUSIONS AND RECOMMENDATIONS
8.1 Conclusions
A general method has been presented to predict realistic postures and motions of
humans in a virtual world. Kinematic modeling of realistic human anatomy is
presented. The concept of task-based posture prediction is introduced and the
corresponding algorithms are developed. Methods for realistic prediction of path
trajectories and human
motions are proposed and rigorous algorithms are developed. Real-time algorithms for
predicting postures and motions are also studied and developed. Finally, based on the
concept of task-based posture prediction, a method for layout design is introduced.
A 15 degree-of-freedom (DOF) model of the human torso and arm is developed.
Detailed physics-based modeling of human joints may require knowledge far beyond
what is currently available; however, for all practical purposes, it has been shown that
approximate modeling of gross human motion, for the purposes of human motion
simulation or ergonomic analysis, is possible. The DH approach, adapted from the field
of kinematics, has been used in our development of the human model and shown to be a
systematic method that is easy to implement.
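The DH convention mentioned above assigns each joint one homogeneous transform, and the chain is the product of these matrices. The following sketch uses the standard DH matrix; the 2-joint example at the end uses illustrative parameters, not the thesis model.

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    # Standard Denavit-Hartenberg transform between consecutive link frames.
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Two revolute joints with unit link lengths: rotate 90 degrees, then extend.
T = dh_matrix(np.pi / 2, 0.0, 1.0, 0.0) @ dh_matrix(0.0, 0.0, 1.0, 0.0)
```

For the 15-DOF model, fifteen such factors would be multiplied in sequence to map the torso frame to the hand frame.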
A general methodology and an associated computational algorithm for predicting
realistic postures of digital humans are presented. The basic postulate is a task-based
approach: we believe that humans assume different postures for different tasks. The
underlying problem is characterized by the calculation (or prediction) of the joint
displacements of the human body in such a way as to accomplish a specified task.
Each task comprises a number of human performance measures that are
mathematically represented by cost functions. The cost functions are then optimized
subject to a number of constraints, including joint limits. The formulation is demonstrated
and validated against existing software systems and experimental data. The method is not
restricted to any number of degrees of freedom and provides a robust approach to realistic
posture prediction that can handle a biomechanically accurate model.
A fast and efficient scheme that combines a global optimization method (a
genetic algorithm) with a gradient-based method is introduced for predicting postures in
an on-line algorithm suitable for real-time applications, where digital humans are typically
used to evaluate digital mockups in a computer-aided engineering environment.
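The two-stage idea can be sketched as follows: a coarse genetic search supplies a starting point and a gradient-based method (BFGS) refines it. The multimodal test function and every GA parameter here (population size, truncation selection, Gaussian mutation) are illustrative choices, not the thesis settings.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def f(x):
    # Multimodal test function with many local minima.
    x = np.atleast_1d(x)
    return float(np.sum(x ** 2) + 3.0 * np.sum(1.0 - np.cos(2.0 * x)))

def genetic_seed(dim=2, pop=40, gens=30, lo=-5.0, hi=5.0):
    population = rng.uniform(lo, hi, size=(pop, dim))
    for _ in range(gens):
        fitness = np.array([f(x) for x in population])
        parents = population[np.argsort(fitness)[:pop // 2]]      # keep the best half
        children = parents + rng.normal(0.0, 0.3, parents.shape)  # mutate copies
        population = np.vstack([parents, children])
    return min(population, key=f)

seed = genetic_seed()                      # global, derivative-free stage
result = minimize(f, seed, method="BFGS")  # fast local refinement
```

The division of labor is the point: the genetic stage is slow but escapes local minima, while the gradient stage is fast but local, which is why the hybrid suits on-line use once good seeds are cached.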
The concept of a kinematically smooth trajectory is introduced, and a general
method and accompanying formulation for designing kinematically smooth path
trajectories of the human arm have been presented. It was shown that the problem can
be formulated as a set of differential-algebraic equations of index 2, and that well-
established Runge-Kutta numerical methods can be used to solve it. The rigorous
formulation is then implemented in code to calculate an initial configuration (an inverse
kinematic solution) of the arm that admits a smooth motion throughout the path, without
the interruption caused by a switching of inverse solutions.
A methodology to predict and simulate the path generated by humans in a natural
motion of the torso and upper extremity is presented. A mathematical formulation
applicable to any number of DOFs has been developed and demonstrated, in which the
joint profiles as functions of time are predicted while jerk, discomfort and smoothness
are taken into account. The end result is an optimization-based method that uses human
performance measures to calculate joint path trajectories that look and feel most natural.
While this work has been limited to 15 degrees of freedom of the upper body, the theory
presented herein is extendable to any part of the body that can be represented as
segmental links of a kinematic chain.
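For straight point-to-point motion, the minimum-jerk profile has a classical closed form (the Flash-Hogan result): a quintic in normalized time with zero velocity and acceleration at both ends. The thesis solves a more general optimization over joint B-splines; the sketch below shows only this textbook special case for intuition.

```python
import numpy as np

def min_jerk_path(p0, pf, n=101):
    # Sample the straight minimum-jerk path from p0 to pf at n time points.
    p0, pf = np.asarray(p0, float), np.asarray(pf, float)
    tau = np.linspace(0.0, 1.0, n)[:, None]        # normalized time in [0, 1]
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5     # zero velocity/accel at both ends
    return p0 + (pf - p0) * s

path = min_jerk_path([0.0, 0.0, 0.0], [0.3, 0.2, 0.5])
```

The same time-scaling function s(tau) gives the characteristic bell-shaped speed profile observed in natural reaching movements.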
Our proposed method for predicting joint profiles is general and is broadly
applicable to both linear (straight) and nonlinear (curved) path trajectories.
Nonlinear paths are applicable to obstacle avoidance problems, where trajectories
deviate from the typical linear point-to-point motion with minimum jerk. The
experimental code associated with this formulation was implemented in a graphical real-
time simulation interface, and the algorithm was shown to be robust and extendable to
a real-time environment.
A method and accompanying code to address the layout problem from an
optimization point of view have been presented. The layout problem occurs in the field of
ergonomics design and encompasses many solution sets, one of which is to be selected.
We have presented a method for calculating the best solution by minimizing or
maximizing a cost function or a combination of cost functions. It was shown that this
formulation is implementable in computer code and introduces a new method for making
more educated design decisions. Indeed, we believe this type of formulation
augments the capabilities offered by digital human modeling software.
Finally, a graphical interface for visualizing digital humans and predicting postures
and motions in real time has been implemented.
8.2 Recommendations
Difficulties encountered and potential research topics for expansion of this work
are addressed below.
(1) A 15-DOF model of the human torso and arm has been developed and shown to be
capable of modeling gross human motion. However, more accurate modeling of
human joints based on biomechanics is necessary if the human model is to handle
various tasks. In particular, joint limits directly affect the realism of the predicted
postures and motions, so their values should be further studied both theoretically and
experimentally.
(2) While this work has been limited to 15 degrees of freedom of the upper body, the
theory and methods presented herein are extendable to any part of the body that can
be represented as segmental links of a kinematic chain. It is possible to apply these
methods to posture and motion prediction of other parts of the human body,
such as the lower extremities and fingers, for which models need to be established.
(3) We have developed several simple cost functions to be used in our optimization
formulation to represent human performance measures. However, it is evident that
many more cost functions, and more elaborate mathematical descriptions of human
performance measures, are required for various tasks and realistic simulation. It is
desirable to have a database that relates each task to a set of cost functions.
(4) A task comprises one or more cost functions, which renders the problem one of multi-
objective optimization. How to normalize and weight each cost function in a final
representative multi-objective cost function, and how to predict postures or motions
subject to a multitude of constraints, need to be investigated further.
(5) For posture prediction, in order to implement the real-time algorithm, the workspace
of the human has been approximated and divided into 16 sections; within each
section a point is selected and the corresponding posture is predicted by global
optimization, which then serves as the initial point for the on-line posture prediction
algorithm. Although this method has proven effective, unrealistic postures can
sometimes be predicted. To overcome this difficulty, a more accurate
approximation of the workspace needs to be developed. Moreover, the number of
sections may need to be increased, and the handling of target points that fall on the
boundary between two sections needs to be studied carefully.
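The section-lookup idea can be sketched as follows. The thesis uses 16 sections of an approximated workspace; here 8 coordinate-sign octants stand in for brevity, and the stored seed postures are placeholders for the off-line globally optimized ones.

```python
import numpy as np

def octant_index(p):
    # Map a target point to one of 8 sections by coordinate signs.
    x, y, z = p
    return (x >= 0) * 4 + (y >= 0) * 2 + (z >= 0) * 1

# One precomputed 15-joint seed posture per section (placeholder values
# standing in for the off-line global-optimization results).
SEEDS = {i: np.full(15, 0.1 * i) for i in range(8)}

def initial_posture(target):
    # Targets exactly on a section boundary fall into one octant arbitrarily;
    # the boundary handling raised above is left open here as well.
    return SEEDS[octant_index(target)]

q0 = initial_posture((0.4, -0.2, 0.7))
```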
(6) Digital humans are typically used to evaluate digital mockups in a computer-aided
engineering environment, which requires real-time motion prediction. We have
endeavored to develop fast and efficient algorithms for predicting postures and
motions suitable for real-time implementation, but the motion prediction algorithm
developed in Chapter 5 still needs about 18 seconds of computation on a 1.8 GHz
CPU, so further improvement is needed to meet the real-time requirement.
(7) In this research, the rigid and flexible body dynamics of humans, walking, balancing,
modeling human cognitive behavior, and analysis of human motion under adverse
conditions have not been considered; each of these could be an exciting topic for
future research.
(8) A human model with a single set of anthropometric dimensions has been used
throughout this work. For practical applications, however, particularly in developing
a large-scale software system, a large database with the dimensions of all percentiles
needs to be built, and the corresponding interface to the database and the initial
points used in the optimization need to be implemented.
Zhao, J. and Badler, N., 1994, “Inverse kinematics positioning using nonlinear programming for highly articulated figures”, ACM Transactions on Graphics, Vol. 13, No. 4, pp. 313-336.