Abstract--Laborers in factories all across the world perform physically intensive tasks daily. With every lift they put themselves at risk of injury. Many still-frame modeling systems exist that can assess the different stresses and strains on a laborer's body given his or her position, but these models are only usable by experts and do not allow for real-time alerts. In 1995, companies in the United States lost $50 billion due to injured employee absences and compensation settlements. Companies are not only eager to reduce their overhead costs, but also aim to better society by offering more robust worker safety practices.
The focus of this project was to design a system that can be used in a training environment. Our system teaches employees whether their current lifting and carrying methods may be detrimental to their health, and is designed for longstanding employees as well as new hires.
This project's primary requirement was to implement a motion sensing device to aid in the analysis of ergonomics in an industrial environment. To do this we proposed to make use of Microsoft Kinect© sensors. The Kinect© is able to provide skeletal tracking at 30 frames/second for two individuals in the field of view. To develop the system we selected the Microsoft software development kit (SDK) from a large variety of alternative professional and open source SDKs because of its desirable features. A static ergonomic model was integrated with the Kinect© software. Multiple other software packages were assessed for compatibility with the Kinect© in an effort to enhance the Kinect's ability to recognize objects and humans. After development was complete, the system was tested by analyzing our system's output for different skeletal lift positions and comparing it to reference results.
Our system provides real-time ergonomic analysis of lifts performed by humans. This system lacks the ability to recognize specific individuals and objects necessary to customize the system to adequately evaluate a lift, and has not been tested in a factory environment. In the future we hope to implement a dynamic ergonomic model so that it can recognize whole movements or gestures which lead to injury, rather than recognizing a single position.
Our system successfully outputs a recommended weight limit as well as other measures of the strain on a worker's skeleton. In a training environment the system will help individuals correct the problems with their lifting motions.
Manuscript received April 2, 2012.
C.C. Martin, D.C. Burkert, K.R. Choi, N.B. Wieczorek, P.M. McGregor,
and R.A. Herrmann are Fourth-Year students of Systems and Information
Engineering at the University of Virginia.
P. A. Beling is an Associate Professor in the Department of Systems and
Information Engineering at the University of Virginia.
I. INTRODUCTION
Shipbuilders at Newport News Shipbuilding (NNS)
build the United States Navy's warships and submarines.
Often this work is labor intensive and physically demanding;
design and construction limitations subject builders to
ergonomically poor working conditions and a high risk of
developing musculoskeletal disorders (MSDs). Søgaard has
identified the causes of MSDs as repeated physical
movements, awkward or extreme posture, and large force
loads [1]. The result of MSDs on manufacturing workforces
is excessive sick leave, and the disorders even force some
workers to change jobs entirely. The enormous costs of
MSDs in shipbuilding and other industrial, labor intensive
processes have prompted companies to explore ways of
preventing these injuries and improve the wellbeing of their
employees.
II. BACKGROUND
NNS currently uses an expensive ergonomic analysis
process that requires experts to manually observe the
behavior of individual workers. The Occupational Safety &
Health Administration (OSHA) guidelines state that
shipbuilders should be trained in proper tool usage and
lifting technique, as well as the recognition of early stage
MSDs [2]. Ergonomic feedback to individual workers,
though, does not continue after training. The high cost of individual ergonomic analysis means that workers are not given feedback on their ergonomic performance on a day-to-day basis. An automated ergonomic
monitoring system would help prevent MSDs by alerting
workers to the risks at the moment they occur.
A Real-time Ergonomic Monitoring System using the Microsoft Kinect
Chris C. Martin, Dan C. Burkert, Kyung R. Choi, Nick B. Wieczorek, Patrick M. McGregor, Richard
A. Herrmann, and Peter A. Beling, Member, IEEE
Proceedings of the 2012 IEEE Systems and InformationEngineering Design Symposium, University of Virginia,Charlottesville, VA, USA, April 27, 2012
978-1-4673-1286-8/12/$31.00 ©2012 IEEE 50
Fig. 1. Kinect design and hardware components.
Microsoft released the Kinect© in 2010, a revolutionary consumer device able to capture depth data as well as video and audio by incorporating all the necessary hardware components into a very small package, as can be seen in Fig. 1 above. The Kinect gathers depth data by projecting a grid of infrared dots onto its surroundings and measuring the resulting position of each dot with its
infrared camera. A display of these dots can be seen in Fig.
2 below. For the first time, human motion can be gathered
and analyzed from three-dimensional data inexpensively and
without the need for subjects to wear special accessories on
the body. The Kinect platform promises to redefine what is
possible in the space of ergonomic tracking and monitoring
by allowing employees to be monitored at all times, without
the need for manual analysis.
Fig. 2. Picture taken by an infrared camera showing the many infrared
dots projected by the Kinect to produce a depth map.
III. PURPOSE AND SCOPE
Developing a system to prevent the occurrence of MSDs is
critically important for increasing employee satisfaction and
decreasing employer costs. An automated observation
system, if developed correctly, can provide a less labor
intensive, more robust method to accomplish these
objectives. While workers are lifting, they often are unaware
of the risk they are placing upon themselves through
improper technique. We hypothesize that injuries could be
prevented if workers were provided with information that
would allow them to recognize dangerous body positions and
actions. An automated observation system could be used to
achieve this high level objective by notifying the worker in
real time at the lift location, or by assisting educators as they
try to engrain proper technique and lift recognition skills
during training.
The Kinect is the primary technology used to measure
body position. The system utilizes one stationary Kinect
sensor positioned within the necessary visible range to view
the worker. Analysis of data from the Kinect sensor is
performed using the Kinect SDK provided by Microsoft.
This SDK does not include the ability to recognize objects or
gestures at this time. The scope of the initial development of
this system was limited to that of a real-time observation and
warning system, but the scope shifted and the system was
redesigned and developed to target training observation and
evaluation for a wide variety of factory environments.
IV. METHODOLOGY
We conducted several design iterations to continually
improve upon previous systems, each iteration following the
development process depicted in Fig. 3. The Microsoft SDK
forum and online technology review forums were used as
principal sources of information on programming the Kinect.
We evaluated multiple software development kits, such as
SoftKinetic and OpenNI, and the respective scripting
languages. We chose to develop using the Microsoft SDK
because it provided the best documentation and did not need
a calibration pose to activate skeletal tracking. With the
background research completed, we took a systematic
approach to designing the system to be implemented.
In defining the scope of the project, we conducted
analyses on the users and the tasks of the project. For task
analysis, the two main goals were real-time analysis of lifts
and a real-time alert system. Functional requirements to
support these goals were gathered from client interviews.
The level of importance for each functional (and non-
functional) requirement was classified as high, medium, or
low.
Our design integrates an ergonomic model and
dynamically displays useful data describing the worker on
the static imaging provided by the Kinect sensors. The first
prototype was the product of analyses of many design
alternatives. After completion of the first prototype, we
conducted usability and functionality tests to determine if the
system had been properly implemented, and to reduce the
number of false positives and false negatives. Once testing
was completed, the design was iterated to improve the
model, interface, and the overall system. The final system
prototype was based on three iterations of this process.
Fig. 3. Systems Design and Development Approach
A. Iteration 1
Our first objective was to use the Kinect to output joint angles of the user's body dynamically in real time. The system stored the angle output as well as provided a visual display of these angles on the skeletal image. The dynamic updating of joint angles provided a useful source of information when integrated with an ergonomic lifting model. In order to incorporate the new dynamic angles into a useful model, we considered multiple ergonomic lifting models, ultimately adopting the OSHA model because it contains helpful information on proper lifting technique and we decided that it was the most complete and suitable ergonomic model [3]. The OSHA model helps to determine a recommended weight limit (RWL) for lifts. The equations involved in determining this RWL utilize static images of hand position, distances to be traveled, and the frequency of the lifts. Some of the variables in the model are: horizontal
distance between hands and feet, vertical distance from hands to floor, total vertical distance of lift (initially set to a constant of 36 inches), asymmetry angle (the angle of twist of the torso), lifting frequency (set to 5 lifts per minute for a duration of less than 1 hour), and coupling multiplier (the grip on the lifted object, initially set to "good"). The values that were not automatically set were calculated by the Kinect. We adapted these equations and integrated them into the Kinect system in order to produce an RWL for the user. Our system provides users the ability to view a dynamically updating recommended weight limit based on changes in their joint angles. The RWL was shown at the bottom of the display screen.
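The RWL calculation described above can be sketched with the revised NIOSH lifting equation on which the OSHA model is based. This is a minimal illustration in inch/pound units, not the code from our system: the function name and example values are our own, and the frequency multiplier (0.80 for 5 lifts/minute under one hour) and coupling multiplier (1.00 for a "good" grip) are hard-coded from the standard tables rather than looked up.

```python
def recommended_weight_limit(h, v, d, a, fm=0.80, cm=1.00):
    """Revised NIOSH lifting equation (inch/pound units).

    h  -- horizontal distance from hands to the midpoint between the ankles (in)
    v  -- vertical height of the hands above the floor (in)
    d  -- vertical travel distance of the lift (in)
    a  -- asymmetry (torso twist) angle (degrees)
    fm -- frequency multiplier (0.80 assumes 5 lifts/min, < 1 hour duration)
    cm -- coupling multiplier (1.00 assumes a "good" grip)
    """
    lc = 51.0                             # load constant, pounds
    hm = min(1.0, 10.0 / h)               # horizontal multiplier, capped at 1
    vm = 1.0 - 0.0075 * abs(v - 30.0)     # vertical multiplier
    dm = 0.82 + 1.8 / d                   # distance multiplier
    am = 1.0 - 0.0032 * a                 # asymmetry multiplier
    return lc * hm * vm * dm * am * fm * cm

# A near-ideal lift: hands close in (10 in), at knuckle height (30 in),
# 36 in of travel, no torso twist.
print(round(recommended_weight_limit(10, 30, 36, 0), 1))  # 35.5
```

Each multiplier degrades the 51-pound load constant as the posture worsens, which is why moving the hands away from the body or twisting the torso lowers the on-screen RWL.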
Within iteration 1, we experienced many problems associated with the adoption of new technology that is updating and improving every day. The first issue we found was jumpiness in occluded joints: the Kinect appeared to lose accuracy when determining joint angles for joints that were hidden by another body part or not in direct view of the camera. The second problem occurred when the user performed certain movements, in which case the tracking would cause the skeletal output to jerk about in different directions and often required a reset of the system.
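The joint angles driving the model above can be recovered from any three tracked joint positions with a dot product. The sketch below is a minimal illustration with hypothetical coordinates, not code from our system (the Kinect SDK reports joint positions in meters, but nothing here depends on the units):

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b, in degrees, formed by segments b->a and b->c.
    a, b, c are (x, y, z) joint positions, e.g. shoulder, elbow, wrist."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(v1[i] * v2[i] for i in range(3))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# A right angle at the elbow (hypothetical coordinates):
shoulder, elbow, wrist = (0, 1, 0), (0, 0, 0), (1, 0, 0)
print(joint_angle(shoulder, elbow, wrist))  # 90.0
```

The occlusion problem noted above hits exactly this computation: when one of the three joints is inferred rather than tracked, its position jumps from frame to frame and the angle jumps with it.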
B. Iteration 2
After going through the systematic process of design, we went through a second iteration to improve the system based on testing and reviews provided by the client. We also investigated the use of multiple Kinect sensors to improve the accuracy of the output, and adapted the code and the user interface to reflect the use of two Kinect sensors in producing the recommended weight limit of the user on screen.
The original intent of using multiple Kinects was to place them strategically around the user and have all the Kinects produce RWLs simultaneously. In different cases, it would be desirable to have the Kinects intelligently switch to the device with the greatest accuracy. However, it was not possible to incorporate such a design, and we were limited to picking which Kinect would display the recommended weight limit; both could not be displayed simultaneously. Furthermore, the system automatically recognized the user once he or she stepped into the field of view and determined the joint angles in real time from static images provided by the Kinect sensors. Although the system dynamically updated the joint angles, the RWL was calculated from static images using the previously implemented OSHA Lower Back Model.
The model can easily be switched to a different model without drastically changing the system. This adaptability and simplicity led us to incorporate the model as opposed to the other alternatives. The final prototype of the second iteration had the OSHA integrated model with a user interface that displayed the recommended weight limit
Fig. 4. Third iteration system interface.
at the top right corner. The system was able to automatically recognize the human skeletal system and dynamically update the recommended weight limit at up to 30 frames per second. In simple and intuitive tests, such as moving the arms away from the body and twisting the body, the recommended weight limit decreased and increased, respectively, as expected.
There were still lingering problems from the first design in
this iteration as well. For instance, interference from a sample box held by the user made it significantly harder for the Kinect to recognize joints. Similarly, jumpiness in occluded
joints was still a problem when the joints were hidden behind
the body or other objects. Another problem that we faced
was the smoothness of the tracking. Because the sensor was
dynamically updating the recommended weight limit for
every frame, the output was very erratic at times; the skeleton would even appear to jerk in different directions with certain movements. We decided to focus on improving
its smoothing equation in the next iteration. In all of these
problems, we noticed that the results had extreme bounds
and accuracy diminished drastically. In this iteration, we
were unable to figure out a way for the Kinect to reorient
itself, and as a result the Kinect could only view a person
from a perpendicular angle at waist height. One of the
goals for future iterations will be to allow viewing from
different axes depending on the desired orientation.
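One common remedy for frame-to-frame jitter of the kind described above is an exponential moving average over each joint coordinate. The sketch below is illustrative only; the smoothing factor and the use of a plain EMA are our assumptions here, not the smoothing equation ultimately used in the system:

```python
class JointSmoother:
    """Exponential moving average over a stream of 3-D joint positions."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha   # weight given to the newest frame, in (0, 1]
        self.state = None    # last smoothed position

    def update(self, pos):
        """Fold one raw (x, y, z) reading into the smoothed estimate."""
        if self.state is None:
            self.state = tuple(pos)
        else:
            a = self.alpha
            self.state = tuple(a * new + (1 - a) * old
                               for new, old in zip(pos, self.state))
        return self.state

# A jittery x-coordinate is pulled back toward its recent history:
smoother = JointSmoother(alpha=0.3)
for frame in [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.1, 0.0, 0.0)]:
    smoothed = smoother.update(frame)
```

A lower alpha damps jitter more strongly at the cost of added lag, which is the same trade-off any smoothing of a 30 frames/second skeletal stream must make.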
C. Iteration 3
The third iteration, shown in Fig. 4, marked many changes
not only to our technical system but also to the understanding
of objectives. The technical changes mostly honed the
features included in the previous iteration. The other changes
were fleshed out by conversations with our client, who
proposed separating the overall system into two pieces, a
training system and an alert system. The training system will
focus on educating employees of proper lift and carry
techniques. The program will run in a controlled
environment, and thus will not need the adaptability of the
"alert" system. The alert system will be used in the factory to
allow real time warnings while employees perform their lifts
in the factory. During the third iteration we realized that, given the time constraints, we needed to focus mostly on the training system side of the project. The training program was
determined to be easier to evaluate and test given the lack of
access to the factory floor during development. The
evaluation and testing phase was completed after the third
iteration.
The technical changes during the third iteration
were also heavily influenced by client interaction. The client
described a specific interest in counting the time workers'
hands spent above the shoulders and below the knees,
and we successfully added these features. The client also
described some desired features to consider for the future,
including better lift evaluation, carry evaluation (distance
traveled), weight estimates for the user, and better
integration of multiple Kinects. We focused on some of these
objectives including displaying a recommended weight limit
for two workers simultaneously as well as smoothing the
output from the Kinect, enhancing lift evaluation.
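The hands-above-shoulders counter requested by the client reduces to a per-frame comparison of y-coordinates with the time accumulated across frames. A minimal sketch, assuming the 30 frames/second skeletal stream and hypothetical per-frame joint dictionaries (the key names are ours, not the SDK's):

```python
FRAME_DT = 1.0 / 30.0  # Kinect skeletal stream runs at 30 frames/second

def time_hands_above_shoulders(frames):
    """Seconds during which either hand was above its shoulder.

    frames: iterable of dicts mapping 'left_hand', 'right_hand',
    'left_shoulder', and 'right_shoulder' to (x, y, z) positions.
    """
    total = 0.0
    for f in frames:
        above = (f['left_hand'][1] > f['left_shoulder'][1] or
                 f['right_hand'][1] > f['right_shoulder'][1])
        if above:
            total += FRAME_DT
    return total
```

The below-the-knees counter is the mirror image of this check (hand y-coordinate below knee y-coordinate), so both features share the same frame-accumulation loop.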
V. RESULTS AND DISCUSSION
As we progressed through design iterations, a number of
accuracy issues associated with the Kinect became apparent.
As mentioned previously, we dealt with jumpiness of the
perceived joint location, joint occlusion from the rest of the
body, and diminished accuracy when the Kinect is placed in
a suboptimal position. After focusing efforts on increasing
the accuracy of the Kinect, we began comparing the model
accuracy with both hand-calculated OSHA RWLs and
recommended lifting guidelines from the Ohio Bureau of
Workers' Compensation [4]. Using positions found on the
OBWC's website, we performed testing on 18 different
scenarios each from four camera viewpoints. The results of
these tests provided us valuable insights regarding accuracy.
With the aid of interaction plots, we analyzed how the aspects of different testing scenarios affected each other. In Fig. 5 below, the interactions of lift origin positions, torso twists, distance of hands from body, and camera position are plotted according to the magnitude of the error between the Kinect model output and an ideal reading from the OSHA model. One of the most apparent
observations from the interactions is how the Kinect showed
significantly less error when the hands were held out further
in front of the body. More interesting still was how twisting
and the placement of the camera interacted with each other.
When the camera is not looking at the test subject straight
on, or the subject is twisting away from the camera, the
accuracy decreases. Also surprising was that the Kinect output was sometimes just as accurate when the person stood with their back to the camera.
Fig. 5. Interaction plot for error in Kinect accuracy
Next we compared the results that the Kinect was
displaying with another lifting model and found that the
OBWC model tended to be more conservative than our
implemented OSHA model. The OBWC's model
recommends that nothing be lifted from the floor unless the
hands are close to the body and the torso is not twisting, thus
the error was highest when testing the floor lifting scenarios
as seen in Fig. 6. Hand position did not have the same large
effect as it did when testing against the OSHA model, yet
twisting displayed a similar shape but at a different error
magnitude. In these tests the large twist provided the greatest
error. The valuable knowledge gained in testing will allow us to continue developing the most accurate system possible even after the submission of this paper.
Fig. 6. Interaction plot of system error against the OBWC model.
VI. FUTURE WORK
The developed system may serve as a foundation for a
more complete and robust system to be developed and
implemented in the future. With an intensified focus on improving the training process, a much more controlled system environment has emerged, increasing the feasibility
of integrating multiple Kinect devices. This functionality will
improve the observational accuracy of the Kinect and
provide a solution to object interference while determining
skeletal location. To reduce the manual labor required to
operate the system, object and human recognition
functionality could be implemented. This would allow for
the system to automatically store personalized information
for each individual worker. Compounding the benefit of
these functionalities would be the ability for the system to
automatically export recorded training information to a
database, providing the ability to generate detailed analytical
reports instantaneously. Introducing these features to the
developed system will help further accomplish the goal to
provide high quality, efficient, and informative training to
workers thereby reducing the occurrence of MSDs and
employer costs.
VII. CONCLUSIONS
Results show that the Kinect holds the promise of being a
strong platform from which an ergonomic model can be
built. The Kinect can be incorporated into an effective and
efficient system to accurately aid in the prevention of back
injuries, but at this time the developed model will require
some additional functionality and modification to maximize
the achievement of this objective. We have found that currently there is both stronger demand and more robust available functionality to use the Kinect device and its output as a training tool as opposed to an employee alert system;
however this does not exclude the Kinect from later use in
this area.
Developing an observational ergonomic system requires
the input and approval of many stakeholder groups, making the development difficult and extremely iterative.
Fortunately, Microsoft will be continuously updating the
Kinect hardware and software to include a wider range of
functionality helping both developmental groups and their
clients. Using this improved technology, a future
developmental team may be able to take the previously
developed base system and build upon it to create a fully
automated training system preventing back injuries the first
moment an employee enters training.
ACKNOWLEDGMENT
The authors are thankful and appreciative for the
contributions, guidance, and leadership provided by the
following individuals: Ed Suhler and Ben Cho of the
University of Virginia; Professor Barry Horowitz of the
University of Virginia; Richard Osgood of Huntington
Ingalls Industries.
REFERENCES
[1] K. Søgaard, "Occupational biomechanics of the upper extremities: a search for the cause and prevention of musculoskeletal disorders," Journal of Biomechanics, vol. 40, suppl. 2, 2007.
[2] Occupational Safety & Health Administration, Guidelines for Shipyards, 2008. [Online]. Available: http://www.osha.gov/dsg/guidance/shipyard-guidelines.html [Accessed March 2012].
[3] Occupational Safety & Health Administration, OSHA Technical Manual, Section VII, Chapter 1, 1995. [Online]. Available: http://www.osha.gov/dts/osta/otm/otm_vii/otm_vii_1.html [Accessed Mar. 2, 2012].
[4] Ohio Bureau of Workers' Compensation, Division of Safety & Hygiene, Lifting Guidelines. [Online]. Available: http://www.ohiobwc.com/employer/programs/safety/liftguide/liftguide.asp [Accessed April 2, 2012].