
Sensors & Transducers, Vol. 150, Issue 3, March 2013, pp. 79-89

79

Sensors & Transducers

© 2013 by IFSA
http://www.sensorsportal.com

Experimental Videogrammetry and Accelerometry for the 2D Analysis of Human Motion

Daniel CÔRREA and Alexandre BALBINOT

Instrumentation Laboratory (IEE) Electrical Engineering Department (DELET)

Federal University of Rio Grande do Sul (UFRGS) – Avenue Osvaldo Aranha, 103-203, Porto Alegre, 90035-190, Brazil

Tel.: +550215133084440 E-mail: [email protected], [email protected]

Received: 11 November 2012 / Accepted: 19 March 2013 / Published: 29 March 2013

Abstract: Virtual human body models are used in many applications that allow human-machine interaction. The measurement of the position of a particular limb of the human body is extremely important in many applications. Human behavior understanding typically involves the analysis or estimation of human body posture, as well as of the corresponding gestures. In order to contribute to this area, this paper presents the development of an experimental system for gait analysis during human walking based on videogrammetry, accelerometry, and virtual models. The experimental system captures the walking image using low-cost webcams, supported by colored markers placed on the targeted joints. When the subject walks, the system captures and processes the events and determines the angles of the body segments. In parallel, an accelerometer network distributed over the user's body records the same movements so that the measurements can be compared. Additionally, the results of the videogrammetry system were compared to the Spica Tech DMAS16 system. For result assessment, a virtual body model is animated in real time, with the data being stored in a database for future evaluations and analyses. The system proposed in this paper showed an average difference of 3.1°, has low cost, and is suitable for a wide range of applications. Copyright © 2013 IFSA.

Keywords: Webcams, Virtual models, ZigBee accelerometry, Limb position, Limb displacement.

1. Introduction

This paper offers a contribution to the characterization of the motion of human limbs using webcams, image processing techniques, and accelerometers that form a wireless network based on the IEEE 802.15.4 protocol. Characterizing the position or displacement of a particular limb of the human body is extremely important in many applications. Virtual human body models are used in many applications that allow human-machine interaction. The development of virtual human body models and their corresponding movements attracts great attention from professionals in healthcare, game development, and other fields [1-2]. Nevertheless, there is much work to be done, since the existing techniques need further improvement to adapt to the human body, which has complex structures: the geometry and kinematics of articulated movements are difficult to model, and the body is covered with deformable tissues such as skin and clothing. The study of kinematics and the measurement of movements during their execution are an important requirement in orthopedic rehabilitation [3] and in animation systems [4]. The motion of these models should be as realistic as possible. Therefore, the animation models should be flexible enough to allow interaction with the environment and with other objects or virtual beings in real time [5]. This flexibility can be obtained through specific animation techniques [6-7] or driven by physical controllers [8]. The development of virtual reality systems allows the creation of advanced human-machine interfaces, enabling 2D or 3D environments in which the user can manipulate and explore objects or environments as if they were real, as if the users were physically in contact with the object or within the environment.

Article number P_1160

An alternative approach to conventional gait analysis techniques, such as optoelectronic motion and force-plate analysis, involves the use of accelerometers attached to the body to measure segmental accelerations during walking [43]. Accelerometers have been used to measure a wide range of physiological characteristics [9-13], with emphasis on motor activities [14-18] such as walking [11-12] and running [19-24], as well as in studies of small movements, such as those in the temporomandibular region, mainly related to jaw opening and closing [25-27], in the characterization of respiratory disorders such as apneas [9, 28-29], in the study of human body impact and vibration [30-31], in the characterization of posture [32-33], and in movements performed during sleep [34-35].

Human gait is a motor behavior characterized by cyclical events that occur as an individual moves from one place to another [36-38]. Currently, biomechanical gait tests are used in the diagnosis of neuromuscular and musculoskeletal disorders. In such a test, the patient's movement during walking is recorded through one or more devices (video cameras, electromagnetic sensors, and accelerometers), and the kinematic variables of the movement are calculated from the recorded data. With cameras, the image of the user-patient's gait is captured during walking and all the variables associated with this movement are then explored. To facilitate the acquisition of three-dimensional signals, identification markers are attached to the user's body. Image-based gait analysis has been highlighted due to its convenience of use.

In view of the above, and aiming to contribute to this area, the present paper presents the development of a videogrammetry system based on webcams, whose data (joint angles) are compared to those of a wireless two-dimensional accelerometer system for human motion analysis in the sagittal plane during walking. Additionally, the data provided by the videogrammetry system were compared to the Spica Tech DMAS16 system. Moreover, the system is complemented by a virtual human body model for real-time test execution and monitoring. All the information entered into this virtual model is also stored in a database for further studies using the present system. Overall, the system is intended to determine the angles of the segments of the human body selected by the system operator during a gait test. It is important to stress that the proposed system is not intended for high-precision, high-cost gait tests (the most expensive items in such systems are high-resolution cameras and image processing systems), but rather as a low-cost alternative that enables the monitoring of events related to human gait in settings and clinics that cannot purchase expensive systems, thereby replacing manual goniometers with systems similar to the one proposed here.

2. Experimental Section

The developed system consists of a webcam-based videogrammetry module, a wireless accelerometer module, a two-dimensional virtual model developed in C using the OpenCV libraries, and a database. It is basically composed of an independent set of tools that can be used together or separately for testing. Fig. 1(a) shows the block diagram of the proposed system.

2.1. Videogrammetry Module

The precision of videogrammetry systems is typically associated with the quality, resolution, and capture speed (in frames per second) of the camera. Thus, the higher the required precision, the more expensive the equipment. One proposal is to use image processing techniques to make cheaper systems usable and to correct possible nonconformities. The webcam used is a Vtrex P-504 (120 fps). The videogrammetry module is designed to follow the user's movement through image capture and reproduce it on a virtual model, as well as to determine the angles of the body segments involved in the motion. Initially, we evaluated the luminous or reflective markers traditionally used in this area; however, due to their cost, we decided to use colored markers. As a supporting tool, we used the OpenCV library and image processing techniques, through which the images were acquired from the camera. Part of the code developed for capturing and displaying the image in a window follows:

int main(int argc, char** argv)
{
    CvCapture* capture_0 = cvCaptureFromCAM(0); // open camera 0
    IplImage* frame = 0;                        // workspace image

    cvNamedWindow("CAMERA", CV_WINDOW_AUTOSIZE);
    if (capture_0) {
        while (1) {
            // acquire image
            if (!cvGrabFrame(capture_0)) {
                break;
            }
            frame = cvRetrieveFrame(capture_0, 0);
            // display image
            cvShowImage("CAMERA", frame);
            cvWaitKey(10);
        }
    }
    // frame points to an internal buffer of the capture structure
    // and must not be released separately
    cvReleaseCapture(&capture_0);
    cvDestroyWindow("CAMERA");
    return 0;
}

In order to manipulate these images, an image processing routine was created that runs on every captured frame. This routine comprises the actual image acquisition, filtering, static background subtraction, color filtering, segmentation, identification of the markers, calculation of the angles between the markers, and the updating of the angles in the virtual model and in the database. Fig. 1(b) shows the steps of this process.

After capture, the frame is filtered with a Gaussian filter (Eq. (1)) to homogenize the colors in the image:

G(i, j) = (1 / (2πσ²)) · exp(−(i² + j²) / (2σ²)), (1)

where i and j are the coordinates of the filter matrix and σ is the chosen standard deviation (here, 1.5); this is a low-pass filter. Thus, the noise generated by the camera does not produce erroneous results, ensuring a more stable system.

The next step is the static background subtraction. This process removes the difference between the current image and a background image of the experiment location, producing an image of just the user and his/her movements. The first step is to convert the images to grayscale using 30 % of the red tone, 59 % of the green tone, and 11 % of the blue tone (Eq. (2)). These percentages reflect the human eye's sensitivity to the primary colors [40]:

y_gray(i, j) = 0.30·x_R(i, j) + 0.59·x_G(i, j) + 0.11·x_B(i, j), (2)

where x_R, x_G, and x_B are the RGB components of the image and i, j are the coordinates of the image matrix. After this step, each pixel of the current image is subtracted from the corresponding background pixel, yielding a binary image of the difference between the two images (Eq. (3)):

y_bin(i, j) = 1, if |y_gray(i, j) − y_bg(i, j)| > T; 0, otherwise, (3)

where y_bg is the grayscale background image and T is the subtraction threshold.
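A minimal sketch of Eqs. (2) and (3) in C follows; the function names and the threshold parameter T are illustrative assumptions:

```c
#include <stdlib.h>

/* Eq. (2): weighted grayscale conversion of one RGB pixel. */
unsigned char to_gray(unsigned char r, unsigned char g, unsigned char b)
{
    return (unsigned char)(0.30 * r + 0.59 * g + 0.11 * b);
}

/* Eq. (3): binary foreground mask; a pixel is 1 where the current
   grayscale frame differs from the static background by more than T. */
void subtract_background(const unsigned char *cur, const unsigned char *bg,
                         unsigned char *mask, int npix, int T)
{
    for (int i = 0; i < npix; i++) {
        int d = (int)cur[i] - (int)bg[i];
        mask[i] = (abs(d) > T) ? 1 : 0;
    }
}
```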

(a)

(b)

Fig. 1. (a) Block diagram of the proposed system, and (b) image processing steps.


With this binary image, a search is carried out to determine the region of interest (ROI) that contains the relevant information.
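The ROI search can be sketched as a bounding-box scan over the binary mask; this is a minimal illustration and the function name is an assumption:

```c
/* Scan a binary mask of h rows and w columns and return, through the
   output pointers, the bounding box of its nonzero pixels: the region
   of interest that contains the user. Returns 0 if the mask is empty. */
int find_roi(const unsigned char *mask, int w, int h,
             int *top, int *left, int *bottom, int *right)
{
    int found = 0;
    *top = h; *left = w; *bottom = -1; *right = -1;
    for (int i = 0; i < h; i++) {
        for (int j = 0; j < w; j++) {
            if (mask[i * w + j]) {
                if (i < *top) *top = i;
                if (i > *bottom) *bottom = i;
                if (j < *left) *left = j;
                if (j > *right) *right = j;
                found = 1;
            }
        }
    }
    return found;
}
```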

With the images resulting from the previous process, the search for the marker colors starts, either using default pattern colors or according to a calibration procedure that can be performed by the system operator. In this step, the image is converted from the default RGB representation to the HSV color standard. This transformation is given by the method proposed by [41] (Eqs. (4)-(6)):

y_H = 60·(x_G − x_B)/(Max − Min), if Max = x_R;
y_H = 60·(2 + (x_B − x_R)/(Max − Min)), if Max = x_G;
y_H = 60·(4 + (x_R − x_G)/(Max − Min)), if Max = x_B, (4)

y_S = (Max − Min) / Max, (5)

y_V = Max, (6)

where Max and Min are, respectively, the maximum and minimum of the RGB values [0.0, 1.0] at a given point of the image, and y_H, y_S, y_V correspond to the HSV components at each point of the image. The result of this transformation is a three-dimensional matrix with the HSV components. This matrix is processed by a routine that looks for the color values corresponding to the markers and creates a binary matrix with this information (Eq. (7)):

y(i, j) = 1, if H− ≤ x_H(i, j) ≤ H+ and S− ≤ x_S(i, j) ≤ S+ and x_V(i, j) ≥ 30; 0, otherwise, (7)

where H− and H+ are the thresholds for the hue matrix x_H, S− and S+ are the calibrated saturation thresholds for x_S, and x_V represents the light intensity. The threshold x_V(i, j) ≥ 30 allows the darker colors to be ignored. After this step, erosion is applied to the binary image to remove noise. The last step is the segmentation, which identifies the colored markers present in the image. The marker identification process searches for the marked points in the segmented image. In this process, the size of each marker and its central point are also determined, the latter by a simple average of the marker's points. These data are drawn on the original image, and the central points are stored in the variables used to calculate the angles. Once all the markers have been found, the angle calculation starts: it consists in determining the distance in pixels between the markers that delimit a particular limb (segment) and the angle of that segment, Eq. (8):

θ = arctan((i2 − i1) / (j2 − j1)), (8)

where i and j represent the coordinates (i, line and j, column) of the markers 1 and 2, associated with a particular body segment. It is noteworthy that the marker demarcation and the system interpretation to move the virtual model are associated with the initial settings selected by the system operator. The calculated angles are sent to the virtual model and to the database, using a function developed specifically for this purpose. The response time between the capture of a frame, its processing, and the corresponding movement execution through the virtual model is negligible and imperceptible to the human eye.

2.2. The Virtual Human Body Models

According to Pope, human body models describe both the kinematic properties of the body (the skeleton) and its shape and appearance (the flesh and skin). Most models describe the human body as a kinematic tree consisting of segments linked by joints [42]. Every joint has a number of degrees of freedom (DOF), indicating the directions in which it can move. The virtual model developed in this paper is a 2D model that allows real-time monitoring of two-dimensional movements, which can be saved for later use or comparison. 2D models are suitable for motion parallel to the image plane and are sometimes used for gait analysis [42].

This model was developed with the GLUT library (the OpenGL Utility Toolkit). Development of the body began with the creation of a small structure holding variables such as color, x angle, y angle, z angle, length, and thickness. From this basic structure, the human body was assembled from 123 segments nested inside each other, manipulating the previously described variables [44]. The basic code follows:

struct structure {
    float color_red;
    float color_blue;
    float color_green;
    float angle_x;
    float angle_y;
    float angle_z;
    float length;
    float r_thickness_superior;
    float r_thickness_less;
    float r_sphere;
    int light;
} part[124];

The entire virtual model has been built with the parameters of this structure and GLUT/OpenGL functions such as glRotatef(), glTranslatef(), and glScalef(). These routines transform an object (or coordinate system) by moving, rotating, stretching, shrinking, or reflecting it. The transformations are performed by multiplying the matrices generated by OpenGL with the coordinates of the objects in the scene. Fig. 2(a) shows some of the segments created and Fig. 2(b) the finished model.


To allow the virtual model to move, functions have been developed that allow its full manipulation according to the angles determined by the videogrammetry module. These functions directly alter the values of the segments in the structure, generating the corresponding movement. In total, eight functions were developed, which allow complete manipulation of the virtual model (see Fig. 3).

(a)

(b)

Fig. 2. (a) Some of the segments created for the virtual model (negative model): hand-arm segment, foot, spine, and whole body (front view) and (b) Finalized model: negative, with humanoid features applied to the negative model structure and with application of a complex polygonal mesh (texture).

Fig. 3. The eight functions that directly alter the values of the segments in the structure, generating the corresponding movement.

Of the eight functions shown in Fig. 3, two are main functions: read_angle_body and write_angle_body, with the other functions being complementary to these, or else, they reuse the main functions in their scope. The function write_angle_body is designed to directly enter the angle value in the basic structure of the model. The function read_angle_body is designed to read and return the values of the basic structure of the model.
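A minimal sketch of the two main functions against the part[] structure of Section 2.2 follows; the axis selector and the exact signatures are assumptions for illustration, and only the angle fields of the structure are shown:

```c
/* Reduced version of the model structure: only the angle fields. */
struct body_segment {
    float angle_x, angle_y, angle_z;
};
struct body_segment part[124];

/* write_angle_body: enter an angle value directly in the structure. */
void write_angle_body(int segment, char axis, float angle)
{
    if (axis == 'x')      part[segment].angle_x = angle;
    else if (axis == 'y') part[segment].angle_y = angle;
    else                  part[segment].angle_z = angle;
}

/* read_angle_body: read and return a value from the structure. */
float read_angle_body(int segment, char axis)
{
    if (axis == 'x') return part[segment].angle_x;
    if (axis == 'y') return part[segment].angle_y;
    return part[segment].angle_z;
}
```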

Finally, to connect the image processing system or the accelerometry system to the corresponding virtual model, the function write_angle_body is used. It allows the movement executed by the virtual model to be viewed on screen while an actual test runs.

2.3. Accelerometry Module

In human gait characterization, accelerometers can be placed at different positions of the locomotor system to determine the impact generated by the change in ground reaction force at the moment the foot touches the ground. The accelerometer has some advantages in movement assessment compared to force platforms and kinemetry systems, and its use is not restricted to a laboratory environment. Accelerometers can be small and lightweight, so that they capture even the smallest movements of the body, and are available in many types and models with different sensitivities [43].

To ensure the reliability of the results, each part of the system was carefully calibrated. Typically, there are two procedures for calibrating a capacitive accelerometer: static and dynamic. For static calibration, the accelerometer is placed in a stationary position and the output signal is compared with a known reference, such as the acceleration of gravity. In the absence of motion, the measured acceleration is the projection of the gravitational acceleration vector on the measurement axis. This also allows the accelerometer to be used as an inclinometer under static conditions.

Under static conditions, if the measurement axis is horizontal (perpendicular to gravity), the measured value is 0 g, while in vertical alignment the measured value is, for example, −1 g; when the orientation is reversed, the output is +1 g. From these two points it is possible to obtain the static calibration curve of the accelerometer, assuming linear behavior between the acceleration and the sensor output signal. This test determines the sensitivity and the output level equivalent to zero acceleration (offset). These parameters were determined by Eqs. (9) and (10), where V+1g and V−1g are the output voltages when the accelerometer is subjected, respectively, to +1 g and −1 g accelerations:

Sensitivity = (V+1g − V−1g) / 2, (9)


Offset = (V+1g + V−1g) / 2 (10)

For the accelerometer calibration, thirty repetitions of 1 s were made; the results are summarized in Table 1. In dynamic calibration, the measured value is the projection, on the sensitive axis, of the vector sum of the gravitational acceleration and the movement acceleration, as exemplified in Fig. 4. Therefore, the measured value depends on the orientation of the instrument relative to the movement and to the gravitational field.

Table 1. Accelerometer calibration.

Sensitivity (bits/g): 37.71
Offset (bits): 508.33
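The two-point calibration of Eqs. (9)-(10) can be sketched as follows; the sample outputs in the usage comment are hypothetical ADC counts chosen only to reproduce the values in Table 1:

```c
/* Eqs. (9)-(10): two-point static calibration. Sensitivity is half the
   span between the +1 g and -1 g outputs; offset is their midpoint.
   Units follow the inputs (ADC counts here, hence bits/g and bits). */
void calibrate(double v_plus_1g, double v_minus_1g,
               double *sensitivity, double *offset)
{
    *sensitivity = (v_plus_1g - v_minus_1g) / 2.0;
    *offset = (v_plus_1g + v_minus_1g) / 2.0;
}

/* Example (hypothetical readings): calibrate(546.04, 470.62, &s, &o)
   yields s = 37.71 bits/g and o = 508.33 bits, as in Table 1. */
```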

Fig. 4. Dynamic calibration: accelerometer.

The accelerometry system proposed in this work consists of wireless transceivers (ZigBee, IEEE 802.15.4 protocol), triaxial accelerometers (ADXL330), and batteries [44]. These modules allow the measurement of the angles of the human body segments (limbs) involved in the desired test, without the interference of cables and wires. For communication, the modules form a star network with a coordinator that captures the information from the modules and sends it to a computer, where the information is processed and the angles are calculated. The data are recorded in a database, allowing the motion to be replayed through the real-time virtual human body model. Each device is positioned on a segment of the patient's body (see Fig. 1(a)), forming a star network whose coordinator is at the center (a ZigBee module configured to manage the network, receive all the information from the end devices, and transmit the data to the computer via a USB-serial interface). To operate the accelerometry module, the end devices are configured to use the three internal analog-to-digital converters of the ZigBee module (sampling time set to 20 ms). The data are sent to the coordinator by radio, which forwards them to the computer via USB. For data reception and processing, software was developed to receive each data package, filter and process the information, calculate the angles, update the virtual model, and send the data to the database. This process is repeated for each received package. Once the sender of the information is identified by the program, each sample read is sent to a dedicated function for treatment. This function (Eq. (11)) converts the readings taken by the ADC into gravitational acceleration to calculate the corresponding angle (in degrees), as for the acceleration measured on the accelerometer z-axis:

θz = arcsin((x_ADC − Offset) / Sensitivity) · (180/π), (11)

After calculating the corresponding angle, the virtual model is updated, replicating the movement in real time, or the reading is saved in the database.

2.4. Commercial System

For data validation of the proposed videogrammetry system, we compared it with a commercial videogrammetry system during a gait test. The commercial system used was the DMAS16 from Spica Tech (Spica Technology Corporation), a software package for image acquisition and processing. A high-frequency camera (Imperx, 200 Hz) and reflective markers were used to capture the images.

During gait, a special illuminator lights the environment and the camera captures the user’s movement. After the test, the information is analyzed and the movement is measured through the localization of markers. Due to the illumination of the commercial system, it has not been possible to test the systems in parallel. However, we used the same repetitive motion and alternated acquisitions with both systems in order to calculate the difference between them.

3. Results and Discussion

The image processing techniques used in the videogrammetry module are intended to locate the markers, determine the angles of the marked segments, and reproduce the user's movement in the virtual model. The processing routine includes image acquisition, Gaussian filtering, static background subtraction, segmentation, marker identification, calculation of the angles at the markers, and updating of the angles in the virtual model and in the database.

Thus, after image acquisition via webcam (Fig. 5(a)), the image is filtered by the Gaussian filter, resulting in Fig. 5(b). This filter aims to remove noise generated by the webcam that could lead to false markers in the image. Acting as a low-pass filter, it suppresses such noise at the cost of slight blurring. As shown in the tests, the response of the Gaussian filter was satisfactory, removing the unwanted noise. The subsequent step is the static background subtraction process, which uses the background captured in the test environment and is specifically designed to determine the precise location of the user. This feature allows the system to be used in different environments. The results follow: Fig. 5(c) shows the current image converted to grayscale.

Fig. 5. (a) Captured image by the camera with colored markers positioned on the hand-arm segment, (b) previous image result filtered by Gaussian filter, (c) current image converted to grayscale, (d) static background determined in the calibration, (e) difference between the current image and static background, (f) threshold application result in the static background subtraction, (g) colors localization, (h) markers classification, and (i) angles calculation among markers.

Fig. 5(d) presents the calibrated static background; Fig. 5(e), the difference between the current image and the static background; and Fig. 5(f) displays the result of the threshold applied in the background subtraction. With the region of interest defined by the system, the next step is to apply the color filter and the segmentation, which delimit one or more markers of the same color tone in the image. Fig. 5(g) presents the search result for a marker's color and its segmentation. Note that the marker color indication changes from red to green: this representation is the marker demarcation on the image and indicates that the correct joint is being analyzed. It should be emphasized that the red rectangle around the user is the delimitation of the region of interest resulting from the static background subtraction. After finding all the markers through the color localization function, the markers are classified as shown in Fig. 5(h), and the angles at the markers are calculated, as shown in Fig. 5(i). The result of these two processes is drawn on the image as the marker demarcations with their labels, representing the correct classification; the line between the markers represents the calculated angle.

After calculating the angles between the markers, the virtual model is updated and its movement is performed according to the user’s movement. Fig. 6 (a) shows the test motion performed and the right arm movement reproduction in real time. Fig. 6 (b) shows the user movement’s result for a test involving the entire user’s left side view.

(a)

(b)

Fig. 6. (a) Movement reproduction of the user's right arm, and (b) movement reproduction of the left side view of the user's body.

3.1. Test and Static Experiments

A commercial goniometer was used to assess the system's accuracy. Colored markers were positioned on the goniometer and images were captured with the videogrammetry module. The system was calibrated according to the markers used, the goniometer was positioned at given angles during the tests, and the static background of the environment was determined. Fig. 7 shows the environment and the positioning of the goniometer. Note that the environment is extremely noisy from an image processing point of view; there was also free movement of people inside, which could cause reading errors. However, no errors were caused by false markers. Table 2 shows the analytical results of the tests, in which the angles were varied several times. The angles calculated between the markers were compared with the goniometer readings. Altogether, three tests were performed for each angle, with an average of 500 readings per test. Negative angles represent measurements in the counterclockwise direction and positive angles in the clockwise direction. Note that the reported errors are low considering that the analysis is based on webcam images. It is important to stress that the reading error in the image processing is caused by the camera, because noise in the image affects the processing. From this test, we conclude that the system is an excellent alternative to goniometry.

Fig. 7. Goniometer with markers being used in laboratory.

Table 2. Test Result for the Static Videogrammetry System.

Goniometer tilt (degrees)    Angle’s mean error (degrees)
  160                        0.03
  120                        0.77
   90                        0.74
   50                        0.64
   30                        0.92
    0                        0.33
  -30                        0.56
  -50                        0.56
  -90                        0.12
 -120                        0.11
 -160                        0.67

3.2. Test and Dynamic Experiments

The dynamic test consisted of experiments to validate the angular readings obtained during the measurement itself, i.e., during human gait. One test was carried out using the videogrammetry system in synchronism with the accelerometry system, measuring the right leg during the experiment. The systems were properly calibrated and the apparatus was installed on the user: the accelerometry modules at the middle of the limb segments (thigh and shin) and colored markers on his/her joints, as shown in Fig. 8 (a). The user was aligned with the camera and asked to walk towards a given point on the trigger from the synchronization system. The system was configured to store the data in files and transmit them to the database, thereby avoiding processing delays.
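The paper does not give the accelerometry modules’ tilt formula; a common approach, assumed here purely for illustration, derives the segment tilt from the quasi-static gravity components sensed on two orthogonal accelerometer axes:

```python
import math

def tilt_from_gravity(ax, az):
    """Tilt (degrees) of the sensing axis relative to vertical, estimated
    from static gravity components (in g) on two orthogonal axes.

    Valid only for quasi-static motion, where gravity dominates the
    measured acceleration -- during fast limb accelerations the inertial
    component corrupts the estimate.
    """
    return math.degrees(math.atan2(ax, az))

# A module whose x axis senses 0.5 g and z axis 0.866 g
# is tilted about 30 degrees from vertical.
theta = tilt_from_gravity(0.5, 0.866)
```

The quasi-static caveat in the docstring matches the paper’s later observation that the largest gaps between the two systems occur in the acceleration areas of the gait.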

During the test, the user was instructed to walk several times towards the camera. Thus, it has been possible to record the displacement of the user along the route and monitor his/her movement in the virtual model. After the test, the charts were lined up as shown in Fig. 8 (b) for comparison purposes.
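Once the two curves are time-aligned, their per-sample gap can be summarized by its maximum and mean absolute value; a minimal sketch with hypothetical samples (function and variable names are illustrative):

```python
def gap_stats(video_deg, accel_deg):
    """Per-sample gap between two time-aligned angle curves (degrees):
    returns (maximum absolute gap, mean absolute gap)."""
    gaps = [abs(v - a) for v, a in zip(video_deg, accel_deg)]
    return max(gaps), sum(gaps) / len(gaps)

# Hypothetical aligned samples (degrees) from the two systems:
video = [10.0, 14.0, 20.0, 26.0, 30.0]
accel = [11.0, 13.0, 24.0, 27.0, 29.5]
worst, avg = gap_stats(video, accel)  # worst-case and average disagreement
```

These are the same two statistics the paper reports for the femur and tibia comparisons (maximum errors of 8.6° and 21.8°, average gaps under 5°).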


(a)

(b)

Fig. 8. (a) Markers and end devices for the walking test with videogrammetry and accelerometry; (b) walking test result with videogrammetry and accelerometry. Upper graph: right femur movement. Lower graph: right tibia movement. Red line: videogrammetry. Blue line: accelerometry.


The movement observed in the chart was made by the user moving two steps forward and one step backward. The chart shows that the accelerometer module has a greater sensitivity to tilt than the image processing, and at certain times the image-based curve lags behind it. The same test was repeated several times with similar results. Analyzing the readings of the test in Fig. 8 (b) and comparing the two systems (videogrammetry and accelerometry) showed a maximum error of 8.6° for the femur readings and 21.8° for the tibial readings (in the acceleration areas). On average, the gaps between the two systems did not exceed 5°. We conclude that the system is functional for dynamic tests. The videogrammetry system showed better results than the accelerometry system; the latter often had to be adjusted, with the imaging system used as the reference.

3.3. Dynamic Experiments with Commercial System

We used both videogrammetry systems (the commercial one and the developed system) to capture the same type of movement and measure the difference between their readings. The movement used in the comparison was the human gait, measuring the specific angle formed by the femur and tibia. The user walked several times in front of the cameras and his/her motion was measured by both systems.

After capture, the records were formatted and then compared. We obtained an average error of 6.24°, with a correlation of 0.91 between the curves. Fig. 9 shows the overlap of the curves.
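The curve similarity can be quantified with the Pearson correlation coefficient; a self-contained sketch (the exact resampling and preprocessing used before correlating the curves is not specified in the paper):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length curves."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfectly linearly related curves give r = 1.0; a constant offset or
# scale difference between the two systems does not reduce r, which is
# why the mean error is reported alongside the correlation.
r = pearson_r([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
```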

Because of the illumination required by the commercial system, it was not possible to test the two systems in parallel. This fact decreased the correlation, although a high degree of similarity between the curves was obtained.

3.4. Preliminary Results for Gait Test

The final gait test focused on the knee angle. For this purpose, only the videogrammetry module was used, because it provided the best results in all previous experiments. Colored markers were placed on the right leg joints, as shown in Fig. 8 (a). The user was asked to stand in front of the camera; then the test started and the information was stored in the database. During the test, the user walked several times in front of the camera, and the result obtained for one gait cycle is shown in Fig. 10. Compared to the other aforementioned tests, this one is mirrored on the x axis. Thus, we can compare the results obtained, e.g., to those generated by the Dvideow videogrammetry system [39] and conclude that they are in accordance with the reported results.

Analysis of the chart showed a small flexion wave that acts as a shock absorber for the ground reaction force when the foot hits the ground and helps support the body weight. The second flexion wave, the largest one, reaches its peak early in the swing phase and corresponds to the foot leaving the ground. This result is in accordance with Rose and Gamble [36], who describe the knee motion as two flexion waves.
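The two flexion waves can be located automatically as local maxima of the knee-angle curve; a simple sketch with a hypothetical one-cycle trace (a real implementation would add smoothing and a prominence threshold to reject noise peaks):

```python
def flexion_peaks(knee_deg):
    """Indices of local maxima (simple interior-sample test) in a knee
    flexion curve; a normal gait cycle should yield two such peaks:
    the small loading-response wave and the large swing-phase wave."""
    return [i for i in range(1, len(knee_deg) - 1)
            if knee_deg[i - 1] < knee_deg[i] >= knee_deg[i + 1]]

# Hypothetical one-cycle knee flexion trace (degrees): a small
# shock-absorbing wave followed by the larger swing-phase wave.
cycle = [5, 12, 18, 15, 10, 8, 20, 45, 60, 55, 30, 8]
peaks = flexion_peaks(cycle)  # two peaks, matching the two-wave pattern
```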

Fig. 9. Angle formed by the tibia and femur. Blue line: commercial videogrammetry system. Red line: developed videogrammetry system.

Fig. 10. Angle from the right knee flexion obtained by the videogrammetry system.


4. Conclusion

The results proved that this system is a good alternative to the mechanical goniometer for angle measurement in static tests, which creates new possible uses for it. Because the system is based on free libraries, its possibilities of expansion and adaptation are broad. The system can be opened to new projects beyond gait tests, such as using this parameterization for human prosthesis movements. The way the virtual model was structured makes it possible to adapt it to other projects focused on human body mechanics. The development of a wireless accelerometer network also provides many other opportunities in the biomedical instrumentation area, and the know-how acquired in image processing fosters ideas for projects that follow the current trend in computer vision.

Finally, the average error recorded in the static tests when comparing the response of the videogrammetry module with the goniometer was 3.1°, while the comparison of the accelerometry module with the goniometer showed a difference of 1.4°. In dynamic tests, the average difference between the videogrammetry module and the accelerometry module was less than 5°. In the gait dynamics tests, the accelerometry system response has the disadvantage of moving faster at certain times, generating a gap of up to 21.8° between the two systems; excluding the acceleration periods, the maximum difference was 8.6°.

References

[1]. D. Gavrila, The visual analysis of human movement: a survey, Comp. Vis. and Image Und., 1999, pp. 73-80.

[2]. T. Moeslund, E. Granum, A survey of computer vision-based human motion capture, Comp. Vis. and Image Und., 2001, pp. 81-85.

[3]. R. J. D. Asla, H. E. Wan, G. Li, Six DoF in vivo kinematics of the ankle joint complex: application of a combined dual-orthogonal fluoroscopic and magnetic resonance imaging technique, Journal of Orthop. Res., 2006, pp. 1019-1027.

[4]. N. I. Badler, C. B. Phillips, B. L. Webber, Simulating humans: computer graphics animation and control, Oxford University Press, New York, 1993.

[5]. H. V. Welbergen, B. J. H. Basten, A. Egges, Z. M. Ruttkay, M. H. Overmars, Real time animation of virtual humans: a trade-off between naturalness and control, Comp. Graph. forum, 29, 2010, pp. 2530-2554.

[6]. B. Hartmann, M. Mancini, C. Pelachaud, Formational parameters and adaptive prototype instantiation for mpeg-4 compliant gesture synthesis, Comp. Anim., 2002, pp. 111-119.

[7]. S. Kopp, I. Wachsmuth, Synthesizing multimodal utterances for conversational agents: research articles, Comp. Anim. Virtual Worlds, 2004, pp. 39-52.

[8]. W. L. Wooten, J. K. Hodgins, Simulating leaping, tumbling, landing, and balancing humans, in Proc. of Int. Conf. on Rob. and Animation, 2000, pp. 656-662.

[9]. D. S. Morillo, J. L. R. Ojeda, L. F. C. Foix, A. L. Jimenez, An accelerometer-based device for sleep apnea screening, IEEE Trans. on Inf. Tech. in Biom., 14, 2010, pp. 491-499.

[10]. K. M. Culhane, M. O. Connor, D. Lyons, G. M. Lyons, Accelerometers in rehabilitation medicine for older adults, Age and Ageing, 34, 2005, pp. 556-560.

[11]. S. C. Flavel, M. A. Nordstrom, T. S. Miles, A simple and inexpensive system for monitoring jaw movements in ambulatory humans, J. Biomech., 35, 2000, pp. 573-577.

[12]. M. Makikawa, S. Kurata, Y. Higa, Y. Araki, R. Tokue, Ambulatory monitoring of behavior in daily life by accelerometers set at both-near-sides of the joint, in Proc. of the MEDINFO 2001, 10, 2001, pp. 840-843.

[13]. C. C. Yang, Y. L. Hsu, Algorithm design for real-time physical activity identification with accelerometry measurement, in Proceedings the 33rd Ann. Conf. IEEE Ind. Electr. Soc. (IECON' 07), Tapei, Taiwan, 2007, pp. 5-8.

[14]. H. J. Luinge, P. H. Veltink, Inclination measurement of human movement using a 3-D accelerometer with autocalibration, IEEE Trans. on Neural Systems and Reh. Eng., 12, 2004, pp. 112-121.

[15]. R. J. Seitz, T. Hildebold, K. Simeria, Spontaneous arm movement activity assessed by accelerometry is a marker for early recovery after stroke, J. Neurol., 2010, pp. 1-7.

[16]. M. Voleno, S. J. Redmond, S. Cerutti, N. H. Lovell, Energy expenditure estimation using triaxial accelerometry and barometric pressure measurement, in Proceedings of the 32nd Annual International Conference of the IEEE EMBS, Buenos Aires, Argentina, 2010, pp. 5187-5188.

[17]. F. Bianchi, S. J. Redmond, M. R. Narayanan, S. Cerutti, N. H. Lovell, Barometric pressure and triaxial accelerometry-based falls event detection, IEEE Trans. on Neural Systems and Reh. Eng., 18, 2010, pp. 619-627.

[18]. T. M. E. Nijsen, R. M. Aarts, P. J. M. Cluitmans, P. A. M. Griep, Time-frequency analysis of accelerometry data for detection of myoclonic seizures, IEEE Trans. on Inform. Tech. in Biom., 14, 2010, pp. 1197-1203.

[19]. D. Murakami, M. Murakami, Ambulatory behavior map, physical activity and biosignal monitoring system, Methods Inf. Med., 36, 1997, pp. 360-363.

[20]. J. H. Tulen, D. L. Stronks, J. B. Bussmann, L. Pepplinkhuizen, J. Passchier, Towards an objective quantitative assessment of daily functioning in migraine: a feasibility study, Pain, 86, 2000, pp. 139-149.

[21]. S. Miyaoka, H. Hirano, I. Ashida, Y. Miyaoka, Y. Yamada, Analysis of head movements coupled with trunk drift in healthy subjects, Med. Biol. Eng. Comput., 43, 2005, pp. 395-402.

[22]. D. M. Karantonis, M. R. Narayanan, M. Mathie, N. H. Lovell, B. G. Celler, Implementation of a real-time human movement classifier using a triaxial accelerometer for ambulatory monitoring, IEEE Trans. on Inf. Tech. in Biom., 10, 2006, pp. 156-167.

[23]. M. Akay, M. Sekine, T. Tamura, Y. Higashi, T. Fujimoto, Unconstrained monitoring of body motion during walking using the matching pursuit algorithm to characterize time-frequency patterns of body motion in poststroke hemiplegic patients, IEEE Eng. in Med. and Biol. Mag., 2003, pp. 104-109.


[24]. N. Wang, S. J. Redmond, E. Ambikairajah, B. G. Celler, N. H. Lovell, Can triaxial accelerometry accurately recognize inclined walking terrains?, IEEE Trans. on Biom. Eng., 57, 2010, pp. 2506-2516.

[25]. T. Torisu, Y. Yamabe, N. Hashimoto, T. Yoshimatsu, H. Fujii, Head movement properties during voluntary rapid jaw movement in humans, J. Oral. Rehabil., 28, 2001, pp. 144-1152.

[26]. Y. Yamabe, R. Yamashita, H. Fujii, Head, neck and trunk movement accompanying jaw tapping, J. Oral. Rehabil., 26, 1999, pp. 900-905.

[27]. H. Zafar, E. Nordh, P. O. Eriksson, Temporal coordination between mandibular and head-neck movements during jaw opening-closing tasks in man, Arch. Oral Biol., 45, 2000, pp. 675-682.

[28]. D. Phan, S. Bonnet, R. Guillemaud, E. Castelli, N. Thi, Estimation of respiratory waveform and heart rate using an accelerometer, in Proceedings of the 30th Annual. International Conference IEEE Engineering Medical Biological Society, Vancouver, BC, Canada, 2008, pp. 4916-4919.

[29]. T. Chau, D. Chau, N. Casas, G. Berall, D. Kenny, Investigating the stationarity of paediatric aspiration signals, IEEE Trans. Neural Syst. Rehab. Eng., 13, 2005, pp. 99-105.

[30]. N. Yoganandan, F. Pintar, D. Maiman, M. Philippens, J. Wismans, Neck forces and moments and head acceleration in side impact, Traffic Inj. Prev., 10, 2009, pp. 51-57.

[31]. D. Rendon, J. Ojeda, L. Foix, D. Morillo, M. Fernandez, Mapping the human body for vibrations using an accelerometer, in Proceedings of the 29th Annual International Conference IEEE Engineering Medical Biological Society, Lyon, France, 2007, pp. 1671-1674.

[32]. M. A. Brunsman, H. M. Daanen, K. M. Robinette, Optimal posture and positioning for human body scanning, in Proceedings of the International Conference on Recent Advances in 3-D Digital Imaging and Modelling (NRC’97), 1997, pp. 266-273.

[33]. H. Ghasemzadeh, R. Jafari, B. Prabhakaran, A body sensor network with electromyogram and inertial sensors: multimodal interpretation of muscular activities, IEEE Trans. on Inf. Tech. in Biomedicine, 14, 2010, pp. 425-435.

[34]. Y. Kishimoto, A. Akahori, K. Oguri, Estimation of sleeping posture for M-Healthy by a wearable tri-axis accelerometer, in Proceedings of the 3rd IEEE-EMBS: International Summer School Symp. on Medical Devices Biosignals, MIT, Boston, USA, 2006, pp. 45-48.

[35]. P. I. Terril, D. G. Mason, S. J. Wilson, Development of a continuous multisite accelerometry system for studying movements during sleep, in Proceedings of the 32nd Annual International Conference of the IEEE EMBS, Buenos Aires, Argentina, 2010, pp. 6150-6153.

[36]. J. Rose, J. G. Gamble, Human Motion, Premier, São Paulo, 1998.

[37]. J. Perry, Gait analysis, normal and pathological function, N. J. Slack, Thorofare, 1992.

[38]. C. Vaughan, B. L. Davis, J. O’Connor, Dynamics of Human Gait, Versa Press, USA, 1992.

[39]. A. Cappozzo, F. Catani, U. D. Croce, A. Leardini, Position and orientation in space of bones during movement: anatomical frame definition and determination, Clin. Biom., 10, 1995, pp. 171-178.

[40]. M. H. Schwarz, W. B. Cowan, J. C. Beatty, An experimental comparison of RGB, YIQ, LAB, HSV, and opponent color models, ACM Trans. on Graph., 6, 1987, pp. 123-158.

[41]. R. C. Gonzalez, R. E. Woods, Digital Image Processing, Addison-Wesley Publishing Company Inc., 2000.

[42]. R. Poppe, Vision-based human motion analysis: an overview, Computer Vision and Image Understanding, 108, 2007, pp. 4-18.

[43]. J. J. Kavanagh, H. B. Menz, Accelerometry: a technique for quantifying movement patterns during walking, Gait & Posture, 28, 2008, pp. 1-15.

[44]. D. S. Côrrea, A. Balbinot, Accelerometry for the motion analysis of the lateral plane of the human body during gait, Health Technology, 1, 2011, pp. 35-46.

___________________

2013 Copyright ©, International Frequency Sensor Association (IFSA). All rights reserved. (http://www.sensorsportal.com)