
Abstract— The purpose of this research is to localize a resident in indoor environments by using distributed binary sensors and body activity information obtained from an inertial measurement unit (IMU). The hardware setup consists of two types of sensor nodes. The passive infrared (PIR) sensor node provides binary information about motion in its field of view, while the IMU sensor node collects motion data for body activity recognition, walking velocity and heading estimation. Basic human activities such as sitting, sleeping, standing and walking are recognized. We propose a particle filter-based sensor fusion algorithm that incorporates a behavior-based map to increase the localization accuracy. Experiments were conducted in a mock apartment testbed, and the results were evaluated against ground truth data obtained from a motion capture system.

I. INTRODUCTION

The world population is ageing rapidly. The elderly population reached almost 810 million in 2012, and by 2050 the number of aged people (60 and above) is projected to reach a staggering 2 billion [1, 2]. Elders have the option of going to adult day care, long-term care, nursing homes, hospice care, or home care. Even though all these options support the health, nutritional, social support, and daily living needs of adults, the feeling of independence is lost. Elders usually prefer to stay in the comfort of their own home, where they feel more confident, rather than moving to an expensive adult care or healthcare facility. If older adults are able to complete self-care activities on their own, they will feel a sense of accomplishment and be able to maintain independence longer [3]. The best way to support them is to provide a physical environment that utilizes technologies to promote active ageing. Home automation that is tailored to the needs of the elderly can serve this purpose.

One example is real-time indoor human monitoring, which plays an important role in living assistance, emergency detection, and many other services. A fundamental problem in developing such home automation applications is localizing the resident in the indoor environment. With the location known, for example, a domestic service robot can be called to assist the elderly in case of emergency [4, 5].

Human localization in indoor environments has been studied by many researchers in recent years. However, most of the solutions are not suitable for home automation applications. Some techniques use cameras for human tracking [6], which raises privacy concerns and is not acceptable to many people in their living environments. Some rely on infrastructure such as active beacons equipped with distance-measuring capability, which are expensive and require careful maintenance [7]. Others use wearable sensors to measure human motion in terms of acceleration and angular rate, and integrate these motion data for position estimation [8]. However, such techniques usually suffer from sensor drift and growing tracking errors. It is therefore desirable to develop a new technique that is less intrusive, low-cost, and easy to maintain, while remaining accurate enough for indoor human tracking in home automation applications.

Minh Pham and Weihua Sheng are with the Laboratory for Advanced Sensing, Computation and Control (ASCC Lab), School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK 74078, USA (e-mails: [email protected], [email protected]). Dan Yang is with the College of Information Science and Engineering, Northeastern University, Liaoning Province, China (e-mail: [email protected]). Meiqin Liu is with the College of Electrical Engineering, Zhejiang University, Hangzhou 310027, China (e-mail: [email protected]).

In this paper, we propose a new solution to indoor human localization that uses both distributed motion sensors and a wearable inertial measurement unit (IMU). In our solution, PIR sensors provide location information without capturing any image of the user. An IMU sensor attached to the human body is used to estimate the walking velocity and heading angle. The PIR sensor data give a rough estimate of the human location, while the body motion sensor provides a good estimate of the relative body movement over a short period. Therefore, the combination of these two sensor modalities is expected to yield higher accuracy than either modality alone. To further improve the accuracy, we also exploit the correlation between human activity and location in an indoor environment: if an activity is recognized through the wearable motion sensors, the resident is likely to be within an area associated with that activity. For example, in a home environment, sitting mostly occurs where chairs are available, and lying often occurs where beds are located. We develop a particle filtering framework that utilizes this activity/location correlation to realize accurate human tracking. Meanwhile, our solution guarantees that the resident's privacy is not violated and keeps the cost of the whole system low. The localization system is also easy to use and maintain.

The paper is organized as follows: Section II presents an overview of the hardware platform of the localization system. Section III describes our methodology for localization and tracking. Section IV presents the experimental results. Section V gives the conclusion and future work.

II. HARDWARE PLATFORM

In this section we describe our hardware platform, which consists of the Smart Home testbed, the PIR sensor nodes and the IMU node.

Human Localization and Tracking Using Distributed Motion Sensors and an Inertial Measurement Unit

Minh Pham, Dan Yang, Weihua Sheng, Senior Member, IEEE, and Meiqin Liu

Proceedings of the 2015 IEEE Conference on Robotics and Biomimetics, Zhuhai, China, December 6-9, 2015

978-1-4673-9675-2/15/$31.00 © 2015 IEEE


Figure 1. Top view and 3D view of mock apartment.

Figure 3. Overview of the approach.


Figure 4. (a) PIR sensor network; (b) Sensing region of a PIR sensor

The Smart Home Testbed measures 4.88 m × 6.7 m (16 ft × 22 ft). It simulates a small apartment consisting of a living room, a bedroom, a kitchen and a bathroom. Fig. 1 shows the top view (left) and 3D view (right) of the mock apartment design. The testbed also contains an indoor localization system that provides the ground truth of the human location. Fig. 2 shows an overview of the hardware platform.

A network of 8 passive infrared (PIR) sensors and a single inertial measurement unit (IMU) are used in this project. Both use XBee for data transmission.

III. METHODOLOGY

A. Overview

The proposed approach adopts a sensor fusion strategy that combines motion data from the human body with PIR sensor data from the environment for human localization. In addition, the approach takes advantage of the correlation between human locations and activities in home environments for improved accuracy.

The overview of the approach is depicted in Fig. 3. The data from the IMU sensor, which include 3D acceleration and angular rate, are inputs to three modules: Activity Recognition, Velocity Estimation and Heading Estimation. From these modules the human body activity, velocity and heading direction are derived, respectively. Meanwhile, the data from the array of PIR sensors are also collected. These two channels of information are fused through a Particle Filter module to estimate the location of the human. The behavior-based map, as prior knowledge, represents the correlation between a human's location and his activity. In this map, the positions of walls and furniture (such as tables, chairs and beds) as well as other facilities are known in advance. The map basically encodes the location probability of the resident when he is conducting certain activities such as walking, sitting and lying. This is important information that can be used to improve the localization accuracy in a Bayesian filtering framework.

In the next sections, we will first introduce the observation model of the PIR sensor and the IMU based motion model. Then we will discuss the behavior-based map. Finally we describe the sensor fusion algorithm that combines the PIR and IMU data.

B. Observation Model of PIR Sensor

In our Smart Home setup, eight PIR sensors are deployed on the ceiling in the configuration shown in Fig. 4a. This configuration is selected because the sensors then cover the majority of the apartment area. Their detection areas can be adjusted via the cylindrical lens hood of each PIR sensor. In this setup, the sensing area of each PIR sensor is a circle on the floor with a radius of 1.10 m.

Based on the PIR sensor observation model in [9], our PIR sensor model is expressed by Equation (1), where p is the probability of detection, q is the probability of false alarm, z_k^{PIR,i} ∈ {0, 1} is the binary output of PIR sensor i at time k, s_k is the human state including the 2-D location, C_i is the center of sensor i's sensing circle, r_i is its radius and ε is the width of the boundary region:

P(z_k^{PIR,i} | s_k) =
    p^{z_k^{PIR,i}} (1 − p)^{1 − z_k^{PIR,i}},     if |s_k − C_i| ≤ r_i
    q^{z_k^{PIR,i}} (1 − q)^{1 − z_k^{PIR,i}},     if r_i < |s_k − C_i| ≤ r_i + ε
    1 − z_k^{PIR,i},                               if |s_k − C_i| > r_i + ε        (1)
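As an illustration, the piecewise likelihood above can be sketched in Python. The constants follow the values given in this section (p = 0.9, q = 0.05, ε = 0.20 m, r = 1.10 m); the function and variable names are ours, not part of the paper's implementation.

```python
import math

# Sketch of the PIR observation model in Equation (1).
P_DETECT = 0.9   # p: detection probability (final value from Sec. III-B)
P_FALSE = 0.05   # q: false-alarm probability in the boundary region
EPS = 0.20       # width of the gray boundary region, in meters
RADIUS = 1.10    # sensing radius r_i, in meters

def pir_likelihood(z, s, center, r=RADIUS, p=P_DETECT, q=P_FALSE, eps=EPS):
    """Likelihood P(z | s) of binary reading z from one PIR sensor,
    given hypothesized 2-D position s and the sensor's center."""
    d = math.hypot(s[0] - center[0], s[1] - center[1])
    if d <= r:               # inside the sensing circle
        return p**z * (1 - p)**(1 - z)
    elif d <= r + eps:       # gray boundary region: false alarms possible
        return q**z * (1 - q)**(1 - z)
    else:                    # far away: the reading should be 0
        return 1 - z
```

For example, a reading of 1 from a sensor directly above the hypothesized position has likelihood p = 0.9, while the same reading from a distant sensor has likelihood 0.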

Figure 2. Testbed with Opti-Track system, PIR and IMU sensors.



Figure 5. Activity recognition.

v_k = { ∫ a_xy dt + v_{k−1},   if swing phase
      { 0,                     if stance phase, or standing, sitting or lying        (2)

Figure 6. Heading change estimation.

According to [9], the parameters p and q are estimated from measurements (p = 0.7, q = 0.05), but this is not quite realistic when we consider the distance between the human and the sensor. We propose that a false alarm may happen when the human is outside the sensing range but not too far away from the sensor, which is depicted by the gray area (ε = 20 cm) in Fig. 4b. If the human is far away (outside the dashed circle), the false alarm rate q becomes 0. A false negative (miss) may occur because of either a sensor failure or the absence of human movement. In our framework, the probability of a miss, which is (1 − p) = 0.3 with the parameters of [9], can be controlled because the wearable IMU can detect when there is no human motion; in that case, the PIR sensor values are ignored and the human location is not updated. Therefore, our finalized values are p = 0.9 and q = 0.05.

C. IMU-Based Motion Model

Body activity detection and velocity estimation

The IMU is attached to the right thigh of the human with one axis aligned vertically along the leg. Acceleration information from the IMU is used to detect the human's activity, such as standing, sitting, or lying. The acceleration data collected from the IMU are first passed through a low-pass filter with a cutoff frequency of 5 Hz (at a sampling frequency of 20 Hz) to reduce noise. Then the sitting, standing and walking patterns are detected based on thresholds on the mean and variance of the filtered acceleration magnitude, as shown in Fig. 5.

Once the walking activity is detected, the velocity v_k is estimated by integrating the acceleration during the swing phase of each step; each step includes a stance phase and a swing phase [10]. When a stationary stance is detected, or the activity is recognized as standing, sitting, or lying, the velocity is reset to zero to eliminate accumulated errors. v_k is estimated using Equation (2), where a_xy = a_x + a_y is the horizontal acceleration. To detect the stationary stance, we apply thresholds to both the magnitude of the acceleration |a_m| = √(a_x² + a_y² + a_z²) and the magnitude of the angular rate |w_m| = √(w_x² + w_y² + w_z²). The stationary stance is recognized if 9 m/s² < |a_m| < 11 m/s² and |w_m| < 30°/s.
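The swing-phase integration of Equation (2) and the stationary-stance test can be sketched as follows. The threshold values come from the text; the function names and the per-sample update form are our assumptions.

```python
import math

# Sketch of zero-velocity-updated (ZUPT) speed estimation from Sec. III-C.
DT = 0.05  # sampling period T_s at the stated 20 Hz rate

def is_stationary(acc, gyro):
    """Stationary stance test: 9 < |a_m| < 11 m/s^2 and |w_m| < 30 deg/s."""
    am = math.sqrt(acc[0]**2 + acc[1]**2 + acc[2]**2)
    wm = math.sqrt(gyro[0]**2 + gyro[1]**2 + gyro[2]**2)
    return 9.0 < am < 11.0 and wm < 30.0

def update_velocity(v_prev, a_xy, stationary, walking):
    """Equation (2): integrate horizontal acceleration during the swing
    phase; reset to zero on stance or any non-walking activity."""
    if not walking or stationary:
        return 0.0  # zero-velocity update removes accumulated drift
    return v_prev + a_xy * DT
```

Resetting the velocity at every stance phase is what keeps the integration error bounded to a single step rather than growing over the whole trajectory.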

Heading estimation

Heading provides the walking direction, and this information can be directly read from the IMU outputs (yaw, pitch, and roll). However, the heading is reliable only if the IMU is in an environment with no magnetic field other than the Earth's. In indoor living environments, due to the magnetic disturbance caused by devices such as computers, microwave ovens, and other electrical appliances, the heading read from the IMU is neither accurate nor reliable. We have observed this problem in many experiments conducted inside our testbed.

Therefore, we developed a different approach to estimating the walking direction. In our approach, the angular rate from the IMU is used to estimate heading changes while the human is walking. Similar to velocity estimation, the heading change can be estimated by integrating the angular rate over time. Clearly, gyroscope drift may lead to poor results, since the errors are integrated as well. To overcome this problem, we first apply a low-pass filter to the signal to reduce noise. The filtered angular rate is used to detect pairs of turning points (the red dashes in Fig. 6). A turning has a beginning point (t0), where the human starts changing the walking direction, and an end point (t1), where the human starts walking straight again. The estimated heading θ_k is then obtained by adding the accumulated angle change to the previous heading θ_{k−1}, together with measurement noise 𝒩(0, σ_θ) with zero mean and standard deviation σ_θ. The standard deviation is set to π/6 based on measurements from our experimental testing.

P(θ_k) = 𝒩(θ_{k−1} + ∫_{t0}^{t1} w_z dt, σ_θ)        (3)

To detect the turning point t0, we adopt a threshold that is also estimated through experiments. The end point t1 of a turning is the time when the next stationary stance is detected.
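As a sketch, the heading update of Equation (3) amounts to integrating the filtered yaw rate between the turning points and adding Gaussian noise. The rectangle-rule integration and the function name are our choices; σ_θ = π/6 follows the text.

```python
import math
import random

# Sketch of the heading update, Equation (3): the heading change over one
# turning is the integral of the filtered yaw rate w_z between t0 and t1.
SIGMA_THETA = math.pi / 6
DT = 0.05  # sampling period at 20 Hz

def heading_sample(theta_prev, wz_samples, rng=random):
    """Draw theta_k ~ N(theta_{k-1} + integral of w_z dt, sigma_theta)."""
    dtheta = sum(wz_samples) * DT  # rectangle-rule approximation
    return rng.gauss(theta_prev + dtheta, SIGMA_THETA)
```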


Now we can derive the motion model of the human,

sk = f(sk−1) + uk (4)

where the state vector s_k = [x_k, y_k]^T and u_k is the process noise. The propagated position can be expressed by the following equations:

[x_k]   [x_{k−1} + T_s v_k cos(θ_k) + n_k]
[y_k] = [y_{k−1} + T_s v_k sin(θ_k) + n_k]        (5)

where v_k and θ_k are the velocity and heading at time k, sampled from the normal distributions 𝒩(v̄_k, σ_v) and 𝒩(θ̄_k, σ_θ) whose means v̄_k and θ̄_k are estimated from the acceleration and angular rate information as described earlier, and n_k is Gaussian noise. The standard deviations σ_v and σ_θ are set to 20% of the mean velocity and π/6 rad, respectively. T_s is the sampling period.
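Applied to a particle set, the propagation of Equation (5) can be sketched as follows (the names are ours; σ_v = 0.2·v̄ and σ_θ = π/6 follow the text):

```python
import math
import random

# Sketch of the particle propagation step, Equation (5). Each particle's
# speed and heading are sampled around the IMU-derived means.
TS = 0.05  # sampling period T_s

def propagate(particles, v_mean, theta_mean, rng=random):
    """Move each particle (x, y) one step using sampled speed and heading."""
    out = []
    for x, y in particles:
        v = rng.gauss(v_mean, 0.2 * v_mean)       # sigma_v = 20% of mean
        th = rng.gauss(theta_mean, math.pi / 6)   # sigma_theta = pi/6
        out.append((x + TS * v * math.cos(th), y + TS * v * math.sin(th)))
    return out
```

With zero mean velocity (e.g. a detected stance) the particles stay put, which matches the zero-velocity reset described above.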

D. Behavior-Based Map

The location of a human and his behavior in a home environment are highly correlated. For example, when the human is detected as sitting, he may be sitting on a chair in the dining area, on a sofa in the living room, on the bed, or on the toilet. Based on the location estimates in the most recent time steps, we can determine which piece of furniture is actually involved. The position of that furniture, which is known in advance, can then be used to determine the human position, preventing error accumulation and significantly improving the localization accuracy. In this sense, the furniture serves as virtual landmarks in human localization.

To facilitate such behavioral landmark-assisted localization, we introduce the concept of a behavior-based map and conduct the location inference in a Bayesian framework through particle filtering. Basically, the behavior-based map can be represented by an accessibility probability function (APF), defined as

P(s_k | a_k) = φ

Here φ is the probability of location s_k when the human is conducting activity a_k, which can be sitting, lying or walking. We assume the furniture is fixed and not movable, and that the human can walk anywhere except places occupied by furniture and walls. Fig. 7a shows the sitting and lying map, which is applied when the human activity is recognized as sitting or lying. In this map, red areas are locations with high probability for sitting or lying, and blue areas have extremely low probability. In the walking map shown in Fig. 7b, if a place is occupied by furniture (such as the table or the kitchen facilities) or by a wall, P(s_k | a_k) is set to almost zero, which implies that a human will not walk in that area. The behavior map is utilized to improve the location estimate, as explained in the next section.
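A behavior-based map of this kind can be sketched as a coarse grid lookup. The grid cells, probabilities and cell size below are purely hypothetical placeholders for the maps in Fig. 7; in a real system they would be derived from the floor plan.

```python
# Sketch of a behavior-based map as a coarse grid. Each activity maps grid
# cells to the accessibility probability P(s_k | a_k) of Sec. III-D.
CELL = 0.5  # grid resolution in meters (assumed)

# Hypothetical cells: sitting probability is high only near furniture;
# walking probability is near-zero inside furniture and walls.
behavior_map = {
    "sitting": {(2, 3): 0.9, (5, 1): 0.9},    # e.g. chair and sofa cells
    "walking": {(2, 3): 0.01, (5, 1): 0.01},  # furniture blocks walking
}

def apf(s, activity, default_walk=0.8, default_other=0.05):
    """Look up P(s | a): the probability of position s under activity a."""
    cell = (int(s[0] / CELL), int(s[1] / CELL))
    table = behavior_map.get(activity, {})
    fallback = default_walk if activity == "walking" else default_other
    return table.get(cell, fallback)
```

The lookup is what lets a recognized "sitting" activity pull the particle cloud toward furniture cells during the weight update.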

E. Human Localization through Sensor Fusion

By fusing the two channels of information, PIR data and IMU data, we can derive a more accurate location estimate than from either alone. The fusion is done in a Bayesian filtering framework. Although the Kalman filter is a popular method in many navigation systems, its requirements of a linear model and Gaussian noise are not satisfied in our case. Therefore, we use particle filtering for human localization with multiple data sources.

The particle filter, or sequential Monte Carlo method, is a numerical method for estimating the posterior density function P(s_k | z_{1:k}) of the system state s_k given the observation data z_{1:k} up to time k. It is an iterative process consisting of two main steps, prediction and update, which employ two models: the system model (or motion model) and the measurement model. Each particle represents a possible state, and together the particles approximate the posterior probability distribution of the state. The prediction step is based on the state motion model P(s_k | s_{k−1}), in which the propagation of particles utilizes the velocity and heading estimates provided by the IMU, as expressed by Equation (5). The update step is based on the observation model P(z_k | s_k), in which PIR data and activity information derived from the IMU data are used to compute the weight of each particle; particles with higher weights are closer to the true state. A resampling step at the end of each iteration removes low-weight particles and generates new particles with normalized weights.

Here we explain how to update the weight of the particles. Basically, we need to calculate the likelihood of the two observations z_k^i = [z_k^{PIR,i}, z_k^{IMU,i}] given a human location s_k:

P(z_k^i | s_k) = P(z_k^{PIR,i}, z_k^{IMU,i} | s_k)
              = P(z_k^{PIR,i} | s_k) · P(z_k^{IMU,i} | s_k)

Here

P(z_k^{IMU,i} | s_k) = ∑_{a_k} P(z_k^{IMU,i}, a_k | s_k)
                     ∝ ∑_{a_k} P(z_k^{IMU,i}, s_k | a_k) · P(a_k)
                     = ∑_{a_k} P(z_k^{IMU,i} | a_k) · P(s_k | a_k) · P(a_k)
                     = ∑_{a_k} P(z_k^{IMU,i} | a_k) · P(a_k | s_k)        (6)

Figure 7. (a) Sitting and lying map; (b) Walking and Standing map.


Algorithm 1: Particle filtering for human localization

Initialize the particles' parameters {s_1^i, w_1^i, θ_1^i}, i = 1, …, N
For k = 2 : T (T is the total number of observations)
    Estimate v_k, θ_k and recognize the human activity a_k based on the observation data z_k^IMU
    Prediction step: propagate the particles according to Equation (5)
    Update step: assign each particle a weight w_k^i according to the observations z_k^PIR and z_k^IMU:
        w_k^i = w_k^{PIR,i} · w_k^{IMU,i} · w_{k−1}^i
              = ∑_{a_k} [P(a_k | s_k) P(z_k^IMU | a_k)] · P(z_k^{PIR,i} | s_k) · w_{k−1}^i
    Normalization: w_k^i = w_k^i / ∑_{i=1}^{N} w_k^i
    Estimate: ŝ_k = E[s_k^i] = ∑_{i=1}^{N_k} s_k^i w_k^i / ∑_{i=1}^{N_k} w_k^i, where N_k is the number of remaining particles at step k
    Resampling:
        Calculate N_eff = 1 / ∑_{i=1}^{N} (w_k^i)²; set N_t = resample_percentage · N
        If N_eff < N_t: {s_k^i, w_k^i}_{i=1:N} = resample({s_k^i, w_k^i}_{i=1:N})
End For

Figure 8. Result 1 of localization and tracking.

Therefore we have

P(z_k^i | s_k) = P(z_k^{PIR,i} | s_k) · P(z_k^{IMU,i} | a_k) · P(a_k | s_k)        (7)

The above equation calculates the likelihood of observing the PIR data z_k^{PIR,i} and the IMU data z_k^{IMU,i} given that the location s_k of the human is known. Through the correlation between activity and location, P(a_k | s_k), the behavior-based map is integrated into the update step.

The weight of each particle is updated to generate the posterior distribution P(s_k | z_{1:k}). Both the PIR sensors and the IMU sensor provide observation likelihoods, which are combined with the prior location prediction from the prediction step to obtain the posterior localization.

Let w_k^{PIR,i} ∝ P(z_k^{PIR,i} | s_k) be the weight of the i-th particle at time k based on the PIR sensors, and w_k^{IMU,i} ∝ P(z_k^{IMU,i} | s_k) be the weight based on the IMU sensor, where P(z_k^{IMU,i} | s_k) is calculated according to Equation (6). The updated weight is the product of these two weights and the previous weight:

w_k^i = w_k^{PIR,i} · w_k^{IMU,i} · w_{k−1}^i        (8)
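Combining Equations (6) and (8), the weight update for one particle can be sketched as follows. The dictionary-based interface and all names are our illustration, not the paper's implementation.

```python
# Sketch of the particle weight update, Equations (6)-(8): the PIR
# likelihood and the activity-marginalized IMU likelihood multiply into
# the previous weight.
def update_weight(w_prev, pir_like, imu_like_by_activity, p_activity_given_s):
    """w_k = P(z_PIR|s) * sum_a P(z_IMU|a) P(a|s) * w_{k-1}."""
    w_imu = sum(imu_like_by_activity[a] * p_activity_given_s[a]
                for a in imu_like_by_activity)
    return pir_like * w_imu * w_prev
```

Here `p_activity_given_s` is where the behavior-based map enters: a particle sitting in a "chair" cell gets a large P(sitting | s) and thus a large weight when sitting is recognized.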

If the number of particles with low weight reaches a certain threshold, resampling should be conducted. Otherwise, the particle filter degenerates to only a few highly weighted particles, which leads to a poor approximation of the state estimate. The effective number of particles N_eff is used as the indicator of degeneracy [11],

which is calculated as follows:

N_eff = 1 / ∑_{i=1}^{N} (w_k^i)²        (9)

where N is the total number of particles. If N_eff is less than a threshold N_t, resampling discards particles with low weight and replicates highly weighted ones. After resampling, the particles become more concentrated in the higher-probability region of the posterior, and the state is estimated from the mean of the posterior distribution.

ŝ_k = E[s_k^i] = ∑_{i=1}^{N_k} s_k^i w_k^i / ∑_{i=1}^{N_k} w_k^i        (10)

and the covariance matrix is

P_k = ∑_{i=1}^{N} w_k^i (s_k^i − ŝ_k)(s_k^i − ŝ_k)^T        (11)
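The degeneracy test of Equation (9) and the subsequent resampling can be sketched as follows. Systematic resampling is a standard scheme shown here for concreteness; the paper does not specify which resampling scheme it uses.

```python
import random

# Sketch of N_eff-triggered resampling, Equation (9). The
# resample_percentage threshold follows Algorithm 1.
def resample_if_needed(particles, weights, resample_percentage=0.5, rng=random):
    n = len(particles)
    total = sum(weights)
    w = [x / total for x in weights]          # normalize, Eq. after (8)
    n_eff = 1.0 / sum(x * x for x in w)       # effective particle count
    if n_eff >= resample_percentage * n:
        return particles, w                   # no degeneracy yet
    # Systematic resampling: one uniform draw, evenly spaced pointers.
    step = 1.0 / n
    u = rng.uniform(0.0, step)
    cum, i, out = w[0], 0, []
    for _ in range(n):
        while u > cum:
            i += 1
            cum += w[i]
        out.append(particles[i])
        u += step
    return out, [step] * n                    # uniform weights afterwards
```

With a fully degenerate weight vector, every surviving particle is a copy of the single high-weight one, which is exactly the "kill and replicate" behavior described above.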

The complete particle filtering algorithm for human localization is summarized in Algorithm 1.
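For illustration, the overall loop of Algorithm 1 can be sketched with the model components passed in as functions. All names are ours; the motion, weighting and resampling stand-ins correspond to Equations (5), (8) and (9).

```python
# Sketch of the Algorithm 1 loop: predict, weight, normalize, estimate,
# resample. The callables are placeholders for the models above.
def localize(particles, weights, steps, motion, weight_fn, resample_fn):
    """Run the particle filter; returns the estimated (x, y) trajectory."""
    trajectory = []
    for z in steps:                        # each z bundles PIR + IMU data
        particles = motion(particles, z)   # prediction step, Eq. (5)
        weights = [weight_fn(w, p, z) for p, w in zip(particles, weights)]
        total = sum(weights)
        weights = [w / total for w in weights]   # normalization
        est = (sum(p[0] * w for p, w in zip(particles, weights)),
               sum(p[1] * w for p, w in zip(particles, weights)))  # Eq. (10)
        trajectory.append(est)
        particles, weights = resample_fn(particles, weights)
    return trajectory
```

With identity stand-ins and uniform weights, the estimate reduces to the plain particle mean, which is a quick sanity check on the loop's bookkeeping.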

IV. EXPERIMENT RESULTS

A. Environment Setup

A human subject was asked to move around the mock apartment while wearing a cap with reflective markers, so that the Opti-track system could provide the ground truth. Regular daily activities were performed: standing, walking, sitting, and sleeping. Eight PIR sensors were placed on the ceiling at a standard ceiling height of about 2.45 m. A binary response was read from each sensor every 0.05 s (a sampling rate of 20 Hz). The sensing radius of each PIR sensor was restricted to 1.10 m by a hand-made cylindrical lens hood. The server PC is mainly responsible for collecting the data from the PIR sensors and the IMU sensor, then estimating the human's location in the apartment and creating a movement trajectory.

B. Evaluation

Fig. 8 shows the result of an experiment in which the subject performed a test trajectory from the start point to the end point. We assume that the position and heading at the start point are known. In this scenario, the human has just returned home: the PIR at the front door is activated, and the system starts performing localization and tracking.

The red trajectory is the ground truth obtained from the Opti-track system, and the green one is our estimate. The big blue circles are the detection areas of the 8 PIR sensors. By applying data fusion, the information from the PIR sensors and the IMU sensor helps reduce the errors, which are mainly caused by the slowly growing inaccuracy of the heading angle. For example, in Fig. 8, the human is walking at point R1 (the ground truth) while the estimated point is E1, with an error of about 20 cm. Then the human sits on the chair. The sitting activity is recognized, and the piece of furniture with the highest probability is determined. Therefore, the next estimated point E2 is determined by the location of the chair, which is R2. At this point, the heading is reset to the direction of the chair the human sits on, and the velocity becomes zero.

Figure 9. Result 3 of localization and tracking. Without activity recognition: (a) real-time estimation; (b) refined trajectories. With activity recognition: (c) real-time estimation; (d) refined trajectories.

The PIR sensors can also help correct the estimated location. The human is walking at point R3 while the estimated point is E3, with an error of about 25 cm. Then the human walks across the border of the sensing range of another PIR sensor towards point R4. Regardless of the current estimated heading or velocity, the next estimated point jumps to the closest point in that sensing area. In this case only the position is corrected, while the velocity and heading are kept unchanged. The trajectory ends when the human sits on the armchair: the position and heading are reset to the known values, and the velocity is set to zero.

The impact of activity recognition on the accuracy of indoor localization and tracking is further demonstrated in another experiment, shown in Fig. 9, with the starting point at the entrance door area. The experiment in Fig. 9a and 9b is conducted without any sitting or lying, while the experiment in Fig. 9c and 9d includes sitting on the chair and the sofa. A large error can be seen in Fig. 9a and 9b at the end of the route: while the human is actually going to the bedroom, as shown by the red line, the estimated green line indicates that he is leaving the apartment. In Fig. 9c and 9d, that error is removed each time the human sits down on the chair or sofa. The RMSEs for the two conditions are 0.5339 m and 0.1795 m, respectively. Clearly, by employing activity recognition and knowledge of furniture locations, the accuracy of localization and tracking is significantly improved.

V. CONCLUSION

In this paper, we proposed a method that uses a particle filter to fuse PIR sensor data and IMU sensor data, along with a behavior-based map, for human localization in an indoor environment. The subject's position is first roughly determined through the PIR sensors mounted on the ceiling. An inertial sensor worn on the body is used for velocity estimation, heading estimation and body activity recognition. The particle filter then integrates the two types of sensor data to estimate and refine the localization result. Our method has the advantage of low obtrusiveness while maintaining high indoor localization accuracy. We conducted experiments in a mock apartment and evaluated the accuracy of the proposed method.

Still, our work has limitations that we plan to address in the future. For example, we need to extend the recognition capability to more daily activities beyond sitting, lying, standing and walking. In future work, we will also focus on tracking multiple people living together in smart home environments.

ACKNOWLEDGMENT

This project is supported by the National Science Foundation (NSF) Grant CISE/IIS 1231671, CISE/IIS/1427345, National Natural Science Foundation of China (NSFC) Grants 61328302, 61222310 and the Open Research Project of the State Key Laboratory of Industrial Control Technology, Zhejiang University, China (No. ICT1536).

REFERENCES

[1] World Health Organization. 10 facts on ageing and the life course. 2014. Available: http://www.who.int/features/factfiles/ageing/ageing_facts/en/

[2] United Nations Population Fund. Ageing in the Twenty-First Century: A Celebration and a Challenge. 2012.

[3] Wikipedia contributors. Elderly care. 2014. Available: http://en.wikipedia.org/wiki/Elderly_care#Promoting_independence_in_the_elderly

[4] B. T. Horowitz. Cyber Care: Will Robots Help the Elderly Live at Home Longer? 2010. Available: http://www.scientificamerican.com/article/robot-elder-care/

[5] J. Hicks. Hector: Robotic Assistance for the Elderly. 2012. Available: http://www.forbes.com/sites/jenniferhicks/2012/08/13/hector-robotic-assistance-for-the-elderly/

[6] C.-R. Yu et al. Human localization via multi-cameras and floor sensors in smart home. In IEEE International Conference on Systems, Man and Cybernetics (SMC '06), 2006.

[7] N. B. Priyantha, A. Chakraborty, and H. Balakrishnan. The Cricket location-support system. In Proceedings of the 6th Annual International Conference on Mobile Computing and Networking, ACM, 2000.

[8] L. Ojeda and J. Borenstein. Non-GPS navigation with the personal dead-reckoning system. In Unmanned Systems Technology IX, SPIE, vol. 6561, 2007.

[9] S. Oh et al. Tracking and coordination of multiple agents using sensor networks: system design, algorithms and experiments. Proceedings of the IEEE, 95(1):234-254, 2007.

[10] E. Foxlin. Pedestrian tracking with shoe-mounted inertial sensors. IEEE Computer Graphics and Applications, 25(6):38-46, 2005.

[11] M. S. Arulampalam et al. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Transactions on Signal Processing, 50(2):174-188, 2002.
