
Hindawi Publishing Corporation
EURASIP Journal on Image and Video Processing
Volume 2008, Article ID 526191, 9 pages
doi:10.1155/2008/526191

Research Article
Detection and Tracking of Humans and Faces

Stefan Karlsson, Murtaza Taj, and Andrea Cavallaro

Multimedia and Vision Group, Queen Mary University of London, London E1 4NS, UK

Correspondence should be addressed to Murtaza Taj, [email protected]

Received 15 February 2007; Revised 14 July 2007; Accepted 25 November 2007

Recommended by Maja Pantic

We present a video analysis framework that integrates prior knowledge in object tracking to automatically detect humans and faces, and can be used to generate abstract representations of video (key-objects and object trajectories). The analysis framework is based on the fusion of external knowledge, incorporated in a person and in a face classifier, and low-level features, clustered using temporal and spatial segmentation. Low-level features, namely, color and motion, are used as a reliability measure for the classification. The results of the classification are then integrated into a multitarget tracker based on a particle filter that uses color histograms and a zero-order motion model. The tracker uses efficient initialization and termination rules and updates the object model over time. We evaluate the proposed framework on standard datasets in terms of precision and accuracy of the detection and tracking results, and demonstrate the benefits of the integration of prior knowledge in the tracking process.

Copyright © 2008 Stefan Karlsson et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. INTRODUCTION

Video filtering and abstraction are of paramount importance in advanced surveillance and multimedia database retrieval. The knowledge of the objects' types and positions helps in semantic scene interpretation, indexing video events, and mining large video collections. However, the annotation of a video in terms of its component objects is only as good as the object detection and tracking algorithm that it is based upon. The quality of the detection and tracking algorithm depends in turn on its capability of localizing objects of interest (object categories) and on tracking them over time. It is in general difficult to define object categories for retrieval in video because of the different meanings and definitions of objects in different applications. However, some categories of objects, such as people and faces, are of interest across several applications and provide relevant cues about the content of a video. Detecting and tracking people and faces provides significant semantic information about the video content for video summarization, intelligent video surveillance, and video indexing and retrieval. Moreover, the human visual system is particularly attracted by people and faces, and therefore their detection and tracking enable perceptual video coding [1].

A number of approaches have been proposed for the integration of object detectors in a tracking process. A stochastic model is implemented in [2] to track a single face in a video, which relies on combined face detection and prediction from the previous frame. Faces are detected in a coarse-to-fine network, thus producing a hierarchical trace of face detections for each frame that is used in a trained probabilistic framework to determine face positions. An edgelet-based part detector and mean shift can be used to perform detection and tracking of partially occluded objects [3]. The incorporation of recent observations improves the performance of a particle filter [4], and has been used in a hockey player tracking system by increasing the number of particles in the proposal distribution around detections [5]. As an alternative to an object detector, contour extraction can be combined with color information as part of the object model [6]. Other methods include motion segmentation combined with a nearest neighborhood filter [7], updating a Kalman filter with detections [8], combining detection and MAP probabilities [9], and using detections as input to a probabilistic data association filter [10].

In this paper, we propose a unified multiobject detection and tracking framework that uses an object detection algorithm integrated with a particle filter, and demonstrate it on people and faces. The proposed framework integrates prior knowledge of object categories with probabilistic tracking. We use both a priori knowledge (in the form of the training of an object classifier) and on-line knowledge acquisition (in the form of the target model update). Detection of faces and people is done by a cascaded Adaboost classifier, supported by color and motion segmentation, respectively.


[Figure 1 (flow chart): external knowledge (face and person training, color features) produces trained face and people classifiers and change detection parameters; the detection stage applies the object detector and segmentation to the input frame I_t(i, j) and fuses their outputs O_t^c(x, y, w, h, n) and S_c(i, j); the tracking stage performs propagation (t − 1 → t), per-particle likelihood evaluation, and expectation, with online accumulated knowledge used to create/update the model; post-processing outputs key objects and trajectories.]

Figure 1: Flow chart of the proposed object-based video analysis framework.

Next, a particle filter tracks the objects over time and compensates for missing or false detections. The detections, when available, influence the proposal distribution and the updating of the target color model (see Figure 1). We evaluate the proposed framework on the standard datasets CLEAR [11], AMI [12], and PETS 2001 [13].

The paper is organized as follows. Section 2 introduces face and people detection and evidence fusion. The integration of detections in particle filtering and track management issues are described in Section 3. Section 4 introduces the performance measures. Section 5 presents the experimental results. Finally, in Section 6 we draw the conclusions.

2. DETECTING HUMANS AND FACES

2.1. Classifying object categories

The a priori knowledge about the object categories to be discovered in a video is incorporated through the training of an object detector. The validity of the proposed framework is independent of the chosen detector; here we use two different detectors to demonstrate the feasibility and generality of the proposed framework.

In particular, to detect faces and people, we use an Adaboost feature classifier based on a set of Haar-wavelet-like features (see [14, 15]). These features are computed on the integral image II(x, y), defined as

II(x, y) = \sum_{i=1}^{x} \sum_{j=1}^{y} I(i, j),

where I(i, j) represents the original image intensity. The Haar features are differences between the sums of all pixels within subwindows of the original image; in the integral image, they are therefore calculated as simple differences between the top-left and the bottom-right corners of the corresponding subwindows.
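As a concrete illustration of this corner-difference evaluation, the following minimal Python sketch (not from the paper; function names are illustrative) computes an integral image and a two-rectangle edge feature:

```python
import numpy as np

def integral_image(img):
    """Cumulative row/column sums: II(x, y) = sum of img[0..x, 0..y]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, top, left, bottom, right):
    """Sum of pixels in img[top..bottom, left..right] via 4 corner lookups."""
    s = ii[bottom, right]
    if top > 0:
        s -= ii[top - 1, right]
    if left > 0:
        s -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        s += ii[top - 1, left - 1]
    return s

def edge_feature(ii, top, left, h, w):
    """Two-rectangle edge feature: left half minus right half (Figure 2(a))."""
    half = w // 2
    left_sum = box_sum(ii, top, left, top + h - 1, left + half - 1)
    right_sum = box_sum(ii, top, left + half, top + h - 1, left + w - 1)
    return left_sum - right_sum

# usage on a random 10 x 24 pixel window (the person-detector sample size)
img = np.random.randint(0, 256, (24, 10)).astype(np.int64)
ii = integral_image(img)
print(edge_feature(ii, 0, 0, 24, 10))
```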

Figure 2: Haar features used for classification. (a–e) edge features; (f–g) center-surround features; (h–o) line features.

For face detection, we use a trained classifier [16] for frontal, left, and right profile faces, with the 14 features shown in Figure 2 ((a)–(d), (f)–(o)). The edge feature shown in Figure 2(e) is used to model tilted edges, such as shoulders, and is therefore not suitable for modeling faces.

For people detection, the training was performed using the 13 features shown in Figure 2 ((a)–(e), (h)–(o)) [15]. We used n_t = n_t^+ + n_t^- = 4285 training samples, with n_t^+ = 2543 positive 10 × 24 pixel samples selected from the CLEAR dataset (see Figure 3) and n_t^- = 1742 negative samples with different resolutions. Since there is one weak classifier for each distinct feature combination, effectively there are 2543 × 13 = 33059 weak classifiers that, after training, are organized in 20 layers. Note that the features in Figure 2 ((c), (d), (g), (l)–(o)) are computed on the integral image rotated by 45° [17].


Figure 3: Subset of positive samples used for training the person detector.

Let us denote the object classification result with O_t^c(x, y, w, h, n), where c denotes the object class (we will use the subscript f for faces and p for people), n = 1, …, N_c indexes the detected objects for class c at time t (with N_c their number), (x, y) is the center of the object, and w and h are its width and height, respectively.
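For readers implementing the pipeline, this detection tuple maps naturally onto a small record type; the following container is purely illustrative and not part of the paper:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One classifier output O_t^c(x, y, w, h, n)."""
    c: str    # object class: 'f' (face) or 'p' (person)
    t: int    # frame index
    n: int    # detection index within the frame, 1..N_c
    x: float  # box center, x coordinate
    y: float  # box center, y coordinate
    w: float  # box width
    h: float  # box height
```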

2.2. Low-level segmentation

Low-level segmentation provides a reliability cue for each detection. We use skin color segmentation and motion segmentation to support face and person categorization, respectively.

Skin color segmentation is based on a nonlinear transformation of the YCbCr color space [18], which results in a two-dimensional ad hoc chromaticity plane C'_b C'_r. As this transformation is degenerate for gray pixels, RGB values satisfying the conditions 0.975 < R/B and G/B < 1.025 are discarded. To distinguish skin pixels in the C'_b C'_r plane, an ellipse encircling skin chromaticity is defined as

\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1,    (1)

with

\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} C'_b - c_x \\ C'_r - c_y \end{bmatrix}.    (2)

We sampled skin chromaticity from the CLEAR dataset and computed the values c_x = 110, c_y = 152, a = 25, b = 15, and θ = 2.53, which are comparable to those in [18]. An example of skin color segmentation is shown in Figure 4(d).
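A rough sketch of the ellipse test follows, under the simplifying assumption that raw Cb/Cr values stand in for the transformed C'_b/C'_r plane of [18] (the paper's nonlinear transform is more involved):

```python
import numpy as np

# Ellipse parameters fitted on the CLEAR dataset (Section 2.2).
CX, CY, A, B, THETA = 110.0, 152.0, 25.0, 15.0, 2.53

def skin_mask(cb, cr, r, g, b):
    """Binary skin mask from chromaticity arrays. cb/cr approximate the
    transformed C'_b/C'_r plane; r, g, b are the original RGB channels."""
    cos_t, sin_t = np.cos(THETA), np.sin(THETA)
    # rotate chroma into the ellipse frame: eq. (2)
    x = cos_t * (cb - CX) + sin_t * (cr - CY)
    y = -sin_t * (cb - CX) + cos_t * (cr - CY)
    inside = (x / A) ** 2 + (y / B) ** 2 <= 1.0  # eq. (1)
    # discard near-gray pixels, where the transform degenerates
    bs = np.maximum(b.astype(float), 1e-6)
    gray = (r / bs > 0.975) & (g / bs < 1.025)
    return inside & ~gray
```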

Motion segmentation is performed using a statistical color change detector [19]. The detector assumes that a reference image is available, either because an image without objects can be taken or through the use of an adaptive background algorithm [20, 21]. An example of motion segmentation results is presented in Figure 4(b).

Let us denote the segmentation mask as S_t^c(i, j), where i = 1, …, W and j = 1, …, H represent the pixel position, with W and H the image width and height, respectively.

Figure 4: Sample segmentation results on CLEAR test sequences. (a) Outdoor test sequence and (b) corresponding motion segmentation result. (c) Indoor test sequence and (d) corresponding color segmentation result.

Figure 5: Sample person and face detection results. (a) Person detection using the classifier only; (b) filtered detections after evidence fusion. (c) Face detection using the classifier only; (d) filtered detections after evidence fusion.

2.3. Evidence fusion

Segmentation results are used to remove false positive detections. A detection O_t^c(x_d, y_d, w_d, h_d, n) is accepted if

\frac{\left| O_t^c(x_d, y_d, w_d, h_d, n) \cap S_t^c(i, j) \right|}{\left| O_t^c(x_d, y_d, w_d, h_d, n) \right|} > \lambda_c,    (3)

where |·| is the cardinality of a set and λ_c is the minimum fraction of segmented pixels required to accept a detected area. For color segmentation λ_f = 0.1, whereas for motion segmentation λ_p = 0.2. These thresholds account for the fact that detections may contain background areas (for people) or hair regions (for faces). Figure 5 shows two examples of detection results prior to and after evidence fusion.
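A minimal sketch of the acceptance test in (3), assuming a boolean segmentation mask and center-based boxes (the helper name is illustrative):

```python
import numpy as np

LAMBDA = {'f': 0.1, 'p': 0.2}  # thresholds from Section 2.3

def accept_detection(mask, x, y, w, h, c):
    """Keep a detection only if the fraction of segmented pixels inside its
    bounding box exceeds lambda_c, eq. (3). (x, y) is the box center."""
    H, W = mask.shape
    left = max(int(x - w / 2), 0)
    right = min(int(x + w / 2), W)
    top = max(int(y - h / 2), 0)
    bottom = min(int(y + h / 2), H)
    box = mask[top:bottom, left:right]
    if box.size == 0:
        return False
    return box.mean() > LAMBDA[c]  # |O ∩ S| / |O| > lambda_c
```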


The resulting object detections are then used to initialize the object tracker as well as to solve track management issues, as discussed in the next section.

3. GENERATING TRAJECTORIES

3.1. The tracker

Tracking estimates the state of an object in subsequent frames. We use a particle filter tracker as it can deal with non-Gaussian multimodal distributions [5, 22].

Let us represent the target state as x_t = [x, y, w, h]. The posterior pdf of a target location in the state space is defined as a sum of Dirac deltas centered on the particles, with weights ω_t^n:

p(x_t \mid z_{1:t}) \approx \sum_{n=1}^{N_s} \omega_t^n \, \delta(x_t - x_t^n),    (4)

where x_t^n is the state of the nth particle in frame t, z_{1:t} are the measurements from time 1 to time t, and N_s is the total number of particles. The state transition p(x_t^n | x_{t-1}^n) is a zero-order motion model, x_t = x_{t-1} + n_t with n_t ~ N(0, σ), that is, a Gaussian centered on the previous state with variance σ. The update of the pdf over time is based on the recalculation of the weights ω_t^n:

\omega_t^n \propto \omega_{t-1}^n \, \frac{p(z_t \mid x_t^n) \, p(x_t^n \mid x_{t-1}^n)}{q(x_t^n \mid x_{t-1}^n, z_t)},    (5)

where p(z_t | x_t^n) is the likelihood of the measurement. Since we use resampling to avoid the degeneracy of the particles (i.e., when the weights of all particles except one tend to zero after a few iterations [22]), ω_{t-1}^n = 1/N_s for all n and (5) simplifies to

\omega_t^n \propto \frac{p(z_t \mid x_t^n) \, p(x_t^n \mid x_{t-1}^n)}{q(x_t^n \mid x_{t-1}^n, z_t)}.    (6)

To compute the likelihood p(z_t | x_t^n), we use a color histogram φ^M = [φ^M_{1,1,1}, …, φ^M_{R,G,B}] as object model [5, 6], where R, G, and B are the number of bins in each color channel. The color difference between the model M and a particle p, d_J(φ^M, φ^p), is based on the Jeffrey divergence [23]. The likelihood is finally estimated as

p(z_t \mid x_t^n) = \frac{1}{\sqrt{2\pi}\,\sigma_l} \, e^{-d_J(\varphi^M, \varphi^p)^2 / 2\sigma_l^2}.    (7)
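The following compact Python sketch illustrates one tracking iteration under the transition-prior proposal (the full proposal with detections is given in Section 3.2). The helper hist_fn, which extracts the normalized RGB histogram of an image region, is an assumption of the sketch:

```python
import numpy as np

def jeffrey_divergence(p, q, eps=1e-10):
    """Symmetric, KL-based divergence between two normalized histograms [23]."""
    p, q = p + eps, q + eps
    m = (p + q) / 2
    return np.sum(p * np.log(p / m) + q * np.log(q / m))

def pf_step(particles, model_hist, hist_fn, sigma=12.0, sigma_l=0.068):
    """One particle-filter iteration. particles is an (N_s, 4) array of
    [x, y, w, h] states; hist_fn(state) returns the region's histogram."""
    n = len(particles)
    # zero-order motion model: x_t ~ N(x_{t-1}, sigma)
    particles = particles + np.random.normal(0, sigma, particles.shape)
    # with q equal to the transition prior, the weight in (6) reduces to
    # the color likelihood (7)
    d = np.array([jeffrey_divergence(model_hist, hist_fn(s)) for s in particles])
    w = np.exp(-d**2 / (2 * sigma_l**2)) / (np.sqrt(2 * np.pi) * sigma_l)
    w /= w.sum()
    # systematic resampling to avoid particle degeneracy
    idx = np.searchsorted(np.cumsum(w), (np.arange(n) + np.random.rand()) / n)
    return particles[idx]
```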

3.2. Particle propagation

Instead of using the transition prior only, we include object detections, when available, in the proposal distribution: a fraction of the particles is spread around the previous state according to the motion model, whereas the rest are spread around the detections. For this reason, each detection has to be linked to the closest state. This association is established with a gated nearest neighborhood filter, which selects the detection O_t^c(x_d, y_d, w_d, h_d, n) closest to the state x_t if it is in its proximity. The proximity conditions are

|x_d - x_{tr}| < \delta_c (w_{tr} + \eta_c h_{tr}),
|y_d - y_{tr}| < \delta_c (\eta_c w_{tr} + h_{tr}),
(1 - \gamma_c) w_{tr} < w_d < (1 + \gamma_c) w_{tr},
(1 - \gamma_c) h_{tr} < h_d < (1 + \gamma_c) h_{tr},    (8)

where (x_{tr}, y_{tr}) is the center, and w_{tr} and h_{tr} are the width and height of the ellipse representing the object; η_f = 1, η_p = 0, δ_f = γ_p = 0.25, and δ_p = γ_f = 0.5 are determined experimentally. The association is incorporated in (9) [5] as

q(x_t \mid x_{t-1}, z_t) = \alpha_c \, q_d(x_t \mid z_t) + (1 - \alpha_c) \, p(x_t \mid x_{t-1}),    (9)

where α_c is the fraction of particles spread around the detection in the state space and q_d(x_t | z_t) is a Gaussian around the associated detection. If the proximity conditions are not satisfied, a new candidate track is initialized and α_c = 0. In such a case, (9) reduces to q(x_t | x_{t-1}, z_t) = p(x_t | x_{t-1}), whereas (6) reduces to ω_t^n ∝ p(z_t | x_t^n).
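A sketch of the gating test (8) and the mixed proposal (9), assuming state tuples (x, y, w, h); the spread sigma_d around the detection is an assumption, as the paper does not report it:

```python
import numpy as np

# gating and mixture parameters from Sections 3.2 and 5
ETA = {'f': 1.0, 'p': 0.0}
DELTA = {'f': 0.25, 'p': 0.5}
GAMMA = {'f': 0.5, 'p': 0.25}
ALPHA = {'f': 0.9, 'p': 0.25}

def gate(track, det, c):
    """Proximity conditions (8) between a track and a detection box."""
    xt, yt, wt, ht = track
    xd, yd, wd, hd = det
    return (abs(xd - xt) < DELTA[c] * (wt + ETA[c] * ht)
            and abs(yd - yt) < DELTA[c] * (ETA[c] * wt + ht)
            and (1 - GAMMA[c]) * wt < wd < (1 + GAMMA[c]) * wt
            and (1 - GAMMA[c]) * ht < hd < (1 + GAMMA[c]) * ht)

def propose(prev_state, det, c, n_particles, sigma=12.0, sigma_d=4.0):
    """Mixed proposal (9): a fraction alpha_c of the particles is drawn
    around the associated detection, the rest around the previous state."""
    n_det = int(ALPHA[c] * n_particles) if det is not None else 0
    around_det = (np.random.normal(det, sigma_d, (n_det, 4))
                  if n_det else np.empty((0, 4)))
    around_prev = np.random.normal(prev_state, sigma, (n_particles - n_det, 4))
    return np.vstack([around_det, around_prev])
```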

3.3. Model update

Object detections are also used to update the object model M online. This update aims to avoid track drifting when the object appearance varies due to changes in illumination, size, or pose. The color histogram is updated according to

\varphi^M_{r,g,b}(t) = \beta_c \, \varphi^d_{r,g,b}(t) + (1 - \beta_c) \, \varphi^M_{r,g,b}(t-1),    (10)

where r = 1, …, R, g = 1, …, G, b = 1, …, B, and β_c is the update factor. Note that the histogram is only updated when there is an associated detection, in order to prevent background pixels from becoming part of the model M.
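In code, the update (10) is a per-bin exponential blend; a sketch using the β values reported in Section 5:

```python
BETA = {'f': 0.35, 'p': 0.1}  # update factors from Section 5

def update_model(model_hist, det_hist, c):
    """Exponential model update, eq. (10); applied only when a detection
    is associated with the track."""
    return BETA[c] * det_hist + (1 - BETA[c]) * model_hist
```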

3.4. Track management issues

Unlike [5], where tracks are initiated with a single detection, we integrate information coming from the detector and the tracker to deal with track initiation and termination. A detection O_t^c(x, y, w, h, n) that is not associated with a track is considered as a candidate for track initialization. Tracking is started in sleeping mode. To switch a track from sleeping to active mode, N_i detections are accumulated in subsequent frames. The value of N_i depends on the frequency of the detections:

N_i = \min\left( \frac{3}{2 - 1/f}, \, 9 \right),    (11)

where f is the frequency of detections and f = 9/20 is the minimum frequency. If there is not a sufficient number of successive detections, the track is discarded.

A track is terminated if the low-level segmentation results do not provide enough evidence for the presence of an object:

\frac{\left| X_t^c(x_d, y_d, w_d, h_d, n) \cap S_t^c(i, j) \right|}{\left| X_t^c(x_d, y_d, w_d, h_d, n) \right|} < \lambda_c,    (12)


Figure 6: Example of using track management rules for sequence S3, frame 270. (a) Without track management, the tracked ellipses degenerate. (b) With track management, the tracked ellipses correctly estimate the face areas.

with λ_p = 0.2 and λ_f = 0.1. Moreover, a person track is terminated after N_t = 25 subsequent frames without an associated detection. A face track is terminated when the color histogram of the object changes drastically, that is, when the Jeffrey divergence d_J between the current target and the model is larger than a threshold D; a cut-off distance of D = 0.15 was found appropriate. We also terminate tracks that deviate more than 3σ from the average face size, learnt on the first 300 tracked faces. Finally, faces whose aspect ratio is w/h > 1.5 are considered unlikely and therefore removed. An example of the performance improvements achieved with the proposed initialization and termination rules is shown in Figure 6.
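A sketch consolidating the face-specific termination rules; note that measuring the "face size" as the box area w*h is an assumption of the sketch:

```python
D_MAX = 0.15        # Jeffrey divergence cut-off from Section 3.4
MAX_ASPECT = 1.5    # faces with w/h above this are considered unlikely

def terminate_face_track(d_j, w, h, mean_size, std_size):
    """Face-termination tests of Section 3.4: drastic appearance change,
    implausible size (beyond 3 sigma of the learnt average face size),
    or unlikely aspect ratio."""
    return (d_j > D_MAX
            or abs(w * h - mean_size) > 3 * std_size
            or w / h > MAX_ASPECT)
```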

3.5. Postprocessing

Track verification is performed to remove false tracks in a postprocessing stage. False tracks are generally initiated by repeated multiple detections on the same object. To remove these tracks, a score is computed for each overlapping track: s_t^n = 0.6 (N_f / 50) + 0.4 f_rd, where s_t^n is the score for track n at time t, N_f is the number of frames tracked in a 50-frame window, and f_rd is the frequency of detection. The weights on N_f (0.6) and f_rd (0.4) favor tracks with a long history over new ones with a high detection frequency. Finally, tracks shorter than 15 frames are likely to be clutter and are therefore removed.
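The score is directly computable; a small illustrative helper:

```python
def track_score(n_frames_tracked: int, det_frequency: float) -> float:
    """Score s_t^n of Section 3.5 for overlapping tracks: a long history
    (frames tracked within the 50-frame window) is weighted more heavily
    than a high detection frequency."""
    return 0.6 * n_frames_tracked / 50.0 + 0.4 * det_frequency
```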

4. PERFORMANCE MEASURES

To quantitatively evaluate the performance of the proposed framework, two groups of measures are used, namely, detection and tracking performance measures. As detection measures we chose precision P and recall R, which quantify the ability of an algorithm to identify true targets in a video, as opposed to false detections and missed detections. These measures are commonly used to evaluate the performance of database retrieval algorithms and are defined as

P = \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN},    (13)

where TP is the number of true positives, FP is the number of false positives, and FN is the number of false negatives.
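In code, (13) is immediate:

```python
def precision_recall(tp: int, fp: int, fn: int):
    """Detection measures of eq. (13)."""
    return tp / (tp + fp), tp / (tp + fn)

# usage: 90 true positives, 10 false positives, 5 missed detections
p, r = precision_recall(90, 10, 5)  # p = 0.9, r ~ 0.947
```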

Table 1: Brief information about the datasets.

Dataset   Seq.   Sequence name       Task     Frames
AMI       S1     EN2001b.Closeup1    face     100–600
AMI       S2     EN2001b.Closeup4    face     1–500
AMI       S3     IS1003c.L           face     1–500
AMI       S4     IS1004a.R           face     250–750
CLEAR     S5     PVTRA102a09         people   500–3001
CLEAR     S6     PVTRA102a10         people   3007–5701
CLEAR     S7     PVTRA102a11         people   1–500
CLEAR     S8     PVTRA102a12         people   1000–1500
PETS      S9     PETS1SEG            people   1–500

The tracking performance measures quantify the accuracy of the estimated object size (d_D) and the accuracy of the estimated object position (d_Dist). The measure d_D quantifies the overlap between the ground truth and the estimated targets, and is defined as

d_D = 1 - \frac{\sum_{t=1}^{N_{fr}} \sum_{n=1}^{N_{fn}} \left[ 2 \left| G_n^{(t)} \cap D_n^{(t)} \right| / \left( \left| G_n^{(t)} \right| + \left| D_n^{(t)} \right| \right) \right]}{\sum_{u=1}^{N_{fr}} N_{fn}^u},    (14)

where G_n^{(t)} denotes the ground truth for track n at time t, D_n^{(t)} is the corresponding estimated target, N_{fn} is the number of objects matched between the ground truth and the tracker output in a frame, N_{fr} is the total number of frames, and \sum_{u=1}^{N_{fr}} N_{fn}^u is the total number of matched objects in the entire sequence. The measure d_Dist is the distance between the centers of the estimated tracked object and the ground truth, normalized by the size of the ground truth:

d_{Dist} = \frac{\sum_{t=1}^{N_{fr}} \sum_{n=1}^{N_{fn}} \sqrt{ \left( (x_d - x_g)/w_g \right)^2 + \left( (y_d - y_g)/h_g \right)^2 }}{\sum_{u=1}^{N_{fr}} N_{fn}^u},    (15)

where (x_d, y_d) and (x_g, y_g) are the centers of the tracked object and the ground truth, and w_g and h_g are the width and height of the corresponding ground truth object.
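A sketch of both measures over matched box pairs, assuming center-based (x, y, w, h) boxes and using box areas for the cardinalities in (14):

```python
import numpy as np

def dice(g, d):
    """Overlap term of (14) for two axis-aligned boxes (x, y, w, h)."""
    gx, gy, gw, gh = g
    dx, dy, dw, dh = d
    ix = max(0.0, min(gx + gw / 2, dx + dw / 2) - max(gx - gw / 2, dx - dw / 2))
    iy = max(0.0, min(gy + gh / 2, dy + dh / 2) - max(gy - gh / 2, dy - dh / 2))
    return 2 * ix * iy / (gw * gh + dw * dh)

def tracking_measures(matches):
    """d_D and d_Dist of (14)-(15) over matched (ground_truth, estimate)
    box pairs pooled across all frames of a sequence."""
    dices, dists = [], []
    for g, d in matches:
        dices.append(dice(g, d))
        dists.append(np.hypot((d[0] - g[0]) / g[2], (d[1] - g[1]) / g[3]))
    return 1 - np.mean(dices), np.mean(dists)
```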

5. EXPERIMENTAL RESULTS

We demonstrate the proposed framework on three standard datasets, namely, CLEAR, AMI, and PETS 2001. These datasets include indoor and outdoor scenarios for a total of 8700 frames (see Table 1).

The same set of parameters is used for motion segmentation and for the tracker in all the experiments. For the statistical change detector, the noise variance is σ = 1.8 and the kernel size is k = 3. The particle filter uses 150 particles per object, with a transition factor of 12 pixels per frame. For the likelihood (7), σ_l = 0.068. For faces, α_f = 0.9 and β_f = 0.35; for people, α_p = 0.25 and β_p = 0.1. These values were found appropriate after extensive testing. The histogram for the color model and the likelihood is uniformly quantized with 10 × 10 × 10 bins in the RGB space.

We compare the proposed approach that integrates detections and particle filtering (referred to as PFI) with particle filtering alone (referred to as PF). To offer a fair comparison, in both cases the initialization and termination rules presented in Section 3.4 are used. We also compare PFI with the nearest neighborhood filter (NN). The measurements used for evaluation are the means (d_D, d_Dist, R, and P) and the corresponding standard deviations of the performance measures presented in Section 4 over 8 runs (see Table 2).


Table 2: Comparison of tracking performance (means and standard deviations for 8 runs).

Faces
Seq.          PFI           PF            NN
S1  d_D       0.24 (0.02)   0.25 (0.03)   0.27
    d_Dist    0.10 (0.004)  0.14 (0.02)   0.10
    P         0.76 (0.06)   0.70 (0.03)   0.70
    R         1.00 (0)      0.98 (0.03)   1
S2  d_D       0.28 (0.01)   0.34 (0.01)   0.28
    d_Dist    0.13 (0.005)  0.24 (0.01)   0.12
    P         0.95 (0.01)   0.92 (0.03)   0.94
    R         0.96 (0.01)   0.89 (0.01)   0.94
S3  d_D       0.27 (0.03)   0.39 (0.01)   0.32
    d_Dist    0.13 (0.02)   0.21 (0.02)   0.16
    P         0.52 (0.03)   0.38 (0.02)   0.47
    R         0.73 (0.03)   0.74 (0.02)   0.72
S4  d_D       0.38 (0.03)   0.49 (0.03)   0.26
    d_Dist    0.26 (0.03)   0.41 (0.04)   0.17
    P         0.66 (0.08)   0.52 (0.04)   0.60
    R         0.69 (0.06)   0.48 (0.05)   0.29

People
Seq.          PFI           PF            NN
S5  d_D       0.25 (0.02)   0.26 (0.01)   0.24
    d_Dist    0.18 (0.01)   0.17 (0.02)   0.19
    P         0.78 (0.02)   0.78 (0.01)   0.80
    R         0.90 (0.02)   0.92 (0.03)   0.82
S6  d_D       0.25 (0.05)   0.35 (0.03)   0.21
    d_Dist    0.16 (0.03)   0.22 (0.02)   0.13
    P         0.23 (0.04)   0.22 (0)      0.26
    R         0.55 (0.08)   0.59 (0.11)   0.62
S7  d_D       0.36 (0.04)   0.36 (0.01)   0.31
    d_Dist    0.21 (0.02)   0.24 (0.02)   0.17
    P         0.74 (0.04)   0.70 (0.02)   0.81
    R         0.84 (0.01)   0.84 (0.01)   0.84
S8  d_D       0.34 (0.03)   0.37 (0.04)   0.35
    d_Dist    0.21 (0.02)   0.21 (0.03)   0.21
    P         0.59 (0.02)   0.57 (0.03)   0.60
    R         0.67 (0.02)   0.65 (0.04)   0.61


The comparison of PFI and PF for faces shows that the d_D and d_Dist scores are smaller for all face sequences, indicating better correspondence between the tracked ellipses and the ground truth. Furthermore, R and P are larger for the same sequences, except for one R score. Figure 9 shows sample results of people and face tracking, and their framewise d_D scores are illustrated in Figure 8.

Figure 7: Comparison of tracking results between NN (green) and PFI (blue). (a) Sequence S2 and (b) sequence S4: the NN algorithm fails when there is a low frequency of detections. (c) Sequence S6 and (d) sequence S7: the NN filter produces jagged trajectories.

[Figure 8: four plots of d_D versus frame number, for sequences S3, S2, and S5 (two excerpts), each comparing PFI and PF.]

Figure 8: Performance comparison of face tracks (sequences S3 and S2) and people tracks (sequence S5) for PF and PFI.


Figure 9: Comparison of tracking results with PF (green) and PFI (blue). (a)–(b) Sequence S5; (c) sequence S2; (d) sequence S3.

[Figure 10(b): two 3D plots of the face trajectories, with width and height in pixels and time in seconds.]

Figure 10: Example of trajectory-based video description and object prototypes. (a) Resulting tracks superimposed on the images. (b) Evolution of the tracks over time. (c) Automatically generated key-objects for frontal, left, and right profile faces.


In Figure 8 (row 1), the quality of the PFI results improves more quickly than that of PF: the average d_D is 0.17 for PFI and 0.33 for PF. In Figure 8 (row 2), the average is 0.24 for PFI and 0.31 for PF. Rows 3 and 4 of Figure 8 show the people tracking examples, with averages of 0.22 and 0.12 for PFI and 0.30 and 0.15 for PF, respectively. The lower average values of d_D in all these cases show the improved performance of PFI over PF.

The comparison between PFI and NN for faces shows that the d_D and d_Dist scores are better for sequences S1 and S3 and similar for sequence S2, whereas these scores indicate better performance of the NN tracker for S4, but with lower R and P scores. The reason is that in S4 the NN tracker fails in parts of the sequence with a very low frequency of detections, whereas the particle filter succeeds in tracking in these regions (see Figure 7). For people tracking, the scores are similar for S5 and S8, whereas NN is better for S6 and S7 because detections that are larger than the person sometimes dominate in frequency, and PFI then filters out the correctly sized detections (which are instead taken into account by NN).

To conclude, Figure 10 shows an example of trajectory-based video description using the spatiotemporal trajectories of two faces and the corresponding object prototypes (frontal and profile faces). Only the true tracks are computed by the proposed algorithm; false detections and their associated tracks are filtered out using skin color segmentation and postprocessing. Video results are available at http://www.elec.qmul.ac.uk/staffinfo/andrea/detrack.html.

6. CONCLUSIONS

We presented a general video analysis framework for detecting and tracking object categories and demonstrated it on people and faces. Video results and quantitative measurements show that the proposed integration of detections with particle filtering improves the robustness of the state estimation of the targets.

The proposed framework is general, and classifiers of other body parts and other object types can be incorporated without changing the overall structure of the algorithm. Using additional object detectors, a complete story line of a video based on specific object categories and their trajectories could be produced, describing interactions and other important events. Moreover, the video could be annotated semantically with the identity information of the appearing persons by adding a face recognition module [24].

Our current work includes improving the performance of the human detector by using a larger training database and refining the bounding boxes of the detections using edges and motion segmentation results.

ACKNOWLEDGMENT

The authors acknowledge the support of the UK Engineering and Physical Sciences Research Council (EPSRC), under Grant no. EP/D033772/1.

REFERENCES

[1] A. Cavallaro and S. Winkler, "Perceptual semantics," in Digital Multimedia Perception and Design, G. Ghinea and S. Y. Chen, Eds., Idea Group, Toronto, Canada, April 2006.

[2] S. Gangaputra and D. Geman, "A unified stochastic model for detecting and tracking faces," in Proceedings of the 2nd Canadian Conference on Computer and Robot Vision, pp. 306–313, Victoria, BC, Canada, May 2005.

[3] B. Wu and R. Nevatia, "Detection and tracking of multiple, partially occluded humans by Bayesian combination of edgelet based part detectors," International Journal of Computer Vision, vol. 75, no. 2, pp. 247–266, 2007.

[4] R. van der Merwe, A. Doucet, J. F. G. de Freitas, and E. Wan, "The unscented particle filter," in Advances in Neural Information Processing Systems 14 (NIPS '01), vol. 8, pp. 351–357, Vancouver, BC, Canada, December 2001.

[5] K. Okuma, A. Taleghani, N. de Freitas, J. J. Little, and D. G. Lowe, "A boosted particle filter: multitarget detection and tracking," in Proceedings of the 8th European Conference on Computer Vision (ECCV '04), vol. 1, pp. 28–39, Prague, Czech Republic, May 2004.

[6] X. Xu and B. Li, "Head tracking using particle filter with intensity gradient and color histogram," in Proceedings of IEEE International Conference on Multimedia and Expo (ICME '05), pp. 888–891, Amsterdam, The Netherlands, July 2005.

[7] S. McKenna and S. Gong, "Tracking faces," in Proceedings of the 2nd International Conference on Automatic Face and Gesture Recognition, pp. 271–276, Killington, VT, USA, October 1996.

[8] P. Withagen, K. Schutte, and F. Groen, "Object detection and tracking using a likelihood based approach," in Proceedings of the Advanced School for Computing and Imaging Conference, vol. 2, pp. 248–253, Lochem, The Netherlands, June 2002.

[9] M. G. S. Bruno and J. M. F. Moura, "Integration of Bayes detection and target tracking in real clutter image sequences," in Proceedings of IEEE International Radar Conference, pp. 234–238, Atlanta, GA, USA, May 2001.

[10] P. Willett, R. Niu, and Y. Bar-Shalom, "Integration of Bayes detection with target tracking," IEEE Transactions on Signal Processing, vol. 49, no. 1, pp. 17–29, 2001.

[11] R. Kasturi, "Performance evaluation protocol for face, person and vehicle detection & tracking in video analysis and content extraction (VACE-II)," Computer Science & Engineering, University of South Florida, Tampa, FL, USA, January 2006, http://isl.ira.uka.de/clear06/downloads/ClearEval_Protocol_v5.pdf.

[12] AMI corpus, http://www.idiap.ch/amicorpus, July 2007.

[13] PETS 2001 dataset, http://www.cvg.cs.rdg.ac.uk/pets2001/pets2001-dataset.html, July 2007.

[14] P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 511–518, Kauai, HI, USA, December 2001.

[15] P. Viola, M. J. Jones, and D. Snow, "Detecting pedestrians using patterns of motion and appearance," in Proceedings of IEEE International Conference on Computer Vision (ICCV '03), vol. 2, pp. 734–741, Nice, France, October 2003.

[16] G. Bradski, A. Kaehler, and V. Pisarevsky, "Learning-based computer vision with Intel's open source computer vision library," Intel Technology Journal, vol. 9, pp. 119–130, 2005.

[17] R. Lienhart and J. Maydt, "An extended set of Haar-like features for rapid object detection," in Proceedings of International Conference on Image Processing (ICIP '02), vol. 1, pp. 900–903, Rochester, NY, USA, September 2002.

[18] R.-L. Hsu, M. Abdel-Mottaleb, and A. K. Jain, "Face detection in color images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 696–706, 2002.

[19] A. Cavallaro and T. Ebrahimi, "Interaction between high-level and low-level image analysis for semantic video object extraction," EURASIP Journal on Applied Signal Processing, vol. 2004, no. 6, pp. 786–797, 2004.

[20] R. J. Radke, S. Andra, O. Al-Kofahi, and B. Roysam, "Image change detection algorithms: a systematic survey," IEEE Transactions on Image Processing, vol. 14, no. 3, pp. 294–307, 2005.

[21] C. Stauffer and W. E. L. Grimson, "Learning patterns of activity using real-time tracking," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 747–757, 2000.

[22] M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, "A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking," IEEE Transactions on Signal Processing, vol. 50, no. 2, pp. 174–188, 2002.

[23] Y. Rubner, J. Puzicha, C. Tomasi, and J. M. Buhmann, "Empirical evaluation of dissimilarity measures for color and texture," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '01), vol. 2, pp. 25–43, Kauai, HI, USA, December 2001.

[24] J. Ruiz-del-Solar and P. Navarrete, "Eigenspace-based face recognition: a comparative study of different approaches," IEEE Transactions on Systems, Man, and Cybernetics, Part C, vol. 35, no. 3, pp. 315–325, 2005.
