IMAGE PROCESSING FOR OBJECT TRACKING USING VIRTEX FPGA
Abstract - In this paper, a hardware solution to the object tracking problem using image processing principles is presented. Object tracking is the process of detecting moving objects of interest and plotting their trajectories by analyzing successive frames. A solution based on image histograms is used to detect object motion in the color image frames captured by a CCD camera. The captured video frames are subjected to background subtraction in order to identify the moving object in the captured image. The resulting difference frame is used to determine the displacement and velocity of the moving vehicles in the captured frame. Velocities of target vehicles within the range of 100 meters to 3 kilometers are readily identified and processed. This SoC hardware design supports the real-time requirements of common video at more than 30 fps.
VHDL code for computing the object displacement and velocity is written, and the simulation results are displayed in the ModelSim waveform window. The same code is then synthesized using the Xilinx ISE tools targeting the Virtex-4 FPGA. In order to verify the functionality of the hardware, an equivalent program is written in MATLAB.
I. INTRODUCTION
Tracking a moving object in a complicated scene is a difficult problem. If the objects as well as the camera are moving, the resulting motion becomes more complicated still. Particle-filter based approaches are employed to model such complicated tracking problems. A general-purpose object tracking algorithm is needed in many applications, including video compression, driver assistance and video post-editing. Other applications are surveillance via intelligent security cameras, perceptual user interfaces that make use of users' gestures or movements, and placing a 3D-tracked real object in a virtual environment in augmented reality.
Objectives
For our implementation of object tracking, we assume that the CCD camera position is fixed, that only the vehicles are moving, and that they move at a determined rate. Background subtraction principles are used to identify the motion of the object in the captured video frame. The image histogram of the static input frame is compared with the histograms of successive frames in order to detect any change in the object or motion of the vehicles. Once motion is identified, we compute the displacement of the object in pixels, which in turn is used to determine the velocity of the moving object. A SoC design which performs the above operations is presented.
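The histogram-comparison step can be sketched in software. The following Python fragment is a minimal illustrative model, not the paper's VHDL or MATLAB code; the helper names (column_histogram, detect_motion) and the use of NumPy on a grayscale frame are assumptions for illustration:

```python
import numpy as np

def column_histogram(frame):
    """Accumulate pixel intensities down each column of the frame."""
    return frame.astype(np.int64).sum(axis=0)

def detect_motion(reference, current, threshold=0):
    """Compare column histograms of a static reference frame and the
    current frame; any bin difference above the threshold flags motion."""
    diff = np.abs(column_histogram(current) - column_histogram(reference))
    return bool((diff > threshold).any())

# A black reference frame and a frame containing a bright 8 x 8 "object"
ref = np.zeros((64, 64), dtype=np.uint8)
cur = ref.copy()
cur[28:36, 10:18] = 200
print(detect_motion(ref, cur))   # the histograms differ, so motion is flagged
```

The same comparison against an unchanged frame yields no detection, which is why the static reference frame can serve as the background model.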
II. IMPLEMENTATION
Basics of Object Tracking
The ports of the image object velocity SoC are shown in Figure 2.1. The SoC is designed in such a way that an input is accepted only when data_in_valid is true, and output values are driven only when the output-valid signal is true.
Figure 2.1. Object velocity SoC
Figure 2.2. Building blocks of the object velocity SoC
The basic building blocks of the object velocity SoC are shown in Figure 2.2. The output video lines of the CCD camera are stored in two SRAMs, namely frame A and frame B. Address generation logic is coded in order to read 8 x 8 blocks from the SRAM. The basic principle employed in finding the vertical and horizontal histograms is accumulation, as shown in Figure 2.3. For the vertical accumulation, all the 8-bit pixel data in each column of the frame are added; for each column, this corresponds to the addition of the 8-bit data of all 64 pixels. Similarly, horizontal accumulation adds the 8-bit data of all 64 pixels in each row. The accumulated data are stored in two 1 x 64 arrays, the Hy and Hx arrays, called the frame-vertical and frame-horizontal arrays respectively. Each element in these bins is 16 bits wide, allowing for the maximum possible accumulated value. For these operations to be completed in minimum time, the entire frame
1 M. Srinivas, 2 Y. Raghavender Rao
1 Lecturer, JNTUH Nachupally (Kondagattu), KNR.
2 Assoc. Professor, JNTUH Nachupally (Kondagattu), KNR.
e-mail: [email protected], [email protected]
International Journal of Systems, Algorithms & Applications, Volume 2, Issue 2, February 2012, ISSN Online: 2277-2677
data needs to be stored in memory; however, this results in inefficient memory utilization. Thus, the entire process requires 64 such read-in and read-out operations. The requirement to process 1 x 8 array data, instead of the entire 64 x 64 frame at once, makes the accumulation more complex. The technique used here is that, as each 1 x 64 vector is read, its elements are added into the 64 vertical bins separately. In addition, all 64 pixel values of the vector are accumulated into the corresponding horizontal bin. This completes the processing of a single vector. When a new vector is read in, it is again added to the current values of the vertical bins, and its horizontal accumulation proceeds exactly as for the first 1 x 64 vector. After 64 such operations, the complete vertical and horizontal accumulation is accomplished.
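The vector-at-a-time accumulation described above can be modeled in a few lines of Python. This is a reference sketch, not the SoC's VHDL; the function name accumulate_histograms and the use of NumPy are assumptions, and the 16-bit bin width from the text is modeled with uint16:

```python
import numpy as np

def accumulate_histograms(frame):
    """Stream a 64 x 64 frame one 1 x 64 row vector at a time, as the SoC
    does: each incoming vector is added element-wise into the 64 vertical
    bins (Hy), and its 64 pixels are summed into one horizontal bin (Hx)."""
    rows, cols = frame.shape
    hy = np.zeros(cols, dtype=np.uint16)   # per-column accumulation
    hx = np.zeros(rows, dtype=np.uint16)   # per-row accumulation
    for r in range(rows):
        vector = frame[r].astype(np.uint16)
        hy += vector            # vertical bins: column-wise running sums
        hx[r] = vector.sum()    # horizontal bin: sum of this row's 64 pixels
    return hx, hy

frame = np.full((64, 64), 255, dtype=np.uint8)
hx, hy = accumulate_histograms(frame)
print(hx[0], hy[0])   # worst case 64 * 255 = 16320, which fits in 16 bits
```

The all-255 frame demonstrates why 16-bit bins suffice: the maximum possible accumulated value is 64 x 255 = 16320, well under 2^16.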
Figure 2.3. Horizontal and Vertical Averaging
Two-point Moving Average Filter
In the process of capturing, recording, processing and detecting the object, or some combination of these, errors and noise may creep into the image. Smoothing is used primarily to diminish the effect of spurious noise and to blur the false contour pixels that may be present in a digital image. Here, each new value is calculated by averaging the brightness values of two successive entries of the bins. In this paper, to achieve parallel processing and hence ensure high-speed operation, two moving average filters are used: simultaneous filtering of the two bins reduces the processing time.
For an array f(i) of maximum length N, the resulting array a(i) of the 2-point moving average algorithm is

a(i) = f(i)/2,              for i = 1 or i = N
a(i) = (f(i) + f(i+1))/2,   otherwise
It is important to note that the new value is equal to half of the original at the start and end of the array. These positions correspond to the border of the frame being processed. The entire processing is concentrated on the specific object being tracked; therefore, it is generally ensured during pre-processing (image segmentation or pattern identification) that the desired object lies sufficiently near the middle of the frame for easy and accurate processing.
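The filter defined above can be sketched directly from its two-case formula. The Python below is an illustrative model (the name moving_average_2pt is an assumption), using 0-based indexing where the paper's i = 1 and i = N become the first and last elements:

```python
def moving_average_2pt(f):
    """2-point moving average: a(i) = (f(i) + f(i+1)) / 2, except at the
    first and last positions, where a(i) = f(i) / 2 (the frame border)."""
    n = len(f)
    a = [0.0] * n
    for i in range(n):
        if i == 0 or i == n - 1:
            a[i] = f[i] / 2              # border bins: half the original
        else:
            a[i] = (f[i] + f[i + 1]) / 2
    return a

print(moving_average_2pt([10, 20, 30, 40]))   # [5.0, 25.0, 35.0, 20.0]
```

The halved border values are harmless here because, as noted above, the tracked object is kept away from the frame edges during pre-processing.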
Maxima Index Finder
A simple technique is used for finding the maximum value and its index in a given array. The values in the Hx and Hy bins correspond to the accumulated gray-level intensity of the object in the corresponding row or column of the image being processed. Therefore, the indices of the maxima in the vertical accumulator bins (Hy) and horizontal accumulator bins (Hx) correspond to the column number and row number, respectively, of the object of interest in the actual 2-D image. Hence, the index is more important than the magnitude of the maximum value. This is exactly the principle used in object tracking for determining the position of the object (missile) in the image.
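A software model of this block is a single compare-and-latch pass over the bins; the function name max_index below is an assumption for illustration:

```python
def max_index(bins):
    """Return the index of the maximum in an accumulator array.
    One pass keeps the running maximum and its position, mirroring a
    simple compare-and-latch circuit in hardware."""
    best_val, best_idx = bins[0], 0
    for i, v in enumerate(bins):
        if v > best_val:
            best_val, best_idx = v, i
    return best_idx

hy = [0, 3, 120, 45, 7]
print(max_index(hy))   # index 2: the object's column position
```

Only the returned index is consumed downstream; the magnitude itself is discarded, as the text notes.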
Block Processing
Block processing produces the same results as processing the image all at once. In distinct block processing, distinct blocks are rectangular or square partitions that divide a matrix into m x n sections.
Figure 2.4. Block processing of the input frame
Figure 2.4 shows one 8 x 8 block of a 512 x 512 image frame. Pixel intensity values are integer values of an array ranging from 0 to 255. Hx and Hy are the partial horizontal and vertical sums of the given 8 x 8 matrix.
Velocity Estimation Principle
A moving object also changes location within its region of interest (ROI) and therefore needs to be distinguished in every frame of a sequence of consecutive images. Once the object is segmented in a number of frames, the displacement of the object between two image frames must be extracted. The displacement of the box corresponds to the distance covered by the object in reality. Establishing this correspondence is a problem in itself and often forces a need for calibration. When the whole object is observed in one of the frames, the difference between two consecutive frames shows two leftovers instead of one; additional effort is then required to relate these two as belonging to a single object.
Velocity Estimation Process
Figure 2.5. Object velocity estimation
The process of velocity measurement used in this implementation depends on a number of parameters, mainly related to the camera in use, such as the image resolution, the frame frequency ff,
and the view angle θ. Another parameter of importance is the distance dp between the camera and the moving object. Given ff in frames/second, dp in meters and θ in degrees, the width of the captured scenery is da = 2·dp·tan(θ/2). Figure 2.5 illustrates a camera with the involved parameters indicated. An object with velocity v (meters/second) covers the distance da (meters) in t = da/v (seconds). During this time, the camera takes N = t·ff = (da·ff)/v frames. In other words, if all the frames are superimposed, there will be N instances of the moving object on a single frame. If W, in pixels, denotes the width of the frames delivered by the camera, the movement of the object corresponds to a displacement in pixels given by np = W/N = (W·v)/(da·ff). The minimum velocity that can be detected corresponds to a single-pixel displacement of the object. In order to overcome this limitation, a 5% margin of the total frame width is provided on both vertical edges. Obviously, the maximum displacement in pixels is correlated to the maximum object velocity that can be detected. Typical PAL camera specifications (a horizontal view angle of 60°, 720-pixel-wide frames and a frame rate of 25 frames/s) are used, for which the displacements of a 3-meter-long object are shown for different speeds. Obviously, the displacement depends on the distance dp of the camera from the captured scenery in which the object moves, and the size of the blob depends on dp as well. Such dependencies can be resolved by non-linear post-processing of the blob sizes and displacements over a sequence of images, which effectively eliminates accuracy considerations from the presented algorithm.
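The formulas above can be checked numerically. The Python below is a worked sketch (the function names are assumptions), evaluated with the PAL-style figures from the text and an assumed camera-to-object distance of 100 m and object speed of 10 m/s:

```python
import math

def scene_width(dp, theta_deg):
    """da = 2 * dp * tan(theta / 2): width in meters of the captured scene."""
    return 2.0 * dp * math.tan(math.radians(theta_deg) / 2.0)

def pixel_displacement(v, dp, theta_deg, ff, W):
    """np = (W * v) / (da * ff): per-frame object displacement, in pixels."""
    return (W * v) / (scene_width(dp, theta_deg) * ff)

# 60 degree view angle, 720-pixel-wide frames, 25 frames/s (PAL), dp = 100 m
da = scene_width(100.0, 60.0)                          # about 115.5 m of scene
disp_px = pixel_displacement(10.0, 100.0, 60.0, 25.0, 720)
print(round(da, 1), round(disp_px, 2))
```

At these settings a 10 m/s object moves roughly 2.5 pixels per frame, comfortably above the single-pixel minimum displacement that limits the lowest detectable velocity.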
Velocity Estimation Hardware Process
The difference in recording times of the two frames gives the time interval of the object's motion. The velocity of the object moving in the X-direction is obtained by finding the maximum in the horizontal direction (this step is used in our implementation, assuming the input reference frame is completely black). Similarly, the velocity of the object in the Y-direction is obtained by finding the maximum in the vertical direction (this step, however, is not utilized). The pixel displacement is computed by assuming a fixed distance between the camera axis and the moving object, with the camera axis and the object always perpendicular to each other.
III. RESULTS
3.1. VHDL Simulation Results
Figure 3.1. Velocity computation simulation result
Figure 3.2. Image object velocity SoC chip pin details
3.2. MATLAB Simulation
Figure 3.5. Frames 1, 2 and 3
Figure 3.6. Frame red, green and blue histograms
IV. CONCLUSION
The image object velocity SoC was developed using the ModelSim and Xilinx EDA tools. The image histogram, two-point moving average filter, maxima index finder and velocity computation have been implemented successfully. In order to verify the results of this SoC design, an equivalent MATLAB code was also developed; all results of the VHDL simulation and of MATLAB match bit for bit. The SoC was synthesized targeting the Xilinx Virtex-4 FPGA on the ML404 PCI-based FPGA development board. The synthesis results reveal that the SoC is able to run at 133 MHz, which indicates that the system is capable of processing at least 30 fps of 720 x 480 NTSC frames. The image object velocity computed for the first two sample input frames is around 111 meters per second, and hence the objective of the SoC design for object velocity estimation has been met.