INTEGRATED SIDE/REAR SAFETY SYSTEM
Ho Gi Jung1,2, Young Ha Cho1, Pal Joo Yoon1, Jaihie Kim2
1 MANDO Corporation Central R&D Center, 413-5, Gomae-Dong, Giheung-Gu, Yongin-Si, Kyonggi-Do 446-901, Republic of Korea
E-mail: {hgjung, haya0709, pjyoon}@mando.com
2 Yonsei University, School of Electrical and Electronic Engineering, 134, Sinchon-Dong, Seodaemun-Gu, Seoul 120-749, Republic of Korea
E-mail: {hgjung, jhkim}@yonsei.ac.kr
Keywords: Active Safety System, Blind Spot Detection, Collision Warning, Parking Assist System
INTRODUCTION
Intelligent vehicles are attracting more and more interest and are progressing rapidly (Bishop, 2005). Until recently, individual system functions have been developed separately, such as LDW (Lane Departure Warning), ACC (Adaptive Cruise Control), CW/CA (Collision Warning/Collision Avoidance), IPAS (Intelligent Parking Assist System), and BSD (Blind Spot Detection). Each system has employed its own sensors, processing unit, and actuators, so as more and more systems are adopted, the number of added sensors and processing units becomes too large. In order to consolidate redundant sensors and actuators, automotive software architecture standards are under development (AUTOSAR, 2007). Furthermore, sensing technologies that recognize the surrounding environment are continuously being developed (Gandhi, 2006; Li, 2006).
This paper proposes a novel side/rear safety system using only one scan-type laser radar, installed at the left rear of the vehicle as shown in Figure 1. The sensor monitors the left side of the vehicle, i.e. the driver's side, and the area behind it. The proposed system, ISRSS (Integrated Side/Rear Safety System), consists of four components: a scan-type laser radar, an ECU (Electronic Control Unit), an active steering system, and a user interface. ISRSS provides four functions: BSD (Blind Spot Detection), RCWS (Rear Collision Warning System), semi-automatic perpendicular parking, and semi-automatic parallel parking.
Figure 1 – Sensor installation and coverage
PREPROCESSING
Range data acquired by the scan-type laser radar are processed and converted into meaningful clusters. The preprocessing procedure consists of four steps: invalid data elimination, isolated single point elimination, occlusion detection, and small cluster elimination.
An occlusion is defined as a discontinuity between two consecutive range data. It can be recognized by finding a transition between invalid and valid data; two consecutive valid data separated by more than a threshold distance also form an occlusion.
Figure 2(a) shows the initial range data with respect to angle, and Figure 2(b) shows the same data in the Cartesian coordinate system. It can be observed that there are many invalid and noisy data. Figures 2(c) and (d) show the range data clusters resulting from the preprocessing procedure: the noisy data are removed and connected range data are recognized as clusters. It is noticeable that although the range data acquired by the scan-type laser radar contain 2D (two-dimensional) information, they are actually treated as a 1D (one-dimensional) array because the data are ordered with respect to angle.
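The four preprocessing steps can be sketched as follows. This is a minimal illustration, not the authors' implementation: the encoding of invalid returns as `None`, the gap threshold `OCCLUSION_GAP`, and the minimum cluster size are assumptions.

```python
# Minimal sketch of the preprocessing chain: invalid data elimination,
# occlusion-based splitting, and isolated/small cluster elimination.
# Scan points are assumed to be angle-ordered (x, y) tuples, None if invalid.
import math

OCCLUSION_GAP = 1.0   # hypothetical distance threshold [m] defining an occlusion
MIN_CLUSTER_SIZE = 3  # hypothetical minimum number of points per cluster

def cluster_scan(points):
    clusters, current = [], []
    for p in points:
        if p is None:                      # invalid return -> occlusion boundary
            if current:
                clusters.append(current)
                current = []
            continue
        if current and math.dist(current[-1], p) > OCCLUSION_GAP:
            clusters.append(current)       # a large jump between consecutive
            current = []                   # valid points also makes an occlusion
        current.append(p)
    if current:
        clusters.append(current)
    # isolated single points and small clusters are eliminated together
    return [c for c in clusters if len(c) >= MIN_CLUSTER_SIZE]

scan = [(0.0, 5.0), (0.1, 5.0), (0.2, 5.1), None, (3.0, 9.0),
        (3.1, 9.0), (3.2, 9.1), (9.9, 1.0)]
print([len(c) for c in cluster_scan(scan)])  # -> [3, 3]; the lone point is dropped
```

Because the points are angle-ordered, the whole procedure is a single pass over a 1D array, which is what makes the preprocessing cheap.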
For BSD and RCWS, the recognized clusters are tracked by standard Kalman filtering. The center point of a tracked cluster is used as the representative of the cluster.
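The tracking step is described only as general Kalman filtering; the sketch below shows one common realization, a constant-velocity filter on the cluster center point. The sampling interval `TS` and the noise covariances are assumed values, not the authors' settings.

```python
# Constant-velocity Kalman filter tracking a cluster center (x, y).
# State: [x, y, vx, vy]; all tuning values here are assumed for illustration.
import numpy as np

TS = 0.1                                # assumed sampling interval [s]
F = np.eye(4); F[0, 2] = F[1, 3] = TS   # constant-velocity state transition
H = np.eye(2, 4)                        # only the position is measured
Q = np.eye(4) * 0.01                    # process noise (assumed)
R = np.eye(2) * 0.1                     # measurement noise (assumed)

def kalman_step(x, P, z):
    x = F @ x                           # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                 # update with measured center z
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
for n in range(20):                     # cluster center moving at 1 m/s in x
    z = np.array([n * TS * 1.0, 0.0])
    x, P = kalman_step(x, P, z)
print(x[2])                             # estimated vx converges toward 1.0 m/s
```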
(a) initial range data with respect to angle (b) initial range data in Cartesian coordinates
(c) resultant clusters with respect to angle (d) resultant clusters in Cartesian coordinates
Figure 2 – Preprocessing of acquired range data
BSD (Blind Spot Detection)
The proposed ISRSS detects potentially dangerous vehicles in the blind spot zone by finding and tracking range data clusters. In general, a BSD system monitors one of the two sides of the vehicle, typically the driver's side (Rüder, 2002). The blind spot zone is defined as shown in Figure 3: it covers 45º with respect to the vehicle's side surface and extends 12 m rearward with a width of 5 m.
While driving faster than a threshold speed, if the driver turns on the lane change indicator and a vehicle is present in the blind spot zone, the system announces the dangerous situation with audible and visual warning messages.
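The warning decision can be sketched as a simple zone test on the tracked cluster centers. The rectangular approximation of the zone and the minimum-speed value below are assumptions for illustration; the paper specifies only the 12 m by 5 m extent.

```python
# Sketch of the BSD warning decision. Coordinates are in the subject-vehicle
# frame: x rearward along the left side, y leftward from the side surface.
# The zone is approximated as a 12 m x 5 m rectangle; SPEED_MIN is assumed.
ZONE_LENGTH = 12.0   # m, rearward extent of the blind spot zone
ZONE_WIDTH = 5.0     # m, lateral extent beside the vehicle
SPEED_MIN = 30.0     # km/h, assumed minimum speed for BSD to be active

def in_blind_spot(cx, cy):
    return 0.0 <= cx <= ZONE_LENGTH and 0.0 < cy <= ZONE_WIDTH

def bsd_warning(speed_kmh, indicator_on, tracked_centers):
    if speed_kmh < SPEED_MIN or not indicator_on:
        return False
    return any(in_blind_spot(x, y) for x, y in tracked_centers)

print(bsd_warning(80.0, True, [(6.0, 2.0)]))    # vehicle in zone -> True
print(bsd_warning(80.0, False, [(6.0, 2.0)]))   # indicator off -> False
```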
Figure 3 – Monitoring of blind spot zone. Figure 4 – Rear vehicle detection for RCWS
RCWS (Rear Collision Warning System)
Using the same operating principle, the proposed ISRSS detects vehicles approaching dangerously from the rear, as shown in Figure 4 (Akita, 2006).
The displacement between consecutive detections gives a rough estimate of the approaching vehicle's speed. Based on the estimated TTC (Time To Contact), ISRSS warns the driver to prepare for a potential rear collision. Equation (1) defines the approximated TTC, where d[n] is the measured distance to the object at the nth sampling time, v is the relative speed between the subject vehicle and the object, and Ts is the sampling interval.
t_c = d[n] / v = d[n] · Ts / (d[n-1] - d[n])                                (1)
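Equation (1) can be implemented directly from two consecutive distance samples. In the sketch below, the 2 s warning threshold is the example value reported in the experimental section, the sampling interval is an assumed value, and the guard against non-approaching targets is an added assumption.

```python
# Approximate TTC from two consecutive range samples, per Equation (1):
#   t_c = d[n] / v = d[n] * Ts / (d[n-1] - d[n])
TS = 0.1             # assumed sampling interval [s]
TTC_THRESHOLD = 2.0  # warning threshold [s], example value from the paper

def ttc(d_prev, d_curr, ts=TS):
    closing = d_prev - d_curr          # positive when the object approaches
    if closing <= 0:
        return float("inf")            # not approaching: no finite TTC
    return d_curr * ts / closing

print(round(ttc(10.0, 9.5), 1))        # closing 0.5 m per 0.1 s -> 1.9 s
print(ttc(10.0, 9.5) < TTC_THRESHOLD)  # -> True, warning turns on
```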
FREE SPACE DETECTION FOR IPAS (Intelligent Parking Assist System)
ISRSS detects free space, which is used as the target position for semi-automatic perpendicular parking. 'L'-shaped template matching measures how well a range data cluster fits an 'L'-shaped corner; clusters whose fitness exceeds a threshold are recognized as corners. In a typical situation, the free space lies within a limited angular range, e.g. ±60º, with respect to the subject vehicle's rear axis. Selecting the nearest corner within this range successfully designates the main reference vehicle; in other words, 'L'-shaped template matching finds the vehicle located at the far side of the free space. By searching for the nearest occlusion or corner in the direction of the main reference vehicle's shorter side, within a limited angular range, e.g. ±45º, the vehicle on the opposite side can be detected. This vehicle is called the sub reference vehicle. Once a free parking space is detected as shown in Figure 5(a), the path planner generates an optimal path from the current position to the target position, and the path tracker continuously sends the required steering angle commands to the active steering system.
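The paper does not specify the fitness measure. One plausible sketch of 'L'-shaped template matching is to try each scan point of a cluster as the corner, fit a line to the points on either side, and take the combined residual as the fitting error; this is an assumed formulation, not the authors' exact method.

```python
# Assumed sketch of 'L'-shaped template matching: the fitness of a cluster is
# the smallest two-line fitting error over all candidate corner points.
import numpy as np

def line_residual(pts):
    # perpendicular RMS distance of points to their total-least-squares line:
    # the smallest singular value of the centered point matrix
    if len(pts) < 2:
        return 0.0
    p = pts - pts.mean(axis=0)
    return np.linalg.svd(p, compute_uv=False)[-1] / np.sqrt(len(pts))

def l_fitness(cluster):
    # try each interior point as the corner; return the best (lowest) error
    pts = np.asarray(cluster, dtype=float)
    errors = [line_residual(pts[:i + 1]) + line_residual(pts[i:])
              for i in range(2, len(pts) - 2)]
    return min(errors)

# the side and rear of a parked vehicle seen as an 'L'-shaped cluster
side = [(0.0, y) for y in np.linspace(0.0, 2.0, 8)]
rear = [(x, 2.0) for x in np.linspace(0.0, 4.0, 8)]
print(l_fitness(side + rear) < 0.05)  # near-perfect 'L' -> small error, True
```

A cluster whose best error stays below a threshold would be accepted as a corner; the small numbers printed next to the corners in Figures 11 and 12 are such fitting errors.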
ISRSS also detects free space for semi-automatic parallel parking, as shown in Figure 5(b). The basic operation is similar to the perpendicular case, except that the longer side direction of the main reference vehicle, instead of the shorter side direction, is used when searching for the sub reference vehicle. The angular range is smaller, e.g. ±15º, because the distance between the two reference vehicles is larger. Consequently, 'L'-shaped template matching finds the far-side vehicle, and the nearer vehicle is then detected using the pose of the detected vehicle. Once the free space is detected, the path planner creates the optimal path for the parallel parking operation and the path tracker follows the planned path (Jung, 2006b).
The target position is a rectangular region with the same length and width as the subject vehicle. By setting its central axis at the middle of the free space, the position along the parking row can be determined. However, the depth of the target position should be aligned to the outer of the two reference vehicles. It should not be ignored that a reference vehicle parked with a slanted pose will cause a somewhat undesirable result.
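The placement rule above can be written as a short sketch. The coordinate convention (position along the parking row vs. depth) and the vehicle dimensions are assumptions for illustration, not values from the paper.

```python
# Sketch of target-position placement: a rectangle with the subject vehicle's
# footprint, centered between the two reference vehicles along the row and
# aligned in depth to the outer vehicle. All dimensions are assumed values.
VEHICLE_LENGTH = 4.5   # m, assumed subject vehicle length
VEHICLE_WIDTH = 1.8    # m, assumed subject vehicle width

def target_position(main_corner, sub_corner, outer_depth):
    # main_corner / sub_corner: (s, d) = (position along row, depth) of the
    # facing ends of the two reference vehicles; outer_depth: depth of the
    # vehicle located farther outward, to which the target is aligned.
    s_center = (main_corner[0] + sub_corner[0]) / 2.0   # middle of free space
    d_center = outer_depth + VEHICLE_WIDTH / 2.0        # depth alignment
    return s_center, d_center

gap = abs(10.2 - 4.0)                  # free-space length between the corners
print(gap >= VEHICLE_LENGTH)           # -> True: long enough to park in
s, d = target_position((4.0, 0.3), (10.2, 0.5), 0.3)
print(round(s, 2), round(d, 2))        # center of the designated slot
```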
(a) perpendicular parking situation (b) parallel parking situation
Figure 5 – Free space detection for IPAS
EXPERIMENTAL RESULTS
Experimental setup
In order to validate the feasibility of the proposed system, we set up a test vehicle as shown in Figure 6. The scan-type laser radar used is SICK's LD-OEM (SICK, 2007), which provides a 360º FOV (Field Of View) and a scanning range of up to 250 m. Although its minimum angular resolution is 0.125º, we used 0.25º because of the scanning frequency. The LD-OEM communicates with the processing PC over CAN (Controller Area Network). Although the LD-OEM was developed for indoor applications, it did not cause any problems during the tests.
Two cameras were installed, at the rear end and at the side mirror of the test vehicle, in order to record the surrounding situation. The rear-view camera was used to record garage parking situations and RCWS; in order to cover a wide range of the rearward situation, a fisheye lens with a 120º FOV was used. The side-view camera was used to record parallel parking and BSD situations.
Figure 6 – Test vehicle with scan-type laser radar and side/rear cameras for monitoring
Experimental results: BSD
To let other vehicles overtake our test vehicle, the subject vehicle travelled at a relatively slow speed, e.g. 80 km/h. Range data and synchronously acquired images were stored for analysis.
Figure 7 shows an example sequence of range data and images while the test vehicle was overtaken by an SUV (Sport Utility Vehicle). Figure 7(a) shows the image at time index n=1 and the detected vehicle. As shown in Figure 7(b), which corresponds to Figure 7(a), the range data cluster corresponding to the SUV was recognized and marked with a rectangle. Figures 7(c) and (d) show the situation when the SUV was inside the blind spot zone; if the driver turned on the left lane change indicator, the system would warn of the dangerous situation with an audible warning. Figures 7(e) and (f) show the situation after the SUV had moved in front of the subject vehicle. Although the SUV had left the blind spot zone, another vehicle, a truck, was located in the blind spot zone, so a left lane change still remained dangerous. In Figures 7(e) and (f), to show the state transition of the passing vehicle from the viewpoint of BSD, the detection mark is drawn on the first SUV.
Figure 8 shows the location of the SUV with respect to the subject vehicle and its warning state. This sequence consists of 25 samplings. Locations marked with a dot are not in the warning state and locations marked with an 'x' are in the warning state. It can be observed that while the SUV is inside the blind spot zone, it is recognized as a dangerous vehicle.
(a) image at n=1 (b) range data at n=1
(c) image at n=13 (d) range data at n=13
(e) image at n=25 (f) range data at n=25
Figure 7 – Image and range sequence in overtaking situation
Figure 8 – Vehicle locations and BSD warning state
Experimental results: RCWS
In order to let a following vehicle approach close to the test vehicle, the subject vehicle travelled relatively slowly, e.g. 80 km/h. However, in casual driving on a public roadway, we could not experience a rear collision situation. Instead, the RCWS was validated by regarding a vehicle in the left lane as a following vehicle.
Figure 9 shows the image and range data sequence in a rear collision situation. In Figures 9(a) and (b), the relative speed was 1.8 m/s and the TTC was 12 s. In Figures 9(c) and (d), the relative speed was 3.3 m/s and the TTC was 4.2 s. In Figures 9(e) and (f), the relative speed was 2.75 m/s and the TTC was 1.2 s. In this last case, the RCWS warning turned on because the TTC was under the threshold, e.g. 2 s.
(a) image at n=1 (b) range data at n=1
(c) image at n=20 (d) range data at n=20
(e) image at n=40 (f) range data at n=40
Figure 9 – Image and range sequence in rear collision situation
Figure 10 shows the locations of the target vehicle, a bus in this case, and the RCWS warning states. This sequence consists of 44 samplings. Locations marked with a dot are not in the warning state and locations marked with an 'x' are in the warning state. It can be observed that once the TTC drops below the threshold, the vehicle is recognized as dangerous.
Figure 10 – Vehicle locations, relative speed, TTC and RCWS warning state
Experimental results: garage parking
Target parking positions were recognized in various situations. Unlike BSD and RCWS, the parking application uses only one range data set, generally acquired when the driver starts the parking procedure.
Figure 11(a) shows the garage parking test situation. In this case, the target position was an apartment parking slot between two parked vehicles. Figure 11(b) shows the recognized corners, Figure 11(c) shows the recognized target parking position, and Figure 11(d) shows the recognition result in more detail. The corner recognized as the main reference vehicle is marked with an 'L' and its fitting error, and the sub reference vehicle is marked with a small ball, i.e. a corner found in the initial corner detection.
The distance between the two reference vehicles was measured as 3.23 m, larger than the subject vehicle's width. Figure 11(e) shows the target parking position projected onto the undistorted input image. As explained in the experimental setup, a fisheye lens was used to cover a wider FOV in the garage parking situation. The acquired input image was converted into an undistorted image based on a radial distortion model; the Caltech calibration toolbox (Bouguet, 2007) and an extrapolation-based refinement method (Jung, 2006a) were used for lens calibration and image rectification. It can be observed that the established target position is correct and proper.
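The undistortion step can be sketched with a simple polynomial radial model of the kind such calibration toolboxes estimate. The coefficients `K1` and `K2` below are made-up values, not the paper's calibration results, and the tangential terms are omitted for brevity.

```python
# Undistort a normalized image point under a polynomial radial model
#   x_d = x_u * (1 + k1*r^2 + k2*r^4),
# inverted here by fixed-point iteration. K1, K2 are made-up coefficients.
K1, K2 = -0.30, 0.05

def distort(xu, yu):
    r2 = xu * xu + yu * yu
    f = 1.0 + K1 * r2 + K2 * r2 * r2
    return xu * f, yu * f

def undistort(xd, yd, iters=20):
    xu, yu = xd, yd                    # initial guess: the distorted point
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        f = 1.0 + K1 * r2 + K2 * r2 * r2
        xu, yu = xd / f, yd / f        # fixed-point update
    return xu, yu

xu, yu = undistort(*distort(0.4, 0.3))
print(round(xu, 3), round(yu, 3))      # recovers approximately (0.4, 0.3)
```

Projecting the target rectangle into the image then only requires applying the calibrated camera pose and intrinsics to its corner points.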
(a) image showing the test situation of garage parking (b) recognized corners
(c) recognized target position (d) recognized reference vehicles
(e) target position projected onto undistorted input image
Figure 11 – Target parking position designation in garage parking using scan-type laser radar
Experimental results: parallel parking
In order to validate target position designation in parallel parking, the proposed system was tested in various situations. Parallel parking is almost the same as the garage parking case except for some parameter values.
Figure 12(a) shows the test situation and Figure 12(b) shows the recognized corners. Figure 12(c) shows the recognized target position and Figure 12(d) shows the result in more detail. The 'L' marking denotes the main reference vehicle and the small ball denotes the sub reference vehicle; in this case, the occlusion of the sub reference vehicle was recognized. Figure 12(e) shows the recognized position projected onto the input image.
(a) image showing the test situation of parallel parking (b) recognized corners
(c) recognized target position (d) recognized reference vehicles
(e) recognized position projected onto input image
Figure 12 – Target position designation in parallel parking using scan-type laser radar
CONCLUSION
This paper proposed a novel integrated surrounding environment recognition system, ISRSS. Experimental results show that a scan-type laser radar installed at the left rear of a vehicle can cover BSD, RCWS, and target parking position designation. The proposed ISRSS is expected to reduce the total system cost because it uses only one sensor and an integrated processing unit. Furthermore, compared with other kinds of sensors such as mm-wave radar, stereo vision, and motion stereo, the processing algorithm for scan-type laser radar is simple and reliable.
Scan-type laser radar is also expected to become less expensive. However, some problems remain unsolved: 1) scan-type laser radar more robust to direct sunlight should be developed, 2) the size should be reduced so that the sensor can be integrated into the bumper, and 3) the endurance requirements of automotive applications, which have been open to question because of the mechanical scanning mechanism, should be satisfied.
REFERENCES
Akita, T., Kadoya, A. and Hayashi, K. (2006), “Rear obstacle warning system utilizing a rear monitor camera”, 13th
World Congress on Intelligent Transport Systems and Services, London, UK.
AUTOSAR (2007), Automotive Open System Architecture, http://www.autosar.org
Bishop, R. (2005), Intelligent vehicle technology and trends, Artech House, Boston, London.
Bouguet, J. Y. (2007), "Camera calibration toolbox for Matlab", http://www.vision.caltech.edu/bouguet/calib_doc/index.html.
Gandhi, T. and Trivedi, M. M. (2006), “Vehicle surround capture: survey of technologies and a novel omni-video-based
approach for dynamic panoramic surround maps”, IEEE Transactions on Intelligent Transportation Systems, Vol. 7, No.
3, Sep. 2006, pp. 293-308.
Jung, H. G., Lee, Y. H., Yoon, P. J. and Kim, J. (2006a), "Radial distortion refinement by inverse mapping-based extrapolation", 18th International Conference on Pattern Recognition, Hong Kong, China.
Jung, H. G., Choi, C. G., Kim, D. S. and Yoon, P. J. (2006b), "System configuration of intelligent parking assistant system", 13th World Congress on Intelligent Transport Systems and Services, London, UK.
Li, S. (2006), “Monitoring around a vehicle by a spherical image sensor”, IEEE Transactions on Intelligent
Transportation Systems, Vol. 7, No. 4, Dec. 2006, pp.541-550.
Rüder, M., Enkelmann, W. and Garnitz, R. (2002), "Highway Lane Change Assistant", IEEE Intelligent Vehicle Symposium, Vol. 1, pp. 240-244.
SICK (2007), “Laser Measurement Systems – Indoor”,
http://www.sick.com/home/factory/catalogues/auto/lmsindoor/en.html.