
RELIABLE APPLICATION SPECIFIC DETECTION OF ROAD USERS WITH VEHICLE ON-BOARD SENSORS

Public Page 1 of 78

DOCUMENT DELIVERABLE NUMBER D1.2 DUE DATE 30.06.2008 ISSUED BY ARC ACTUAL DATE 07.10.2008 CONTRIBUTING WP/TASK WP1 / TASK 1.2 PAGES 6 CONFIDENTIALITY STATUS PUBLIC ANNEXES - PROJECT GRANT AGREEMENT NO. 216049 ACRONYM ADOSE

TITLE RELIABLE APPLICATION SPECIFIC DETECTION OF ROAD USERS WITH VEHICLE ON-BOARD SENSORS

CALL FP7-ICT-2007-1 FUNDING SCHEME STREP

DELIVERABLE D1.2 SENSOR REQUIREMENTS AND SPECIFICATIONS

AUTHORS

ARC E. SCHOITSCH

APPROVAL WORKPACKAGE LEADER ARC E. SCHOITSCH PROJECT COORDINATOR CRF N. PALLARO

AUTHORISATION

PROJECT OFFICER EUROPEAN COMMISSION I. HEIBER

Public Page 2 of 78

DELIVERABLE D1.2 SENSOR REQUIREMENTS AND SPECIFICATIONS

Ver. 1.4 w Date 07.10.2008 Page 2 of 78

REVISION HISTORY

VER.  | DATE       | PAG.     | NOTES | AUTHOR

0.1w  | 2008-03-03 | All      | Initial version (ARC) | E. Schoitsch
0.2w  | 2008-04-02 | 8-9      | First input SRS (ARC) | W. Kubinger, M. Litzenberger
0.3w  | 2008-04-16 | All      | SRS requirements, comments and extensions (ARC) | E. Schoitsch
0.5w  | 2008-04-28 | All      | Integration ARC, Bosch (FIR) | U. Beutnagel-Buchner, E. Schoitsch
0.6w  | 2008-05-25 | All      | Update SRS sensor (ARC), STM (FIR, MFOS) | W. Kubinger, L. Bizarri, E. Schoitsch
0.8w  | 2008-06-16 | All      | Integration VTT (HR-ATAG, HR-PTAG), update Bosch, ARC | E. Schoitsch, W. Kubinger, V. Ville, H. Hagstedt
0.9w  | 2008-06-25 | All      | Updates specs ARC (SRS), Bosch (review) | E. Schoitsch, W. Kubinger, H. Hagstedt
0.95w | 2008-07-01 | p. 5, 6  | Update UMICORE (state of the art) | T. Krekels
0.98w | 2008-07-31 | p. 33-38 | Harmonize SRS specs with D6.1 | W. Kubinger, M. Litzenberger
0.98d | 2008-08-07 | All      | ARC review of complete document, minor adaptation of SRS specifications | E. Schoitsch, W. Kubinger
0.99d | 2008-08-21 | All      | Adapted to new template | W. Kubinger
1.0w  | 2008-08-23 | All      | Integration of comments VTT, UMICORE, ARC | E. Schoitsch, V. Ville, J. Van Nylen
1.1w  | 2008-08-25 | p. 29-30 | Additional specifications for HR-Tag (VTT) | V. Ville
1.2w  | 2008-09-11 | All      | Requirements and specs of MFOS and 3DCAM; review of whole document, including corrections from Bosch, VTT, MM, ARC | D. Capello, N. Pallaro
1.3w  | 2008-09-16 | All      | Update SRS specs format, "notes on dependability" subchapter added, some corrections (1.2.4) | E. Schoitsch, W. Kubinger
1.4w  | 2008-10-07 | All      | Final review, including input from IMEC; new paragraphs 3.6 and 4.6 | N. Pallaro


TABLE OF CONTENTS

1. INTRODUCTION
   1.1 Purpose and scope
   1.2 Scenario scheme and partners
       1.2.1 ID numbering scheme
       1.2.2 ADOSE scenarios
       1.2.3 ADOSE sensors
       1.2.4 Other partners contributions
2. STATE OF THE ART
   2.1 FIR imager and IR optics
   2.2 Environmental sensors and CMOS imagers
   2.3 Ranging cameras
   2.4 Harmonic radar and tags
   2.5 Silicon retina stereo camera
3. GENERAL REQUIREMENTS OF SENSORS
   3.1 Sensor 01: Far Infrared Imager (FIR)
       3.1.1 Scenarios group 1: Collision avoidance, extra-urban and urban
   3.2 Sensor 02: MFOS
       3.2.1 Scenarios group 1: Collision avoidance, extra-urban and urban
       3.2.2 Scenarios group 2: Pre-crash warning/preparation front impact
   3.3 Sensor 03: 3DCAM
       3.3.1 Scenarios group 2: Pre-crash warning/preparation front impact
       3.3.2 Scenarios group 2: Pre-crash warning/preparation rear impact
   3.4 Sensor 04: HR-TAG
       3.4.1 Scenarios group 1: Collision avoidance, extra-urban and urban
       3.4.2 Scenarios group 2: Pre-crash warning/preparation front impact
   3.5 Sensor 05: SRS sensor
       3.5.1 Scenarios group 2: Pre-crash warning/preparation side impact
   3.6 Summary of the general requirements of ADOSE sensors
4. SPECIFICATIONS OF SENSORS
   4.1 FIR imager
   4.2 Multifunctional optical sensor (MFOS)
   4.3 3D-range camera (3DCAM)
   4.4 Harmonic radar and tags (HR-PTAG, HR-ATAG)
   4.5 Silicon retina sensor (SRS)
   4.6 Summary of the specifications of ADOSE sensors
5. QUALITY INDICATORS AND DEPENDABILITY
6. CONCLUSIONS
7. BIBLIOGRAPHY
8. LIST OF FIGURES
9. LIST OF TABLES
10. LIST OF ACRONYMS


1. INTRODUCTION

1.1 Purpose and scope

This deliverable (D1.2) describes the activities carried out in Task 1.2 and the results obtained. Task 1.2 is the most demanding task in WP1, requiring strong involvement from all partners, especially the WP leaders, the technology providers, the component and system suppliers (Bosch, M. Marelli, STMicroelectronics, Umicore), and the end user (CRF-FIAT). Starting from deliverable report D1.1, requirements are derived from the selected scenarios; the specifications take into account technology-specific potential and restrictions as well as environmental conditions and terms of use. Initial considerations on dependability and quality indicators are also set out.

1.2 Scenario scheme and partners

1.2.1 ID numbering scheme

The requirement ID is structured by the following code: x1.x2.yy.zz – NNN_NNN_NNN, where

x1.x2 : scenario group.subgroup as defined in deliverable report D1.1 ("1.X" means "valid for all subgroups of scenario 1")

yy : sensor or sensor combination

zz : requirement number for the scenario/sensor combination

NNN_NNN_NNN : scenario, sensor and requirement short names, separated by "_"

1.2.2 ADOSE scenarios

As reported in the deliverable report D1.1, the selected scenarios (with the responsible partner) are as follows:

Scenario group ID 1, Collision avoidance:

1.1 Collision Avoidance Extra-Urban Areas incl. VRUs (BOSCH)
1.2 Collision Avoidance Urban Areas incl. VRUs (BOSCH)

Scenario group ID 2, Pre-crash warning/preparation:

2.1 Pre-Crash Warning/Preparation Front Impact (CRF)
2.2 Pre-Crash Warning/Preparation Side Impact (ARC)
2.3 Pre-Crash Warning/Preparation Rear Impact (MMSE)
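As an illustration of how the ID scheme from section 1.2.1 combines with these scenario codes, the following sketch composes and parses requirement IDs (hypothetical helper code, not part of the deliverable; a plain hyphen stands in for the dash used in the document):

```python
# Hypothetical helpers (not project code) for the requirement ID scheme
# x1.x2.yy.zz - NNN_NNN_NNN defined in section 1.2.1.

def make_req_id(group, subgroup, sensor, number, short_name):
    """Compose an ID; subgroup 'X' means 'valid for all subgroups'."""
    return f"{group}.{subgroup}.{sensor:02d}.{number:02d} - {short_name}"

def parse_req_id(req_id):
    """Split an ID back into its components."""
    code, short_name = req_id.split(" - ")
    group, subgroup, sensor, number = code.split(".")
    return {"group": group, "subgroup": subgroup,
            "sensor": sensor, "number": number, "short_name": short_name}

# The FIR object-detection requirement from section 3.1.1, rebuilt:
print(make_req_id(1, "X", 1, 1, "CA_EXT_FIR/NIR_ObjDet"))
# -> 1.X.01.01 - CA_EXT_FIR/NIR_ObjDet
```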


1.2.3 ADOSE sensors

The sensors to be developed in ADOSE are classified as follows:

Sensor group 0: single sensors

Sensor 01: FIR (Bosch)

Sensor 02: MFOS (CRF)

Sensor 03: 3DCAM (CRF/IMEC)

Sensor 04: HR-TAG (VTT)

Sensor 05: SRS (ARC)

Sensor group 1 – n: reserved for sensor combinations

1.2.4 Other partners contributions

Partner STM contributed to the FIR/NIR sensor requirements (scenarios group 1) and to the MFOS sensor requirements. MMSE defined the rear impact pre-crash warning/preparation scenario (scenario 2.3) and contributed to the FIR and MFOS sensor requirements. Triad, UMI, UU, IZM and Paragon contributed, in co-operation with the partners, to their sensor clusters (FIR, MFOS, HR).

2. STATE OF THE ART

2.1 FIR-Imager and IR optics

Stand-alone night vision systems based on NIR cameras with active scene illumination, as well as on FIR cameras, are on the market in high-end cars. Most present night vision systems show high-resolution images on a display for driver interpretation, without automatic pedestrian detection. The screen may be the windshield itself or a separate display, placed on top of the dashboard or integrated into it.

While CMOS sensors have reached very attractive cost levels at high resolution, uncooled bolometer FIR focal plane arrays are still very highly priced. This prevents them from being added to CMOS imaging systems for more effective recognition of living objects such as pedestrians.

Various principles are in use for the detection of far infrared radiation: photonic (photoconductive or photovoltaic) or thermal, with different thermal-to-electric conversion methods (such as resistive bolometers, pyroelectric or thermoelectric detectors).

Photonic detectors, e.g. based on materials such as HgCdTe, InSb, AlGaAs/GaAs (multiple quantum-well devices) or SiGe/Si (a semiconductor-fabrication-compatible system), normally all require cooling and, due to their high cost, are of minor relevance for automotive applications.

Thermal detectors allow uncooled operation and have reached cost levels that enable entry into the automotive market. Nevertheless, the first systems, for example on the General Motors DeVille (with a pyroelectric detector), were not successful because of their high price.

Present far infrared night vision systems, such as the Autoliv-BMW system (7, 6 and 5 series) introduced in the market more recently, apply resistive bolometer arrays consisting of Vanadium Oxide (VOX) material arranged on a CMOS readout circuit. VOX offers a solid-state phase transition slightly above ambient temperature and thus allows reasonable temperature coefficients. However, VOX is very difficult to master reliably and is not semiconductor-foundry compatible, leading to a need for expensive dedicated post-CMOS processing equipment. The corresponding camera cost is still several hundred Euros, which does not allow the sensor to be an add-on in an automotive multi-sensor system.

Although LETI/ULIS have successfully introduced semiconductor-compatible amorphous silicon instead of VOX into bolometer arrays and have realized pixel sizes around 30 µm, focal plane array prices are still very high. Several additional steps for producing the highly thermally insulated resistors on top of the fully processed CMOS wafers add to the cost.

Thermopiles offer an alternative at low array resolutions and can be made CMOS compatible, e.g. with material pairs such as polysilicon/aluminium with reasonable Seebeck coefficients. A major drawback, however, is the large pixel size, due to the number of thermocouple pairs required per pixel and the difficulty of still achieving good thermal insulation and sensitivity with many contact pairs.

The traditionally high cost of thermal imaging optics is also an impediment to large-scale deployment of FIR imaging systems. Umicore's proprietary GASIR® moulding technology is currently the state of the art in moulded infrared optics. As the glass is rather sensitive to physical damage, a protective coating has to be applied to the outward-facing side of the lens. In current night vision systems the coating of choice is Diamond-Like Carbon (DLC), typically applied on Germanium lenses and windows. However, DLC does not sufficiently protect the optic against sand impact over the whole vehicle lifetime and, being a coating for Germanium, it is not available on GASIR®. A tougher coating is Boron Phosphide (BP), but it shows other drawbacks and is several orders of magnitude more expensive than DLC, which is not acceptable in automotive applications (nowadays BP coatings are applied only for military purposes).

Umicore has developed an iDLC™ coating on GASIR® as a protective coating fulfilling the specifications of DLC on Ge, but this iDLC™ coating, as it stands today, needs further optimization to withstand the harsh automotive environment.

2.2 Environmental sensors and CMOS Imagers

Most automotive environmental sensors (twilight, tunnel, solar, rain, misting) available on the market integrate several sensing functions (up to three different sensors) at package level, and the optical detectors are mostly photodiodes, so no imaging functions are performed. The main automotive suppliers are Bosch, Valeo, Kostal, Hella, TRW and First Technology. Visibility sensors for the detection of fog and mist are not available in the automotive market, although existing and emerging ADAS call for them.

CMOS APS (Active Pixel Sensor) systems are already available on the market in high-end cars for driver assistance functions like blind spot detection (Shefenacker-Volvo C70), night vision support (Bosch-Mercedes Benz, S class), high beam assist (Gentex-BMW, 7-6-5 series), LDW (Valeo-Nissan, Primera). Further functions are under development like traffic sign recognition and collision warning.

Nevertheless, market penetration still lags behind the initial forecasts due to the high cost of the overall system (camera and embedded processing electronics).

On the whole, vision devices will move towards providing general assistance to other sensors for most applications: in the mid-to-long term, cameras and other kinds of vision devices will support nearly all driver assistance systems.


Technology enablers for future vision devices:

• Progress on semiconductor technologies

• Smart focal planes

• High-power illuminators

• More robust real-time processing

Figure 1 - Roadmap for vision devices (PREVENT-ProFusion).

In agreement with the roadmap for vision devices of the PREVENT-ProFusion project [1], shown in Figure 1, the performance of currently available CMOS image sensors has been improved to better cope with automotive environmental conditions and scenarios: smaller pixel size (down to 5.6 µm), higher dynamic range (120 dB), NIR-enhanced response for active night vision support, higher resolution (PVGA format) and higher frame rate (up to 60 fps).

Several silicon suppliers have automotive CMOS imagers ready for production in their product portfolios: Micron, Kodak, STMicroelectronics, Cypress, Omnivision, Melexis, etc.

The concept of a multifunctional optical sensor (MFOS) for the adaptive headlamp system developed in the context of an EURIMUS project1 [2, 3] was demonstrated by CRF. The sensor was designed to detect environmental parameters (background luminance and visibility), while providing at the same time information on the driving scenario (curvature of the road, oncoming vehicles, approaching tunnels/bridges).

The main advantages of using a single multi-functional sensor in the automotive field are:

• reduced number of individual sensors and electronic components in the car;

• use of one single electronic module handling different functions;

• simplified cabling and integration;

• reduction of the overall cost of the system.

The optical solutions of the former MFOS prototype (Figure 2) represent a notable trade-off between functionality and cost, but new approaches are required to push the integration of further optical functions, thus overcoming the physical and geometrical constraints posed by free-space optics.

1 “Micro-optics/micro-mechanics for Adaptive Lighting System”, EURIMUS project label EM34.


Figure 2 – MFOS sensor prototype

Furthermore, multi-functional and multi-spectral integration onto a single device is currently limited by the performance of existing CMOS imagers: lack of flexibility in windowing mode (programming of several ROIs - with different dimensions, gain, etc. - working in parallel), uncompensated response versus temperature, low sensitivity in both the visible and NIR range, physical constraints related to the optical package, and unsuitable optical filters for the multi-spectral requirement.

2.3 Ranging cameras

Mainly two kinds of vision techniques are used in cars for distance measurement: triangulation and Time-Of-Flight (TOF).

Triangulation systems such as stereo vision (or structured light) exhibit some disadvantages. They require a certain minimal baseline, depending on the target range, which dictates a minimal size for the system. They also require solving the so-called correspondence problem: identifying the pair of points in the stereo images that are projections of the same point in the acquired scene. This is a very complex problem and its solution is computationally expensive.

The Time-Of-Flight principle relies on the physical propagation properties of light and consists of a modulated light source (continuous or pulsed diode lasers or LEDs) and an array of pixels, each capable of detecting the phase of the incoming light. The detector thus measures the arrival time of the signal that is reflected by the objects (within the illuminated scene) and travels back to the camera system.
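The two TOF relations implied above can be illustrated numerically (a sketch with illustrative values; the round-trip time and modulation frequency are assumed examples, not ADOSE specifications):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def pulsed_tof_distance(round_trip_time_s):
    # the light travels to the target and back, hence the factor 1/2
    return C * round_trip_time_s / 2.0

def cw_tof_distance(phase_shift_rad, mod_freq_hz):
    # for continuous-wave modulation the measured phase shift maps to
    # distance within an unambiguous range of c / (2 * f_mod)
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

# a ~67 ns round trip corresponds to roughly 10 m
print(round(pulsed_tof_distance(66.7e-9), 2))
# a phase shift of pi at 20 MHz modulation corresponds to roughly 3.75 m
print(round(cw_tof_distance(math.pi, 20e6), 2))
```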

There are two main categories of ranging cameras: (1) Indirect TOF (ITOF) and (2) Continuous-Wave TOF (CWTOF).

Indirect TOF (ITOF) cameras (for example UseRCams-Siemens in the PREVENT IP [4,5,6]) acquire part of a pulse and/or the full pulse. Due to the low number of photons that can be collected, these detectors need to accumulate several pulses in order to reach the required signal-to-noise ratio (which is ultimately proportional to the square root of the number of acquired pulses). ITOF offers virtually no accumulation capability in the pixel, because the detector and the Read-Out Integrated Circuit (ROIC) are integrated inside the same pixel in standard CMOS technology, which limits the surface available for implementing any complex functionality. This means that all raw data have to be accumulated outside the pixel and all pixels have to be read out sequentially over the column lines, which severely limits the maximum number of lines for a given frame rate, the effective frame rate being the one seen by the user multiplied by the number of accumulated pulses. This results in three limitations: low spatial resolution (large pixel size, i.e. 130x300 µm, and low pixel count, i.e. 64x8 pixels, in the case of UseRCams), low distance accuracy (low number of accumulated pulses, e.g. fewer than 128), and low temporal resolution (low frame rate). Their advantage is that they are based on standard CMOS technology, which offers the benefits of flexible and functional design in a completely integrated system.
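The square-root SNR relation and the pulse-multiplied effective frame rate described above can be sketched as follows (illustrative helper functions with assumed example numbers, not UseRCams data):

```python
import math

def pulses_needed(single_pulse_snr, target_snr):
    # SNR grows with the square root of the accumulated pulse count,
    # so N = (target / single)**2; the numbers are illustrative only
    return math.ceil((target_snr / single_pulse_snr) ** 2)

def effective_frame_rate(user_frame_rate_hz, n_pulses):
    # the sensor internally runs n_pulses acquisitions per user frame
    return user_frame_rate_hz * n_pulses

# e.g. an SNR of 1.5 per pulse and a target SNR of 15 need 100 pulses,
# so a 25 fps user frame rate implies 2500 internal acquisitions per second
n = pulses_needed(1.5, 15.0)
print(n, effective_frame_rate(25, n))  # 100 2500
```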

Continuous-wave modulated TOF (CWTOF) cameras (e.g. the 3D range camera by CSEM [7,8,9,10]) have the advantage of inherently higher SNR and higher resolution (e.g. 124x160 pixels), but require relatively complex driving circuitry and off-chip data collection, conversion and processing. They are based on CMOS/CCD technology (ZMD silicon foundry, 0.8 µm / 0.6 µm) which, not being a standard mainstream technology, is costly and lacks technological progress. A similar technique is used by Canesta with its Electronic Perception Technology [11,12]. It has the advantage of being based on standard CMOS, but the ROIC integrated in the pixel to prevent early saturation due to background illumination degrades the quantum efficiency of the detector. A third pixel technology is based on so-called avalanche photodiodes (e.g. PMD-SensL). These are biased above their breakdown voltage, and electrons generated in the substrate trigger a Geiger avalanche. The disadvantages are that the photodiode requires non-standard CMOS processing and operating voltages, and that dynamic range and accuracy can be limited by the finite photon detection probability (depending on the photodiode technology), the dark count rate, spurious after-pulses and the dead time needed to quench the avalanche. To fully exploit the benefits of this technology, 3D packaging should be used.

Current TOF ranging cameras (ITOF and CWTOF) are not adequate for automotive applications because they lack pixel resolution, depth accuracy (due to sensitivity issues or limited background suppression capability), speed or dynamic range, or are simply too expensive. As shown in Figure 1, 3D range cameras will increase the ranging coverage. Considering the similarity between LIDAR sensors and 3D range cameras, the two technologies will probably converge in the long term: LIDARs will gain resolution and hence develop towards camera functionality, while 3D range cameras are expected to increase their ranging coverage.

2.4 Harmonic radar and tags

Several radar sensors for adaptive cruise control (ACC) are commercially available. Since 2005, Continental, Robert Bosch, TRW, Delphi and Hella have offered ACC systems in Europe. The sensors are based on 76/77 GHz technology, cover the long range using fixed multiple beams or mechanical scanning, and are linked with the braking/acceleration actuators.

Since 2006, the first blind spot detection (BSD) systems, utilising short-range 24 GHz radar sensors, have also been offered on the market.

These long- and short-range radars perform detection and ranging using the principle that radio frequency waves transmitted from the vehicle are reflected back by the obstacle and collected by the antenna for analysis. The time lag and the frequency change between the transmitted and reflected waves give information about the relative distance and velocity between the vehicle and the obstacle.
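The time-lag and Doppler relations just described amount to two one-line formulas; the sketch below uses the 76/77 GHz band mentioned above as an example carrier (illustrative code, not a product algorithm):

```python
C = 299_792_458.0  # speed of light in m/s

def radar_range(time_lag_s):
    # out-and-back propagation, hence the factor 1/2
    return C * time_lag_s / 2.0

def relative_velocity(doppler_shift_hz, carrier_hz):
    # v = f_d * c / (2 * f_c); a positive shift means a closing target
    return doppler_shift_hz * C / (2.0 * carrier_hz)

# a 1 us time lag is ~150 m of range; at 77 GHz a 1 kHz Doppler
# shift is roughly 1.95 m/s of closing speed
print(round(radar_range(1e-6), 1), round(relative_velocity(1e3, 77e9), 2))
```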

There are currently (and have been recently) several European funded projects working on different radar technologies for the automotive market. PROTECTOR (2000-2003) and SAVE-U (2002-2005) focused on the prevention of accidents involving vulnerable road users (VRUs) and on investigating the technological feasibility of systems based on short-range sensors (24 GHz microwave radar, laser-radar, near and far infrared, mono and stereo vision). WATCH-OVER (2006-2009) aims at developing a cooperative system based on interactions between an in-vehicle module and VRUs such as pedestrians, cyclists and motorcyclists. The project focuses on the selection and adaptation of commercially available short-range communication technologies (such as UWB, Wi-Fi, 802.15 networks and RFID) for the in-vehicle system and for the wearable active transponders. Since VRU detection is based on commercially available active transponders (not ad-hoc developments), the resulting communication systems are costly and not maintenance-free.

The 77 GHz automotive radar technology has also been developed in two EU-funded projects: DenseTraffic (2001-2004) and RadarNet (2000-2004). The goal of the DenseTraffic project was to develop a forward-looking radar sensor for second-generation Adaptive Cruise Control (ACC) systems with stop-and-go and cut-in situation capabilities. A single sensor with a seven-beam antenna and increased angular coverage was demonstrated. In the RadarNet project, near-distance (25 m) and far-distance (180 m) 77 GHz radar sensors based on integrated MMIC (Monolithic Microwave Integrated Circuit) technology were developed. The radar system has reduced dimensions and costs as well as improved resolution. In the RadarNet vision, the new system can be used for urban collision avoidance, pre-crash sensing, parking aid, stop-and-go (ACC extension) and collision warning. These two latter projects have contributed to reducing the cost of 77 GHz hardware for automotive radar, which is beneficial for the ADOSE project.

A particular kind of radar, the harmonic radar, has been discussed since at least the early seventies, when Harold Staras and Joshua Shefer applied for a patent on a harmonic automotive radar for vehicle detection [13]. The term harmonic refers to radar systems that detect and localize small passive or active transponders (tags) which, when irradiated with a certain radio frequency (or microwave frequency), emit a harmonic of that frequency. This harmonic frequency is offset from the transmitted frequency and from all other echoes. With the proper receiving technique, the receiver can very easily detect and localize the tags in dense clutter, where a conventional radar system would fail. A harmonic receiving system is close in nature to a multi-frequency radar system, so a harmonic radar can additionally offer multi-frequency radar processing for object classification and tracking, also for non-tagged objects.
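The frequency-offset idea behind harmonic radar can be sketched as follows (a toy illustration; the second harmonic, the 77 GHz transmit frequency and the tolerance value are assumptions, not ADOSE parameters):

```python
def harmonic_receive_freq(tx_freq_hz, harmonic=2):
    # a nonlinear tag re-radiates at an integer multiple of the
    # illumination frequency; the receiver listens there, far away
    # from the ordinary clutter echoes at tx_freq_hz
    return harmonic * tx_freq_hz

def looks_like_tag(echo_freq_hz, tx_freq_hz, tol_hz=1e6):
    # clutter returns near the transmit frequency, a tag near its harmonic
    return abs(echo_freq_hz - harmonic_receive_freq(tx_freq_hz)) <= tol_hz

tx = 77e9  # example transmit frequency
print(harmonic_receive_freq(tx) / 1e9)   # second harmonic, in GHz
print(looks_like_tag(2 * tx + 5e5, tx))  # within tolerance of 2*tx
```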

Harmonic radars with passive harmonic tags have been used to track insects because, in its simplest form, the nonlinear reflector tag is just a small dipole antenna connected to a diode. When operated at microwave or even at ultra-high frequencies, the tags are small and light enough for insects to carry. These radars have a range of several hundred meters [14].

In addition, passive harmonic tags have been commercially available at lower frequencies (MHz and kHz) for several years; examples are the Recco avalanche rescue system and numerous anti-theft systems.

Current radar technology in the 77 GHz band is expensive and mostly based on fixed beams. The NORDITE-SARFA project is working towards low-cost MEMS phase shifters that can be used for beam forming. Also, in automotive projects, the overall idea is to have active tags that communicate with the car, providing it with the tag position and information on the tagged object. In such applications the tag itself has to know its own localization and communicate it to the car. For active tags, research is currently being done to find suitable communication technologies such as RFID, Bluetooth and ZigBee.

Passive tags have not been used with automotive radar yet, although they should be suitable for vehicle-to-road-user detection, localization and communication. Ad-hoc networks and sensor networks for vehicle-to-vehicle communication have been studied extensively. One serious problem with this communication method is the flooding of the network when a large number of vehicles moving at high speed try to communicate with each other at the same time: the ad-hoc network changes so fast that it does not have enough time to organize itself [15].

2.5 Silicon retina stereo camera

State-of-the-art stereo vision is based on matching patches of two digital camera images that are produced and processed at a fixed frame rate. Irrespective of the dynamic content of the visual scene, a very large number of pixel-by-pixel operations has to be performed to establish the stereo correspondence in a single image frame. Such systems therefore need a large amount of memory and substantial processing power to deliver scene depth information in real time, i.e. typically at 25 frames per second. Typical constraints of smart, compact and mobile stereo vision mass applications, e.g. hard real-time or low-power requirements, can hardly be satisfied with this approach. High-speed stereo vision beyond 100 frames per second is realized only in bulky and very expensive instruments, which can hardly be integrated into a vehicle.
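The baseline constraint of triangulation systems follows from the classic pinhole stereo relation Z = f·B/d; a minimal sketch with made-up numbers (not SRS specifications):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    # pinhole stereo relation Z = f * B / d: for a fixed focal length,
    # a longer target range needs a larger baseline to keep the
    # disparity measurable
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# made-up numbers: f = 500 px, baseline 0.3 m, disparity 10 px -> 15 m
print(stereo_depth(500.0, 0.3, 10.0))
```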

The “silicon retina” sensor (SRS) technology is based on bio-inspired analogue circuits that pre-process the visual information on-chip, in parallel for each pixel [16,17,18,19,20,21]. These optical sensors provide excellent temporal resolution and a wide dynamic range, and have low power consumption. Most strikingly, these sensors exploit a very efficient asynchronous, event-driven data protocol that largely suppresses image data redundancy. Due to the on-chip pre-processing of visual information, moving objects are represented as a set of coherent edges. The sparse image coding consequently makes it possible to implement the computationally very intensive stereo depth calculation on a single low-power, low-cost DSP. Compared with conventional stereo image processing, this kind of computation is much more efficient and requires less memory and computational power.

Two 128×128 pixel sensors co-developed by ARC have been successfully integrated, together with a low-power, low-cost DSP, into a compact, smart vision system for roadside traffic data acquisition. Specialized signal processing routines for the traffic monitoring tasks have been developed by ARC engineers and implemented in the embedded system.

While the traffic data acquisition system has reached an early product stage, other applications of the technology are still under intense investigation at ARC.


3. GENERAL REQUIREMENTS OF SENSORS

3.1 Sensor 01: Far Infrared Imager (FIR)

3.1.1 Scenarios group 1: Collision avoidance, extra-urban and urban

ID: 1.X.01.01 – CA_EXT_FIR/NIR_ObjDet Name: Object Detection

Description: Objects have to be detected (verified detection) within the stopping distance of the vehicle on a straight road, with a temperature difference between object and surroundings of about 1 K. Verified detection means that the objects are not only detected once but also classified by the NIR camera and verified by the FIR camera again (see also the requirements for the system reaction time, 1.X.01.04). This is considered in the system reaction time of 2 s. The extra-urban scenario (high vehicle speed, long distances) is critical for the detection range and angular resolution, see also scenario 1.1.0 – CA_EXT.

Responsibility: Bosch

Rationale: Detection range has to be longer than the stopping distance of the vehicle to avoid a collision.

Details:

1. Detection range:

The stopping distance is defined as:

d(v) = v² / (2·a) + t·v, with

v : vehicle speed
a : braking deceleration, 6 m/s² assumed
t : reaction time (driver and system), 2 s assumed

Maximal stopping distance (for max. vehicle speed extra-urban):

d(100 km/h) ≈ 120 m

The maximal detection range is equal to or higher than 120 m.

Minimal stopping distance (for min. vehicle speed extra-urban / typical vehicle speed intra-urban):

d(50 km/h) ≈ 45 m

The minimal detection range is equal to or higher than 45 m.


2. Angular resolution: for object detection a minimal number of pixels is required. The specified distance of the object is equal to or higher than the stopping distance of the vehicle.

res_α(d) = n / arctan(w_o / d), with

d : distance of object, assumed as the stopping distance, see 1.
w_o : width of object: 0.5 m assumed for the short side and 1 m for the long side
n : required number of pixels for object detection, 1 px for the short side, 2 px for the long side

Angular resolution for the smallest object at maximum distance: res_α(120 m) = 4.188 px/°

3. Spectral range: for objects at ambient temperature the camera has to be sensitive in the spectral range of 7-14 µm.

4. Thermal sensitivity: according to the sampling theorem the thermal sensitivity has to be ΔT = 1 K / 2 = 0.5 K.
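The detection-range and resolution figures above can be reproduced with a short calculation. The following is an illustrative sketch (function names are ours, not part of the deliverable):

```python
import math

def stopping_distance(v_kmh, a=6.0, t=2.0):
    """Stopping distance d(v) = v^2 / (2*a) + t*v; v in km/h, result in m."""
    v = v_kmh / 3.6  # km/h -> m/s
    return v * v / (2.0 * a) + t * v

def resolution_px_per_deg(d, w_o=0.5, n=1):
    """Sensor resolution (px/deg) needed so that an object of width w_o
    subtends n pixels at distance d."""
    return n / math.degrees(math.atan(w_o / d))

print(round(stopping_distance(100), 1))      # 119.9 -> detection range >= 120 m
print(round(stopping_distance(50), 1))       # 43.9  -> detection range >= 45 m
print(round(resolution_px_per_deg(120), 2))  # 4.19 px/deg for the 0.5 m short side
```

The small-angle character of the problem makes arctan(w_o/d) ≈ w_o/d here, which is why the result scales almost linearly with distance.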

Priority: High

Relations: Field of view (FOV)

Source: Bosch

ID: 1.X.01.02 – CA_INT_FIR/NIR_FOV Name: FOV – field of view

Description: Objects at the side of the road have to be visible in the FOV of the camera on straight roads, see case a. Objects have to be visible in the FOV of the camera on curved roads (for different curve radii and vehicle speeds), see cases b – d. The urban scenario is critical for the field of view (small distance / small curve radius), see also scenario 1.2.0 – CA_INT.

Responsibility: Bosch

Rationale: Objects have to be visible at:
a. Straight road: distance of 45 m and lateral offset (at road side: 1.5 · width of lane)
b. Curved road: distance of 120 m and a curve radius of 450 m (100 km/h)
c. Curved road: distance of 85 m and a curve radius of 250 m (80 km/h)
d. Curved road: distance of 45 m and a curve radius of 80 m (50 km/h)


Details:

1. Straight road use cases (a):

2α = FOV(d) = 2 · arctan((1.5·w_l + w_o/2) / d), with

FOV : field of view
d : distance of object
w_l : width of lane (3.75 m assumed)
w_o : width of object (0.5 m assumed)

FOV(45 m) = 14.88° (45 m: detection range for vehicle speed 50 km/h)
FOV(22 m) = 29.90° (22 m: detection range for vehicle speed 30 km/h)

2. Curved road use cases (b, c, d): According to [22] only curve radii of min. 250 m are assumed. The limited curve radius is also reasonable because the view is often obstructed by trees, bushes or other objects.

2α = FOV(v) = 360° · d(v) / (2·π·r(v)), with

FOV : field of view
d(v) : distance of object
r(v) : curve radius
v : vehicle speed

FOV(50 km/h) = 10.05° with r(50 km/h) = 250 m, d(50 km/h) = 43.85 m
FOV(80 km/h) = 19.62° with r(80 km/h) = 250 m, d(80 km/h) = 85.60 m
FOV(100 km/h) = 15.26° with r(100 km/h) = 450 m, d(100 km/h) = 119.86 m
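Both FOV formulas are easy to check numerically; a minimal sketch (function names are ours):

```python
import math

def fov_straight_deg(d, w_l=3.75, w_o=0.5):
    """Straight road: FOV = 2*arctan((1.5*w_l + w_o/2) / d), in degrees."""
    return 2.0 * math.degrees(math.atan((1.5 * w_l + w_o / 2.0) / d))

def fov_curved_deg(d, r):
    """Curved road: FOV = 360 deg * d / (2*pi*r), the arc angle spanned
    by the detection distance d on a curve of radius r."""
    return 360.0 * d / (2.0 * math.pi * r)

print(round(fov_straight_deg(45), 2))         # 14.88 (50 km/h)
print(round(fov_straight_deg(22), 2))         # 29.9  (30 km/h)
print(round(fov_curved_deg(43.85, 250), 2))   # 10.05 (50 km/h)
print(round(fov_curved_deg(119.86, 450), 2))  # 15.26 (100 km/h)
```

Note that the straight-road case is the binding one for the FOV requirement: 29.9° at 22 m exceeds all the curved-road values above.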

Priority: High

Relations: Object detection

Source: Bosch


ID: 1.X.01.03 – CA_FIR/NIR_Avail Name: Availability

Description: The system has to be available as soon as possible after request.

Responsibility: Bosch

Rationale: The start-up time of the system has to be as small as possible.

Details: The minimum required start-up time for the FIR camera is limited by the start-up time of the display and the related NIR camera. Because the image of the FIR camera is not displayed and only assists the object detection, its start-up time is not as critical as that of the NIR camera. Furthermore, the driver cannot directly see the effect.

Priority: High

Relations: ---

Source: Bosch

ID: 1.X.01.04 – CA_FIR/NIR_ReaTime Name: System reaction time

Description: Several detections have to be verified before the system reacts.

Responsibility: Bosch, MM

Rationale: The system reaction time (= time for verified detection, measured from object appearance to confirmation) has to be as short as possible.

Details: For a verified detection the object has to be detected at least three times. Verification has to be evaluated in the NIR image, because the FIR image resolution is too low for classification. Three detections in the NIR image for verification plus two in the FIR image for indication and confirmation, at a NIR frame rate of 25 fps, result in 160 ms.
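The 160 ms figure follows from counting frames at 25 fps: five frame-synchronous detections (three NIR, two FIR) span four 40 ms frame intervals. A minimal sketch (the counting convention, first detection at t = 0, is our assumption):

```python
def reaction_time_ms(n_detections=5, fps=25.0):
    """Time spanned by n consecutive frame-synchronous detections:
    (n - 1) frame intervals of 1000/fps milliseconds each."""
    return (n_detections - 1) * 1000.0 / fps

# 3 NIR verifications + 2 FIR confirmations = 5 detections at 25 fps
print(reaction_time_ms(5))  # 160.0
```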

Priority: High

Relations: Detection range

Source: Bosch


ID: 1.X.01.05 – CA_FIR/NIR_Perf Name: Performance

Description: High reliability of detection has to be achieved (for an intervening reaction even more than for a warning reaction).

Responsibility: Bosch, MM

Rationale: To increase the detection rate and reduce the false positive rate, additional object information has to be available. Redundancy has to be generated by a multiple sensor system and sensor data fusion. Therefore applied sensors have to be connected, calibrated and synchronized. Details:

1. Calibration: the FIR camera has to be calibrated to the NIR camera depending on two options:

- 1:1 coverage (HFOV 24°) between FIR and standard NIR camera (640x480px) Maximal tolerated horizontal pixel displacement is 1 px of FIR image (in case of 50px width), resulting maximum missing coverage at the border of the imager is 2% (corresponds to 13 px of NIR VGA image)

- FIR HVFOV (24°) included in the NIR MFOS HFOV (30°) 30x15° FOV of MFOS sensor is mapped on about 780x390 pixel and consequently 24x12° FOV of FIR camera, corresponding to 624x312 CMOS pixels, is fully included. The adjustment of the FIR-camera should not be a problem as it might be adjusted electronically by image shift.

2. Synchronization: the FIR camera has to be synchronous to the NIR camera. The accuracy of the synchronization has to be <= 20 ms (corresponds to ½ frame of NIR camera with a typical frame rate of 25 fps).

Priority: High

Relations: ---

Source: Bosch

ID: 1.X.01.06 – CA_FIR/NIR_Interf Name: Interfaces

Description: The FIR camera has to be interfaced to other modules for FIR/NIR data fusion.

Responsibility: Bosch

Rationale: The FIR camera has interfaces to the power supply and to the ECU, which controls/coordinates both the NIR and the FIR camera.

Details:

1. The functionality of the camera has to be stable at a supply voltage from 6V to about 16V, because the power supply in vehicles can vary in these ranges.

2. Data interface between FIR camera and ECU. Two alternative architectures for FIR data pre-processing: intelligence in FIR camera or in ECU.


• FIR camera without intelligence: the FIR camera provides the FIR images via the interface (LVDS) to the ECU

• FIR camera with intelligence: the FIR provides only object data (pre-processed data corresponding to a recognized hot spot) via the interface (timestamp, object position, confidence) to the ECU

Priority: High

Relations: ---

Source: Bosch

ID: 1.X.01.07 – CA_FIR/NIR_Locat Name: Camera location

Description: The camera has to be mounted at the front of the car with forward viewing direction, but not behind the windscreen, because of its FIR filtering characteristics. The camera location shall be protected against water and dirt. The requirements concerning camera size and weight depend on the exact camera location.

Responsibility: Bosch

Rationale: The following requirements are assumed, but they depend on the actual camera location and can only be seen as rough orientation values:

Details:

1. Camera size: e.g. 57 x 58 x 72 mm
2. Camera weight: e.g. 500 g
3. Temperature range: operating temperature -40 ÷ +80 °C
4. Power consumption: < 2 W

Priority: High

Relations: ---

Source: Bosch

3.2 Sensor 02: MFOS

3.2.1 Scenarios group 1: Collision avoidance, extra-urban and urban

ID: 1.X.02.01 – CA_EXT_NIR/FIR_ObjDet Name: Object detection

Description: The object detection requirements of the MFOS sensor are shared with those of the FIR camera (1.X.01.01 – CA_EXT_FIR/NIR_ObjDet), in particular regarding the detection range. Different considerations have to be made regarding the angular resolution, since standard processing algorithms require objects to subtend at least 6 pixels.


Responsibility: CRF

Rationale: Detection range has to be longer than the stopping distance of the vehicle to avoid a collision.

Details:

1. Detection range:

The stopping distance is defined as:

d(v) = v² / (2·a) + t·v, with

v : vehicle speed
a : braking deceleration, 6 m/s² assumed
t : reaction time (driver and system), 2 s assumed

Maximal stopping distance (for max. vehicle speed extra-urban):

d(100 km/h) ≈ 120 m

The maximal detection range is equal to or higher than 120 m.

Minimal stopping distance (for min. vehicle speed extra-urban / typical vehicle speed intra-urban):

d(50 km/h) ≈ 45 m

The minimal detection range is equal to or higher than 45 m.

2. Angular resolution: for object detection a minimal number of pixels is required. The specified distance of the object is equal to or higher than the stopping distance of the vehicle.

res_α(d) = n / arctan(w_o / d), with

d : distance of object, assumed as the stopping distance, see 1.
w_o : width of object: 0.5 m assumed for the short side and 1 m for the long side
n : required number of pixels for object detection, 6 px for the short side

Angular resolution for the smallest object at maximum distance: res_α(120 m) = 26 px/°
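The 26 px/° figure follows from scaling the subtended angle of the 0.5 m short side by the 6 px that standard processing algorithms require; rounding the exact value (≈ 25.1 px/°) up to the next integer is our assumption:

```python
import math

def required_px_per_deg(d, w_o=0.5, n=6):
    """Pixels per degree needed so that an object of width w_o subtends
    n pixels at distance d."""
    return n / math.degrees(math.atan(w_o / d))

exact = required_px_per_deg(120)
print(round(exact, 2))   # 25.13
print(math.ceil(exact))  # 26, as specified above
```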

Priority: High

Relations: Field of view (FOV)

Source: Bosch, CRF


ID: 1.X.02.02 – CA_INT_NIR/FIR_FOV Name: FOV – field of view

Description: The FOV of the frontal monitoring functions of the MFOS sensor has to be not less than that of the FIR camera. Thus, the same considerations as in 1.X.01.02 – CA_INT_FIR/NIR_FOV apply.

Responsibility: CRF

Rationale: Objects have to be visible at:
a. Straight road: distance of 45 m and lateral offset (at road side: 1.5 · width of lane)
b. Curved road: distance of 120 m and a curve radius of 450 m (100 km/h)
c. Curved road: distance of 85 m and a curve radius of 250 m (80 km/h)
d. Curved road: distance of 45 m and a curve radius of 80 m (50 km/h)

Details:

1. Straight road use cases (a):

2α = FOV(d) = 2 · arctan((1.5·w_l + w_o/2) / d), with

FOV : field of view
d : distance of object
w_l : width of lane (3.75 m assumed)
w_o : width of object (0.5 m assumed)

FOV(45 m) = 14.88° (45 m: detection range for vehicle speed 50 km/h)
FOV(22 m) = 29.90° (22 m: detection range for vehicle speed 30 km/h)

2. Curved road use cases (b, c, d): According to [22] only curve radii of min. 250 m are assumed. The limited curve radius is also reasonable because the view is often obstructed by trees, bushes or other objects.

2α = FOV(v) = 360° · d(v) / (2·π·r(v)), with

FOV : field of view
d(v) : distance of object
r(v) : curve radius
v : vehicle speed

FOV(50 km/h) = 31.4° with r(50 km/h) = 80 m, d(50 km/h) = 43.85 m
FOV(80 km/h) = 19.6° with r(80 km/h) = 250 m, d(80 km/h) = 85.60 m
FOV(100 km/h) = 15.3° with r(100 km/h) = 450 m, d(100 km/h) = 119.86 m

Priority: High

Relations: Object detection

Source: Bosch, CRF

ID: 1.X.02.03 – CA_NIR/FIR_ReaTime Name: System reaction time

Description: The same considerations as reported in 1.X.01.04 – CA_FIR/NIR_ReaTime apply. Several detections have to be verified before the system reacts.

Responsibility: CRF, MM

Rationale: The system reaction time (time for “verified detection”, measured from object appearance to confirmation) has to be as short as possible.

Details: For a “verified detection” the object has to be detected at least three times. Verification has to be evaluated in the NIR image, because the FIR image resolution is too low for classification. Three detections in the NIR image for verification plus two in the FIR image for indication and confirmation, at a NIR frame rate of 25 fps, result in 160 ms.

Priority: High

Relations: Detection range

Source: Bosch, CRF


ID: 1.X.02.04 – CA_NIR/FIR_Perf Name: Performance-Reliability of detection

Description: High reliability of detection has to be achieved (for an intervening reaction even more than for a warning reaction) in all weather and illumination conditions. Severe weather conditions (e.g. fog) that reduce the overall performance below a specified threshold have to be detected, thus allowing the temporary deactivation of the object detection function.

Responsibility: CRF, MM

Rationale: To increase the detection rate and reduce the false positive rate, additional object information has to be available. Redundancy has to be generated by a multiple-sensor system and sensor data fusion. Therefore the applied sensors have to be connected, calibrated and synchronized.

Details:

1. Calibration: the FIR camera has to be calibrated to the NIR camera depending on two options:

- 1:1 coverage (HFOV 24°) between FIR and standard NIR camera (640x480px) Maximal tolerated horizontal pixel displacement is 1 px of FIR image (in case of 50px width), resulting maximum missing coverage at the border of the imager is 2% (corresponds to 13 px of NIR VGA image)

- FIR HVFOV (24°) included in the NIR MFOS HFOV (30°) 30x15° FOV of MFOS sensor is mapped on about 780x390 pixel and consequently 24x12° FOV of FIR camera, corresponding to 624x312 CMOS pixels, is fully included. The adjustment of the FIR-camera should not be a problem as it might be adjusted electronically by image shift.

2. Synchronization: the FIR camera has to be synchronous to the NIR camera. The accuracy of the synchronization has to be <= 20 ms (corresponds to ½ frame of NIR camera with a typical frame rate of 25 fps).

3. Deactivation: FIR and NIR cameras are affected differently by weather and lighting conditions. The reliability of the sensor outputs can therefore differ, and the data fusion has to take this into account. For instance, the presence of fog decreases the performance of the NIR camera (object classification) much more than that of the FIR camera, which could justify in some cases the deactivation of the function.

Priority: High

Relations: ---

Source: Bosch, CRF, MM

ID: 1.X.02.05 – CA_NIR/FIR_Interf Name: Interfaces

Description: The NIR camera has to be interfaced to other modules for FIR/NIR data fusion.

Responsibility: CRF, MM

Rationale: The NIR camera has interfaces to the power supply and to the ECU which controls/coordinates both NIR and FIR cameras.


Details:

1. The functionality of the MFOS camera has to be stable at a supply voltage from 6 V to about 16 V, because the power supply in vehicles can vary within this range.
2. Data interface between the NIR camera and the ECU: the NIR camera provides the NIR images via the interface (LVDS) to the ECU, which hosts the intelligence.

Priority: High

Relations: ---

Source: Bosch, CRF, MM

ID: 1.X.02.06 – CA_NIR/FIR_Locat Name: Camera location

Description: The camera has to be mounted with forward viewing direction, behind the windshield near the internal rear view mirror. The camera location has to ensure that the portion of the windshield covering the MFOS FOV can be cleaned by the wipers.

Responsibility: CRF

Rationale: The following requirements can be assumed as a rough estimate. In particular, the camera size, weight and power consumption have been derived from the figures of the first MFOS prototype already developed.

Details:

1. Camera size (including package, electronics and add-on mechanical components to the internal rear view mirror): e.g. 80 x 100 x 70 (S, L, H) mm
2. Camera weight: e.g. < 500 g
3. Temperature range: operating temperature -40 ÷ +80 °C
4. Power consumption: < 2 W

Priority: High

Relations: ---

Source: CRF

ID: 1.X.02.07 – CA_NIR/FIR_Fog/rain Name: Fog and rain detection

Description: In order to achieve high performance of the safety system in all weather conditions (e.g. fog, rain, snow, etc.), both the perception and the actuation systems have to work well. The MFOS has to detect fog in order to be able to preventively deactivate functions whose perception performance is noticeably reduced in this weather condition. See requirement 1.X.02.04 – CA_NIR/FIR_Perf.


The activation criteria should take into account data regarding the weather conditions (e.g. rain) in order to adapt the braking power as much as possible to the road surface conditions (dry, slippery, frozen, etc.) in case of emergency braking.

Responsibility: CRF

Rationale: Two thresholds have to be foreseen in order to produce three outputs indicating good visibility (> 150 m), poor visibility (50–150 m) and critical visibility (< 50 m), respectively. At least three levels of rainfall have to be discriminated by the sensor to derive the slipperiness of the road.
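The two visibility thresholds map measurements onto the three specified levels; a minimal sketch (the function and level names, and the behaviour exactly at the thresholds, are our assumptions):

```python
def visibility_level(visibility_m):
    """Classify a measured visibility distance (m) into the three levels."""
    if visibility_m > 150.0:
        return "good"      # > 150 m
    if visibility_m >= 50.0:
        return "poor"      # 50 - 150 m
    return "critical"      # < 50 m

print(visibility_level(200.0))  # good
print(visibility_level(100.0))  # poor
print(visibility_level(30.0))   # critical
```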

Priority: High

Relations: ---

Source: CRF

3.2.2 Scenarios group 2: Pre-crash warning/preparation front impact

[23, 24, 25]

ID: 2.1.02.01 – PRE_MFOS_ObjDet

Name: Object detection

Description: The aim of the pre-crash function is to detect obstacles (mainly vehicles, cycles and motorcycles, pillars and pedestrians) that would collide with the ego vehicle if no changes were made to the vehicle speed and direction. Thus it is extremely important to be able to recognise the obstacle positions both longitudinally and laterally. While radar is the most appropriate technology to detect the obstacle longitudinal position with good performance at far distances, camera vision systems can provide better lateral resolution and a larger FOV at short distances (below 50 m).

Responsibility: CRF

Rationale: Considering that the vision sensor should complement the LRR in order to detect obstacles at lateral positions and short distances, a large FOV is required. Moreover, the detection range must be appropriate for low-speed scenarios since, for high vehicle speeds, the stopping distance could easily exceed 100 m, so that only the LRR could detect obstacles first.

Details:

1. Detection range

The stopping distance is defined as:

d(v) = v² / (2·a) + t·v, with

v : vehicle speed
a : braking deceleration, 6 m/s² assumed
t : reaction time (driver and system), 2 s assumed

Minimal stopping distance (for min. vehicle speed extra-urban / typical vehicle speed intra-urban):

d(50 km/h) ≈ 45 m

Depending on whether the pre-crash system autonomously recognises a potential obstacle and actuates an emergency braking action, or is a semi-autonomous function that warns the driver of a dangerous situation, different reaction times can be considered. The assumed reaction time of 2 s takes into account the total time required by the driver to recognise the warning and actuate the brakes. A lower value could be considered if the aim is a completely autonomous pre-crash function. Current CMOS vision sensor technology allows the more restrictive case of a detection range of at least 45 m to be considered.

2. Angular resolution

α_res(d) = arctan(w_o / d), with

d : distance of object, assumed as the stopping distance (45 m assumed)
w_o : width of object: 0.2 m

α_res(45 m) ≈ 0.25°
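The 0.25° figure is simply the angle subtended by a 0.2 m wide object at the 45 m stopping distance; a one-line check (the function name is ours):

```python
import math

def angular_resolution_deg(d, w_o=0.2):
    """Angle (deg) subtended by an object of width w_o at distance d."""
    return math.degrees(math.atan(w_o / d))

print(round(angular_resolution_deg(45), 2))  # 0.25
```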

Priority: High

Relations: Field of view (FOV)

Source: CRF

ID: 2.1.02.02 – PRE_MFOS_FOV Name: FOV – field of view

Description: Objects at the side of the road have to be visible in the FOV of the camera.

Responsibility: CRF


Rationale: The minimum FOV can be figured out assuming that at low speed the lane border has to be as much as possible within the camera HFOV. Taking into account different camera installations (different internal rear view mirror heights and different hood lengths), the lane at an average distance of 4 m in front of the vehicle has to be detected.

Details:

HFOV = 2α = 2 · arctan(w_l / (2·d)), with

HFOV : horizontal field of view
d : intersection distance between the HFOV and the lane border in front of the vehicle (< 4 m assumed)
w_l : width of lane (3.75 m assumed)

HFOV ≈ 54°

VFOV = α = arctan(w_l / d), with

VFOV : vertical field of view
d : distance from the vehicle to the ground (< 2 m assumed)
w_l : average installation height (~1.2 m assumed)

VFOV ≈ 18°

Priority: High

Relations: Object Detection

Source: CRF


ID: 2.1.02.03 – PRE_MFOS_Perf Name: Performance-Reliability of detection

Description: High reliability of detection has to be achieved (for an intervening reaction even more than for a warning reaction) in all weather and illumination conditions. Severe weather conditions (e.g. fog) that reduce the overall performance below a specified threshold have to be detected, thus allowing the temporary deactivation of the object detection function.

Responsibility: CRF

Rationale: To increase the detection rate and reduce the false positive rate, additional object information has to be available. Redundancy has to be generated by a multiple-sensor system and sensor data fusion. Therefore the applied sensors have to be connected, calibrated and synchronized.

Details:

1. Calibration: the MFOS vision sensor has to be calibrated with the other sensors.
2. Synchronization: the MFOS vision sensor has to be synchronous with the other sensors. The accuracy of the synchronization has to be <= 20 ms (corresponding to ½ frame of the NIR camera at a typical frame rate of 25 fps).
3. Deactivation: the MFOS and the other sensors constituting the integrated safety system are affected differently by weather and lighting conditions. The reliability of the sensor outputs can therefore differ, and the data fusion has to take this into account. For instance, the presence of fog decreases the performance of the MFOS camera (object detection and classification) much more than that of an LRR radar, which could justify in some cases the deactivation of the function.

Priority: High

Relations: ---

Source: CRF

ID: 2.1.02.04 – PRE_MFOS_Interf Name: Interfaces

Description: The MFOS vision sensor has to be interfaced to other modules for data fusion.

Responsibility: CRF

Rationale: The MFOS sensor has interfaces to the power supply and to the ECU which controls/coordinates all sensors. Details:

1. The functionality of the MFOS camera has to be stable at a supply voltage from 6V to about 16V, because the power supply in vehicles can vary in these ranges.

2. Data interface to the ECU: the MFOS sensor provides the VIS/NIR images via the interface (LVDS) to the ECU having intelligence.


Priority: High

Relations: ---

Source: CRF

ID: 2.1.02.05 – PRE_MFOS_Locat Name: Camera location

Description: The multifunctional camera has to be mounted with forward viewing direction, behind the windshield near the internal rear view mirror. The camera location has to ensure that the portion of the windshield covering the MFOS FOV can be cleaned by the wipers.

Responsibility: CRF

Rationale: The following requirements can be assumed as a rough estimate. In particular, the camera size, weight and power consumption have been derived from the figures of the first MFOS prototype already developed.

Details:

1. Camera size (including package, electronics and add-on mechanical components to the internal rear view mirror): e.g. 80 x 100 x 70 (S, L, H) mm
2. Camera weight: e.g. < 500 g
3. Temperature range: operating temperature -40 ÷ +80 °C
4. Power consumption: < 2 W

Priority: High

Relations: ---

Source: CRF

ID: 2.1.02.06 – PRE_MFOS_Fog/rain Name: Fog and rain detection

Description: In order to achieve high performance of the integrated safety system in all weather conditions (e.g. fog, rain, snow, etc.), both the perception and the actuation systems have to work well. The MFOS has to detect fog in order to be able to preventively deactivate functions whose perception performance is noticeably reduced in this weather condition. See requirement 2.1.02.03 – PRE_MFOS_Perf. The activation criteria should take into account data regarding the weather conditions (e.g. rain) in order to adapt the braking power as much as possible to the road surface conditions (dry, slippery, frozen, etc.) in case of emergency braking.

Responsibility: CRF


Rationale: Two thresholds have to be foreseen in order to produce three outputs indicating good visibility (> 150 m), poor visibility (50–150 m) and critical visibility (< 50 m), respectively. At least three levels of rainfall have to be discriminated by the sensor to derive the slipperiness of the road.

Priority: High

Relations: ---

Source: CRF

3.3 Sensor 03: 3DCAM

3.3.1 Scenarios group 2: Pre-crash warning/preparation front impact

[24, 25]

ID: 2.1.03.01 – PRE_3DCAM_ObjDet Name: Object Detection

Description: The aim of the pre-crash function is to detect obstacles (mainly vehicles, cycles and motorcycles, pillars and pedestrians) that would collide with the ego vehicle if no changes were made to the vehicle speed and direction. Thus it is extremely important to be able to recognise the obstacle positions both longitudinally and laterally. While radar is the most appropriate technology to detect the obstacle longitudinal position with good performance at far distances (above 30-50 m with the LRR), the 3D range camera can provide better lateral resolution and a larger FOV at short distances.

Responsibility: IMEC, CRF

Rationale: Considering that the vision sensor should complement the LRR in order to detect obstacles at lateral positions and short distances, a large FOV is required. Moreover, the detection range must be appropriate for low-speed scenarios since, for high vehicle speeds, the stopping distance could easily exceed 100 m, so that only the LRR could detect obstacles first.

Details:

1. Detection range

The stopping distance is defined as:

d(v) = v² / (2·a) + t·v, with

v : vehicle speed
a : braking deceleration, 6 m/s² assumed
t : reaction time (driver and system), 2 s assumed

Minimal stopping distance (for min. vehicle speed extra-urban / typical vehicle speed intra-urban):

d(50 km/h) ≈ 45 m

The assumed reaction time of 2 s takes into account the total time required by the driver to recognise the warning and actuate the brakes. In the context of the pre-crash scenario, the system has to be able to provide collision mitigation below the stopping distance and, if necessary, autonomously actuate emergency braking when no other action is possible. A detection range of 20 m can thus be considered sufficient.

2. Angular resolution

α_res(d) = arctan(w_o / d), with

d : distance of object (20 m assumed)
w_o : width of object: 0.2 m

α_res(20 m) ≈ 0.57°
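The same subtended-angle formula at the 3D camera's shorter 20 m range gives the relaxed 0.57° requirement (vs. 0.25° at 45 m for the MFOS pre-crash case); a quick check (the function name is ours):

```python
import math

def angular_resolution_deg(d, w_o=0.2):
    """Angle (deg) subtended by an object of width w_o at distance d."""
    return math.degrees(math.atan(w_o / d))

print(round(angular_resolution_deg(20), 2))  # 0.57
print(round(angular_resolution_deg(45), 2))  # 0.25
```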

Priority: High

Relations: Field of view (FOV)

Source: CRF, IMEC

ID: 2.1.03.02 – PRE_3DCAM_FOV Name: Field of view

Description: Objects at the side of the road have to be visible in the FOV of the 3D camera.

Responsibility: IMEC, CRF

Rationale: The minimum FOV can be figured out assuming that at low speed the lane border has to be as much as possible within the camera HFOV. Taking into account different camera installations (installation height and different hood lengths), the lane at an average distance of 5 m in front of the vehicle has to be detected.

Details:


HFOV = 2α = 2 · arctan(w_l / (2·d)), with

HFOV : horizontal field of view
d : intersection distance between the HFOV and the lane border in front of the vehicle (5 m assumed)
w_l : width of lane (3.75 m assumed)

HFOV ≈ 40°

VFOV = α ≈ arctan(0.5 / d) + arctan((w_l − 0.5) / 20), with

VFOV : vertical field of view
d : distance from the vehicle to the ground (< 2 m assumed)
w_l : obstacle height (~1.5 m assumed)

VFOV ≈ 20°

A 0.5 m average installation height is assumed, considering the camera position to be in the frontal part of the vehicle. Different camera positions could be considered, leading to a slightly different required VFOV.

Priority: High

Relations: Object Detection

Source: CRF, IMEC


ID: 2.1.03.03 – PRE_3DCAM_Perf Name: Performances-Reliability of detection

Description: High reliability of detection has to be achieved (for intervention even more than for a warning reaction) in all weather and illumination conditions. Severe weather conditions (e.g. fog) that reduce the overall performance below a specified threshold have to be detected, thus allowing the temporary deactivation of the object detection function.

Responsibility: IMEC, CRF, MM

Rationale: To increase the detection rate and reduce the false positive rate, additional object information has to be available. Redundancy has to be generated by a multiple sensor system and sensor data fusion. Therefore the applied sensors have to be connected, calibrated and synchronized. Details:

1. Calibration: the 3D camera has to be calibrated with the other sensors (sensor fusion).

2. Synchronization: the 3D camera has to be synchronous with the other sensors. The accuracy of the synchronization depends on the application.

3. Deactivation: the 3D camera and the other sensors constituting the integrated safety system are affected differently by weather and lighting conditions. The reliability of the sensor output can therefore differ, and the data fusion has to take this parameter into account. For instance, the presence of fog degrades the performance of the active (diode-laser-based) 3D camera much more than that of an LRR radar, which could justify in some cases the deactivation of the function.

Priority: High

Relations: Interfaces

Source: CRF, IMEC

ID: 2.1.03.04 – PRE_3DCAM_Interf Name: Interfaces

Description: The 3D camera has to be interfaced to NIR illuminator and to other modules for sensor fusion.

Responsibility: IMEC, CRF, MM

Rationale: The 3D camera has interfaces to the power supply, to the driver of the NIR illuminator and to the ECU which controls/coordinates all sensors. Details:

1. The functionality of the 3D camera has to be stable at a supply voltage from 6 V to about 16 V, because the power supply in vehicles can vary in this range.

2. The camera is interfaced to the driver of an active near-infrared pulsed laser illumination to guarantee synchronisation.

3. Data interface to the ECU: the 3D camera provides the images (e.g. 2D-VIS and 3D-NIR images) via the interface (parallel, LVDS, …) to the ECU having intelligence (processing, control, …).

Priority: High

Relations: Performances-Reliability of detection

Source: CRF, IMEC

ID: 2.1.03.05 – PRE_3DCAM_EyeSafe Name: Eye safety

Description: Eye-safety requirements have to be satisfied by the Laser module of the 3D range camera.

Responsibility: IMEC, CRF

Rationale: Because of the limits imposed by the regulations on the emitted laser power, there is an inevitable trade-off among essential features such as distance range and precision, angular resolution (i.e. the number of pixels), field of view (i.e. size of the aperture angles), and image repetition rate. Accordingly, the prospects of the camera are most promising for applications in the near to immediate distance range with a lateral field of view of about 50-60 degrees.

Details: The laser irradiance has to fulfil the requirements for laser safety class 1 and must not cause any ocular hazard to humans.

Priority: High

Relations: -

Source: CRF, IMEC

ID: 2.1.03.06 – PRE_3DCAM_Locat Name: Location

Description: The camera has to be mounted with forward view direction, possibly at the height of driver’s eye (for a better view of the scene), and protected against water and dirt. The requirements concerning camera size and weight depend on the exact camera location.

Responsibility: IMEC, CRF

Rationale: The possible camera (including the NIR illuminator) locations are as follows:

1. Behind the radiator grille, with proper countermeasures against dirt and water

2. Close to the internal rear-view mirror with a standard windshield, with the illuminator in the vehicle front side

3. Close to the internal rear-view mirror with a custom windshield (as required in night-vision applications and implemented in some products) having reduced NIR opaqueness and reflectance.

These options depend strongly on other main functional requirements (detection range, accuracy, costs, …) and will be carefully evaluated in the design phase. The following requirements are assumed; they depend on the actual camera location and can only be seen as rough orientation values: Details:

1. Camera size: e.g. 40 x 80 x 150 mm

2. Camera weight: e.g. <500 g (excluding illuminator)

3. Temperature range: operating temperature -40 ÷ +80°C

4. Power consumption: < 3 W

Priority: High

Relations: -

Source: CRF, IMEC 3.3.2 Scenarios group 2: Pre-crash warning/preparation rear impact

ID: 2.3.03.01 – PRE_3DCAM_ObjDet Name: Object Detection

Description: The aim of the rear pre-crash function is to detect rear obstacles (any vehicle approaching the ego-vehicle with danger of crash from behind) and to release automatic protective actions in order to reduce the occupants' injuries. False positives have to be reduced as much as possible.

Responsibility: IMEC, CRF

Rationale: The scenarios to be covered are urban, extra-urban and highway, with relative speeds of up to 110 km/h. Details:

1. Detection range:

Max detection distance

Considering the worst case, the maximum system reaction time is 500 ms, which takes into account the sensor response time, the computational time and the actuation time (e.g. 80-300 ms for reversible belt pre-tensioners). During the reaction time a car approaching the ego-vehicle with a relative speed of 110 km/h covers a path of around 16 m; that leads to a maximum detection range of up to 18 m considering the tolerance factors (3% spatial accuracy, 10% relative speed accuracy).
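The worst-case path length can be checked with a few lines of Python. How exactly the two tolerance factors are combined is not stated in the text, so the combination below is an assumption for illustration:

```python
v = 110 / 3.6            # relative approach speed, m/s (~30.6 m/s)
t_react = 0.5            # worst-case system reaction time, s
path = v * t_react       # ~15.3 m, "around 16 m" in the text

# one plausible way to fold in the tolerances (assumption, not from the text):
d_max = path * 1.10 * 1.03   # +10% relative speed accuracy, +3% spatial accuracy
print(round(path, 1), round(d_max, 1))  # ~15.3 m and ~17.3 m, i.e. a range up to ~18 m
```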


2. Angular resolution

It is assumed to have a motorcycle (1m width) at a distance of 18 m and to be able to discriminate it with 0.3m of spatial resolution.

α_res(d) = arctan(w_o / d), with

d : distance of object (18 m assumed)

w_o : width of object: 0.3 m

α_res(18 m) ≅ 0.95°

3. Radial resolution

The radial resolution is estimated to be 3%.

Priority: High

Relations: Field of view (FOV)

Source: MM, CRF

ID: 2.3.03.02 – PRE_3DCAM_FOV

Name: Field of view

Description: The obstacles within the ego-vehicle lane have to be visible at a distance of 5 m.

Responsibility: IMEC, CRF

Rationale: The minimum FOV can be estimated by assuming that the lane has to be detected at an average distance of 5 m behind the ego-vehicle, while a small vehicle (1.5 m width) is detected at 2 m. Details:


HFOV = 2α = 2 · arctan(w_l / (2d)), with

HFOV : horizontal field of view

d : intersection distance between the HFOV and the lane border behind the vehicle (5 m assumed)

w_l : width of lane (3.75 m assumed)

HFOV ≅ 40°

[Figure: camera mounting geometry with angle α, distance d and obstacle height w_l]

VFOV = α = arctan((w_l − 0.5) / d) + arctan(0.5 / 18), with

VFOV : vertical field of view

d : distance from vehicle to the ground intersection (<2 m assumed)

w_l : obstacle height (~1.5 m assumed)

VFOV ≅ 20°

A 0.5 m average installation height is assumed, considering the camera position to be in the back part of the vehicle (e.g. bumper). Different camera positions could be considered, leading to a slightly different required VFOV.

Priority: High

Relations: Object detection

Source: MM, CRF

ID: 2.3.03.03 – PRE_3DCAM_Interf Name: Interfaces

Description: The 3D camera has to be interfaced to NIR illuminator and to other electronic modules.

Responsibility: IMEC, CRF, MM

Rationale: The 3D camera has interfaces to the power supply, to the driver of the NIR illuminator and to the ECU which controls the sensor.


Details:

1. The functionality of the 3D camera has to be stable at a supply voltage from 6 V to about 16 V, because the power supply in vehicles can vary in this range.

2. The camera is interfaced to the driver of an active near-infrared pulsed laser illumination to guarantee synchronisation.

3. Data interface to the ECU: the 3D camera provides the images (e.g. 2D-VIS and 3D-NIR images) via the interface (parallel, LVDS, …) to the ECU having intelligence (processing, control, …).

Priority: High

Relations: -

Source: CRF, IMEC

ID: 2.3.03.04 – PRE_3DCAM_EyeSafe Name: Eye safety

Description: Eye-safety requirements have to be satisfied by the Laser module of the 3D range camera.

Responsibility: IMEC, CRF

Rationale: Because of the limits imposed by the regulations on the emitted laser power, there is an inevitable trade-off among essential features such as distance range and precision, angular resolution (i.e. the number of pixels), field of view (i.e. size of the aperture angles), and image repetition rate. Accordingly, the prospects of the camera are most promising for applications in the near to immediate distance range with a lateral field of view of about 30-40 degrees.

Details: The laser irradiance has to fulfil the requirements for laser safety class 1 and must not cause any ocular hazard to humans.

Priority: High

Relations: -

Source: CRF, IMEC

ID: 2.3.03.05 – PRE_3DCAM_Locat Name: Location

Description: The camera has to be mounted with rear view direction, possibly at the height of the rear window (for a better view of the scene), and protected against water and dirt. The requirements concerning camera size and weight depend on the exact camera location.

Responsibility: IMEC, CRF


Rationale: The possible camera (including the NIR illuminator) locations are as follows:

1. Behind the bumper with proper countermeasures against dirt and water, and illuminator integrated into the bumper or into the existing back lights

2. Camera behind the standard back window and illuminator integrated into the bumper or into the existing back lights

3. Behind a custom back window having reduced NIR opaqueness and reflectance.

These options depend strongly on other main functional requirements (detection range, accuracy, costs, …) and will be carefully evaluated in the design phase. The following requirements are assumed; they depend on the actual camera location and can only be seen as rough orientation values: Details:

1. Camera size: e.g. 40 x 80 x 150 mm

2. Camera weight: e.g. <500 g (excluding illuminator)

3. Temperature range: operating temperature -40 ÷ +80°C

4. Power consumption: < 3 W

Priority: High

Relations: -

Source: CRF, IMEC

3.4 Sensor 04: HR-TAG

3.4.1 Scenarios group 1: Collision avoidance, extra-urban and urban

ID: 1.X.04.01 – CA_HR_ObjDet Name: Object Detection

Description: Objects have to be detected (verified detection) within the stopping distance of the vehicle. The required detection ranges in urban and extra-urban areas are estimated in the following, assuming a system reaction time of 2 s and maximum vehicle speeds of 100 km/h in extra-urban and 50 km/h in urban areas. The angular resolution needs to be high enough that vulnerable road users in risky positions can be separated from those in a safe place, such as on a walkway at the road side. When a vulnerable road user is detected, the applicable actions, such as warning, automatic braking or pre-crash preparation, depend on the distance to the vulnerable road user. Therefore, the radial resolution needs to be high enough to make the right decision on the applicable action.

Responsibility: VTT

Rationale: The detection range has to be longer than the stopping distance of the vehicle to avoid a collision. Details:


1. Detection range:

The stopping distance is defined as:

d = v² / (2a) + t · v, with

v : vehicle speed (100 km/h in extra-urban, 50 km/h in urban areas)

a : braking deceleration: 6 m/s² assumed

t : reaction time (driver and system): 2 s assumed

Stopping distance in extra-urban areas:

d(100 km/h) ≈ 120 m

Stopping distance in urban areas:

d(50 km/h) ≈ 45 m

Because the largest number of vulnerable road users is in urban areas, the urban area has a higher priority than extra-urban areas.

2. Angular resolution: The detection range needs to be approximately 45 m in urban areas.

Within this range, it should be possible to separate a VRU that is straight ahead, from a VRU that is on a walkway at the side of the road. Let us assume that the lateral separation between road and walkway is 1.5 m. Then, the angular resolution is :

α_res(d) = arctan(w_o / d), with

d : distance of object, assumed as the stopping distance, see 1.

w_o : required spatial accuracy, 1.5 m assumed

Angular resolution in urban areas: α_res(45 m) = arctan(1.5 / 45) ≈ 2°

3. Radial resolution: It is assumed that a radial resolution of 1 m is sufficient for deciding the right applicable action when a vulnerable road user is detected.
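The stopping-distance and angular-resolution values above can be reproduced with a short Python sketch (the function name is ours, not part of the deliverable):

```python
import math

def stopping_distance(v_kmh, a=6.0, t=2.0):
    """d = v^2 / (2a) + t * v, with v given in km/h, result in metres."""
    v = v_kmh / 3.6
    return v * v / (2 * a) + t * v

print(round(stopping_distance(100)))  # ~120 m, extra-urban
print(round(stopping_distance(50)))   # ~44 m, rounded to approximately 45 m in the text
print(round(math.degrees(math.atan(1.5 / 45)), 1))  # ~1.9 deg, i.e. about 2 deg
```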

Priority: Urban areas: high, extra-urban areas: medium

Relations: Field of view (FOV)

Source: VTT


ID: 1.X.04.02 – CA_HR_FOV

Name: Field of view

Description: Vulnerable road users at the road side need to be detected on both straight and curved roads. The road curvatures in different areas, and thus the required field of view, are estimated in the following.

Responsibility: VTT

Rationale: Objects have to be visible at:

a. Straight road: 45 m and a lateral offset (at the road side: 1.5 · width of lane)

b. Curved road: distance of 120 m and a curve radius of 450 m (100 km/h)

c. Curved road: distance of 85 m and a curve radius of 250 m (80 km/h)

d. Curved road: distance of 45 m and a curve radius of 80 m (50 km/h)

Details:

1. Straight road use cases (a):

FOV(d) = 2α = 2 · arctan((1.5 · w_l + w_o / 2) / d), with

FOV : field of view

d : distance of object

w_l : width of lane (3.75 m assumed)

w_o : width of object (1 m assumed)

FOV(45 m) = 15.5° (45 m: detection range for vehicle speed 50 km/h)

FOV(22 m) = 31.10° (22 m: detection range for vehicle speed 30 km/h)

2. Curved road use cases (b, c, d):

According to [22], only curve radii of min. 250 m are assumed. The limited curve radius is also reasonable because the view is often obstructed by trees, bushes or other objects.


FOV(v) = 2α = 360° · d(v) / (2π · r(v)), with

FOV : field of view

d(v) : distance of object

r(v) : curve radius

v : vehicle speed

FOV(50 km/h) = 31.41° with: r(50 km/h) = 80 m, d(50 km/h) = 43.85 m

FOV(80 km/h) = 19.62° with: r(80 km/h) = 250 m, d(80 km/h) = 85.60 m

FOV(100 km/h) = 15.26° with: r(100 km/h) = 450 m, d(100 km/h) = 119.86 m

The largest FOV is required, when the object distance is 22 m and the vehicle speed is 30 km/h. Then, the required FOV is 31.1°.
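The curved-road values can be verified numerically; the arc-angle formula treats the object distance as an arc length on the curve radius. Illustrative Python sketch (function name ours):

```python
import math

def curved_fov(d, r):
    """FOV = 360 deg * d / (2 * pi * r): angle subtended by arc length d on radius r."""
    return 360.0 * d / (2 * math.pi * r)

for d, r in [(43.85, 80), (85.60, 250), (119.86, 450)]:
    print(round(curved_fov(d, r), 2))  # 31.41, 19.62, 15.26 degrees
```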

Priority: High

Relations: Object detection

Source: VTT

ID: 1.X.01.03 – CA_HR_Avail Name: Availability

Description: The system has to be available as soon as possible after request.

Responsibility: VTT

Rationale: The start-up time of the system has to be as small as possible.

Details: It is assumed that a start-up time of a few seconds is sufficient. The system is then available from vehicle start-up, before collisions may occur.

Priority: High

Relations: ---

Source: VTT


ID: 1.X.01.04 – CA_HR_ReaTime Name: System reaction time

Description: Verified detection of a VRU requires that the VRU is detected in at least three adjacent detection cycles.

Responsibility: VTT

Rationale: The system reaction time (= time for verified detection, measured from object appearance to confirmation) has to be as short as possible. Details: For a verified detection the object has to be detected at least three times. Assuming that the harmonic radar is able to measure 10 frames per second, the system reaction time is 300 ms.

Priority: High

Relations: Detection range

Source: VTT

ID: 1.X.01.05 – CA_HR_Perf Name: Performance

Description: High reliability of detection has to be achieved (for intervention even more than for a warning reaction)

Responsibility: VTT

Rationale: To increase the detection rate and reduce the false positive rate, additional object information has to be available. Redundancy has to be generated by a multiple sensor system and sensor data fusion. Therefore applied sensors have to be connected, calibrated and synchronized. Details:

1. Calibration: The direction information provided by the harmonic radar needs to be calibrated with other sensors, such as NIR and FIR, with an accuracy better than the angular resolution of the harmonic radar, which is 2°.

2. Synchronization: the harmonic radar needs to be synchronized with other sensors, such as NIR and FIR. The synchronization accuracy needs to be approximately the frame refresh time of 100 ms.

Priority: High

Relations: ---

Source: VTT


ID: 1.X.01.06 – CA_HR_Locat Name: Radar location

Description: The radar has to be mounted at the front of the car with forward viewing direction, but not behind the windscreen, because of its mm-wave filtering characteristics. The radar location shall be protected against water and dirt. The requirements concerning radar size and weight depend on the exact radar location.

Responsibility: VTT

Rationale: The following requirements are assumed; they depend on the actual radar location and can only be seen as rough orientation values: Details:

1. Radar size: 100 x 100 x 100 mm

2. Radar weight: 1000 g

3. Temperature range: operating temperature -40 ÷ +80°C

4. Power consumption: < 2 W

Priority: Medium

Relations: ---

Source: VTT

ID: 1.X.01.07 – CA_HR_Interf Name: Interfaces

Description: The harmonic radar has to be interfaced to other modules for sensor fusion.

Responsibility: VTT

Rationale: The radar has interfaces to the power supply and to the microprocessor which controls/coordinates the sensors. Details:

1. The functionality of the radar has to be stable at a supply voltage from 6V to about 16V, because the power supply in vehicles can vary in these ranges.

2. Data interface to the microprocessor: the interface between the harmonic radar and the other sensors will be considered later.

Priority: High

Relations: ---

Source: VTT


3.4.2 Scenarios group 2: Pre-crash warning/preparation front impact

All relevant system parameters are derived from the collision avoidance scenarios, and therefore the pre-crash warning/preparation scenarios are not considered here in more detail. Pre-crash warning/preparation scenarios do not impose any additional restrictions on the harmonic radar system as compared to the collision avoidance scenarios. [Source: VTT]

3.5 Sensor 05: SRS sensor

3.5.1 Scenarios group 2: Pre-crash warning/preparation side impact

ID: 2.2.05.01 – PRE_EXT_SRS_DetectionRange

Name: SRS Detection range

Description: Objects have to be detected with high confidence in time before the activation of countermeasures.

Responsibility: ARC

Rationale: The detection range d for pre-crash warning/preparation must be greater than the distance a dangerous object could travel during the system reaction time of the countermeasure. This time consists of the overall response time of the SRS sensor (time-resolution of the sensor, number of successive detections, e.g. for confidence checks), and reaction/decision time of the ADAS including activation time of the countermeasure (e.g. preparation of safety belts). Details:

1. System reaction time t_s:

t_s = t_a + 10 · t_srs

t_a : activation time of the countermeasure (e.g. preparation of seat belts)

t_srs : response time of the SRS sensor; for a final decision three successive detections consisting of 10 sensor readings are necessary (see "System reaction time: 2.2.05.03 – PRE_EXT_SRS_ReaTime" for a detailed description)

2. Detection range

Assuming an approaching speed of the object for side impact (e.g. vehicle at an intersection) of 60 km/h, a typical safety belt with an activation time of 300 ms, and a response time of the SRS sensor of 5 ms, the following values can be derived:

t_s = 300 ms + 10 · 5 ms = 350 ms, system reaction time

d = 0.35 s · 60 km/h = 0.35 s · 16.67 m/s = 5.83 m ≈ 6 m, detection range

3. Angular resolution

Hazardous objects have a minimum number of pixels n=24 at a critical distance dmin=5m.


α_res(d) = arctan(w_o / d) / n, with

d : critical distance of object ( = 5 m)

w_o : width of object: 0.5 m

n : required number of pixels for object detection, 24 px

Angular resolution for the smallest object at maximum distance: α_res(5 m) = 0.24°/px
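The SRS reaction time, detection range and per-pixel angular resolution above can be checked with a few lines of Python (variable names are ours, for illustration only):

```python
import math

t_a   = 0.300                          # actuation time of the countermeasure, s
t_srs = 0.005                          # SRS sensor response time, s
t_s   = t_a + 10 * t_srs               # system reaction time: ~0.35 s
d     = t_s * (60 / 3.6)               # ~5.83 m at 60 km/h -> detection range ~6 m
alpha = math.degrees(math.atan(0.5 / 5.0)) / 24  # ~0.24 deg/px (24 px on 0.5 m at 5 m)
print(round(t_s, 3), round(d, 2), round(alpha, 2))  # 0.35 5.83 0.24
```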

Priority: High

Relations: System reaction time

Source: ARC

ID: 2.2.05.02 – PRE_EXT_SRS_FOV

Name: SRS field of view

Description: Objects have to be visible in the sensor field of view to be detected.

Responsibility: ARC

Rationale: Objects have to be visible in the sensor's FOV in order to be detected. Under the assumption of a negligible velocity of the ego-vehicle compared to the colliding object, two criteria are to be fulfilled:

• Hazardous objects have a minimum number of pixels n=24 at a critical distance dmin=5m

• The monitoring area is restricted to the passenger compartment (i.e. the area between the wheels), assumed wcomp=2.5 m

Details: Derived from the worst case, i.e. the impact of a small object like a motorcycle, according to the following equation a sensor field of view of 30° is necessary.

FOV = 2 · arctan(w_comp / (2 · d_min)) = 28° ≈ 30°

w_comp : monitoring area ( = 2.5 m)

d_min : critical distance ( = 5 m)

Based on the necessary field of view, the required minimum sensor width N of 128 pixels can be derived.


N = (2 · d_min · n / w_min) · tan(FOV / 2) ≈ 128 pixels

w_min : minimum width of the object ( = 0.5 m)

n : needed number of pixels per object at the critical distance ( = 24 pixels per object)

d_min : critical distance ( = 5 m)

d_max : detection distance ( = 6 m)

[Figure: detection area geometry with d_min = 5 m, d_max = 6 m, object width w_min and sensor FOV]
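Both the FOV and the minimum sensor width follow directly from the stated geometry: the pixel count is the pixels required per object width, scaled by how many object widths fit across the scene at the critical distance. Illustrative Python sketch (variable names ours):

```python
import math

w_comp, d_min = 2.5, 5.0               # monitored width (m), critical distance (m)
fov = 2 * math.degrees(math.atan(w_comp / (2 * d_min)))    # ~28 deg, spec rounds to 30 deg

w_min, n = 0.5, 24                     # smallest object width (m), pixels required on it
N = 2 * d_min * n / w_min * math.tan(math.radians(30 / 2)) # ~128.6 -> min. width 128 px
print(round(fov, 1), round(N, 1))  # 28.1 128.6
```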

Priority: High

Relations: System reaction time

Source: ARC

ID: 2.2.05.03 – PRE_EXT_SRS_ReaTime Name: System reaction time

Description: At least three consecutive detections have to be verified for system reaction.

Responsibility: ARC

Rationale: The system reaction time (= time for verified detection, measured from object appearance to confirmation) has to be short enough to allow the reliable detection of an approaching object on collision course and the subsequent activation of the countermeasures.

Details: The trajectory of the hazardous vehicle has to be assessed in terms of time to impact and distance from the ego-vehicle, where (T, D=0) is the impact point. The system, which makes its decision on the basis of sensor data, needs time to "track" the incoming object to decide whether it is a potential hazard.

The following calculation shows this in the worst-case scenario i.e. approaching speed of 60 km/h:


Table 1 - SRS - Timing of actuations

(Time, Distance)     Action                                       Notes
(T, 0)               Impact
(T-5 ms, 8 cm)       Airbag/belt preparation finished             Airbag/belt preparation
(T-305 ms, 510 cm)   System decision on imminent impact after     Initiate airbag preparation;
                     tracking the potential impacting object      tracking of potential impacting object
(T-350 ms, 600 cm)   First detection of potential impacting
                     object (50 ms = 10 "quasi frames" @ 5 ms
                     before decision)

• For a verified detection, the appearance of the colliding object has to be detected in three consecutive frames.

a. Step zero: appearance of the object in the sensor's FOV

b. 1st detection – indication of collision course

c. 2nd detection – verification of potential impact

d. 3rd detection – final decision, initiation of countermeasures
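The three-consecutive-detections rule can be sketched as a small confirmation routine. This is an illustrative Python sketch of the logic, not the actual SRS implementation:

```python
def verified_detection(frames, needed=3):
    """Return True once the object has been seen in `needed` consecutive frames."""
    run = 0
    for seen in frames:
        run = run + 1 if seen else 0   # a miss resets the consecutive-detection count
        if run >= needed:
            return True
    return False

# appearance, then three consecutive detections -> decision
print(verified_detection([False, True, True, True]))  # True
# a dropout between detections resets the confirmation
print(verified_detection([True, False, True, True]))  # False
```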

Priority: High

Relations: Detection range

Source: ARC

ID: 2.2.05.04 – PRE_EXT_SRS_Locat Name: Location

Description: The SRS sensor has to be mounted at the side of the car and it has to be protected against the typical outdoor conditions (water, dirt, dust, etc.). The requirements concerning sensor size and weight depend on the exact sensor location.

Responsibility: ARC

Rationale: For a final product the stereo sensor can be integrated in the B-pillar or in the roof section.

[Figure: detection timeline – Appearance at 0 ms, Indication at 16 ms (3rd frame), Verification at 33 ms (6th frame), Decision at 50 ms (10th frame)]


Figure 3 - SRS - Mounting position for SRS sensors

For testing and evaluation the sensor could be mounted on the right side of the test vehicle, either on the door or near the B-pillar. Additionally, for testing, the sensor could be mounted on a rotatable fixture to extend the field of view slightly in the driving direction.

Priority: High

Relations: ---

Source: ARC


3.6 Summary of the general requirements of ADOSE sensors

Table 2 - Requirements of ADOSE sensors.

FIR:
1.X.01.01 – CA_EXT_FIR/NIR_ObjDet
1.X.01.02 – CA_INT_FIR/NIR_FOV
1.X.01.03 – CA_FIR/NIR_Avail
1.X.01.04 – CA_FIR/NIR_ReaTime
1.X.01.05 – CA_FIR/NIR_Perf
1.X.01.06 – CA_FIR/NIR_Interf
1.X.01.07 – CA_FIR/NIR_Locat

MFOS:
1.X.02.01 – CA_EXT_NIR/FIR_ObjDet
1.X.02.02 – CA_INT_NIR/FIR_FOV
1.X.02.03 – CA_NIR/FIR_ReaTime
1.X.02.04 – CA_NIR/FIR_Perf
1.X.02.05 – CA_NIR/FIR_Interf
1.X.02.06 – CA_NIR/FIR_Locat
1.X.02.07 – CA_NIR/FIR_Fog/rain
2.1.02.01 – PRE_MFOS_ObjDet
2.1.02.02 – PRE_MFOS_FOV
2.1.02.03 – PRE_MFOS_Perf
2.1.02.04 – PRE_MFOS_Interf
2.1.02.05 – PRE_MFOS_Locat
2.1.02.06 – PRE_MFOS_Fog/rain

3D-CAM:
2.1.03.01 – PRE_3DCAM_ObjDet
2.1.03.02 – PRE_3DCAM_FOV
2.1.03.03 – PRE_3DCAM_Perf
2.1.03.04 – PRE_3DCAM_Interf
2.1.03.05 – PRE_3DCAM_EyeSafe
2.1.03.06 – PRE_3DCAM_Locat
2.3.03.01 – PRE_3DCAM_ObjDet
2.3.03.02 – PRE_3DCAM_FOV
2.3.03.03 – PRE_3DCAM_Interf
2.3.03.04 – PRE_3DCAM_EyeSafe
2.3.03.05 – PRE_3DCAM_Locat

HR-TAG:
1.X.04.01 – CA_HR_ObjDet
1.X.04.02 – CA_HR_FOV
1.X.01.03 – CA_HR_Avail
1.X.01.04 – CA_HR_ReaTime
1.X.01.05 – CA_HR_Perf
1.X.01.06 – CA_HR_Locat
1.X.01.07 – CA_HR_Interf

SRS:
2.2.05.01 – PRE_EXT_SRS_DetectionRange
2.2.05.02 – PRE_EXT_SRS_FOV
2.2.05.03 – PRE_EXT_SRS_ReaTime
2.2.05.04 – PRE_EXT_SRS_Locat


4. SPECIFICATIONS OF SENSORS

4.1 FIR Imager

The main scenario for the FIR specifications is the extra-urban collision avoidance.

Spectral range 7÷14 µm

The spectral range describes the sensitivity range of the sensor. For FIR cameras a spectral range of 7÷14 µm is usual, because objects at ambient temperature have their highest spectral power density there. See also requirements 1.X.01.01 in chapter 3.1.1.1.

Thermal sensitivity min. 0.5 K

The Minimum Resolvable Temperature Difference (MRTD) is a measure for assessing the performance of infrared cameras. It is measured between the object temperature and the temperature of the surroundings. The thermal sensitivity has to be smaller than 0.5 K, see also requirements 1.X.01.01 in chapter 3.1.1.1. The Noise Equivalent Temperature Difference (NETD) is a measure of the sensitivity of a detector of thermal radiation. The NETD has to be smaller than 0.3 K.


Detection range ≥120m

Detection range has to be equal or longer than stopping distance of vehicle to avoid a collision.

Angular Resolution min. 4.188 px/°

The angular resolution describes the resolving power of the camera. It defines in how many pixels an object of a specified size at a specified distance is represented in the image. The angular resolution is realized by the focal length of the lens and the imager resolution. The scope of the FIR camera covers only add-on functionalities like hot-spot detection for indication and classification for verification at shorter distances. Therefore an angular resolution of min. 4.188 px/° is sufficient; see also requirements 1.X.01.01 in chapter 3.1.1.1.

Field of view (FOV) HFOV 24° VFOV 12°

The field of view is the angular extent of the observable world that is seen. The main scenario for the FIR specification is the extra-urban collision avoidance. Therefore the extra-urban requirements concerning FOV are crucial, so the smallest vehicle speed (50 km/h) of the extra-urban scenario is used for the specification, which gives ±7.44°.



The critical curved-road use case is the 80 km/h case, which implies a FOV of ±9.8°.

In addition to the used FOV, a calibration range has to be considered, which is important for the correct orientation of the camera. The FIR camera has to be adapted to the NIR camera and the world. Therefore a calibration range of +/- 2° is assumed. So the resulting specification for the horizontal FOV is about +/- 12°, see also requirements 1.X.01.02 and 1.X.01.05 in chapter 3.1.1.1.

The vertical FOV can be much smaller than the horizontal FOV. Usually an aspect ratio of 2:1 is sufficient.

Imager Resolution width: min. 100 px, height: min. 50 px

The minimum imager resolution is determined by the angular resolution and the FOV:

2 · 12° · 4.188 px/° ≈ 100 px, see also requirements 1.X.01.02 in chapter 3.1.1.1.

Assuming an aspect ratio of 2:1, the resulting height of the imager is specified to about 50 px.
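The minimum imager resolution follows directly from the half-angle of the HFOV and the required angular resolution. Illustrative Python sketch (variable names ours):

```python
half_hfov_deg = 12.0     # spec: HFOV about +/- 12 deg, including the calibration range
ang_res = 4.188          # required angular resolution, px per degree
width = 2 * half_hfov_deg * ang_res    # ~100.5 -> min. 100 px
height = width / 2                     # 2:1 aspect ratio -> ~50 px
print(round(width, 1), round(height, 1))  # 100.5 50.3
```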

Focal length tbd

The focal length of an optical system is a measure of how strongly it converges or diverges light. A system with a shorter focal length has greater optical power than one with a long focal length. For a given pixel pitch and number of pixels (i.e. imager size) the focal length is defined over the required field of view for the camera. In combination with the aperture diameter the focal length also affects the aperture of the lens. A compromise taking into account cost, performance and technology has to be made defining the focal length of the optics (ADOSE deliverable D2.1).

Pixel pitch tbd

The pixel pitch is a specification for a pixel-based device that describes the distance between the center of the pixels. The pixel pitch thus consists of the sensitive pixel part and the spacing between adjacent pixels. The aim is to reduce the pixel spacing as much as possible to allow a large sensitive area. The pixel pitch is limited by technology limitations and by sensitivity considerations towards small pitches and by cost (array size) towards large pitches.

The focal length of the optics also depends on the pixel pitch for a given field of view, as shown below. The pixel pitch therefore also affects the optics cost.

A compromise taking into account camera cost, performance and technology has to be made (ADOSE deliverable D2.1).

Public Page 51 of 78

DELIVERABLE D1.2 SENSOR REQUIREMENTS AND SPECIFICATIONS

Ver. 1.4 w Date 07.10.2008 Page 51 of 78

tan(FOV / 2) = d / (2 · f)

where

d: size of the sensor in the measured direction, d = pixel pitch · number of pixels
f: focal length
FOV: field of view = 24°

Solving for the pixel pitch:

pixel pitch = 2 · f · tan(FOV / 2) / number of pixels
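The relation can be written as a pair of helper functions; this is a sketch, and the 10 mm focal length in the example is an illustrative assumption, not a value fixed by the deliverable:

```python
import math

def pixel_pitch(focal_length_m, full_fov_deg, n_pixels):
    """pixel pitch = 2 * f * tan(FOV/2) / N, from tan(FOV/2) = d / (2f)."""
    return 2 * focal_length_m * math.tan(math.radians(full_fov_deg) / 2) / n_pixels

def focal_length(pixel_pitch_m, full_fov_deg, n_pixels):
    """Inverse relation: f = pitch * N / (2 * tan(FOV/2))."""
    return pixel_pitch_m * n_pixels / (2 * math.tan(math.radians(full_fov_deg) / 2))

# FIR example: 24 deg full FOV, 100 px wide imager, assumed 10 mm focal length
pitch = pixel_pitch(10e-3, 24.0, 100)   # roughly 42.5 um
```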

Aperture (F-number) tbd

The aperture (or F-number) of an optical system is the focal length divided by the aperture diameter. A small F-number (e.g. 1) minimizes the loss of energy through the lens. Considering cost, the F-number should be higher to yield a small lens size; however, the loss of energy then has to be compensated by better and more expensive sensor technology. A cost- and performance-driven compromise taking into account the achievable sensor sensitivity has to be made (ADOSE deliverable D2.1).

Frame rate min. 12.5 fps

The frame rate is the measurement of the frequency at which an imaging device produces unique consecutive images called frames. As the FIR images are not represented on the display and the NIR camera has a sufficient frame rate of 25 fps, the frame rate of the FIR camera may be less, see also requirements 1.X.01.04 in chapter 3.1.1.1.

Start up time 2 sec

The start up time describes the time till the system is available. As the FIR camera is an assisting sensor a start up time of about 2 sec is sufficient, see also requirements 1.X.01.03 in chapter 3.1.1.1.


Output format tbd

The output format depends on the designed system architecture. The interface has to be designed dependent on which data have to be transferred, see also requirements 1.X.01.07 in chapter 3.1.1.1.

Camera size e.g. 57 x 58 x 72 mm

As no installation space for the camera location is specified, the requirements are adopted from other sensors in similar installation spaces (e.g. FLIR camera in a BMW: 57 x 58 x 72 mm); see also requirements 1.X.01.06 in chapter 3.1.1.1.

Camera weight e.g. 500 g

As no installation space for the camera location is specified, the requirements are adopted from other sensors in similar installation spaces (e.g. FLIR camera in a BMW: 500 g); see also requirements 1.X.01.06 in chapter 3.1.1.1.

Temperature range -40 ÷ +80°C

The operating temperature describes the temperature range within which the FIR camera has to be operational: -40 ÷ +80°C.

When the FIR camera is not operating, it has to endure temperatures of -40 ÷ +120°C.

Supply Voltage 6 ÷ 16 V

In automotive applications supply voltages from 6 V to 16 V are typical. Within these limits the system has to be operable.

Power consumption < 2 W

Power consumption refers to the electrical energy over time that must be supplied to an electrical appliance to maintain its operation. The power consumption is usually a result of power used to perform the intended function of the device plus additional "wasted" power.

In automotive applications the power consumption has to be as low as possible.


4.2 Multifunctional optical sensor (MFOS)

The MFOS sensor cannot satisfy scenarios 1 and 2 at the same time, due to their different requirements in terms of angular resolution and FOV:

1. Collision avoidance, extra-urban and urban: angular resolution ~0.24°, HFOV = 31°
2. Pre-crash warning/preparation: angular resolution = 0.25°, HFOV = 54°

Taking into account the objective defined in Annex 1 (table 1, page 13), the following sensor specifications are defined only for the first scenario, collision avoidance in extra-urban and urban environments.

Spectral range VIS/NIR, NIR

The spectral range describes the sensitivity range of the sensor. For standard CMOS cameras the sensitivity spectral range is about 0.4-0.9 µm, which allows capturing images in both the visible and near-infrared wavelength bands.

For MFOS sensory applications, the spectral range can be further specified for each ROI by means of specific optical filters that select the right wavelengths. The figures shown in the "Array partition" specification highlight the selected wavelength bands:

• NIR: fog, rain
• VIS/NIR: night vision, twilight, tunnel/bridge, solar

Dynamic range >100dB

The dynamic range of the CMOS camera allows a complete scene view without any loss of information in all light conditions. In fact, the logarithmic response of the sensor avoids saturation in extreme light conditions (e.g. against the light) and dark zones in low light conditions.

Array partition Solutions A & B

The starting point to describe the functional specifications of the MFOS sensor is the selection of the number of functions to be integrated and the definition of the appropriate partition of the CMOS array.

As explained in §1.3 of deliverable D3.1, the functions that will be developed in the MFOS sensor are:
- Night vision
- Tunnel/bridge
- Fog
- Twilight
- Rain
- Solar

which leads to the design of two possible partitions of the array. Considering the dimensions of the ROIs devoted to the detection of the environmental parameters to be 140px x 140px (as a concept evaluation based on the first developed prototype of MFOS sensor), the two solutions are as follows.


Solution A

Figure 4 - MFOS - Solution A for the array partition (ROIs of the environmental parameters have been placed on the matrix right side). [Figure: 1024x512 px pixel matrix; ROIs on the right side for the twilight (VIS), fog (NIR) and rain (NIR)/solar (VIS) functions; 884x512 px (>PVGA) remain for the night vision + tunnel functions.]

In solution A, the ROIs for the detection of the environmental parameters have been placed on the right side of the pixel matrix in order to take advantage of the extended horizontal dimension of the imager. In fact, as shown in the figure above, more than a PVGA format can be used for the frontal area monitoring applications. The figure highlights that the optical axis of the ROI devoted to frontal monitoring has been shifted to the left by 70 px.

Solution B

Figure 5 - MFOS - Solution B for the array partition (ROIs of the environmental parameters have been placed on the matrix corners). [Figure: 1024x512 px pixel matrix; corner ROIs for the twilight (VIS), fog (NIR), rain (NIR) and solar (VIS) functions; the night vision + tunnel functions (VIS) use the central area.]

In solution B, the ROIs devoted to the sensing functions have been placed in the corners of the matrix, so that the frontal monitoring applications can use almost the whole imager format. In fact, most applications requiring frontal view monitoring usually do not need the corners, whereas it is important not to limit the vertical FOV, which is needed for calibration purposes, or the horizontal FOV, which is important for applications requiring a large FOV, such as pedestrian detection in urban scenarios (even if not considered in the project).


Both solutions A and B may be chosen to achieve the ADOSE goals. Taking into account considerations related to the optical chain design, that is, the way in which the radiation is collected and gathered onto the dedicated ROIs, the following can be summarized:

• Solution A is preferable, since the dedicated ROIs for all the sensing functions are placed on the side of the CMOS active area, which allows a single diaphragm optical element to stop the undesired stray light coming from the objective.

• Solution B has the advantage of allowing the use of unbended optical waveguides, as explained in section 4 of D3.1.

Detection range ≥120m

The detection range has to be equal to or longer than the stopping distance of the vehicle to avoid a collision.
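For orientation, a textbook stopping-distance estimate (reaction distance plus braking distance) lands near the specified 120 m for extra-urban speeds. The reaction time and deceleration used here are illustrative assumptions for the sketch, not values taken from the ADOSE requirements:

```python
def stopping_distance(speed_kmh, reaction_time_s=1.2, decel_mps2=7.0):
    """Stopping distance = v * t_reaction + v^2 / (2 * a).
    Reaction time and deceleration are illustrative assumptions."""
    v = speed_kmh / 3.6   # m/s
    return v * reaction_time_s + v**2 / (2 * decel_mps2)

d = stopping_distance(120)   # roughly 120 m at 120 km/h
```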

Angular Resolution min. 26 px/°

The angular resolution describes the resolving power of the camera. It defines in how many pixels an object of a specified size at a specified distance is represented in the image. The angular resolution is realized by the focal length of the lens and the imager resolution. The tasks of the NIR camera are object detection and classification; therefore an angular resolution of min. 26 px/° is sufficient, see also requirements 1.X.02.01 in chapter 3.1.2.1.

Field of view (FOV) – forward looking functions HFOV 30° VFOV 15°

The field of view of the MFOS sensor for the forward looking functions (in ADOSE mainly night vision and tunnel/bridge) has to be not less than the field of view of the FIR camera, for object classification and verification. The widest FOV required, for the urban scenario, is 31.4°, as reported in requirements 1.X.02.02, in the case of a stopping distance of 45 m and a vehicle speed of 50 km/h. Taking into account the proposed array partitions, an HFOV of 30° is a realistic, achievable value. The vertical FOV can be much smaller than the horizontal FOV; usually an aspect ratio of 2:1 is sufficient.

Imager resolution > (780x390) px

The imager resolution can be figured out by the angular resolution and the FOV.

That is, 30° · 26 px/° = 780 px is the minimum number of pixels required to satisfy the requirements. The ST CMOS array has 1024x512 px; considering also the partition of solution A, in which the dedicated portion is about 880 px wide, the imager resolution fully satisfies the requirements.


Pixel pitch 5.6 µm

The pixel pitch, describing the distance between the centers of two contiguous pixels, is an imager parameter.

In order to figure out the minimum value that fulfils the requirements, the FOV, the imager resolution (number of pixels) and the focal length have to be provided, as can be seen from the following formula:

pixel pitch = 2 · f · tan(FOV / 2) / number of pixels

It appears evident that there are infinitely many combinations of pixel pitch and focal length that lead to the required FOV with the defined minimum resolution (the only boundaries are technological and physical constraints). In order to determine the focal length of the objective, the pixel pitch of the ST CMOS matrix has been considered: 5.6 µm.

Focal length ~ 8 mm

From the above considerations, the focal length of the objective of the MFOS sensor can be figured out. Its value should be around 8 mm.
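Plugging the ST pixel pitch and the minimum 780 px width into the pixel-pitch relation confirms this value (a numeric sketch):

```python
import math

pitch = 5.6e-6     # m, ST CMOS pixel pitch
n_px = 780         # minimum horizontal resolution
hfov_deg = 30.0    # specified horizontal FOV

# f = pitch * N / (2 * tan(FOV/2))
f = pitch * n_px / (2 * math.tan(math.radians(hfov_deg) / 2))
# f comes out at ~8.15e-3 m, i.e. around 8 mm as stated above
```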

Aperture (F-number)

Since the F-number affects both cost and performance, a precise value can only be figured out during the optical design phase. As a rule of thumb, since the F-number is defined as the ratio between the focal length and the aperture of the optics, lower values mean higher cost and better performance.

Frame rate ≥ 25 fps

The frame rate is the measurement of the frequency at which an imaging device produces unique consecutive images called frames.

Current NIR night vision systems show that it should be equal to or higher than 25 fps.

Start up time 2 sec

The start up time describes the time till the system is available. As the MFOS camera is an assisting sensor a start up time of about 2 sec is sufficient, see also requirements 1.X.02.03 in chapter 3.1.2.1.

Output format tbd

The output format depends on the designed system architecture. The interface has to be


designed depending on which data have to be transferred; see also requirement 1.X.02.05 – CA_NIR/FIR_Interf.

Camera size e.g. 80 x 100 x 70 mm

As no installation space for the camera location is specified, the requirement is adopted from the already developed MFOS prototype: 80 x 100 x 70 mm; see also requirements 1.X.02.06 in chapter 3.1.2.1.

Camera weight ~ 500 g

As no installation space for the camera location is specified, the requirement is adopted from the already developed MFOS prototype: MFOS with 4 functions: ~500 g (value derived from the sum of the sensor and the internal rear view mirror); see also requirements 1.X.02.06 in chapter 3.1.2.1.

Temperature range op. (-40 ÷ 80)°C

The temperature range refers to the required operating temperatures of the MFOS sensor: (-40 ÷ +80)°C.

Supply Voltage 6÷16 V

In automotive applications supply voltages from 6 V to 16 V are typical. Within these limits the system has to be operable.

Power consumption < 2 W

Power consumption refers to the electrical energy over time that must be supplied to an electrical appliance to maintain its operation. The power consumption is usually a result of power used to perform the intended function of the device plus additional "wasted" power.

The specification is derived from the already developed MFOS prototype, in which the power consumption of the imager and related electronics board is about 1 W, while the optical emitters and driving board contribute another 1 W.

Field of view (FOV) – fog function FOV 10°

The fog detection function is an active function based on backscattered radiation. The emitter, operating in the NIR band, should be an IR-LED source, generally with a lens to create the desired FOV. The receiver is a collecting lens focusing the impinging radiation (via the optical channel devoted to gathering that specific radiation) onto the selected array ROI.


In order to maximise the backscattered radiation, the fields of view of the emitter and the receiver have to be the same. Based on simulations, a FOV of 10° is the best value.

4.3 3D-Range Camera (3DCAM)

The main scenario for the 3DCAM specification is the Pre-crash warning/preparation.

Spectral range NIR

The spectral range describes the sensitivity range of the sensor. For cameras in CMOS technology the sensitivity spectral range is about 0.4-0.9 µm, which allows capturing images in both the visible and near-infrared wavelength bands. However, taking into account the 3DCAM functional principle (an active technique using an NIR illuminator), only the NIR spectral range has to be considered. The imager could also be designed to provide 2D images in the VIS range (with the illuminator off) as an added functionality. The integration of this functionality will be evaluated during the design phase.

Sensitivity tbd

The sensitivity has to be as high as possible in order to increase the performance (e.g. distance measurement accuracy) and/or reduce the required illumination power. The use of 3D integration technologies already gives by default a fill factor of 100% of the pixel area for the photosensor. Additionally, the design of the photosensor is independent of the technology chosen for the electronics, which allows the use of the most efficient solution. In this way, quantum efficiencies of more than 80% are easily achievable.

Dynamic range >100dB

The Dynamic Range (DR) of the 3D camera allows a complete scene view without any loss of information or distorted distance values due to pixel crosstalk. The reflectivity variation of the objects occurring in real traffic scenes is very high. There are three main kinds of reflectors in the traffic scenarios (e.g. traffic signs, license plates, lane marks): specular, diffuse and retro-reflecting. Additionally the irradiance on the sensor depends strongly on the distance of the objects.

Two main contributions have to be considered in order to estimate the required dynamic range: the echo signal is proportional to the inverse of the squared distance and proportional to the solid angle of the target emission.

DRd for the distance:

DRd = 20 · log10( d_max² / d_min² )

DRΩ for the solid angle:

DRΩ = 20 · log10( Ω_max / Ω_min ) ≥ 20 · log10( 2π / ( (FOVh / res_h) · (FOVv / res_v) ) )

where FOVh and FOVv are the horizontal and vertical FOV, and res_h and res_v are the horizontal and vertical resolution of the sensor. The emission solid angle for a cooperative object is underestimated by the field of view observed by each pixel.

In the case of d = 2-20 m, FOV = 40° x 20° and 150 x 75 pixels:

DR = DRd + DRΩ ≅ 40 dB + 100 dB ≅ 140 dB
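The two contributions can be evaluated directly from the formulas above. This is a sketch; note that evaluating the solid-angle bound literally for these numbers gives a value somewhat above the ~100 dB quoted, which appears to be a rounded figure:

```python
import math

def dr_distance(d_min_m, d_max_m):
    """DRd = 20 * log10(d_max^2 / d_min^2)."""
    return 20 * math.log10(d_max_m**2 / d_min_m**2)

def dr_solid_angle(fov_h_deg, fov_v_deg, res_h, res_v):
    """DR_omega >= 20 * log10(2*pi / ((FOVh/res_h) * (FOVv/res_v))),
    with the per-pixel IFOV expressed in radians."""
    ifov_sr = (math.radians(fov_h_deg) / res_h) * (math.radians(fov_v_deg) / res_v)
    return 20 * math.log10(2 * math.pi / ifov_sr)

dr_d = dr_distance(2, 20)               # exactly 40 dB for d = 2-20 m
dr_o = dr_solid_angle(40, 20, 150, 75)  # ~109 dB for this pixel geometry
```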

Detection range 20m

Taking into account that the 3DCAM has to satisfy requirement 2.1.03.05 regarding eye safety, which leads to a limited laser output power in the selected FOV, and that requirement 2.1.03.01 states a collision mitigation functionality below the stopping distance, a detection range of 20 m will be pursued.

Angular Resolution min. 3.75 px/°

The angular resolution describes the resolving power of the camera. It defines in how many pixels an object of a specified size at a specified distance is represented in the image. The angular resolution is realized by the focal length of the lens and the imager resolution. The tasks of the 3D camera are object detection and classification. Assuming, in accordance with requirement 2.1.03.01, that at least 1 px has to cover an object of 0.2 m width at a distance of 20 m, an IFOV of 0.57° follows, corresponding to a resolution of about 1.74 px/°. Since the resolution parameter is an objective of particular importance for the 3DCAM sensor in the ADOSE project, a resolution of 3.75 px/° will be pursued.
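The 0.57° and 1.74 px/° figures follow from simple geometry (a sketch):

```python
import math

obj_width_m = 0.2   # minimum object width from requirement 2.1.03.01
distance_m = 20.0   # detection range

# angle subtended by the object, i.e. the IFOV one pixel must cover
ifov_deg = math.degrees(math.atan(obj_width_m / distance_m))  # ~0.57 deg
ang_res = 1.0 / ifov_deg                                      # ~1.74 px/deg
```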

Field of view (FOV) HFOV 40° VFOV 20°

Considering that requirement 2.1.03.02 sets the basis for the minimum FOV that the 3DCAM has to fulfil, a minimum HFOV of 40° has to be considered. The VFOV can be smaller than the HFOV; as already stated in 2.1.03.02, 20° will be achieved.

Imager resolution width: min. 150px height: min 75px

(square pixel)


The imager resolution can be figured out by the angular resolution and the FOV.

That is, 40° · 3.75 px/° = 150 px is the minimum number of pixels required to satisfy the requirements. Regarding the imager height, different considerations apply: assuming a square pixel design and considering a 20° VFOV, the imager height should be a minimum of 75 px. Otherwise, if a rectangular pixel shape is considered, with the extended dimension in the vertical direction, fewer pixels could be required.

Pixel pitch tbd

The pixel pitch, describing the distance between the centers of two contiguous pixels, is an imager parameter.

In order to figure out the minimum value that fulfils the requirements, the FOV, the imager resolution (number of pixels) and the focal length have to be provided, as can be seen from the following formula:

pixel pitch = 2 · f · tan(FOV / 2) / number of pixels

As can be seen, the pixel pitch is affected by the focal length, which at this stage can still be considered a design variable. A compromise taking into account cost and performance should be made.

Focal length tbd

From the above considerations, the focal length has to be decided together with the pixel pitch.

Aperture (F-number) tbd

Following the choice of the objective focal length, the diameter of the entrance pupil will be defined and the F-number figured out accordingly.

Frame rate >50 fps

The frame rate is the measurement of the frequency at which an imaging device produces unique consecutive images called frames.

The specification is derived from the state of the art on 3D camera prototypes.


Output format tbd

The output format depends on the designed system architecture. The interface has to be designed dependent on which data have to be transferred, see also requirements 2.X.03.04 in chapter 3.3.1.

Start up time 2 sec

The start up time describes the time till the system is available. As the 3D camera is an assisting sensor a start up time of about 2 sec is sufficient.

Camera size e.g. 40 x 80 x 150 mm

As no installation space for the camera location is specified, the requirement is adopted from already developed 3D camera prototypes (e.g. UserCams-PReVENT): 40 x 80 x 150 mm.

Camera weight < 500 g

As no installation space for the camera location is specified, the weight requirement (excluding the illuminator) is adopted from standard camera prototypes.

Temperature range op. (-40 ÷ 80)°C

The temperature range refers to the required operating temperatures of the 3D camera: (-40 ÷ +80)°C.

Supply Voltage 6÷16 V

In automotive applications supply voltages from 6 V to 16 V are typical. Within these limits the system has to be operable.

Power consumption < 3 W

Power consumption refers to the electrical energy over time that must be supplied to an electrical appliance to maintain its operation. The power consumption is usually a result of power used to perform the intended function of the device plus additional "wasted" power.

The specification is derived from the state of the art on 3D camera prototypes.


4.4 Harmonic Radar and Tags (HR-PTAG, HR-ATAG)

Detection Range 45m, urban 120m, extra-urban

The detection range defines the largest distance from which a target can be detected. The detection range depends on the radar antenna gain, antenna gain of the radar reflector, the conversion loss of the radar reflector and the radar sensitivity.

The detection range of a passive reflector is considerably smaller than that of an active reflector. Therefore, inexpensive passive reflectors might be more suitable for equipping pedestrians and bicyclists in urban areas, whereas active reflectors could be used for equipping pedestrians in extra-urban areas, motorcyclists and traffic signs. The radial resolution defines the distance required between two targets in order to separate them from each other. The radial resolution depends on the bandwidth of the radar; it is assumed to be 1 m for the HR-TAG.
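The 1 m radial resolution maps to a required sweep bandwidth via the standard radar relation ΔR = c / (2·B); this relation is general radar theory, not stated explicitly in the deliverable:

```python
C = 299_792_458.0   # speed of light, m/s

def required_bandwidth_hz(range_resolution_m):
    """From the radar range-resolution relation dR = c / (2 * B)."""
    return C / (2 * range_resolution_m)

b = required_bandwidth_hz(1.0)   # ~150 MHz for 1 m radial resolution
```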

Angular Resolution 2°

The angular resolution describes the minimum angle between two objects that can be separated from each other. The angular resolution of the radar is limited by the beam width of the radar antenna. In general, the larger the antenna in wavelengths, the narrower its beam width and thus the better its resolution.

Field of view (FOV) 31.1°

The field of view is the angular extent of the observable world that is seen. It is required that a VRU at a distance of 22 m and a lateral offset of 3.75 m is detected.

Frame rate min. 10 fps

The frame rate is the measurement of the frequency at which an imaging device produces unique consecutive images called frames. As the harmonic radar image is not displayed on a screen, a frame rate of 10 fps is assumed to be satisfactory.

Start up time 2 sec

The start up time describes the time till the system is available. As the harmonic radar is an assisting sensor a start up time of about 2 sec is sufficient.

Output format tbd

The output format depends on the designed system architecture. The interface has to be designed dependent on which data have to be transferred.


Radar size about 100 x 100 x 100 mm

The radar should be forward-looking and thus be located at the front of the car, e.g. in the grille. As no installation space for the radar location is specified, the requirements are adopted from other sensors in similar installation spaces.

Radar weight 1000 g

As no installation space for the radar location is specified, the requirements are adopted from other sensors in similar installation spaces.

Temperature range -40 ÷ +80°C

The operating temperature describes the temperature range within which the harmonic radar has to be operational: -40 ÷ +80°C.

When the harmonic radar is not operating, it has to endure temperatures of -40 ÷ +120°C.

Supply Voltage 6÷16 V

In automotive applications supply voltages from 6 V to 16 V are typical. Within these limits the system has to be operable.

Power consumption < 2 W

Power consumption refers to the electrical energy over time that must be supplied to an electrical appliance to maintain its operation. The power consumption is usually a result of power used to perform the intended function of the device plus additional "wasted" power.

In automotive applications the power consumption has to be as low as possible.


4.5 Silicon Retina Sensor (SRS)

Spectral range 400 ÷ 1000 nm

The spectral range describes the sensitivity range of the sensor.

Dynamic range 120 dB

The Dynamic Range of the SRS allows a complete scene view without any loss of information. It avoids saturation of the sensor under high light conditions and dark areas in the grabbed image under low light conditions.

Detection range 5 ÷ 6 m

The detection range d for pre-crash warning/preparation must be greater than the distance a dangerous object could travel during the system reaction time of the countermeasure.

Angular resolution 9.6 px/deg

The angular resolution describes the resolving capacity of the SRS sensor. It defines in how many pixels an object of a specified size in a specified distance is represented in the image.

From the requirements definition it follows that the angular resolution should be > 4.2 px/° (with an object size of 0.5 m at a distance of 5 m).

Field of view (FOV) HFOV 30° VFOV 20°

The horizontal field of view of the SRS sensor is 30° as reported in the requirement 2.2.05.02. The vertical FOV can be smaller than the horizontal FOV.

Imager resolution 320x200 px

The width of the imager must be 128 pixels or larger to fulfil the FOV requirement (see 2.2.05.02). Furthermore, a major criterion is that the stereo disparity is approx. 2 pixels at the required 10 different vehicle positions in the required detection range from 5 m to 6 m distance. Therefore, a 320 pixel lateral resolution of the final version of the SRS is necessary to fulfil the detection requirements.


Pixel pitch tbd

The pixel pitch, describing the distance between the centers of two contiguous pixels, is an imager parameter.

In order to figure out the minimum value that fulfils the requirements, the FOV, the imager resolution (number of pixels) and the focal length have to be provided, as can be seen from the following formula:

pixel pitch = 2 · f · tan(FOV / 2) / number of pixels

As can be seen, the pixel pitch is affected by the focal length, which at this stage can still be considered a design variable. A compromise taking into account cost and performance should be made. The pixel pitch should be in the range of 15÷30 µm (see deliverable report D6.1).

Focal length tbd

From the above considerations, the focal length has to be decided together with the pixel pitch. The focal length should be in the range of 7÷14 mm (see deliverable report D6.1).

Aperture (F-number) tbd

Following the choice of the objective focal length, the diameter of the entrance pupil will be defined and the F-number figured out accordingly (see deliverable report D6.1).

Stereo system parameters See description below

The major criterion is that the stereo disparity is approx. 2 pixels at the required 10 different vehicle positions in the required detection range from 5 m to 6 m distance.

Therefore, a lateral resolution of at least 320 pixels is necessary to fulfil the detection requirements. Two different variants are taken into account: a 320x200 px sensor with 30 µm pitch designed at ARC, and a 320x200 px sensor with 15 µm pitch designed at INI/ETH-Zürich.

Vision sensor pixel resolution: 320 x 200 pixels


Figure 6 - SRS - Side impact model option A

Option A

Vision sensor pixel pitch: 30 µm
Focal length: 14 mm
Stereo baseline: 0.45 m
Minimum stereo disparity: 2.1 px @ 6 m distance for 33 cm motion
Stereo FOV: 33.7°

Option B

Vision sensor pixel pitch: 15 µm
Focal length: 7 mm
Stereo baseline: 0.45 m
Minimum stereo disparity: 2.1 px @ 6 m distance for 33 cm motion
Stereo FOV: 33.7°

Options for a final product version

As seen from the calculations above, the requirement of a 30° FOV for stereo detection can be satisfied in all options. A rather large stereo baseline of 0.45 m is necessary to reach the required depth resolution of 33 cm at a distance of 6 m (equal to 3 consecutive detections during 1 meter of movement). This leads to the necessity of simplified and fast stereo calibration procedures and to enhanced requirements for the sensor data connection to the main board.
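The ≈2 px disparity criterion can be checked with a simple pinhole stereo model; the helper name and the pinhole simplification are ours, not part of the deliverable:

```python
def disparity_change_px(baseline_m, focal_m, pitch_m, z_far_m, motion_m):
    """Change in stereo disparity (px) when an object approaches from
    z_far_m to z_far_m - motion_m (pinhole model: d = b * f / (z * pitch))."""
    k = baseline_m * focal_m / pitch_m
    return k * (1.0 / (z_far_m - motion_m) - 1.0 / z_far_m)

# Option A: 30 um pitch, 14 mm focal length, 0.45 m baseline
a = disparity_change_px(0.45, 14e-3, 30e-6, 6.0, 0.33)   # ~2 px
# Option B: 15 um pitch, 7 mm focal length, same baseline
b = disparity_change_px(0.45, 7e-3, 15e-6, 6.0, 0.33)    # same b*f/pitch ratio
# Final-product option: 13 um pitch, 12 mm focal length, 0.23 m baseline
c = disparity_change_px(0.23, 12e-3, 13e-6, 6.0, 0.33)   # ~2.1 px
```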

The useful stereo range in both configurations is from approx. 2.5 m to 6m from the ego-vehicle.

As a 0.45m stereo baseline is not acceptable for a final product and in-car integration, an assessment of a possible option for a smaller stereo baseline is given in this section.

It is realistic that an optimized pixel layout in an advanced process yields a 13 µm pixel pitch for the optical sensor. Increasing the sensor resolution to 512 pixels decreases the necessary baseline to an acceptable 0.23 m and increases the sensor array size to 6.7 mm. The following parameters have been evaluated:

Vision sensor pixel pitch: 13 µm
Vision sensor pixels: 512
Focal length: 12 mm


Stereo baseline: 0.23 m
Minimum stereo disparity: 2.1 px @ 6 m distance for 33 cm motion
Stereo FOV: 28°

Frame rate 200 fps

The frame rate is the measurement of the frequency at which an imaging device produces unique consecutive images called frames.

The equivalent frame rate and temporal resolution of the SRS sensor are 200 fps and 5 ms, respectively. The detection latency is 15 ms (an object is clearly visible in the 3D data after 3 consecutive frames).

Start up time tbd

The start up time describes the time till the system is available.

Output format See description below

Data Interface to ECU or PC/Laptop

In normal operation mode the sensor will transfer only object depth information in space segments at relatively low data rates. A maximum of 100 depth values at 200 values per second, coded as 16-bit values, is expected. The necessary address space is given below.

Field | Range | Bits | Note
space segment | 1..100 | 7 | ~0.3° segments
stereo depth | 0..256 | 8 |
unused | | 1 |
Total | | 16 bit |

Table 3 - SRS - Necessary address space

For algorithm development and debugging purposes, however, access to raw sensor data is necessary. The necessary bandwidth for the data streaming operation mode has to be reached by the system. A 32-bit address format will have to be used, as an address space of two times 320x200 pixels has to be covered. The data can be represented in 19 bits, but due to alignment reasons for the 16-bit processor used, a 32-bit format will be used.

Field                 Range    Bits
pixel array x         1..320   9
pixel array y         1..200   8
stereo channel        0..1     1
event polarity        0..1     1
(sum)                          19 bit
disparity (optional)  t.b.d.   up to 13
total                          32 bit

Table 4 - SRS - Data format used
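The 32-bit address-event word of Table 4 can likewise be composed with bit operations. Bit positions are an assumption for illustration; the document fixes only the field widths (9-bit x, 8-bit y, stereo channel, event polarity, optional disparity of up to 13 bits):

```python
# Illustrative layout of the 32-bit SRS address-event word from Table 4.
def pack_event(x, y, channel, polarity, disparity=0):
    assert 1 <= x <= 320 and 1 <= y <= 200
    assert channel in (0, 1) and polarity in (0, 1) and 0 <= disparity < 1 << 13
    return x | (y << 9) | (channel << 17) | (polarity << 18) | (disparity << 19)

def unpack_event(word):
    return (word & 0x1FF, (word >> 9) & 0xFF,
            (word >> 17) & 1, (word >> 18) & 1, word >> 19)

w = pack_event(300, 150, channel=1, polarity=0, disparity=7)
assert unpack_event(w) == (300, 150, 1, 0, 7)
assert w < 1 << 32                             # fits the 32-bit format
```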

The following table estimates the typical and maximum expected raw data rates and the corresponding address event rates for recording from the stereo sensor, based on experience with the 128x128 stereo sensors.

Case                                             Address space   Data rate (bit/s)   Approx. address event rate (events/s)
3D depth information @ normal operation          16 bit          320 k               20 k
CAN data transfer @ data streaming               32 bit          1 M                 31 k
Typical data rate from sensors @ data streaming  32 bit          15.4 M              480 k
Maximum data rate from sensors @ data streaming  32 bit          25.6 M              800 k
Ethernet data transfer @ data streaming          32 bit          100 M               3.1 M

Table 5 - SRS - Data rates for data recording
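The figures in Table 5 are internally consistent: each data rate is the word size multiplied by the address event rate. A quick illustrative check (values taken from the table; k and M read as 1e3 and 1e6):

```python
# Consistency check of Table 5: data rate = word size x address-event rate.
rows = [  # (case, word bits, events/s, bit/s from the table)
    ("3D depth @ normal operation", 16, 20e3,  320e3),
    ("CAN @ data streaming",        32, 31e3,  1e6),
    ("typical sensor rate",         32, 480e3, 15.4e6),
    ("maximum sensor rate",         32, 800e3, 25.6e6),
    ("Ethernet @ data streaming",   32, 3.1e6, 100e6),
]
for case, bits, events, rate in rows:
    assert abs(bits * events - rate) / rate < 0.01, case  # within rounding
```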

(a) Interface to PC/laptop

A 10/100 Mbit/s Ethernet connection is specified for recording raw and stereo test data in real time from the system to a PC or laptop. The UDP protocol will be used, as it allows maximum throughput at minimal effort on the sensor side.

An internal data buffer of 20 MByte is foreseen for internal recording of approx. 30 s of raw data under very high data rate conditions, where Ethernet transfer is not sufficient to move all data online. This buffer can be transferred offline after each experiment/test case.

(b) Interface to ECU

A CAN bus interface (1 Mbit/s) is additionally foreseen for direct ECU connection. Furthermore, access to raw sensor data will be provided via this interface (online transfer for low and medium data rate conditions; internal recording and offline transfer for high data rate conditions) to allow sensor-level debugging during all project phases.

Temporal resolution/time stamping

Raw AER data from the optical sensors is time stamped with 1ms resolution. These raw data can be accessed via streaming in an address event format to be specified.

Stereo depth information will be available at 5 ms resolution, corresponding to a quasi-frame-rate of 200 frames per second. These data can likewise be accessed via streaming in an address event format to be specified.

Processing power for stereo algorithm

Table 5 estimates the expected address event rate for a 320x200 pixel stereo sensor system. The sustained maximal rate must be processed by the stereo algorithm.

Temperature range: 0 ÷ 55 °C

The temperature range refers to the required operating temperatures of the SRS sensor. As the SRS sensory system is intended for research purposes, relaxed environmental conditions (no automotive qualification) are assumed.

Supply voltage: 12 V

A 12 V supply from a stabilized source will be used.


4.6 Summary of the specifications of ADOSE sensors

Table 6 - Specifications of ADOSE sensors.

Spectral range: FIR (7÷14) µm; MFOS VIS-NIR, NIR; 3D-CAM NIR; HR-TAG NA; SRS (400÷1000) nm
Sensitivity (thermal, QE, ...): FIR MRTD = min. 0.5 K, NETD = 0.3 K; MFOS -; 3D-CAM tbd; HR-TAG NA; SRS -
Dynamic range (DR): FIR NA; MFOS > 100 dB; 3D-CAM > 100 dB; HR-TAG NA; SRS 120 dB
Detection range: FIR ≥ 120 m; MFOS ≥ 120 m; 3D-CAM 20 m; HR-TAG 45 m urban / 120 m extra-urban; SRS (5÷6) m
Horizontal angular resolution: FIR min. 4.188 px/°; MFOS min. 26 px/°; 3D-CAM min. 3.75 px/°; HR-TAG 2°; SRS 9.6 px/°
Field of view (FOV): FIR HFOV 24°, VFOV 12°; MFOS HFOV 30°, VFOV 15°; 3D-CAM HFOV 40°, VFOV 20°; HR-TAG HFOV 31.1°; SRS HFOV 30°, VFOV 20°
Imager resolution: FIR min. (100x50) px; MFOS > (780x390) px; 3D-CAM min. (150x75) px; HR-TAG NA; SRS (320x200) px
Pixel pitch: FIR tbd; MFOS 5.6 µm; 3D-CAM tbd; HR-TAG NA; SRS tbd
Focal length: FIR tbd; MFOS 8 mm; 3D-CAM tbd; HR-TAG NA; SRS tbd
Aperture (F-number): FIR tbd; MFOS tbd; 3D-CAM tbd; HR-TAG NA; SRS tbd
Frame rate: FIR min. 12.5 fps; MFOS ≥ 25 fps; 3D-CAM > 50 fps; HR-TAG 10 fps; SRS 200 fps
Start-up time: FIR 2 sec; MFOS 2 sec; 3D-CAM 2 sec; HR-TAG 2 sec; SRS tbd
Output format: tbd for all sensors
Sensor size: FIR e.g. (57x58x72) mm; MFOS e.g. (80x100x70) mm; 3D-CAM e.g. (40x80x150) mm; HR-TAG e.g. (100x100x100) mm; SRS tbd
Sensor weight: FIR e.g. 500 g; MFOS < 500 g; 3D-CAM < 500 g; HR-TAG 1000 g; SRS tbd
Operating temperature range: FIR, MFOS, 3D-CAM, HR-TAG (-40 ÷ +80) °C; SRS (0 ÷ 55) °C
Supply voltage: FIR, MFOS, 3D-CAM, HR-TAG 6÷16 V; SRS 12 V
Power consumption: FIR < 2 W; MFOS < 2 W; 3D-CAM < 3 W; HR-TAG < 2 W; SRS tbd
Array partition: MFOS solutions A & B; all others NA
Field of view - fog function: MFOS 10°; all others NA

Public Page 71 of 78

DELIVERABLE D1.2 SENSOR REQUIREMENTS AND SPECIFICATIONS

Ver. 1.4 w Date 07.10.2008 Page 71 of 78

5. QUALITY INDICATORS AND DEPENDABILITY

Sensor requirements, categorized according to the applicable scenarios (see paragraph 1.2, "Scenario scheme and partners"), are collected in tabular form in paragraph 3. The sensor specifications needed to fulfil these requirements are described in paragraph 4, including the overview given in Table 6.

Requirements and specifications form the basis for the quality indicators of the sensor development. The scenarios described in D1.1 are the test basis; specific tests (use cases) will be derived from them to evaluate the technical quality achieved (further measures are the research, performance and cost indicators described in Table 7).

Monitoring and evaluation against requirements and specifications is performed in WP7, Task 7.4, with annual reporting (D7.4, "Monitoring and review of sensor technology development"). As sensor development proceeds, detailed quality parameters will be derived and compared against the requirements to verify that the expectations of the scenarios defined in D1.1 are met. A final evaluation is planned with respect to dependability and performance parameters, taking into account, as far as applicable, the overall targets of automotive standards (such as the evolving ISO 26262) and the generic functional safety standard IEC 61508.

This analysis is part of WP7 in the last period of ADOSE (months 24-36). As already indicated in D1.1 under "risks", the impact of both false positives and real failures on road safety, driver behaviour and public acceptance has to be considered.

Main indicators. Each entry gives the baseline description, the relevant objectives, the indicator type (performance, research, cost), and the planned deliverables/milestones with their month.

Baseline: Present FIR-array resolution is not adapted to the minimum specs for hot-spot detection of add-on sensors for sensor fusion.
Objectives: application-optimized resolution, chip size, low array cost.
Type: Performance
• Adapt the specification for minimum sensor requirements to achieve the required performance and cost.
Deliverables/milestones: D1.3 (month 6), D2.1 (month 9), M2.1 (month 9).

Baseline: Present automotive FIR array technology (with VOx detector) is not compatible with semiconductor processing and requires dedicated equipment.
Objectives: cost reduction by avoiding additional backend processing steps for the sensor part; process compatibility with semiconductor fabrication.
Type: Research
• Development of a semiconductor-processing-compatible design and process flow for the sensing and micromechanical parts of the integrated FIR device.
• Design and implementation of sensing elements produced simultaneously during the standard read-out circuit process, making most backend steps obsolete.
Deliverables: D2.1 (month 9), D2.2 (month 15), D2.3 (month 20), D2.4 (month 25).

Baseline: Present automotive FIR array technology (with VOx detector) applies materials that are difficult to control in production.
Objectives: reliable production, quality and uniformity of the array, low production cost.
Type: Research
• Use high-quality mono-crystalline silicon as sensor material for well-established, controllable manufacturing and reproducible, uniform sensor array elements.
• Develop designs for suspended pixel elements with epitaxial Si layers containing the Si-diode sensing elements.
Deliverables: D2.1 (month 9), D2.2 (month 15), D2.4 (month 25), D2.7 (month 36).

Baseline: Current Bosch micromechanical and semiconductor process modules are available but not yet fully adapted and evaluated for the FIR process flow and design.
Objectives: cost-effective production process and corresponding design point for the specified array.
Type: Research
• Adapt and combine volume-proven process modules from integrated micromechanical sensors for manufacturing of the FIR array.
• Optimize wafer-level vacuum packaging.
• Work out an optimized FIR sensor array design based on the adapted process modules.
Deliverables: D2.1 (month 9), D2.2 (month 15), D2.3 (month 20), D2.4 (month 25).

Baseline: Established FIR bolometer readout requires active circuitry at each pixel site for decoupling.
Objectives: reduced circuit complexity; ease of integration and possibility of lateral electronic design; array cost.
Type: Research
• Develop an optimized low-noise readout circuit using advantageous partial multiplexing of intrinsically decoupled diode arrays.
Deliverables: D2.2 (month 15), D2.4 (month 25).

Baseline: Competing diode arrays under R&D provide thermal pixel insulation with a critical micromachining process on costly pre-processed CMOS wafers.
Objectives: early in-process testability, yield, cost of the array.
Type: Research
• Adapt the process so that the critical cavity-generation step comes at the beginning of the process chain, on low-value wafers rather than on expensive, fully processed CMOS wafers.
Deliverables: D2.1 (month 9), D2.4 (month 25).

Baseline: The present automotive FIR camera applies multi-lens optics.
Objectives: low camera cost.
Type: Performance
• Design the camera system and optics for a single-lens approach.
Deliverables: D2.1 (month 9), D2.4 (month 25), D2.6 (month 30).

Baseline: The present automotive FIR camera requires a separate germanium window for protection in automotive use.
Objectives: automotive reliability, low camera cost.
Type: Performance
• Apply a DLC coating on a cheap but soft GASIR single lens.
Deliverables: D2.4 (month 25), D2.7 (month 36).

Baseline: The cost of present automotive imaging bolometers is in the range of several hundred euros.
Objectives: low-cost FIR imager chip with medium resolution.
Type: Cost
• Target cost of approximately 20 € for the adapted FIR imager chip.
Deliverables: D2.6 (month 30), D7.5 (month 36).

Table 7 - FIR camera - Research, performance and cost indicators


6. CONCLUSIONS

The goal of ADOSE is to improve the performance, robustness and reliability of ADAS systems through sensor enhancements at hardware and/or software level, sensor data fusion, and the introduction of new sensor technologies (e.g. silicon retina stereo sensor, harmonic radar tag). Chapter 2 describes the state of the art of the five sensors to be developed, enhanced and optimized in ADOSE; further details on technology gaps and economic considerations can be found in Annex I. The five selected sensors and the partners responsible for the relevant work packages are:

• Sensor 01: FIR (far-infrared sensor) - Bosch
• Sensor 02: MFOS (multifunctional and multispectral CMOS vision sensor) - CRF
• Sensor 03: 3DCAM (three-dimensional range camera) - IMEC
• Sensor 04: HR-TAG (harmonic radar tag) - VTT
• Sensor 05: SRS (silicon retina stereo sensor) - ARC

Sensor requirements have been analyzed and described in sufficient detail (in tabular form) for each sensor in relation to the major scenarios relevant for the application of the respective sensor type. These tables, with rationale and details, are based on the analysis of the scenarios performed in D1.1 and on the different objectives to be fulfilled; therefore, more than one requirement table is available for several sensor/scenario combinations. The basic specifications of each of the five sensors are collected in chapter 4, including, besides the physical sensor parameters, options and mounting specifications for the devices on the vehicle, interface descriptions, etc. Further elaboration and design issues are tackled in the separate detailed specification and design documents D2.1, D3.1, D4.1, D5.1 and D6.1, due in month 6 or month 9.


7. BIBLIOGRAPHY

[1] T. Strobel, A. Servel, C. Coue, T. Tatschke, Compendium on Sensors - State-of-the-Art of Sensors and Sensor Data Fusion for Automotive Preventive Safety Applications, PREVENT-ProFusion, Deliverable report v1, 19 July 2004.

[2] N. Pallaro, F. Visintainer, M. Darin, E. Balocco, E. Borello, M. Gottardi, N. Massaro, Multifunctional sensor to detect environmental parameters, AISEM 2003, Trento, 11 February 2003.

[3] N. Pallaro, F. Visintainer, M. Darin, E. Mosca, Multifunctional Sensor to Detect Environmental Parameters, AMAA 2004, Berlin, 25-26 March 2004.

[4] P. Mengel et al., Three-Dimensional CMOS Image Sensor for Pedestrian Protection and Collision Mitigation, AMAA 2006.

[5] G. Alessandretti, PReVENT Collision Mitigation and Road Users Protection: First test results, ITS World Congress, Oct 2006.

[6] O. Elkhalili et al., A 64×8 Pixel 3-D CMOS Time Of Flight Image Sensor for Car Safety Applications, IEEE 2006.

[7] T. Oggier et al., An all-solid-state optical range camera for 3D real-time imaging with sub-centimeter depth resolution (SwissRanger), Proceedings of the SPIE, Vol. 5249, nr. 65, 2003.

[8] R. Lange and P. Seitz, Solid-State Time-of-Flight Range Camera, IEEE J. Quantum Electronics, Vol. 37 (3), 390-397, March 2001.

[9] T. Oggier, P. Seitz and N. Blanc, Miniaturized all-solid-state 3D camera for real-time range imaging, CSEM.

[10] B. Büttgen, T. Oggier, M. Lehmann, R. Kaufmann, F. Lustenberger, CCD/CMOS Lock-In Pixel for Range Imaging: Challenges, Limitations and State-of-the-Art, CSEM.

[11] S. Burak Gokturk, H. Yalcin, C. Bamji, A Time-Of-Flight Depth Sensor - System Description, Issues and Solutions, Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW'04).

[12] S. Hsu, S. Acharya, A. Rafii and R. New, Performance of a Time-of-Flight Range Camera for Intelligent Vehicle Safety Applications, Canesta, Inc.

[13] H. Staras, J. Shefer, Harmonic Radar Detecting and Ranging System for Automotive Vehicles, US patent 3781879, 1972.

[14] E.T. Cant, A.D. Smith, D.R. Reynolds, J.L. Osborne, Tracking butterfly flight paths across the landscape with harmonic radar, Proceedings of the Royal Society B: Biological Sciences, Vol. 272, Issue 1565, 22 Apr 2005, pp. 785-790.

[15] X. Yang, L. Liu, N.H. Vaidya, F. Zhao, A vehicle-to-vehicle communication protocol for cooperative collision warning, First Annual International Conference on Mobile and Ubiquitous Systems: Networking and Services, 22-26 Aug 2004, pp. 114-123.


[16] P. Lichtsteiner, C. Posch, T. Delbruck, A 128×128 120dB 30mW Asynchronous Vision Sensor that Responds to Relative Intensity Change, Proceedings of the International Solid State Circuits Conference, pp. 25-27, San Francisco, 2006.

[17] E. Schoitsch, W. Kubinger, Autonomous Systems - Safety Critical Embedded Systems and Intelligence, ERCIM News 67, Special Issue on Embedded Intelligence, October 2006.

[18] T. Gruber and E. Schoitsch, Automotive Visions beyond In-Car Driver Assistance: Integrated Traffic Management with COOPERS, ERCIM News 67, Special Issue on Embedded Intelligence, October 2006.

[19] H. Hemetsberger, J. Kogler, W. Travis, R. Behringer, W. Kubinger, Reliable Architecture of an Embedded Stereo Vision System for a Low Cost Autonomous Vehicle, Proceedings of the IEEE Intelligent Transportation Systems Conference (ITSC), pp. 945-950, Toronto, Canada, September 17-20, 2006.

[20] J. Kogler, H. Hemetsberger, B. Alefs, W. Kubinger, W. Travis, Embedded Stereo Vision System for Intelligent Autonomous Vehicles, Proceedings of the IEEE Intelligent Vehicles Symposium, June 13-15, 2006, Tokyo, Japan.

[21] H. Hemetsberger, J. Kogler, W. Travis, R. Behringer, W. Kubinger, Reliable Architecture of an Embedded Stereo Vision System for a Low Cost Autonomous Vehicle, Proceedings of the IEEE Intelligent Transportation Systems Conference (ITSC), pp. 945-950, Toronto, Canada, September 17-20, 2006.

[22] Transport information and control systems - Forward vehicle collision warning systems - Performance requirements and test procedures, ISO/FDIS 15623, 2002.

[23] T. Makinen, J. Irion, M. Miglietta, F. Tango, A. Broggi, M. Bertozzi, N. Appenrodt, J. Nilsson, A. Sjogren, T. Sohnke, J. Kibbel, SP Deliverable APALACI Final report 50.10b, APALACI Project, 31.01.2007 (subproject of PReVENT).

[24] P.C. Antonello, L. Ertl, P. Flegel, L. Nilsson, P. Mengel, O. Schrey, M. Sailer, R. Wertheimer, SP Deliverable D52.300.1 Camera Requirements and Specifications, UseRCams Project, 10.12.2004 (subproject of PReVENT).

[25] L. Listl, B. König, U. Wagner, R. Wertheimer, P.C. Antonello, J.-F. Camart, H. Andersson, W. Brockherde, M. Kaupper, SP Deliverable D52.200.1 UseRCams Final Report, UseRCams Project, 14.02.2008 (subproject of PReVENT).

Public Page 76 of 78

DELIVERABLE D1.2 SENSOR REQUIREMENTS AND SPECIFICATIONS

Ver. 1.4 w Date 07.10.2008 Page 76 of 78

8. LIST OF FIGURES

Figure 1 - Roadmap for vision devices (PREVENT-ProFusion)
Figure 2 - MFOS sensor prototype
Figure 3 - SRS - Mounting position for SRS sensors
Figure 4 - MFOS - Solution A for the array partition
Figure 5 - MFOS - Solution B for the array partition
Figure 6 - SRS - Side impact model option A

9. LIST OF TABLES

Table 1 - SRS - Timing of actuations
Table 2 - Requirements of ADOSE sensors
Table 3 - SRS - Necessary address space
Table 4 - SRS - Data format used
Table 5 - SRS - Data rates for data recording
Table 6 - Specifications of ADOSE sensors
Table 7 - FIR camera - Research, performance and cost indicators

10. LIST OF ACRONYMS

2D Two-dimensional

3D Three-dimensional

3DCAM 3D range camera

ACC Adaptive Cruise Control

ADAS Advanced Driver Assistance Systems

ADOSE Reliable Application Specific Detection of Road Users with Vehicle on-board Sensors

AE Address-Events

AER Address Event Representation

AlGaAs Aluminium gallium arsenide

APS Active Pixel Sensor

BP Boron Phosphide

BSD Blind Spot Detection

CAN Controller Area Network

CCD Charge Coupled Device

CMOS Complementary Metal–Oxide–Semiconductor

CW Continuous Wave

CWTOF Continuous Wave Time-Of-Flight

DLC Diamond Like Carbon


DR Dynamic Range

DSP Digital Signal Processor

ECU Electronic control unit

FIR Far Infrared

FOV Field of view

fps Frames per second

GaAs Gallium arsenide

Ge Germanium

HFOV Horizontal field of view

HgCdTe Mercury Cadmium Telluride

HR-ATAG Harmonic radar with Active Tags

HR-PTAG Harmonic radar with Passive Tags

HR-TAG Harmonic radar with Tags

ICT Information & Communication Technologies

ID Identification

IFOV Instantaneous field of view

In Indium

InSb Indium Antimonide

IP Integrated Project

IRLED Infrared Light Emitting Diode

ISO International Organization for Standardization

ITOF Indirect Time-Of-Flight

LDW Lane Departure Warning

LED Light Emitting Diode

LIN Local Interconnect Network

LRR Long Range Radar

LVDS Low Voltage Differential Signal

MEMS Micro Electro Mechanical System

MFOS Multifunctional Optical Sensor

MMIC Monolithic Microwave Integrated Circuit

MRTD Minimum Resolvable Temperature Difference

NETD Noise Equivalent Temperature Difference

NIR Near Infrared Range

PC Personal computer


PMD Photonic Mixer Device

PVGA Panoramic VGA (Video Graphics Array)

RADAR Radio Detection And Ranging

RFID Radio Frequency Identification

ROI Region Of Interest

ROIC Read Out Integrated Circuit

SAM Scanning Acoustic Microscopy

SDRAM Synchronous Dynamic Random Access Memory

Si Silicon

SiGe Silicon-germanium

SME Small Medium Enterprise

SNR Signal-to-Noise Ratio

SRS Silicon Retina Stereo sensor

TOF Time-Of-Flight

UDP User Datagram Protocol

USB Universal Serial Bus

UWB Ultra Wide Band

VFOV Vertical field of view

VGA Video Graphic Array

VIS VISible

VOx Vanadium-Oxide

VRU Vulnerable Road User

WI-FI Wireless Fidelity

WP Workpackage