Radar sensor module to bring added safety to autonomous driving

Researchers at a Fraunhofer Institute in Berlin are developing a combined camera and radar module that can react 160 times faster than a human driver. The project is called KameRad and aims to bring added safety to autonomous driving.

When a child runs out onto the road, the average human driver takes 1.6 seconds to hit the brake pedal. That reaction time drops to 0.5 seconds for automated vehicles fitted with radar/lidar sensors and a camera system. But at a speed of 50 km/h, even that means the vehicle travels another seven meters before the brakes are even applied – and further still before it comes to a standstill.

In response, the Fraunhofer Institute for Reliability and Microintegration IZM has teamed up with a range of partners from both industry (InnoSenT, Silicon Radar, Jabil Optics Germany, AVL, John Deere) and research (Fraunhofer Institute for Open Communication Systems FOKUS, DCAITI) to develop a camera-radar module that captures changes in traffic conditions significantly faster. The new unit, no bigger than a smartphone, will have a reaction time of less than 10 milliseconds – which, according to a study conducted by the University of Michigan (see source), makes it 50 times faster than current sensor systems and 160 times faster than the average human driver. With the new system, the vehicle from the example above would travel just 15 cm before the module intervenes and initiates the braking maneuver – potentially eliminating many inner-city road accidents.
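
These distances follow from simple arithmetic: the distance covered during the reaction time alone is speed multiplied by reaction time. The short Python sketch below reproduces the figures quoted above from the article's own numbers; the braking distance after the brakes engage is not included.

    # Illustrative arithmetic only; speeds and reaction times are taken
    # from the article. Reaction distance = speed x reaction time.

    SPEED_MS = 50.0 / 3.6  # 50 km/h is roughly 13.9 m/s

    reaction_times_s = {
        "average human driver": 1.6,
        "current sensor systems": 0.5,
        "KameRad module": 0.010,  # less than 10 milliseconds
    }

    for actor, t in reaction_times_s.items():
        print(f"{actor}: {t * 1000:.0f} ms -> {SPEED_MS * t:.2f} m before braking")

    # average human driver: 1600 ms -> 22.22 m before braking
    # current sensor systems: 500 ms -> 6.94 m before braking  (the "seven meters")
    # KameRad module: 10 ms -> 0.14 m before braking           (the "15 cm")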

Integrated signal processing reduces reaction time
The real innovation in the new system is its integrated signal processing capability. All processing takes place directly within the module: the system selectively filters data from the radar system and stereo camera, so that processing either happens immediately or is intentionally deferred to a subsequent processing stage. Non-relevant information is recognized but not forwarded. Sensor fusion combines the data from the camera and radar, and neural networks then evaluate the fused data and determine the real-world traffic implications using machine learning techniques. As a result, the system does not need to send status information to the vehicle – only reaction instructions. This frees up the vehicle's bus for important signals, for instance when a child suddenly runs out onto the road.

"Integrated signal processing drastically cuts down reaction times," says Christian Tschoban, group head in the RF & Smart Sensor Systems department. Together with his colleagues, Tschoban is currently working on the KameRad project (see info box). The functioning demonstrator he and his team have developed looks like a grey box with eyes to the right and left – the stereo cameras.

The project runs until 2020. Until then, project partners AVL List GmbH and DCAITI will be busy testing the initial prototype, including road testing in Berlin. Tschoban hopes that in a few years' time his "grey box" will be fitted as standard in every vehicle, bringing added safety to automated inner-city traffic.
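
To make that data flow concrete, here is a minimal Python sketch of the kind of pipeline the article describes: filter out non-relevant detections inside the module, evaluate the fused camera/radar data, and send only a reaction instruction to the vehicle bus. All names (Detection, is_relevant, react) and all thresholds are hypothetical illustrations, not the KameRad project's actual interfaces.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Detection:
        """One fused camera/radar object hypothesis (hypothetical structure)."""
        label: str          # object class from the camera-based neural network
        distance_m: float   # range measured by the radar
        closing_ms: float   # closing speed measured by the radar, in m/s

    def is_relevant(d: Detection) -> bool:
        # Selective filtering: non-relevant information is recognized
        # inside the module but never forwarded to the vehicle.
        return d.label in {"pedestrian", "cyclist", "vehicle"} and d.distance_m < 30.0

    def react(detections: list[Detection]) -> Optional[str]:
        # Only a reaction instruction leaves the module; raw status data
        # stays inside, keeping the vehicle bus free for critical signals.
        for d in filter(is_relevant, detections):
            ttc_s = d.distance_m / max(d.closing_ms, 0.1)  # crude time-to-collision
            if ttc_s < 1.0:
                return "EMERGENCY_BRAKE"
        return None

    # A child stepping onto the road 8 m ahead while closing at ~13.9 m/s (50 km/h):
    print(react([Detection("pedestrian", 8.0, 13.9)]))  # -> EMERGENCY_BRAKE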

Source: University of Michigan study, August 2017: "Sensor Fusion: A Comparison of Sensing Capabilities of Human Drivers and Highly Automated Vehicles"
