AI sensor technology brings new optimization for autonomous driving

For driving assistance and safety systems in modern cars to perceive their environment and function reliably in all conceivable situations, they rely on sensors such as cameras, lidar, ultrasound and radar. Radar in particular is indispensable: radar sensors provide the vehicle with the position and speed of surrounding objects. In traffic, however, they have to cope with numerous disruptive environmental influences. Interference from other (radar) equipment and extreme weather conditions create noise that degrades the quality of the radar measurement.

TU Graz is working together with Infineon on new, robust radar sensors for autonomous driving. © Infineon

“The better the denoising of interfering signals works, the more reliably the position and speed of objects can be determined,” explains Franz Pernkopf from the Institute of Signal Processing and Speech Communication. Together with his team and partners at Infineon, he developed an AI system based on neural networks that mitigates mutual interference in radar signals and far surpasses the current state of the art. The researchers now want to optimize this model so that it also works outside of learned patterns and recognizes objects even more reliably.

Resource-efficient and intelligent signal processing

To this end, the researchers first developed model architectures for automatic noise suppression based on so-called convolutional neural networks (CNNs). “These architectures are modelled on the layer hierarchy of our visual cortex and are already being used successfully in image and signal processing,” says Pernkopf. CNNs filter the visual information, recognize connections and complete the image using familiar patterns. Due to their structure, they consume considerably less memory than other neural networks, but their requirements still exceed the capacities available on radar sensors for autonomous driving.
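
As an illustration only, and not the team's actual architecture, a denoiser of this kind can be sketched in a few lines of PyTorch. The layer widths, the two-channel real/imaginary input representation and the 128 x 128 map size below are assumptions made purely for the example.

    import torch
    import torch.nn as nn

    class RadarDenoiser(nn.Module):
        # Minimal CNN denoiser sketch: maps a noisy range-Doppler map
        # (real and imaginary parts as two channels) to a cleaned one.
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(2, 16, kernel_size=3, padding=1),   # widths are assumed
                nn.ReLU(),
                nn.Conv2d(16, 16, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(16, 2, kernel_size=3, padding=1),   # back to 2 channels
            )

        def forward(self, x):
            return self.net(x)

    model = RadarDenoiser()
    noisy = torch.randn(1, 2, 128, 128)   # one illustrative noisy map
    clean = model(noisy)                  # denoised output, same shape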

Compressed AI in chip format

The goal was to become even more efficient. To this end, the TU Graz team trained several of these neural networks with noisy data and the desired output values. In experiments, they identified particularly small and fast model architectures by analysing the memory space and the number of computing operations required per denoising pass. The most efficient models were then compressed further by reducing the bit widths, i.e. the number of bits used to store the model parameters. The result was an AI model that combines high filter performance with low energy consumption. The excellent denoising results, with an F1 score (a measure of the accuracy of a test) of 89 per cent, almost match the object detection rate achieved on undisturbed radar signals. The interfering signals are thus almost completely removed from the measurement signal.
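
For readers unfamiliar with the metric: the F1 score is the harmonic mean of precision and recall. A small Python calculation shows how a value of 89 per cent comes about; the detection counts used here are invented for illustration and are not from the study.

    # F1 score: harmonic mean of precision and recall.
    # The detection counts below are invented purely for illustration.
    tp, fp, fn = 890, 110, 110  # true positives, false positives, false negatives
    precision = tp / (tp + fp)  # 0.89
    recall = tp / (tp + fn)     # 0.89
    f1 = 2 * precision * recall / (precision + recall)
    print(f"F1 = {f1:.2f}")     # F1 = 0.89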

Expressed in figures: with a bit width of 8 bits, the model achieves the same performance as comparable models with a bit width of 32 bits, but requires only 218 kilobytes of memory. This corresponds to a storage space reduction of 75 per cent, putting the model far ahead of the current state of the art.
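
The arithmetic behind this is simple: the same number of parameters stored at 8 instead of 32 bits needs a quarter of the space. The parameter count in the sketch below is back-calculated from the reported 218 kilobytes and is an assumption.

    # Memory footprint at different bit widths (parameter count assumed).
    params = 218 * 1024                 # one byte per parameter at 8 bits
    kb_8bit = params * 8 / 8 / 1024     # 218.0 kilobytes
    kb_32bit = params * 32 / 8 / 1024   # 872.0 kilobytes
    reduction = 1 - kb_8bit / kb_32bit  # 0.75, i.e. 75 per cent
    print(kb_8bit, kb_32bit, reduction)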

Focus on robustness and explainability

In the FFG project REPAIR (Robust and ExPlainable AI for Radar sensors), Pernkopf and his team are now working together with Infineon over the next three years to optimize their development. Pernkopf explains: “For our successful tests, we used data (note: interfering signals) similar to what we used for the training. We now want to improve the model so that it still works when the input signal deviates significantly from learned patterns.” This would make radar sensors many times more robust against interference from the environment. After all, in reality the sensor is also confronted with different, sometimes unknown situations. “Until now, even the smallest changes to the measurement data were enough for the output to collapse and objects not to be detected or to be detected incorrectly, something which would be devastating in the autonomous driving use case.”

Shining a light into the black box

The system has to cope with such challenges and notice when its own predictions are uncertain. It could then, for example, fall back on a safe emergency routine. To this end, the researchers want to find out how the system arrives at its predictions and which influencing factors are decisive. This complex process inside the network has so far been comprehensible only to a limited extent. For this purpose, the complicated model architecture is approximated by a simplified linear model. In Pernkopf’s words: “We want to make CNNs’ behaviour a bit more explainable. We are not only interested in the output result, but also in its range of variation. The smaller the variance, the more certain the network is.”
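
One common way to expose such predictive uncertainty, shown here purely as a sketch and not necessarily the method used in REPAIR, is to run several stochastic forward passes, for example with Monte Carlo dropout, and read the spread of the outputs as a confidence signal.

    import torch
    import torch.nn as nn

    # Hypothetical uncertainty probe via Monte Carlo dropout: keep dropout
    # active at inference time and treat output variance as uncertainty.
    model = nn.Sequential(
        nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.Dropout2d(0.2),
        nn.Conv2d(16, 2, 3, padding=1),
    )
    model.train()  # keeps dropout stochastic during the forward passes

    x = torch.randn(1, 2, 128, 128)  # illustrative input map
    samples = torch.stack([model(x) for _ in range(20)])
    mean, variance = samples.mean(dim=0), samples.var(dim=0)
    # Large variance -> the network is uncertain; the system could then
    # fall back on a safe emergency routine.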

Either way, there is still a lot to be done for real-world use. Pernkopf expects the technology to be developed to the point where the first radar sensors can be equipped with it in the next few years.

This research is anchored in the Field of Expertise “Information, Communication and Computing”, one of five strategic focus areas of TU Graz.

Source: GRAZ UNIVERSITY OF TECHNOLOGY