Published: November 02, 2020
Nicolas du Lac
CEO
Intempora – a dSPACE company
The technical challenges associated with advanced driver assistance systems (ADAS) and autonomous driving functions are steep. Engineering teams have to tackle an enormous number of development tasks with an extremely high level of confidence.
For an autonomous system to be fully functional, multiple features have to be integrated, such as automatic emergency braking, blind spot detection, and lane keep assist. Integrating this highly complex cross-functionality requires numerous software tasks to run in parallel within the vehicle, along with communication channels across multiple systems and subsystems.
To perceive its environment, different types of sensors (e.g., camera, radar, lidar) are positioned all around the autonomous vehicle, and these sensors have to be interconnected with electronic control units (ECUs). A powerful electronic and computing infrastructure is required to collect and process all of the data generated by these sensors. On top of this, highly sophisticated software has to run in parallel to ensure that tasks are carried out correctly and in a coordinated manner.
As applications become more complex, so do sensor setups, making it necessary to test many different configurations. Additionally, multiple algorithms have to be integrated, and the entire vehicle infrastructure has to demonstrate proper functionality across a wide spectrum of diverse scenarios.
This entire networked infrastructure not only has to be developed; it also has to be thoroughly tested to validate that ADAS and AD functions perform optimally and safely.
One of the key components that make ADAS and AD functions possible is artificial intelligence (AI) algorithms. AI algorithms are used to carry out multiple tasks associated with perception, accurate positioning, path planning and decision-making, and driver monitoring, and they have to be trained before they can be deployed in the field.
AI algorithms are usually trained using “supervised learning” methods. The algorithm model is fed with input data (e.g., raw sensor data from cameras, lidars, and radars for perception and positioning models, or a reconstructed environment description for path planning and decision-making models), along with reference data (also known as ground truth). The reference information teaches the algorithm to recognize and classify the input data (e.g., this is an approaching vehicle, this is a pedestrian, this is a lane marking, this is a traffic sign).
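As an illustration, the following minimal sketch shows the supervised-learning pattern described above: a toy classifier is trained on pairs of input data and ground-truth labels until its predictions match the reference annotations. The data, model size, and class names here are stand-ins chosen for this example; production perception models are vastly larger and consume real sensor recordings.

```python
# Minimal supervised-learning sketch (illustrative only): a toy classifier
# learns to map input features to ground-truth class labels.
import torch
import torch.nn as nn

# Hypothetical object classes for a perception model.
CLASSES = ["vehicle", "pedestrian", "lane_marking", "traffic_sign"]

# Stand-ins for preprocessed sensor data and human-annotated ground truth.
inputs = torch.randn(256, 64)                    # 256 samples, 64 features each
labels = torch.randint(0, len(CLASSES), (256,))  # reference annotations

model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, len(CLASSES))
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)  # penalize disagreement with ground truth
    loss.backward()                        # learn from the reference data
    optimizer.step()
```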
AI algorithms have to be trained to cope with an unlimited number of diverse traffic scenarios – from simple to complex, hazardous, and even unpredictable situations. Training and validating safe ADAS/AD functions requires billions of testing miles, but it is simply impossible to predict every conceivable traffic scenario that could be encountered in the real world.
With multiple sensors on board, autonomous vehicles generate huge amounts of raw data – up to 50 gigabits per second – with totals of tens or hundreds of petabytes (PB) per data acquisition campaign! This data has to be collected, stored, and managed for post-processing. A big data infrastructure with appropriate network and bandwidth capability, such as a public or private cloud, is well suited for this purpose.
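A quick back-of-the-envelope calculation shows how these figures add up. Only the 50 Gbit/s peak rate comes from the text above; the daily drive time and fleet size below are assumptions made purely for illustration.

```python
# How quickly raw sensor data accumulates at the peak rate quoted above.
# Assumes the peak rate is sustained for the whole drive; real averages
# are lower, but the order of magnitude holds.
GBIT_PER_S = 50                       # peak raw sensor output (from the text)
BYTES_PER_S = GBIT_PER_S * 1e9 / 8    # = 6.25 GB/s

HOURS_PER_DAY = 8                     # assumed daily drive time
tb_per_day = BYTES_PER_S * HOURS_PER_DAY * 3600 / 1e12
print(f"~{tb_per_day:.0f} TB per vehicle per day")    # ~180 TB

FLEET_SIZE = 100                      # assumed number of test vehicles
pb_per_day = tb_per_day * FLEET_SIZE / 1000
print(f"~{pb_per_day:.0f} PB per day for the fleet")  # ~18 PB
# A few weeks of such a campaign lands in the hundreds-of-PB range.
```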
With so much raw sensor data being collected, one may wonder whether all of this data is needed to train an AI algorithm. In short … yes, it is. The multi-sensor playback of mass quantities of data is necessary to carry out AI algorithm training, and the more data that is provided, the better the algorithm will perform. To train an algorithm for autonomous driving, Nvidia estimates that between 200 PB (a conservative estimate) and 600 PB (less conservative) of total raw data are necessary (source: https://devblogs.nvidia.com/training-self-driving-vehicles-challenge-scale/).
Data fuels autonomous driving development. To train and validate perception and deep-learning (neural network) algorithms, engineers need three essentials: large volumes of raw sensor data, matching ground-truth references, and an infrastructure capable of storing and managing both.
Additionally, high quantities of raw sensor data are necessary to validate the algorithms under test. To validate an algorithm, it is fed raw sensor data in playback or simulation mode. In parallel, the ground-truth labels can be played back in a synchronized manner. The algorithm itself never uses this reference data; instead, its results are compared against it to check whether the algorithm performs well in all situations.
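To make the replay-and-compare idea concrete, here is a hypothetical sketch of the pattern: recorded frames are streamed to the algorithm under test while the synchronized ground truth is used only for scoring. The Frame structure, function names, and naive frame-level accuracy metric are illustrative assumptions, not the API of any real replay tool.

```python
# Hypothetical replay-validation sketch: feed recorded sensor data to the
# algorithm under test and score its output against synchronized labels.
from dataclasses import dataclass

@dataclass
class Frame:
    timestamp_ns: int       # capture time, used to keep playback in sync
    sensor_data: bytes      # raw camera/lidar/radar payload
    ground_truth: set[str]  # annotated object classes present in this frame

def replay_and_score(recording: list[Frame], detector) -> float:
    """Replay frames through the detector; compare with the ground truth."""
    correct = 0
    for frame in recording:
        detections = detector(frame.sensor_data)  # algorithm never sees labels
        if detections == frame.ground_truth:      # naive frame-level match
            correct += 1
    return correct / len(recording)

# Usage with a stub detector standing in for the perception model:
recording = [
    Frame(0, b"...", {"vehicle"}),
    Frame(33_000_000, b"...", {"pedestrian"}),
]
score = replay_and_score(recording, lambda raw: {"vehicle"})
print(f"frame-level accuracy: {score:.0%}")  # 50% for this toy recording
```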
When a system under test is operating at SAE Level 3, 4, or 5 autonomy, where responsibility is transferred from the driver to the system, the system has to perform perfectly.
This graphic depicts a typical workflow for executing an ADAS/AD algorithm test using either real data management or simulated data management.
So what is the best approach for training AI algorithms when you have to deal with so many scenarios and so much data? A dual approach may offer the best of both worlds.
One possibility is to use simulators for software-in-the-loop (SIL) and hardware-in-the-loop (HIL) testing. The AI algorithm under test is fed synthetic data generated by the simulators (e.g., environment simulation, sensor simulation, vehicle dynamics). This allows you to quickly accumulate virtual driving miles and test many situations under precise control. Other advantages of simulators are that they provide a means to test dangerous situations in the safety of the lab, and they save considerable time on lengthy test campaigns.
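The following conceptual sketch illustrates the SIL idea under heavily simplified assumptions: a scripted scenario stands in for the environment and sensor models, and a toy emergency-braking function plays the role of the algorithm under test. All names and thresholds here are hypothetical; real SIL setups use full environment, sensor, and vehicle-dynamics models.

```python
# Conceptual software-in-the-loop (SIL) sketch: the function under test
# consumes synthetic sensor readings exactly as it would real data.
def simulate_lead_vehicle(step: int) -> float:
    """Synthetic radar reading: distance (m) to a braking lead vehicle."""
    return max(50.0 - 2.0 * step, 0.0)  # gap closes by 2 m per step

def aeb_under_test(distance_m: float) -> bool:
    """Toy automatic-emergency-braking logic: brake under 15 m."""
    return distance_m < 15.0

# Run a dangerous scenario safely in the lab, faster than real time,
# and assert the expected behavior.
for step in range(30):
    distance = simulate_lead_vehicle(step)
    if aeb_under_test(distance):
        print(f"brake commanded at step {step}, distance {distance:.1f} m")
        break
else:
    raise AssertionError("AEB never triggered in this scenario")
```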
However, while simulators offer many benefits, they are not sufficient on their own to train algorithms and ultimately develop and test ADAS/AD functions. Simulated sensor data is not always perfectly realistic, and reality always produces situations beyond what can be anticipated and modeled in a simulator. Therefore, the virtual world needs to be bridged with the real world by supplementing simulated scenarios with real test drives. Only real test drives produce real scenarios with real sensor data, and this data needs to be brought into the simulation process to ensure a proper level of realism.
This graphic shows the same workflow with an overlay of solutions from dSPACE, Intempora (a dSPACE Company), and UAI (a dSPACE Company) for each stage of the ADAS/AD development and testing process.
When dealing with multiple data sources, it is easier to have the same tooling and workflows. Whether you are training or testing AI algorithms in simulation mode, in data replay mode, or in real time during test drives on the road, one cohesive tool chain makes the testing process easier. The tool chain should cover every stage of the workflow, from data collection and replay to simulation and real-time testing, through a consistent set of interfaces.
dSPACE offers a seamless workflow and an end-to-end tool chain to meet all of your ADAS/AD development and testing needs. This solution is intuitive and unique, yet modular by design and open to third-party tools (e.g., data formats, middleware, and post-processing tools).
NOTE: Intempora has been providing advanced software solutions for real-time applications for the past 20 years. In July 2020, after a long partnership, Intempora officially joined dSPACE.