Holger Krumm, Product Manager MotionDesk / Release Management, dSPACE GmbH
Onboard sensors are critical for autonomous vehicles to navigate the open road, but the real environment is full of obstacles. Everyday interferences such as reflective surfaces, whiteout conditions, fog, rain, traffic congestion, and objects (e.g. pedestrians, parked vehicles, buildings, signs) clutter the driving environment and can result in sensor misreads or false targets.
Lidar systems emit laser pulses to measure the light reflected from an object
To avoid miscalculations, in-depth testing is critical. But with the sheer volume of testing required to evaluate every possible driving scenario, conducting real test drives on the road just isn’t feasible. The solution lies in the creation of virtual driving scenarios and sensor-realistic simulation, which can be done in the safety of the lab.
This means virtually reproducing road traffic in the laboratory as it is perceived and recorded by the sensors (e.g. camera, radar, lidar, ultrasonic, map, V2X). Validating the sensor functions requires many millions of test kilometers to cover the wide range of traffic situations, which cannot be driven on the road, so this validation must take place in the laboratory.
Using realistic, off-the-shelf simulation models (such as dSPACE Automotive Simulation Models – ASM) and a virtual testing platform, engineers can validate autonomous driving functions by virtually reproducing entire test scenarios, including the environment sensors (camera, radar, lidar, etc.), the vehicle under test, traffic, roads, driving maneuvers, and the surrounding environment.
Sensor-realistic simulation is the most efficient method for verifying and validating the environment sensors that are onboard an autonomous vehicle. The basic premise behind sensor-realistic simulation is that the real sensors are replaced with sensor models, which send out the same signals as the real sensors.
The sensor models use a geometrical approach to calculate the distance, velocity, acceleration, and horizontal and vertical angles to the nearest point of every detected object. The software models generate the raw data of the sensors (camera, radar, lidar, etc.) for the simulated environment (e.g. traffic objects, weather, lighting conditions) in the same way that the real vehicle would perceive it.
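To illustrate the geometrical approach, the following minimal Python sketch computes range, radial velocity, azimuth and elevation for a single point target given in ego-vehicle coordinates. The `ObjectState` class and `geometric_detection` function are illustrative names for this sketch, not part of any dSPACE API.

```python
import math
from dataclasses import dataclass

@dataclass
class ObjectState:
    """Position (m) and velocity (m/s) of a detected object in ego-vehicle coordinates."""
    x: float; y: float; z: float
    vx: float; vy: float; vz: float

def geometric_detection(obj: ObjectState) -> dict:
    """Return range, radial velocity, azimuth and elevation to the nearest
    point of a detected object (simplified point-target assumption)."""
    rng = math.sqrt(obj.x**2 + obj.y**2 + obj.z**2)
    # Radial (closing) velocity: projection of the relative velocity onto the line of sight
    v_radial = (obj.x*obj.vx + obj.y*obj.vy + obj.z*obj.vz) / rng
    azimuth = math.atan2(obj.y, obj.x)    # horizontal angle
    elevation = math.asin(obj.z / rng)    # vertical angle
    return {"range_m": rng, "radial_velocity_mps": v_radial,
            "azimuth_rad": azimuth, "elevation_rad": elevation}

# Example: a vehicle 40 m ahead, 2 m to the left, closing at 5 m/s
print(geometric_detection(ObjectState(40.0, 2.0, 0.0, -5.0, 0.0, 0.0)))
```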
To establish whether the desired outcome was achieved, the entire workflow has to be validated. Much of this process can be completed in a virtual environment with sensor-realistic simulation.
Graphical representation of the processing stages of a sensor signal
Referencing the graphic above, the sensor simulation process involves the following stages (a simplified code sketch of the pipeline follows the list):
Sensing – The sensors (e.g. camera, radar, lidar) are stimulated by sending a signal representative of one or more objects. The virtual targets are detected by the sensors, just as real objects would be detected, and the sensors begin to capture vital real-time information such as distance, angular position, range and velocity.
Perception – Through imaging or signal processing, the presence of the object(s) is recognized by the sensors.
Data fusion – The validation process begins as the raw data collected from the various sensors is fed to the central processing unit of the electronic control unit (ECU). Here, the information is combined and processed (this is also known as sensor fusion) in real time to create a target list (or point cloud) of objects, both static and moving.
Application – The object list is run through a perception algorithm where object classification, situation analysis, trajectory planning and decision-making activities take place. Based on the outcome, the ECU determines what autonomous vehicle behavior should be executed.
Actuation – The ECU sends an output signal to the appropriate actuator to carry out the desired action.
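The Python sketch below strings these five stages together in a deliberately simplified form. All function names, thresholds and message fields are illustrative assumptions for this sketch, not the actual ECU software or the dSPACE tool chain.

```python
from typing import Dict, List

def sensing(scene: List[Dict]) -> List[Dict]:
    """Sensing: each virtual target stimulates a sensor and yields raw measurements."""
    return [{"range": o["range"], "azimuth": o["azimuth"],
             "velocity": o["velocity"], "sensor": o["sensor"]} for o in scene]

def perception(raw: List[Dict]) -> List[Dict]:
    """Perception: recognize which measurements correspond to objects."""
    return [m for m in raw if m["range"] < 150.0]  # crude detectability gate

def data_fusion(detections: List[Dict]) -> List[Dict]:
    """Data fusion: combine detections from different sensors into one target list."""
    fused: Dict = {}
    for d in detections:
        key = (round(d["range"]), round(d["azimuth"], 2))  # naive spatial association
        fused.setdefault(key, []).append(d)
    return [{"range": k[0], "azimuth": k[1], "sources": [d["sensor"] for d in group]}
            for k, group in fused.items()]

def application(targets: List[Dict]) -> str:
    """Application: situation analysis and decision-making on the target list."""
    return "BRAKE" if any(t["range"] < 10.0 for t in targets) else "KEEP_SPEED"

def actuation(decision: str) -> None:
    """Actuation: forward the decision to the appropriate actuator."""
    print(f"Actuator command: {decision}")

# The same obstacle seen by two different sensors, roughly 8 m ahead of the vehicle
scene = [{"range": 8.0, "azimuth": 0.01, "velocity": -3.0, "sensor": "radar"},
         {"range": 8.2, "azimuth": 0.01, "velocity": -3.1, "sensor": "camera"}]
actuation(application(data_fusion(perception(sensing(scene)))))
```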
For validation purposes, the sensor data that is gathered during the testing process needs to be recorded and stored in a time-correlated manner (i.e. time stamped, tagged, synchronized) so that it can be played back in the laboratory at a later time.
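As a minimal sketch of such time-correlated recording, the following snippet time-stamps and tags each frame with its source sensor so the data can later be replayed in order. The `SensorRecorder` class is a hypothetical illustration, not a dSPACE tool.

```python
import json
import time

class SensorRecorder:
    """Hypothetical recorder: every frame is time-stamped and tagged with its
    source sensor so it can be synchronized and replayed in the laboratory."""

    def __init__(self):
        self.frames = []

    def record(self, sensor_id: str, data: dict, timestamp: float | None = None) -> None:
        # Use a monotonic clock when no externally synchronized timestamp is supplied
        self.frames.append({
            "timestamp": timestamp if timestamp is not None else time.monotonic(),
            "sensor": sensor_id,
            "data": data,
        })

    def replay(self):
        """Yield the recorded frames in time order, as a playback tool would consume them."""
        yield from sorted(self.frames, key=lambda f: f["timestamp"])

rec = SensorRecorder()
rec.record("radar_front", {"range_m": 42.0}, timestamp=0.000)
rec.record("camera_front", {"objects": 3}, timestamp=0.016)
for frame in rec.replay():
    print(json.dumps(frame))
```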
Sensor-realistic simulation is an effective way to support the development and validation of sensor systems with deterministic and reproducible test execution.
Various sensor models, ranging from ideal ground truth to real, are used to validate autonomous sensor systems
To address the high-complexity needs associated with autonomous sensor systems (e.g. decision algorithms, motion control algorithms), more detailed and realistic models are required. The more realistic the sensor model is, the better the results that can be achieved.
Depending on the level of complexity, sensor models can be grouped into three general types:
Ideal ground truth/probabilistic sensor models are technology-independent models. They are primarily used for object-list-based injection (e.g. 3-D and 2-D sensors used to detect traffic lights, traffic signs, road objects, lanes, barriers, pedestrians, etc.). These kinds of models are used to check whether an object is detectable within a set range.
In a sensor simulation experiment, these kinds of sensor models provide ideal data (ground-truth information), which can optionally be superimposed with probabilities of events (probabilistic effects). For example, superimposition is used to simulate the typical measurement noise of a radar. The simulation returns a list of classified objects (vehicles, pedestrians, cyclists, traffic signs, etc.) as well as their coordinates and motion data (distance, relative speed, relative acceleration, relative azimuth and elevation angle).
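The sketch below illustrates this superimposition in Python: Gaussian measurement noise and a detection probability are applied to an ideal ground-truth object list. The function name and noise parameters are illustrative assumptions, not values taken from the ASM models.

```python
import random

def probabilistic_radar_model(ground_truth: list, range_sigma_m: float = 0.15,
                              speed_sigma_mps: float = 0.1,
                              detection_probability: float = 0.98) -> list:
    """Superimpose simple probabilistic effects on an ideal ground-truth object
    list: Gaussian measurement noise plus the occasional missed detection."""
    measured = []
    for obj in ground_truth:
        if random.random() > detection_probability:
            continue  # simulate a missed detection
        measured.append({
            "id": obj["id"],
            "class": obj["class"],
            "range_m": obj["range_m"] + random.gauss(0.0, range_sigma_m),
            "rel_speed_mps": obj["rel_speed_mps"] + random.gauss(0.0, speed_sigma_mps),
        })
    return measured

truth = [{"id": 1, "class": "vehicle", "range_m": 35.0, "rel_speed_mps": -4.2},
         {"id": 2, "class": "pedestrian", "range_m": 12.5, "rel_speed_mps": 0.8}]
print(probabilistic_radar_model(truth))
```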
Ideal ground truth/probabilistic sensor models are typically validated in software-in-the-loop (SIL) simulations, which run faster than real time, and in hardware-in-the-loop (HIL) simulations, which are carried out in real time. They can also be deployed on cluster systems to complete high volumes of tests.
Within the dSPACE tool chain, these sensors are part of the Automotive Simulation Models (ASM) tool suite (e.g. ASM Ground Truth sensor models). They are calculated on a CPU, together with the vehicle, traffic and other relevant environment models. These models are easy to configure, and simulation is always performed synchronously.
Phenomenological/physical sensor models are physics-based models. These models are based on the measurement principles of the sensor (e.g. camera imaging, radar wave propagation) and are used to simulate phenomena such as haze, glare effects or precipitation. They can generate raw data streams, 3-D point clouds, or target lists.
Because these models address physical effects, their complexity level is much higher. Calculation typically takes place on a graphics processing unit (GPU). These models are typically validated in a SIL or HIL test setup.
Within the dSPACE tool chain, phenomenological / physical sensor models are visualized in MotionDesk and calculated on a dSPACE Sensor Simulation PC, which features a high-performance GPU card and can facilitate deterministic, real-time sensor simulations with a high degree of realism.
Real/over-the-air sensor models are also physics-based models. They are used in tests with real physical signals and real sensor ECUs to analyze real-world sensor behavior.
Validation can be performed by stimulating the whole sensor over the air on a radar test system (e.g. dSPACE Automotive Radar Test Systems – DARTS). This is ideal for object detection scenarios. Alternatively, validation can be performed on a complete radar test bench when there is a need to integrate other vehicle components (e.g. front bumper, chassis).
To be able to carry out sensor-realistic simulation, a bus system like CAN, CAN FD, FlexRay, LIN or Ethernet has to be in place to enable signal exchanges and vehicle network communication. Bus simulation tests, ranging from simple communication tests and rest bus simulation to complex integration tests, have to be carried out to ensure proper functionality of the communication channels.
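As a minimal illustration of such signal exchange, the following sketch sends and receives a single CAN frame on a virtual bus, assuming the open-source python-can package (not part of the dSPACE tool chain). The message ID and payload bytes are arbitrary examples.

```python
import can  # pip install python-can

# Two endpoints on the same in-process virtual channel stand in for the real bus
bus_tx = can.Bus(interface="virtual", channel="vcan_sim")
bus_rx = can.Bus(interface="virtual", channel="vcan_sim")

# Arbitrary frame: a fictitious object-list message with ID 0x1A0
msg = can.Message(arbitration_id=0x1A0,
                  data=[0x00, 0x2A, 0x10, 0x05, 0x00, 0x00, 0x00, 0x00],
                  is_extended_id=False)
bus_tx.send(msg)

received = bus_rx.recv(timeout=1.0)  # the receiving endpoint sees the same frame
print(received)

bus_tx.shutdown()
bus_rx.shutdown()
```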
Additionally, the sensor model has to be connected to the device under test using an interface, so it can receive data injection for simulation testing. A high-performance FPGA can be used to feed raw sensor data, target lists and/or object lists into the sensor ECU in a synchronized manner. The dSPACE Environment Sensor Interface (ESI) Unit was designed to do just that. It receives raw sensor data, separates it according to the individual sensors, and then inserts the time-correlated data into a digital interface behind the respective sensor front end.
Other interfaces that support autonomous driving development include FMI, XIL-API, OpenDrive, OpenCRG, OpenScenario and Open Sensor Interface. These interfaces give engineers the option of integrating valuable data from accident databases or traffic simulation tools for co-simulation activities.
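As a small illustration of how such open formats can be consumed, the sketch below lists the roads contained in an OpenDrive (.xodr) file using Python's standard XML parser. The file name is hypothetical; any OpenDrive road description could be used.

```python
import xml.etree.ElementTree as ET

def list_roads(xodr_path: str) -> None:
    """Print the id, name and length of every <road> element in an OpenDrive file.
    Assumes the standard layout: an <OpenDRIVE> root with <road id=".." name=".." length=".."> children."""
    tree = ET.parse(xodr_path)
    for road in tree.getroot().findall("road"):
        print(road.get("id"), road.get("name"), float(road.get("length", "0.0")), "m")

# Hypothetical file name for this sketch
list_roads("test_track.xodr")
```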
3-D photo-realistic quality
Traditionally, a camera box has been a widely used approach for testing camera-based systems, but this method has limitations in terms of hardware setup and the provision of stimulus to the sensor.
A better approach is to use over-the-air stimulation to feed raw image data directly into the camera’s image processing unit. As the camera sensor captures the image data stream, the animated scenery is displayed on a monitor and engineers can check the detected ranges and sensor outputs to the nearest point of an object (e.g. distance, relative velocity, vertical and horizontal angles), as well as sensor timings (e.g. cycles, initial offset, output delay time).
To validate camera-based sensors, different lens types and distortion effects, such as fish-eye, vignetting and chromatic aberration, have to be taken into account. Additionally, the use of multiple image sensors, as well as sensor characteristics (e.g. monochromatic representation, Bayer pattern, HDR, pixel errors, image noise), needs to be factored into test scenarios.
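The following Python/NumPy sketch shows two of these effects in isolation: a simple radial vignetting falloff and an RGGB Bayer subsampling applied to a rendered frame. The falloff model and function names are illustrative simplifications, not the rendering pipeline of any particular tool.

```python
import numpy as np

def apply_vignetting(rgb: np.ndarray, strength: float = 0.5) -> np.ndarray:
    """Darken the image toward the corners with a simple radial falloff."""
    h, w, _ = rgb.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    falloff = 1.0 - strength * (r / r.max()) ** 2
    return (rgb * falloff[..., None]).astype(rgb.dtype)

def to_bayer_rggb(rgb: np.ndarray) -> np.ndarray:
    """Subsample an RGB image onto an RGGB Bayer pattern (one color channel per pixel)."""
    h, w, _ = rgb.shape
    bayer = np.zeros((h, w), dtype=rgb.dtype)
    bayer[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red pixels
    bayer[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green pixels (even rows)
    bayer[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green pixels (odd rows)
    bayer[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue pixels
    return bayer

frame = np.full((480, 640, 3), 200, dtype=np.uint8)  # stand-in for a rendered frame
print(to_bayer_rggb(apply_vignetting(frame)).shape)  # -> (480, 640)
```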
The next generation of sensor simulation products coming to the marketplace is capable of producing highly realistic visualization using technologies such as 3-D remodeling, physics-based rendering, ray tracing and dynamic lighting. The detail of different terrains, environmental lighting (e.g. haze, shadows), lens flare, glare effects, dynamic materials (e.g. rain, snow, fog), and much more can be achieved with 3-D photo-realistic quality, further advancing sensor-realistic simulation.
To address the needs of physics-based sensors such as lidar and radar, measurement principles come into play. Ray tracing technology is generally used to detect an object by tracing the path of reflections for a lidar signal or an electromagnetic wave for a radar signal.
This involves sending beams into a 3-D scene and capturing their reflections, which allows for the integration of physical effects such as multipath propagation into the modeling. The result is a physically correct simulation of the propagation of radar waves or a near infrared laser beam, which is essential for the stimulation and emulation of sensors.
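A minimal Python sketch of this idea follows: horizontal beams are cast into a scene of spherical targets and the first reflection of each beam is collected as a point of the resulting point cloud. The geometry and scene representation are deliberately simplified illustrations, not the GPU-based ray tracing used in production tools.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along a unit-length ray to the first intersection with a sphere,
    or None if the ray misses (standard quadratic ray-sphere test)."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def lidar_scan(targets, num_beams=181, max_range=100.0):
    """Cast horizontal beams over a 180-degree front-facing field of view and
    collect the first reflection of each beam as a point of the point cloud."""
    cloud = []
    for i in range(num_beams):
        angle = -math.pi / 2 + math.pi * i / (num_beams - 1)
        d = (math.cos(angle), math.sin(angle), 0.0)  # unit direction in the x-y plane
        hits = []
        for center, radius in targets:
            t = ray_sphere_hit((0.0, 0.0, 0.0), d, center, radius)
            if t is not None and t <= max_range:
                hits.append(t)
        if hits:
            t = min(hits)  # nearest reflection wins
            cloud.append((t * d[0], t * d[1], t * d[2]))
    return cloud

# One spherical target, 20 m straight ahead, 1 m radius
points = lidar_scan([((20.0, 0.0, 0.0), 1.0)])
print(len(points), points[:2])
```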
As values (e.g. reflection points, angles, distance, range, Doppler speed, diffuse scattering, multipath propagation) are collected, they are processed to calculate the vehicle’s distance from an object and to describe the surrounding environment (i.e. in the form of a point cloud). Based on the data that is gathered, a target list is generated that includes information on distance and the intensity of the reflected light (for lidar sensors) or the frequency of an echo signal (for radar sensors). This enables sensor-realistic simulation so that engineers can validate sensor models by reproducing the behavior of the sensor path.
Sensor-realistic simulation can be achieved with a tool chain that supports high-precision sensor simulation throughout all steps of the development process. Key components of such a tool chain include realistic environment and vehicle simulation models, sensor models at the appropriate level of fidelity, SIL and HIL platforms for deterministic real-time execution, and standardized interfaces to connect the device under test.