Published: December 13, 2018
Mahendra Muli, Director of Marketing & New Business Development, dSPACE Inc.
Are you in the race towards engineering the automotive future by cloning the driver function? Now, you can accelerate your autonomous driving development and testing efforts in a well-thought-out, high-performance hardware and software environment. Find out how you can save time and reach the million-mile test drive mark without delays.
3,287*. This is the average number of road deaths caused by distracted drivers around the world every day. Autonomous driving has the potential to bring this number down to single digits, with the long-term objective of zero deaths. Minimizing risk is the ultimate goal behind the mobility industry’s push to clone the driver.
As the industry steadily makes headway toward full automation, three pivotal areas raise multiple challenges for engineers.
By applying certain strategies, tools, and methodologies, many of these challenges can be resolved. Read on to learn which steps you can take to overcome them.
When it comes to autonomous vehicles, several core development areas have to be extremely well thought out and thoroughly tested.
Risks can be minimized by applying the principles of over-engineering, system redundancy, and fail-safe operations. Additionally, other development guidelines and tools can be applied to support core development activities.
An important starting point is the ISO 26262 V-cycle. Autonomous vehicles will include a large number of safety-critical subsystems powered by embedded software. To enable an efficient and effective development process, one of the key recommendations of ISO 26262 is model-based development (MBD), which is considered the proven process for developing and testing embedded software.
The MBD process is also the established methodology for developing autonomous driving perception and sensor fusion algorithms. It enables tracing requirements throughout the entire development process, which is vital but does not provide a complete picture. Scenario-based testing is the other critical process that has to be taken into account to address the unknowns of autonomous driving.
When developing and testing autonomous driving features, numerous different artifacts and large data volumes are collected. Environment models, plant models, controller models, test procedures, and virtual electronic control units (V-ECUs), for example, now have to be factored into the V-cycle development process. Additionally, there are other key elements to consider, including perception and sensor fusion algorithms, trajectory planning, and artificial intelligence for decision-making.
With an end-to-end, continuous tool chain, as provided by dSPACE, the artifacts and data can be used in the initial development stage as well as other stages, such as hardware-in-the-loop (HIL) testing.
This figure illustrates important development steps for developing and testing software that is relevant for autonomous driving and lists artifacts and data typically produced during the development process.
To enable an autonomous vehicle to detect its surrounding environment and know its position at any given time, it must be equipped with various onboard sensors (e.g., cameras, lidar, radar, GPS/IMUs, ultrasound). These sensors continuously collect data to generate a high-precision map of the vehicle and its surrounding environment.
Time synchronization and correlation of sensor data are essential to know the exact location and lane position of the vehicle and to reliably detect dynamic objects around the ego vehicle, such as passenger cars, trucks, pedestrians, and cyclists. They are also vital for anticipating the trajectories of moving objects. However, some sensor management configurations have limitations: they are not always capable of collecting and/or processing all of the data received. Furthermore, bandwidth restrictions can interfere with communication and signal processing.
For this to work properly, the data has to be captured, synchronized, processed in real time and subsequently communicated. During the test and development phase, it is essential to be able to record, visualize, and play back time-correlated data.
All of these steps can be accomplished easily by integrating the RTMaps platform into the tool chain. RTMaps, developed by Intempora and distributed by dSPACE, has multi-threading capabilities that allow you to acquire data from any number of sensors simultaneously, then process and merge this data in real time (or in post-processing). Because every sample is time-stamped on acquisition, even asynchronously acquired data can be correlated, so you can play back recordings and work offline under reproducible conditions.
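Conceptually, the approach looks like the following Python sketch: each sensor is acquired in its own thread at its own rate, every sample is time-stamped on arrival, and the merged stream is sorted by time for reproducible playback. (This is a minimal illustration of the idea, not the RTMaps API.)

```python
import threading
import time
from queue import Queue

# Each "sensor" acquires asynchronously at its own rate and time-stamps
# every sample on arrival, so the streams can be correlated later.
samples = Queue()

def acquire(sensor, period_s, count=5):
    for i in range(count):
        time.sleep(period_s)                      # simulate the sensor rate
        samples.put((time.monotonic(), sensor, f"sample-{i}"))

threads = [threading.Thread(target=acquire, args=(name, period))
           for name, period in (("camera", 0.033), ("lidar", 0.100), ("radar", 0.050))]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Drain the queue and sort by timestamp: a reproducible, time-ordered
# stream that can be played back or post-processed offline.
stream = []
while not samples.empty():
    stream.append(samples.get())
for timestamp, sensor, payload in sorted(stream):
    print(f"{timestamp:.3f}s  {sensor:<6}  {payload}")
```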
RTMaps is a very intuitive graphical programming environment. It includes a comprehensive library of interfaces to various types of sensors (cameras, lidar, radar, etc.) and provides built-in functions for image processing. The environment is flexible and extensible, and functionality developed in RTMaps can be executed both on an engineer’s desktop unit and on the in-vehicle ECUs.
The autonomous vehicle relies on information from multiple sources (e.g., cameras, lidar, radar) to sense its surrounding environment. To function autonomously, the vehicle must have a complete 360° view at all times, both day and night and in all weather conditions.
Data from the various sensors has to be merged to enable a complete understanding of the vehicle environment at any particular time. Due to various physical, environmental, or system errors, the sensor information can become distorted, resulting in inaccurate information.
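At its core, fusion means weighting each source by how much it can be trusted. The following minimal Python sketch shows the principle using inverse-variance weighting of two range readings; the sensor values and variances are illustrative assumptions, not dSPACE data.

```python
# Two sensors report the range to the same object with different noise
# levels (the values and variances below are illustrative assumptions).
radar_range, radar_var = 25.3, 0.04    # radar: accurate range measurement
camera_range, camera_var = 24.1, 0.81  # camera: noisier range estimate

# Inverse-variance weighting: trust each source according to its accuracy.
w_radar, w_camera = 1.0 / radar_var, 1.0 / camera_var
fused_range = (w_radar * radar_range + w_camera * camera_range) / (w_radar + w_camera)
fused_var = 1.0 / (w_radar + w_camera)   # fused estimate beats either sensor alone

print(f"fused range: {fused_range:.2f} m (variance {fused_var:.3f})")
```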
Vignetting, hotspots (caused by the sensor position and the sun), blurring, chromatic aberration, and lens distortion are some examples of how images transmitted by sensors can become impaired.
From left to right: radial lens distortion, chromatic aberration, and vignetting.
Similarly, lighting conditions can cause distortions. Night driving, shadow casting, and other light sources (e.g., street lamps and headlights), for example, can cause dazzling effects, glare, lens flares, road reflection, etc.
Examples of lighting conditions that can distort images.
To test the implications of these image distortions in the real world and to make the software sufficiently robust to handle these situations, simulation-based testing is key. Using software-in-the-loop (SIL) and hardware-in-the-loop (HIL) simulation, you can test your sensor-image-processing ECUs in advance and correct any issues that are found at an early development stage in a realistic, virtual environment.
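To give a flavor of such fault injection, the following Python sketch applies a vignetting mask to a synthetic frame, the kind of impairment a test system would feed to the perception software. It is purely illustrative; production setups inject faults in hardware, as described below.

```python
import numpy as np

# Apply a vignetting mask to a synthetic frame: brightness falls off
# toward the corners, as it would with a real lens/sensor impairment.
h, w = 480, 640
frame = np.full((h, w, 3), 200, dtype=np.uint8)       # stand-in camera frame

yy, xx = np.mgrid[0:h, 0:w]
radius = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
vignette = 1.0 - 0.6 * (radius / radius.max()) ** 2   # darken toward the edges
distorted = (frame * vignette[..., None]).astype(np.uint8)

print("center brightness:", distorted[h // 2, w // 2, 0],
      "corner brightness:", distorted[0, 0, 0])
```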
dSPACE offers a solution to support this kind of simulation testing: video streams are fed to an FPGA system, where errors can be injected on the fly. The video is then converted to the proper protocol and injected into the camera ECU (via a plug-on device (POD), if required).
This test system makes both manual configuration and test automation easy. The test setup includes: 1) dSPACE SCALEXIO to perform HIL simulations, 2) dSPACE MotionDesk to generate and stream raw data from multiple sensors and to visualize video/camera-simulated scenarios in real time, and 3) the dSPACE Environment Sensor Interface (ESI) Unit to feed time-correlated raw sensor data to the sensor ECUs, such as camera, lidar, and radar ECUs.
Connectivity is essential for autonomous vehicles because it enables them to communicate with their surroundings. This includes other vehicles (V2V), pedestrians (V2P) via smartphones, the infrastructure (V2I, i.e., traffic signs, traffic lights, tolls), network communication systems (V2N, e.g., data centers that can relay safety messages such as accident and road construction warnings), and the cloud (V2C) for faster processing power and data storage capabilities.
All of these communication layers have to be tested to ensure that messages are forwarded and received correctly. dSPACE offers a solution for both prototyping and testing V2X (vehicle-to-everything) applications. Please note: This solution addresses dedicated short range communication (DSRC) via WLAN 802.11p. Development and testing options for V2Cloud are planned for 2019.
Example setup for prototyping V2X applications
The dSPACE solution includes a special mapping tool in ControlDesk where engineers can visualize both the ego vehicle and other road users sending and receiving the V2X data. A Cohda wireless modem makes this communication possible. The solution includes a Simulink® blockset for MicroAutoBox and a Cohda Wireless device to simplify the programming effort.
Example HIL test environment for V2X applications
In the HIL environment, test engineers can develop specific test scenarios, including specific V2X messages combined with the exact position of the vehicle under test via GNSS simulation. This environment can be further extended to include additional traffic participants with V2X communication, as well as the simultaneous stimulation of the ego vehicle sensors scanning the environment.
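For illustration, a V2X message injected by such a test scenario might carry content along the lines of this Python sketch. The field names are hypothetical assumptions; real V2X stacks use ASN.1-encoded ETSI/SAE message formats.

```python
import json
import time

# A simplified cooperative-awareness-style message with the sender's
# position and motion state (field names are illustrative, not standard).
def make_v2x_message(station_id, lat, lon, speed_mps, heading_deg):
    return {
        "stationId": station_id,
        "timestamp_ms": int(time.time() * 1000),
        "position": {"lat": lat, "lon": lon},
        "speed_mps": speed_mps,
        "heading_deg": heading_deg,
    }

# A test case could inject this message and check that the vehicle under
# test, placed at a GNSS-simulated position, reacts as expected.
msg = make_v2x_message("veh-042", 48.1374, 11.5755, 13.9, 270.0)
print(json.dumps(msg, indent=2))
```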
Software use in modern vehicles has increased steadily over the past decade. The development of autonomous systems leads to an exponentially growing reliance on software, which in turn makes management of this software critical.
To prevent errors and aid iterations in later development steps, there has to be a clear understanding of software components, their functions, interdependencies, and requirements. A software architecture is typically diagrammed to map out these properties.
To meet the added demands of autonomous driving, many OEMs turn to the AUTOSAR Adaptive Platform to design their software architectures. The standard addresses new requirements for autonomous driving that the classic AUTOSAR platform does not cover. It introduces more flexibility in software configuration (for example, updates can be performed over the air) and supports high-performance computing and service-oriented communication.
With dSPACE SystemDesk, you can efficiently model your architectures and systems for software components, including the hardware topology and network communication descriptions. Moreover, when used with a dSPACE simulation platform, SystemDesk makes it possible to generate virtual ECUs (V-ECUs) from the application software under test so you can also validate the ECU software – a big added bonus.
This feature supports the AUTOSAR Adaptive Platform by simplifying the development of ECU functions for automated driving and enabling the update of ECU software during operation. Furthermore, with SystemDesk, Ethernet modules can be integrated into the generated V-ECUs, making it possible to simulate and test the Ethernet communication (e.g., by using the VEOS simulation platform on your own PC).
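To make the service-oriented idea concrete, here is a toy Python sketch of applications offering and subscribing to services at runtime. It is a conceptual illustration only; the Adaptive Platform itself uses SOME/IP-based middleware, and the service name below is invented.

```python
from typing import Callable, Dict, List

# A toy service registry: applications offer services and subscribe to
# their events at runtime instead of relying on statically wired signals.
class ServiceRegistry:
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = {}

    def subscribe(self, service: str, handler: Callable[[dict], None]) -> None:
        self._subscribers.setdefault(service, []).append(handler)

    def publish(self, service: str, event: dict) -> None:
        for handler in self._subscribers.get(service, []):
            handler(event)

registry = ServiceRegistry()
# "ObjectList" is an illustrative service name, not an AUTOSAR identifier.
registry.subscribe("ObjectList", lambda event: print("planner received:", event))
registry.publish("ObjectList", {"objects": 3, "nearest_m": 12.5})
```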
It seems unfeasible to predict everything that could possibly happen while driving autonomously, but that is precisely what today’s engineers are striving to do.
Engineers have to consider how the vehicle could and should react in any given scenario. How can other drivers or pedestrians affect the reaction? What is the impact of weather and road conditions?
There are so many scenarios and subscenarios to test that the process can seem downright overwhelming. But everyone can agree that the more test scenarios are performed, the safer autonomous driving will be for everyone. So how do you test millions of possible autonomous driving scenarios in a reasonable time frame?
dSPACE offers a very efficient, fast, and reproducible way to test ECU functions for highly automated test drives – and you do not even need a test bench or a long line-up of HIL simulators. The answer lies in virtual test drives on PC clusters.
The dSPACE VEOS PC-based simulation platform lets you easily execute a high number of test drive scenarios. By coupling VEOS with a cluster of PCs, you can set up as many virtual test drives as you want and run these test drives in parallel. This not only allows you to efficiently complete thousands of test drives for validation purposes, it also allows you to study a specific scenario and any related subscenarios at the same time.
The PC cluster is controlled by one central unit that schedules the execution of test cases in real time. Depending on how many VEOS/PC clusters you use, you can complete millions of test drive miles each day. If a test drive fails, it can be reproduced and debugged in detail with VEOS.
Example setup of VEOS and cluster PCs.
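The scheduling principle behind such a cluster can be sketched in a few lines of Python: a central scheduler distributes independent scenarios across worker processes and collects the results, so that failed runs can be singled out for reproduction and debugging. (An illustrative sketch only; this is not the VEOS API.)

```python
from concurrent.futures import ProcessPoolExecutor

# A central scheduler farms independent scenarios out to worker processes
# and collects pass/fail results. run_scenario() is a stand-in: a real
# worker would load the models and V-ECUs, run the scenario, and evaluate
# the test criteria.
def run_scenario(scenario_id: int):
    passed = scenario_id % 97 != 0        # pretend a few scenarios fail
    return scenario_id, passed

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_scenario, range(1, 1001)))
    failures = [sid for sid, ok in results if not ok]
    print(f"{len(results)} scenarios run, {len(failures)} to reproduce and debug:", failures)
```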
As raw data is collected from the various onboard sensors, it has to be processed in real time. All of this data is integrated to represent the most accurate depiction of the environment and to detect objects, enabling intelligent decision-making on the road.
To handle the constant influx of sensor data, a powerful central processing platform has to be installed in the vehicle. This platform is used to merge the raw sensor data in real time. To reduce latency, the platform must have: 1) fast processing power, 2) the ability to compress data, and 3) a large storage capacity.
To tackle the challenges of multi-sensor processing, including the prototyping of algorithms in the vehicle, dSPACE offers four tools that are designed to interact smoothly:
A MicroAutoBox II shown inside the Ford Fusion at the CES 2017 show.
The MicroAutoBox II is a compact and robust real-time prototyping system for fast, in-vehicle function prototyping. It offers comprehensive automotive I/O and is designed to handle extreme shock and vibration. With autonomous driving in mind, the unit features several functions that constantly monitor the real-time processor and the correct execution of real-time applications. Additionally, FPGA functionality is available for applications requiring extremely fast control loops.
The MicroAutoBox Embedded PC is also a compact, shock- and vibration-proof PC system designed for in-vehicle use. It provides processing power for Windows- or Linux-based applications, such as digital road maps and computation-intensive functions for automated driving, and it can interface with cloud-based resources and data storage units. It is ideal for developing autonomous driving, infotainment, telematics, and image processing applications. (This unit can be used as a stand-alone system or as an extension of the dSPACE MicroAutoBox II prototyping unit.)
The MicroAutoBox Embedded Sensor Processing Unit (SPU), when used in combination with RTMaps, is designed to process and merge data from the vehicle network and various sensors, such as cameras, lidars, radars and GNSS receivers. It primarily calculates perception and sensor fusion algorithms and connects, for example, to HMIs. (This unit can also be used as a stand-alone system or as an extension to the dSPACE MicroAutoBox II prototyping unit.)
The MicroAutoBox Data Storage Unit (DSU) offers mass data storage capacity for high-performance data logging/recording. (This unit is an extension to the MicroAutoBox Embedded PC or Embedded SPU.)
To help our customers tackle ongoing challenges in autonomous driving applications, dSPACE invests significant time and resources in research and development. Below is a summary of some of our newest innovations that help engineers overcome issues in software development and testing.
For developing and testing autonomous driving systems that interact with other vehicles or objects, dSPACE offers ASM Traffic. This tool lets you define any kind of traffic scenario, including complex road networks, traffic signs, and pedestrians. Sensor models are available to detect objects.
Interpreting Signs and Traffic Lights
Sensor-based autonomous driving requires the capability to detect and interpret variable message signs and traffic lights. dSPACE’s sophisticated sensor model from its Automotive Simulation Models (ASM) tool suite supports this feature for autonomous driving simulations.
Simulating Lighting and Physical Conditions
To create realistic lighting conditions, such as night drives, for simulated test scenarios, dSPACE’s MotionDesk visualization software (version 4.1) makes it possible to change the shape, color, and intensity of all light sources in models, including headlights and street lamps. Additionally, dSPACE provides physical sensor models for lidar and radar.
Autonomous Emergency Braking (AEB)
Preprogrammed NCAP test scenarios for features such as AEB can be realistically simulated with dSPACE Automotive Simulation Models (ASM). This includes detecting vulnerable pedestrians and initiating emergency braking to prevent accidents.
Radar Test Bench
dSPACE offers a reliable solution for realistically testing radar-based vehicle functions using a synchronized closed-loop HIL system. This test bench enables verification tests of the radar sensor and its components.
Simulation Test Benches
For testing complex mechatronic systems in the fields of autonomous driving or vehicle dynamics, for applications such as steering and brake systems, dSPACE offers highly dynamic test benches. These turn-key solutions can be adapted to your requirements.
MicroAutoBox Embedded Sensor Processing Unit (SPU)
To prototype algorithms and process extremely large volumes of data, autonomous driving applications require a dedicated hardware and software environment that can process and merge data from various sensors, such as cameras, lidars, radars, and GNSS receivers, calculate motion control algorithms, and connect to actuators or HMIs.
MicroAutoBox Embedded Data Storage Unit (DSU)
dSPACE now offers an optimal solution for mass data logging use cases. The MicroAutoBox Embedded DSU is ideal for applications that produce large volumes of sensor and vehicle network data that have to be continuously recorded and played back.
Environment Sensor Interface (ESI) Unit
The new dSPACE ESI Unit assists with the simulation of multiple environment sensors (e.g., camera, lidar, radar) for sensor fusion and function testing. The unit accurately synchronizes the stimulation of individual sensors and supports time-correlated feeding of raw sensor data to one or more sensor ECUs.
To clone the driver, autonomous driving features have to be thoroughly tested and validated. The most efficient way to accomplish this is by testing and validating the control software early on – before the vehicle and environment sensors are even ready, which you can accomplish by using the dSPACE virtual validation tool chain.
With the right strategies, tools and methodologies, you can establish an intelligent and efficient test environment to validate individual sensors, components, and software systems. This environment will be key in playing out all conceivable test drive scenarios to ensure that the best possible outcome is achieved for the autonomous vehicle driving in the real world.