Newsletter August 2017

Deep learning algorithms for autonomous driving functions

The validation of imitation learning algorithms based on real-world data using CarMaker

Over the past two decades, the use of deep learning methods in vehicle development has steadily grown. Impressive results have been achieved in recent years, especially in the context of autonomous driving functions. The latest stage of development was a model capable of driving a vehicle autonomously on highways 98 % of the time.

We are presenting a master’s thesis on this subject to which virtual testing contributed significantly. At Chalmers University of Technology in Sweden, the author, in cooperation with Volvo Car Corporation, studied how so-called imitation learning can be exploited for autonomous driving on highways using only grayscale images from a single front-view camera. Imitation learning refers to a deep learning method that aims to imitate human behavioral patterns. It is applied to real-world reference data, which, in this case, stem from the driving behavior of a professional human driver.

The reference data provided by Volvo Cars consist of video from a front-view camera mounted on the test vehicle, which delivers grayscale images with a resolution of 640 × 480 pixels at a sampling rate of 20 Hz, combined with the corresponding steering wheel angle. To keep the collected data as generic as possible, the steering wheel angle was converted into the vehicle’s turn radius. The author thus obtained approximately 1.4 million pairs of images and steering wheel angles.
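The conversion from steering wheel angle to turn radius can be sketched with a simple kinematic bicycle model. Note that the thesis does not specify the exact formula or vehicle parameters; the wheelbase and steering ratio below are hypothetical placeholders.

```python
import math

def turn_radius(steering_wheel_angle_deg, wheelbase_m=2.9, steering_ratio=16.0):
    """Convert a steering wheel angle to a turn radius using a kinematic
    bicycle model. Wheelbase and steering ratio are illustrative values,
    not the parameters of the actual test vehicle."""
    # The road wheel angle is roughly the steering wheel angle divided
    # by the overall steering ratio.
    road_wheel_angle = math.radians(steering_wheel_angle_deg / steering_ratio)
    if abs(road_wheel_angle) < 1e-9:
        return math.inf  # driving straight: infinite radius
    return wheelbase_m / math.tan(road_wheel_angle)
```

Expressing the label as a turn radius rather than a steering wheel angle makes the data independent of a specific vehicle's steering geometry, which is what "keeping the data generic" refers to.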


Closed-loop testing in virtual test driving

Ordinarily, systems such as this rely on a variety of sensors, for instance combining a camera with lidar or using several cameras. Since camera images are rich in information, the researchers at Chalmers University investigated whether robust driving behavior can be learned from the data of a single grayscale front-view camera alone. The developed method draws on neural networks which serve as a policy for the driving behavior of the vehicle. This policy is trained on real data and tested for robust driving behavior and a high level of adaptability to new situations in simulated closed-loop environments. The closed-loop tests were carried out using the open integration and test platform CarMaker by IPG Automotive.
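At its core, imitation learning of this kind reduces to supervised regression: the policy is fitted so that its output on each camera frame matches the expert driver's recorded steering signal. The toy sketch below illustrates this with a linear model on synthetic data; the thesis itself uses a neural network on 640 × 480 images, so everything here (data shapes, the linear model) is a simplified stand-in.

```python
import numpy as np

# Behavior cloning sketch: fit a policy to reproduce expert actions.
# All data below are synthetic; real inputs would be camera frames.
rng = np.random.default_rng(0)
n, h, w = 200, 8, 8                  # toy dataset, far smaller than 640x480
X = rng.normal(size=(n, h * w))      # flattened "grayscale images"
expert_policy = rng.normal(size=h * w)
y = X @ expert_policy                # expert steering signal per frame

# Minimizing the imitation loss ||X w - y||^2 via least squares.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ w_hat                     # the learned policy's steering outputs
```

Because the synthetic labels are exactly linear in the inputs, the fitted policy reproduces the expert perfectly here; with real images and a neural network the fit is only approximate, which is why closed-loop testing of the resulting behavior is needed.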

The main focus of the research is the successful development of the above policy based on the camera data. Moreover, the author aims to demonstrate that a data set compiled from only a single front-view camera is sufficient for a vehicle to reliably stay within its lane on the highway and react appropriately to disturbances. In addition, the work goes on to show that a policy learned from real data performs well in a virtual environment, allowing for cost-efficient and fast model evaluation using hardware-in-the-loop (HIL) tests.

Since the learned policy was subsequently validated in a closed-loop simulation in virtual test driving with CarMaker, the simulation solution was able to contribute to the success of this study. The road geometry was imported from Google Maps data. For the study, the author chose a route of approximately 85 km in total with two to four lanes and speed limits ranging from 50 to 90 km/h. A virtual camera streamed images in real time, which the simulated vehicle mapped to the corresponding steering motions. The results of the work already allow for a comprehensive evaluation and may be extended in future investigations comprising additional driving scenarios or tests in real traffic.
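The closed-loop principle described above can be sketched in a few lines: at each simulation step an observation is rendered, the policy returns a steering command, and the vehicle state advances before the next observation is taken. The loop below is a deliberately simplified toy model (scalar lateral offset and heading, a hand-written proportional policy), not the CarMaker API or the thesis's vehicle dynamics.

```python
def evaluate_closed_loop(policy, steps=200, dt=0.05):
    """Toy closed-loop evaluation: the policy's steering output feeds
    back into the vehicle state, unlike open-loop replay of logged data."""
    lateral_offset, heading = 0.5, 0.0   # start slightly off lane center
    max_offset = abs(lateral_offset)
    for _ in range(steps):
        # Stand-in observation; in the real setup this is a rendered
        # camera frame from the virtual front-view camera.
        observation = (lateral_offset, heading)
        steer = policy(observation)
        heading += steer * dt            # steering changes heading
        lateral_offset += heading * dt   # heading changes lateral position
        max_offset = max(max_offset, abs(lateral_offset))
    return max_offset

# A simple stabilizing "policy" that steers back toward the lane center.
worst_offset = evaluate_closed_loop(lambda obs: -2.0 * obs[0] - 3.0 * obs[1])
```

The key point is that errors compound in closed loop: a policy that merely matches the expert frame-by-frame can still drift out of the lane over time, which is why this form of validation goes beyond the supervised training loss.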