Highly automated vehicles have the potential to address the challenges of current road transportation systems, such as traffic flow, fuel efficiency, and vehicle safety. However, a recent report published by the House of Lords in the UK highlighted that although highly automated vehicles could lower the number of road fatalities, casualties, and traffic congestion, the eradication, or near eradication, of human error and traffic load will only be realised with full automation. Major international automotive manufacturers believe artificial intelligence and deep learning will be the bridge between current automated vehicles and fully autonomous vehicles. One critical requirement for the effective deployment of fully autonomous vehicles is that they be trusted by the public in terms of safety. Without this being well understood, the potential benefits of autonomous vehicles could fail to materialise. The opaqueness of deep neural networks means that traditional validation and verification techniques cannot provide the safety guarantees required to deploy these models in safety-critical systems such as autonomous vehicles. Therefore, new techniques for ensuring the safety of deep neural network-driven autonomous vehicles are required. In this talk, I will introduce the challenges of verification and validation of deep learning techniques for autonomous driving and novel solutions to address safety concerns.
Safe and trustworthy deep AI for autonomous driving
EB2012-IBC-004 • Paper • EuroBrake 2012 • IBC