
Q&A from “Unleashing the benefits of Autonomous Vehicle (AV) testing and simulation”


Following the FISITA Technical Online Conference "Unleashing the Benefits of Autonomous Vehicle Testing and Simulation" held on 24 June 2021, keynote speaker Dr Hajar Moussanif of Cadi Ayyad University in Morocco has kindly provided us with further detail on some of the questions that she was asked on the day.

As well as the questions below, our panel, moderated by Ouafae El Ganaoui Mourlan, FISITA DVP Education, and made up of Nadine Leclair, FISITA President; Dr Cecile Pera, Founder of Orovel Ltd; and Patricia Villoslada, CEO of Transdev Autonomous Transport Systems, also considered a number of other questions.


Members have free access to the video of this conference (with fast forward and rewind) in the FISITA Digital Library. Non-members can access the original broadcast free of charge after email registration, but without the fast forward or rewind capability.


To collect data for different scenarios, have you used any physical driving simulator hardware?

Yes, we used a physical driving simulator with throttle and brake pedals and a steering wheel, as well as a GPU-powered server station.

Data were continuously recorded throughout each drive at a sampling frequency of 60 Hz. The simulator collects, over the UDP protocol, records of driver inputs (e.g. throttle/brake pedal position, steering wheel position), vehicle dynamics (e.g. speed, yaw angle) and the weather condition at the time of the accident, namely clear, heavy fog or heavy rain. Furthermore, a Mi Band was used to record the HRV (heart rate variability) signal. The band was paired with a Xiaomi Android mobile device running an application implemented for the experiment, which recorded the HRV data.
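For illustration only, the minimal sketch below shows how 60 Hz telemetry arriving over UDP could be received and logged to a file. The port number, packet layout and field order are assumptions made for the example and are not the simulator's actual interface; the Mi Band HRV pairing is not shown.

# Minimal sketch of a UDP telemetry logger; packet layout is an illustrative assumption.
import csv
import socket
import struct
import time

PORT = 5005        # assumed port, not the simulator's real one
PACKET = "<6f"     # assumed layout: throttle, brake, steering, speed, yaw, weather code (six floats)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", PORT))

with open("drive_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "throttle", "brake", "steering", "speed", "yaw", "weather"])
    while True:
        data, _ = sock.recvfrom(1024)                 # one datagram per 60 Hz sample
        if len(data) != struct.calcsize(PACKET):      # skip malformed packets
            continue
        writer.writerow([time.time(), *struct.unpack(PACKET, data)])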


How well does the model represent the real-world scenario? Or the correlation level with real world predictions?

We tried to replicate real-world scenarios as much as we could, including different weather conditions, different geometries, and different driving conditions.


The adopted driving scenario aimed to simulate various intricacies and aspects that real-world driving entails, in order to explore the impact of these factors on driving behaviour and to collect enough raw data before a crash.


Furthermore, when developing the study, the effects of repeated crash imminent events within a single experimental session were indeed a concern. These effects could result in drivers anticipating crash events and driving more cautiously. The virtual simulator minimized these effects by randomly generating dissimilar driving situations and by including many outer events where it appeared that a road user would pose a threat but ultimately did not promoting the participants to scan the layout more thoroughly and drive cautiously. Therefore, some outer events generated by the simulator were designed to imitate possibly crash conditions in which an event did not materialize, thereby making the driving process less predictable.
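A minimal sketch of how such a randomized schedule of mostly non-materializing ("decoy") events could be generated is given below; the event types, ratio and trigger distances are hypothetical placeholders, not the simulator's actual configuration.

import random

EVENT_TYPES = ["pedestrian_crossing", "vehicle_incursion", "cyclist_merge"]  # assumed event types

def generate_events(n_events, decoy_ratio=0.7, seed=None):
    """Return a randomized event schedule; most events are decoys that never become a real threat."""
    rng = random.Random(seed)
    events = []
    for _ in range(n_events):
        events.append({
            "type": rng.choice(EVENT_TYPES),
            "trigger_distance_m": rng.uniform(20, 80),   # distance at which the event starts
            "materializes": rng.random() > decoy_ratio,  # only a minority develop into a crash-imminent event
        })
    rng.shuffle(events)
    return events

print(generate_events(10, seed=42))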


On another note, the adopted simulation software controlled the generated road users in such a way that some of them were not visible the entire time. In some scenarios, external vehicles are hidden behind representative buildings and landmarks that restrict the participant's field of view; the system's vehicles are then triggered to begin moving and become visible to the subject's vehicle, so the participant cannot observe the incursion from the start. Post-drive interviews revealed that the driving scenario was highly realistic, and the participants did not feel that they were experiencing repeated conflict situations or driving an obstacle course.
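As a rough illustration of this occlusion-based triggering, the sketch below keeps a scripted vehicle inactive until the subject vehicle comes within an assumed trigger radius; the class name and threshold are invented for the example and do not reflect the simulation software's real API.

from dataclasses import dataclass
import math

@dataclass
class ScriptedVehicle:
    x: float
    y: float
    active: bool = False   # stays hidden behind the occluding building until triggered

def maybe_trigger(subject_x, subject_y, vehicle, trigger_radius=35.0):
    """Release the hidden vehicle once the subject vehicle is within the trigger radius."""
    if not vehicle.active and math.hypot(subject_x - vehicle.x, subject_y - vehicle.y) < trigger_radius:
        vehicle.active = True   # the vehicle starts moving and becomes visible to the subject
    return vehicle.active

hidden_car = ScriptedVehicle(x=30.0, y=10.0)
print(maybe_trigger(0.0, 0.0, hidden_car))   # True: subject within ~31.6 m, so the incursion begins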


Which is the most efficient algorithm for driving behaviour (DB) prediction?

According to the literature, SVM and deep learning lead to the best results in terms of DB prediction.

As can be seen, when comparing the recall and precision measures, the recall levels achieved by the two models are, in general, much higher than the precision. The higher the recall, the more accurate the prediction result, as recall represents the correctly predicted crashes over all actual crash records.


Put differently, it depicts the proportion of crash occurrences that were correctly predicted by the models. Most of the highest recall values were obtained with MLP, with performance over 90%, especially in clear, overcast and snow conditions, in which MLP recall values were beyond 94%. Conventionally, precision and recall have an inherent trade-off, as one comes at the cost of the other. The F1 score therefore conveys the balance between precision and recall in order to find an effective trade-off. As evidenced, good F1-score values were obtained, with a minimum of 87% and superior results (above 90%) in both overcast and rain weather for MLP and in snow conditions for SVM.
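For reference, the sketch below shows how these metrics are computed from a binary crash/no-crash confusion matrix; the counts used are made-up placeholders, not results from the study.

# Illustrative computation of precision, recall and F1 for a binary crash / no-crash classifier.
tp, fp, fn, tn = 47, 6, 3, 944   # placeholder true/false positives and negatives

precision = tp / (tp + fp)                           # predicted crashes that were real crashes
recall = tp / (tp + fn)                              # actual crashes that were correctly predicted
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean balancing the two

print(f"precision={precision:.3f} recall={recall:.3f} F1={f1:.3f}")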


In all weather conditions, the G-mean metric, which has been found to yield a more accurate performance measure when the underlying data is affected by class imbalance, achieved levels over 90%, most particularly in fog and rain conditions, with G-mean over 94% for MLP, and in snow conditions for SVM.
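Again for reference only, G-mean is the geometric mean of sensitivity and specificity; the snippet below computes it on the same kind of placeholder counts, which are not the study's figures.

import math

tp, fp, fn, tn = 47, 6, 3, 944       # placeholder counts, as above
sensitivity = tp / (tp + fn)         # recall on the crash class
specificity = tn / (tn + fp)         # recall on the no-crash class
print(f"G-mean={math.sqrt(sensitivity * specificity):.3f}")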


The average performance measures for MLP and SVM, compared with three other adopted models, Naïve Bayes (NB), Hidden Markov Model (HMM) and Logistic Regression (LR), are depicted in Table 2. The MLP model appears to be the best-performing classifier in terms of average performance across all the modelling metrics. This clearly indicates that, in this context, the use of MLP is preferable.
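As an illustration of this kind of comparison (not the study's actual experiment or data), the sketch below cross-validates MLP, SVM, NB and LR on a synthetic imbalanced dataset with scikit-learn; HMM is omitted because it would require a separate library such as hmmlearn, and all settings shown are assumptions.

# Sketch of comparing classifiers by cross-validated average score, in the spirit of Table 2.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic, imbalanced stand-in for the crash / no-crash data.
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.9, 0.1], random_state=0)

models = {
    "MLP": MLPClassifier(max_iter=1000, random_state=0),
    "SVM": SVC(),
    "NB": GaussianNB(),
    "LR": LogisticRegression(max_iter=1000),
}

for name, model in models.items():
    scores = cross_val_score(make_pipeline(StandardScaler(), model), X, y, cv=5, scoring="f1")
    print(f"{name}: mean F1 = {scores.mean():.3f}")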

 

Join us in October for the first in a series of online technology discussions that will present the results of the FISITA Technical Committee’s assessment of future global technology trends, where we will be considering Sensing and Perception Future Technologies. More information will be available on the FISITA website closer to the time. For links to previous FISITA online conferences and FISITA PLUS, see this Spotlight post.

 

This FISITA online conference was held in support of INWED 2021 (International Women in Engineering Day), organised by FISITA Strategic Partner WES (Women's Engineering Society).

 
