The OPREVU and VULNEUREA projects have conducted virtual reality (VR) experiments to evaluate the behavior and reactions of pedestrians in potential collision situations in urban scenarios. This research work focuses on the design of a facial and ocular recognition system for pedestrians that optimizes the decision algorithm of an Autonomous Emergency Braking (AEB) system. This ad-hoc technology determines whether the pedestrian is observing an approaching vehicle equipped with an on-board AEB system, and with what level of attention, as a function of the interpupillary distance and the relative distance between vehicle and pedestrian.

For this purpose, the minimum rotation angle of the pedestrian's head needed to see the vehicle has been determined, considering the characteristics of the road and the movement of both vehicle and pedestrian. The level of lateral perception is then defined as a function of the head rotation angle: 0-22.5 degrees corresponds to no perception, 22.5-67.5 degrees to partial perception, and 67.5-90 degrees to maximum perception. Supervised Machine Learning classification models are used for this task: k-Nearest Neighbors (KNN) and Support Vector Machine (SVM).

Face and eye detection is performed with Python code based on Mediapipe and OpenCV, embedded in a Raspberry Pi 4 Model B. The algorithm yields a bounding box enclosing the face together with relevant key points (eyes, nose, mouth, and ears). The hardware is completed by a high-resolution camera with an adjustable varifocal lens. During calibration, the detection range of the system was determined: 8 m to 30 m in longitudinal distance and 3 m to 6 m of total lateral amplitude.

Several validation tests were then performed on the INSIA track, with different relative positions between vehicle and pedestrian, yielding a dataset of 4,697 observations. The classification models are trained on 80% of the sample, with the remaining 20% used as the test set; the features are first scaled through a standardization process. The hyperparameters of the classification models are obtained with an optimization function that maximizes accuracy and the main classification metrics (precision, recall, and F1-score), using 5-fold cross-validation. The models that determine whether the pedestrian is looking at the vehicle reach an accuracy above 84% (SD < 0.02), and those that estimate the level of attention reach 80-81% (SD < 0.018).

The main limitations for implementing the technology developed in this project are the lack of automatic adjustment of the camera lens as the relative distance to the pedestrian varies, and the lower precision and recall obtained for the intermediate attention level (around 70% in the most accurate model); for the extreme levels, these metrics exceed 80% in all cases. The models developed in this work may represent a leap forward in the technological development of AEB systems, since their formulation is based on the attentional behavior of pedestrians in potential run-over situations, captured through an innovative and cutting-edge technology such as VR. They make it possible to determine, with a sufficiently high degree of accuracy, how attentive the pedestrian is while crossing.
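As an illustration of how the detection stage described above might be wired together, the sketch below combines MediaPipe's face_detection solution with OpenCV and maps a head rotation angle onto the three lateral perception levels. The keypoint indexing, the interpupillary-distance computation, and the idea of passing an externally estimated yaw angle are assumptions made for illustration; the paper's exact geometric model is not reproduced here.

```python
# Minimal sketch of the on-board detection step, assuming MediaPipe's
# face_detection solution and a camera frame read through OpenCV.
import cv2
import mediapipe as mp

mp_face = mp.solutions.face_detection


def perception_level(head_yaw_deg: float) -> str:
    """Map the head rotation angle to the lateral perception levels
    defined in the abstract (0-22.5 none, 22.5-67.5 partial, 67.5-90 maximum)."""
    yaw = abs(head_yaw_deg)
    if yaw < 22.5:
        return "no perception"
    if yaw < 67.5:
        return "partial perception"
    return "maximum perception"


def detect_face_keypoints(frame_bgr):
    """Return the face bounding box, the six MediaPipe key points
    (eyes, nose tip, mouth centre, ear tragions) and the interpupillary
    distance in relative image coordinates, or None if no face is found."""
    with mp_face.FaceDetection(model_selection=1,
                               min_detection_confidence=0.5) as detector:
        results = detector.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.detections:
        return None
    det = results.detections[0]
    box = det.location_data.relative_bounding_box
    pts = det.location_data.relative_keypoints
    right_eye, left_eye = pts[0], pts[1]  # MediaPipe keypoint order (assumed here)
    # Interpupillary distance in relative image units; together with the
    # vehicle-pedestrian distance it feeds the attention classifiers.
    ipd = ((left_eye.x - right_eye.x) ** 2 + (left_eye.y - right_eye.y) ** 2) ** 0.5
    return box, pts, ipd
```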
The resulting models are also computationally fast, which facilitates their integration into AEB systems, where short response times are generally required. The probability of a road collision depends strongly on the pedestrian's visual perception. A calibrated and validated facial and ocular identification system is therefore proposed, based on a single-board computer and an adjustable high-definition camera. Both the code that reads the pedestrian's facial biometric data and the Machine Learning classification algorithm can run in parallel with low computational overhead. Furthermore, the high accuracy of the classifiers and the balance between precision and recall for each class show a good fit and lay the groundwork for further development in the field of active safety.
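The sketch below outlines the training and evaluation procedure described above using scikit-learn: feature standardization, an 80/20 train/test split, and a hyperparameter search for KNN and SVM with 5-fold cross-validation. The feature names, the grid values, and the CSV layout are assumptions for illustration; only the overall procedure follows the abstract.

```python
# Hedged sketch of the classification pipeline: standardization, 80/20 split,
# and 5-fold cross-validated hyperparameter search for KNN and SVM.
import pandas as pd
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import classification_report

# Hypothetical dataset: one row per observation, a label column with the
# perception level, and numeric features such as interpupillary distance
# and relative vehicle-pedestrian distance.
data = pd.read_csv("observations.csv")
X, y = data.drop(columns=["perception_level"]), data["perception_level"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)

# Candidate classifiers and illustrative hyperparameter grids.
candidates = {
    "knn": (KNeighborsClassifier(),
            {"clf__n_neighbors": [3, 5, 7, 9, 11]}),
    "svm": (SVC(),
            {"clf__C": [0.1, 1, 10, 100], "clf__kernel": ["rbf", "linear"]}),
}

for name, (estimator, grid) in candidates.items():
    # Scaling is fitted inside the pipeline so the test set stays unseen.
    pipe = Pipeline([("scale", StandardScaler()), ("clf", estimator)])
    search = GridSearchCV(pipe, grid, cv=5, scoring="accuracy")
    search.fit(X_tr, y_tr)
    print(name, search.best_params_, f"CV accuracy: {search.best_score_:.3f}")
    print(classification_report(y_te, search.predict(X_te)))
```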



Ing. Ángel Losada Arias, Associate automotive researcher, INSIA-Universidad Politécnica de Madrid

Design of a facial and ocular recognition system for pedestrian identification to optimize an AEB system

FWC2023-SCA-004 • Integrated safety, connected & automated driving
