The FISITA Automotive HMI Online Conference was moderated by Christophe Aufrère, CTO of Faurecia, with speakers Bryan Reimer from the MIT Center for Transportation and Logistics, Philipp Kemmler-Erdmannsdorffer from NIO, and Omar Ben Abdelaziz from Faurecia.
The following transcript is taken from the presentation by Omar Ben Abdelaziz from Faurecia on the Cockpit of the Future.
Christophe: So now let me introduce Omar Ben Abdelaziz. He is a research engineer within Faurecia, currently managing innovations related to user experience within the Faurecia Cockpit of the Future team, where he works with sensing technologies and all kinds of technologies to achieve closed-loop functions. So Omar, the floor is yours.
Omar: Thank you Christophe, and thank you Bryan for this interesting presentation. Thank you to all of you for participating. Today I will speak about the approach we use to decide on the technology for monitoring what's happening inside the cockpit.
Being part of the Cockpit of the Future team, I would like to share with you two approaches: a blue approach and a red approach. As you can see on this slide, the blue one is a user-centric approach, where we start from the pain point, the need. We start by identifying the need of the user, we think about the experience we should offer, then we link it to a product, and only then do we think about the technology. This is the approach we are using in the Cockpit of the Future today and it is how we are working within the team. There is another approach, the red one, where we start from the technology, build a product, build an experience around that product, and push it to the user. At Faurecia, we call it the techno-push approach. Both are interesting approaches, but the blue one is making more and more sense, especially for the Cockpit of the Future, to tackle all the pain points of the user.
Now if I move to the next slide, I will speak a bit about the subject of today's conference. The HMI is one of the experiences through which we would like to ensure safe activity inside the Cockpit of the Future, and to do that while preserving the user experience, we are working on five trends, trying to keep the HMI as lean as possible inside the cockpit. The first trend is the optimal first-time user experience (FTUE): when someone first starts using the HMI, it should be as intuitive as possible. Another trend we call Interaction with a Living Creature; this is linked to multi-modality and to a personalized experience - Nomi from NIO is a very good example of such interaction. We also speak about a seamless interface, where the interface and the connectivity should be as seamless as possible. We think as well about an HMI which is adaptive and smart enough to appear and disappear depending on the context and the situation. And the last trend is ageless HMIs, where we have some golden rules we would like to keep and to use in the car of the future, within the Cockpit of the Future.
So this is one example of a cognitive experience, but in the Cockpit of the Future we are considering several experiences. On top of the HMI, we can think about all those experiences and the difficulty of monitoring what's happening inside the cockpit, of detecting the situation, and of offering the right service at the right time and with the right equipment. This is why these experiences bring us to sensor technology and to the way we should monitor the cabin and the occupant, so that we can keep the experience at the same level as we had in mind when we designed the product.
If we look at this approach, which is a top-down approach, we start from the UX with several concerns linked to wellness, health, safety and HMI. Each of them has attributes to measure, on the occupant's body, in the cabin, or from the context. These attributes are measured with technology, and this helps us decide which sensor we will use to measure each attribute. So it's a top-down approach which is helping the automotive industry - or at least helping us - to offer a system capable of tackling the needs and addressing the pain points of a specific occupant. Now the question here is: is one sensor with one technology capable of detecting a situation like the one I presented just before - a comfort situation, a safety situation, or a very complex one with different occupants, where each occupant has a specific experience?
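The top-down chain described here - UX concern, then measurable attributes, then candidate sensors - can be sketched as a simple lookup. All concern, attribute, and sensor names below are illustrative assumptions, not Faurecia's actual taxonomy.

```python
# Hypothetical sketch of the top-down mapping: UX concern -> measurable
# attributes -> candidate sensor technologies. Names are invented for
# illustration only.

CONCERN_TO_ATTRIBUTES = {
    "wellness": ["heart_rate", "breathing_rate", "posture"],
    "health":   ["skin_temperature", "oximetry"],
    "safety":   ["gaze_direction", "drowsiness_level", "seat_occupancy"],
    "hmi":      ["hand_position", "gesture"],
}

ATTRIBUTE_TO_SENSORS = {
    "heart_rate":       ["ppg", "interior_radar"],
    "breathing_rate":   ["interior_radar", "seat_piezo"],
    "posture":          ["3d_camera", "seat_pressure"],
    "skin_temperature": ["thermal_camera"],
    "oximetry":         ["ppg"],
    "gaze_direction":   ["nir_camera"],
    "drowsiness_level": ["nir_camera", "ppg", "seat_pressure"],
    "seat_occupancy":   ["seat_capacitive", "interior_radar"],
    "hand_position":    ["3d_camera"],
    "gesture":          ["3d_camera", "interior_radar"],
}

def sensors_for_concern(concern):
    """Walk the top-down chain: concern -> attributes -> sensors."""
    sensors = set()
    for attribute in CONCERN_TO_ATTRIBUTES.get(concern, []):
        sensors.update(ATTRIBUTE_TO_SENSORS.get(attribute, []))
    return sorted(sensors)
```

A lookup like `sensors_for_concern("safety")` shows why a single sensor rarely suffices: one concern fans out to several attributes, each measurable by different technologies.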
So this is why we speak here about data fusion and the closed-loop monitoring concept. This is a concept where we try to make use of all the data sources we have inside the cockpit, fuse them, and offer the right services afterwards. So let's have a look at how this system works. We start from the use case. As I said, this could be a simple use case, like detecting an occupant on the seat, where one sensor may be enough. But when we think about stress detection, mood detection, or drowsiness detection, one sensor will not be enough, and multiple data sources will be mandatory to monitor well what's happening inside the cabin.
So this is why we have here a bench of sensors available within the Cockpit of the Future which will give us a lot of data. The first sensor I put here is an interior radar. This interior radar is a millimetre-wave radar with a multiple-input, multiple-output (MIMO) antenna. We speak here about different frequency ranges like 60GHz and 79GHz; there are other ranges as well, like 24GHz and 120GHz. We also use techniques like beamforming to steer the radar waves.
We also have camera sensors. I would say the camera is maybe the most important sensor for the in-cabin monitoring system. We have different camera technologies: there is the near-infrared (NIR) camera - here I mentioned a wavelength of 945nm. We also have the thermal camera, which is capable of measuring the temperature of the human body, especially the temperature of the face, to decide on the comfort level, the stress level, and so on. We also have the NIR RGB camera. The NIR RGB camera is a hot topic now - there are a lot of requests today for this kind of sensor, because we are trying to enlarge the scope of a single camera to combine NIR capability with a full-colour camera, for example for selfies or conference-call use cases. We also have 3D cameras, like the time-of-flight camera, the structured-light camera, or the stereo camera. Then we have the seat sensors. The seat is a very important product for monitoring what's happening and assessing the situation, because the occupant is always sitting on the seat, and even in the future this contact between the body and the seat will remain. There we speak about sensor technologies like piezo, capacitive, and resistive sensors. We also have biometric sensors, where a photoplethysmogram (PPG) is used to measure the light reflected by the skin and derive the heart rate, the breathing rate, and the oximetry. We also have air quality sensors, and context, smartphone, and wearable sensors. All the sensor data are later sent to a data fusion engine, where we fuse the data to decide on the situation. Once we have decided on the situation, a dynamic rule engine selects the right action depending on the situation and sends it to our smart actuators.
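The fusion-engine and rule-engine stages described above can be sketched minimally as follows. The sensor names, weights, thresholds, and actions are all assumptions chosen for illustration; a real system would use learned models rather than a fixed weighted average.

```python
# Illustrative sketch: per-sensor readings (normalised 0..1 drowsiness
# scores) are fused into one situation estimate, and a rule engine
# selects an action for a smart actuator. Everything here is invented
# for illustration.

def fuse(readings):
    """Weighted average of normalised per-sensor scores."""
    weights = {"nir_camera": 0.5, "ppg": 0.3, "seat_pressure": 0.2}
    total = sum(weights[s] * v for s, v in readings.items() if s in weights)
    norm = sum(weights[s] for s in readings if s in weights)
    return total / norm if norm else 0.0

RULES = [  # (threshold, action), checked from most to least severe
    (0.8, "escalate_alert"),
    (0.5, "haptic_seat_warning"),
    (0.0, "no_action"),
]

def decide(situation_score):
    """Dynamic rule engine: the first rule whose threshold is met wins."""
    for threshold, action in RULES:
        if situation_score >= threshold:
            return action

readings = {"nir_camera": 0.9, "ppg": 0.6, "seat_pressure": 0.4}
action = decide(fuse(readings))  # fused score 0.71 -> "haptic_seat_warning"
```

Keeping the rules in a data structure rather than hard-coded branches is what makes the rule engine "dynamic": rules can be added or re-weighted per occupant without touching the fusion code.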
We will not stop there, because our system is capable of looping: it monitors what's happening and what effect the action has on the occupant, and we continue looping until the situation is solved or improved. So these are the slides I had for you today; this is the presentation I prepared for you. Hopefully it was clear. Don't hesitate to post your questions and we will try to answer them during the Q&A session.
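The closing loop - act, re-measure, and repeat until the situation improves - can be sketched as below. The decay factor standing in for "re-measuring the effect of the action" is purely an invented placeholder; in the real system this value would come from the sensors again.

```python
# Minimal sketch of closed-loop monitoring: keep applying a
# countermeasure and re-measuring until the situation score drops
# below a target. The 0.7 decay factor is an invented stand-in for a
# fresh sensor measurement after each action.

def closed_loop(initial_score, target=0.3, max_iterations=10):
    score = initial_score
    actions = []
    for _ in range(max_iterations):
        if score < target:          # situation resolved: stop looping
            break
        actions.append("apply_countermeasure")
        score *= 0.7                # re-measured effect of the action
    return score, actions
```

The `max_iterations` cap reflects that a real system would eventually escalate (for example, to a stronger alert) rather than loop forever on an ineffective action.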
The transcripts for the other three parts of this Online Conference are available as follows:
Originally broadcast on Wednesday 25 March 2020 at 14:00 GMT, the FISITA Automotive HMI Online Conference explored the interactions between human and machine in an automated car; the technology to sense, measure, control and close the loop between the human and the car; and the human-machine interfaces that have already been implemented, such as voice control, as well as some others yet to be realised.
There were 272 registrations for the conference, with 234 joining the session for some of the time and all of the others watching the replay.
In addition to the 28 who didn’t attend but watched the replay, 45 others who did attend also watched the replay. Of those who joined the call, 133 were there for over 50 minutes and 168 for over 30 minutes.
The webinar attracted attendees from 23 countries with more than 10 attendees from China, Finland, France, Germany, India, the UK, and the US. There were people from over six major OEMs, many Tier Ones and a number of research institutes.