
FISITA Automotive HMI Online Conference Transcript: Introduction and Q&A

Updated: Feb 21, 2021

Originally broadcast on Wednesday 25 March 2020 at 14:00 GMT, the FISITA Automotive HMI Online Conference explores the interactions between human and machine in an automated car; the technology to sense, measure, control and close the loop between the human and the car; and the human-machine interfaces that have already been implemented, such as voice control, as well as some others yet to be realised.

There were 272 registrations for the conference, with 234 joining the session for some of the time and all of the others watching the replay.

In addition to the 28 who did not attend but watched the replay, 45 attendees also watched the replay. Of those who joined the call, 133 stayed for over 50 minutes and 168 for over 30 minutes.

The webinar attracted attendees from 23 countries with more than 10 attendees from China, Finland, France, Germany, India, the UK, and the US. There were people from over six major OEMs, many Tier Ones and a number of research institutes.

WATCH THE FULL REPLAY — register free of charge for FIEC to access the HMI replay.

The FISITA Automotive HMI Online Conference was moderated by Christophe Aufrère, CTO of Faurecia, with speakers Bryan Reimer from the MIT Center for Transportation and Logistics, Philipp Kemmler-Erdmannsdorffer from NIO, and Omar Ben Abdelaziz from Faurecia.

The following transcript is taken from the introduction and final Q&A sections of the Online Conference. The transcripts for the three presentations are available as follows:

Introduction Transcript

Welcome to all attendees from various countries in this difficult period of time with COVID-19. I’m Christophe Aufrère, the CTO of Faurecia and the Deputy VP of the FISITA Technical Committee. I’m going to moderate the conference, but first of all we are going to watch a quick video delivered by Chris Mason, the CEO of FISITA.

Hello, good afternoon, good morning, good evening to you wherever you may be joining us from. My name is Chris Mason. I’m CEO of FISITA, talking to you from my study here in the United Kingdom today, as we pull this new programme together at this most challenging of times: to support the FISITA community in a way that only FISITA can. I have to say I’m absolutely delighted with the work of the team and our technical community and volunteers in putting this FISITA online conference programme together for you so quickly and to such a high-quality standard. This is the first of what may well become a series of FISITA online conferences looking at different, topical, technical subjects over the coming months. I’m absolutely delighted with the speaker line-up that’s been pulled together for us today, and my thanks go to Faurecia, to MIT, and to NIO and the speakers we have lined up, but especially my thanks to Christophe, who will be chairing today’s session. So, without holding up proceedings any longer, why don’t I hand over to the real experts and to Christophe, who will support us as we spend the next hour or so exploring today’s subject matter, which is, of course, the human-machine interface. So over to you guys, thank you for joining us, and I’ll see you again very soon.

OK, thank you to Chris, who is supporting this online conference. Coming back to the conference itself: this is the first FISITA online conference of this kind that we are organizing today. The goal of the online conferences is not to give you a deep knowledge of the different topics we are going to cover — the time available is too short for that — but to enlighten you on key engineering trends that are relevant for the automotive industry and give you the possibility to put questions to our speakers and panellists. This time we have chosen the human-machine interface, or human-machine interaction, as the first topic we are going to deal with today.

The relationship between the vehicle and the occupant is going to change a lot in the future as the car becomes more and more automated and intelligent. For example, regarding automated vehicles, the traditional control of the car by the driver will shift towards more control of the driver by the car, especially in transition phases when the driver has to re-engage after an autonomous driving sequence, and this is all about human-machine interaction, or human-machine collaboration I would say.

But there will also be other functions integrated into the car which will need to identify the occupants, such as an occupant ID, and propose customized experiences and services for comfort, infotainment, safety, working sessions, gaming and so on, and this is also all about human-machine interfaces, with a lot of associated predictive functions. The technologies are coming. The way to interact with the vehicle is changing and is not fully stabilized today, so we invite you to attend this conference, where we tackle some of the issues related to human-machine interfaces, starting with human-centric investigations, then relevant technologies, and finally implementation at a non-traditional OEM, which is new.

One practical comment: on the right-hand side of your screen you have the possibility to ask questions by chat. Don’t hesitate to do so; we’ll collect them and ask them at the end of the conference.

 

Moderator


Christophe Aufrère Senior Vice President & Chief Technical Officer Faurecia

 

Presenters


Bryan Reimer Research Scientist MIT Center for Transportation & Logistics

 

Philipp Kemmler-Erdmannsdorffer Senior Manager Communications & Public Affairs Europe NIO

 

Omar Ben Abdelaziz Cockpit of the Future Manager Faurecia


 

Q&A Transcript

Christophe Q: So now I think we have some minutes for the question and answer session, so we can take five to ten minutes. Maybe we start with Philipp: from your point of view, Philipp, when will the NIO EVE be put on the market?

Philipp A: I hope as soon as possible. At the moment we are exploring, of course, different market opportunities and we are working on the implementation for the European market, but it’s not that easy. We do have — and this is at the moment the biggest problem — different charging standards. In China we have the fast-charging GB/T standard, which is a different standard than CCS (Combined Charging System) in Europe, and it’s quite difficult. This is again machine-to-machine interaction, because the car has to interact with the charging column, for example, to provide the best charging opportunities for the battery and for the car itself, so we are working on it. We’re looking into a period of one to two years right now.

Christophe Q: Okay, thank you. So, we have some questions for Omar, so maybe I’m going to ask one and then I will go to Bryan. Radar or Lidar: which one is preferable? Is the use of mm-wave radar allowed in all countries for ADAS or autonomous cars? Of course, we are not speaking about the interior of the car…

Omar A: The presentation I made is about the interior of the car, so we are speaking about the mm-wave radar. Well, there is also some investigation into using Lidar inside the car, but we don’t think we will need such an expensive sensor inside the car; the radar will be more than enough to cover the use cases we have in mind. Now, the second part of the question was about the usability of the radar inside the car and whether it is allowed or not depending on the country. This is a very good question, because today there is a very big discussion about which frequency range we should use for the interior radar. The discussion is between 79GHz and 60GHz; it is ongoing and there is no decision yet, but we see that in the US, for example, 60GHz is close to being selected as the main frequency range for this interior radar technology. One main use case for this radar will be to detect a child left inside the car, so that is the main application for this interior radar sensor.

Christophe Q: Okay thank you, Omar. We go to Bryan now. So your presentation Bryan highlighted the need to monitor the driver’s attention even for Level 2 cars that are being sold legally. Do you think that such cars without driving monitoring should be forbidden, taking into consideration proven cases of sleeping drivers?

Bryan A: That’s a very good question, and I think that, in the context of what we have done in the past, one has to really focus forward. I do believe the industry should collaborate to set a deadline — call it, you know, the 2022, ’23, ’24, ’25 model year — where no L2 vehicles are sold without driver management. I don’t find it really ethical for manufacturers to be selling vehicles where drivers can fall asleep behind the wheel of a car. The technology exists to prevent that at this point. I think there are a whole lot of technical questions around the specifications of deployment, but this is one reason that I called, in my January Forbes post, for the industry in the U.S. to work together, much like the joint collaboration between the IIHS (Insurance Institute for Highway Safety) and NHTSA (National Highway Traffic Safety Administration) to form an industry collaborative agreement around AEB (Automatic Emergency Braking), and to do the same for driver monitoring and management of the car. I am fully supportive of Euro NCAP’s efforts in the EU to do the same. I think we need to take responsibility with the deployment of automation and remember that with the loss of control comes a loss of responsibility, or a change in responsibility, and from the design side we need to embrace that, much like the aviation industry has over the last few decades.

Christophe Q: Okay thank you another question for you Bryan. Humans will not be aware of the level of car autonomy when they sit in a car. They might have different mental models on the provided level of automation. Is then a sequential development from L0-L1-L2-L3-L4 meaningful?

Bryan A: I think that’s also a very good question. I think the development of the levels is applying what the Society of Automotive Engineers meant to be a guide for engineers developing systems. That may make sense from the engineering side; from the consumer side, it makes no sense. It’s not sequential. We need to develop the mental model of the use case, which means that education is imperative. There is no mystery: with increased automation of all levels, we are going to need to develop the education pipeline that goes with it. That means a future vehicle at the L2 or L4 level will need onboard, and perhaps off-board, education approaches to support the development of an appropriate mental model. I for one don’t believe that L3 systems, as they are being construed today, really make sense in this pipeline from the consumer perspective, but they may very well need to be there from the testing and engineering perspective to develop meaningful L4 systems of the future.

Christophe Q: Okay thank you, Bryan. A question to Omar: are you integrating augmented reality with HMI or do you plan to use it?

Omar A: Today we don’t have it, but we plan to use it. We have a project to develop this kind of HMI inside the car. It’s not only for the outside of the car; we are trying to use this augmented reality inside the car to enhance the user experience and to communicate and interface with our occupants. So the answer is yes, it’s something we are working on.

Christophe Q: Okay, we come back to Bryan, the following question: how to optimize the driver assistance and warning system to avoid unnecessary false warnings?

Bryan A: Yeah, that’s a good question. I think unnecessary false warnings are a big problem with active safety development today, and part of that is often the deployment of systems that are really in need of broader testing, with operational characteristics outside of their design domain. But some of the false alarms are really a failure of expectations in understanding when and where a system should alert, based upon how it’s built. So, while we may have a hard time suppressing all of what a user may believe are false alarms, understanding the context of use is probably the key there, along with advancing sensor design and development further and faster before we bring systems and deploy them in front of the user. If we look at early forward collision alert systems as a great example, the early systems had very high false alert rates. Systems being developed today are much more manageable, and again it is deploying things too early that may erode trust in future systems. It’s very much a balancing point.

Christophe Q: Okay, maybe two additional questions, because time is running on and we are already over the time we wanted to spend, but one additional question to you: how do we overcome the challenge when the machine thinks the human is in control and the human thinks the machine is in control? What awareness model can help in design?

Bryan A: Yeah, that’s also a good question, and that’s why, if you look at the one figure I put up, it’s very much about the machine having a conceptually strong understanding of the human, the human having an understanding of the machine, and working with the driver management system to fill that intermediary gap. I don’t think the technology exists today to do a robust enough monitoring and management job in the car and in deployed systems to do that, but as the future cabin concepts illustrate, the sensing technology to have replicable and overlapping sensing characteristics, fused together in a model, to really do that is on the horizon. So it is very much about using multimodal sensor information from the human to feed into the automation system, and from the automation system to feed back to the human. If there is not trust between those two systems, it’s an illustration of a case where traditional driving may be the fallback mechanism we need to move to.

Christophe Q: Okay maybe I’ll ask questions to Philipp and he will tell us if he is able to answer or not. Could the AI like Nomi actually teach the driver to pay attention to the road? Help him or her develop good habits, and based on the result, enable or disable driving assistance?

Philipp A: Well, that’s a really good question and I really like it. I guess it’s a very philosophical question, isn’t it? Is it for the machine to teach us something? So probably yes, probably in the context of safe driving, but in terms of self-fulfilment as human beings, I don’t know if I want to be taught and educated by a machine, so I’ll probably leave that question to the philosophical department. But it’s very interesting to think about and to discuss.

Christophe: So I think we are at the end of this session. It’s been an interesting online conference on HMI, so I would like to thank all the panellists and also all the attendees, and we hope it has been valuable for you. We are considering doing another online conference on June 24th, which will be dedicated to Intelligent Vehicle Dynamics and Control, so if you are available and interested you can, of course, connect and register online with FISITA. So thank you to all of you and, I would say, take care. Thank you very much.

The transcripts for the other three parts of this Online Conference are available as follows:
