The answers below are from Professor Ye Zhuang, State Key Laboratory of Automotive Simulation and Control at Jilin University in China, who gave a presentation on Vehicle Drifting Dynamics and Control for Active Safety Under Extreme Manoeuvre.
Regarding the CACC and V2X presentation, congratulations. What are the next steps for traffic monitoring taking advantage of V2X?
Although this question was posed to Professor Zhang, I will also try to answer it from the perspective of vehicle control, assuming more intelligent traffic-monitoring hardware becomes available in the future. Once V2X is ready, the states of cars on the road could be better observed, even extreme states (large slip angle, yaw rate, etc.). That would make controlling cars under those extreme conditions possible and could improve globally optimal control of the traffic.
What if there is an oncoming vehicle while the car is drifting, since part of the vehicle has crossed the dividing line of the road?
Thanks for this good question. If there is a sudden disturbance on the road, a drifting car can still respond like a common vehicle: it would have to decide whether to keep tracking the drift-equilibrium states or to pursue a new state goal. That decision would depend on the specific situation of the drifting car and the surrounding traffic.
Can you give us more description of why you use mutual learning between conventional control and reinforcement learning for drifting control?
That is a good question too. In my presentation, the pros and cons of the two drifting control methods were discussed and compared. Coincidentally, the strength of each can help overcome the weakness of the other: the offline computation of the DP-based control helps overcome the online computational-efficiency problem of the sliding mode controller, while the theoretical stability boundary of the conventional control helps ensure the robustness of the model-based DP control method.
Each of the two methods has its own shortcomings, so combining them might be a good choice. A joint controller was already tried in the presentation, in a simple form, and it already demonstrated the benefit of the joint implementation. Therefore, we plan to develop the joint-controller idea further. Probabilistic dynamics will be included in the drifting dynamics and control, and more model-based reinforcement learning algorithms may be proposed and tested in the next step.
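To make the division of labour concrete, a joint controller of this kind can be sketched as an offline-computed feedforward term (standing in for the DP-derived equilibrium inputs) combined with a sliding-mode feedback correction computed online. The following is a minimal illustrative sketch, not the controller from the presentation: the equilibrium table, gains (`lam`, `k`, `phi`), and state values are all hypothetical placeholders.

```python
# Hypothetical joint drift controller sketch: an offline (DP-style)
# feedforward table supplies equilibrium inputs, and a sliding-mode
# feedback term corrects deviations from the drift equilibrium online.
# All numbers below are illustrative, not identified vehicle parameters.

# Offline stage: precomputed map from target sideslip angle beta_ref (rad)
# to equilibrium steering angle (rad) and rear drive force (N).
DRIFT_EQUILIBRIA = {
    -0.35: {"delta_ff": 0.18, "force_ff": 2400.0},
    -0.45: {"delta_ff": 0.22, "force_ff": 2900.0},
}

def sliding_mode_correction(beta, yaw_rate, beta_ref, yaw_ref,
                            lam=2.0, k=0.05, phi=0.05):
    """Online stage: first-order sliding surface on the tracking errors,
    with a saturated (boundary-layer) switching term to limit chattering."""
    s = (yaw_rate - yaw_ref) + lam * (beta - beta_ref)
    sat = max(-1.0, min(1.0, s / phi))  # smooth stand-in for sign(s)
    return -k * sat                     # steering correction (rad)

def joint_control(beta, yaw_rate, beta_ref, yaw_ref):
    """Feedforward from the offline table plus online sliding-mode feedback."""
    eq = DRIFT_EQUILIBRIA[beta_ref]
    delta = eq["delta_ff"] + sliding_mode_correction(
        beta, yaw_rate, beta_ref, yaw_ref)
    return delta, eq["force_ff"]

# Example call: vehicle slightly off the targeted drift equilibrium.
delta, force = joint_control(beta=-0.42, yaw_rate=1.1,
                             beta_ref=-0.35, yaw_ref=1.0)
```

The feedforward term carries the heavy computation offline, while the sliding-mode term gives a bounded, cheap online correction, which mirrors the complementarity discussed above.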
Related Links for IVDC Online Conference
IVDC Moderator Summary by Dr Mike Ma, Professor at Jilin University in China and VP Technical, FISITA