An Effective Decision-Making Strategy Based on Multi-Objective Optimization for Commercial Vehicles in Highway Scenarios
Maneuver decision-making plays a critical role in high-performance intelligent driving. This paper proposes a risk-assessment-based decision-making network (RADMN) to address the driving-strategy problem for commercial vehicles. RADMN integrates two networks that identify the degree of collision and rollover risk and output decisions, ensuring an effective and reliable driving strategy. In the risk assessment module, the risk degrees of backward collision, forward collision, and rollover are quantified for hazard recognition. In the decision module, a deep reinforcement learning algorithm based on multi-objective optimization (DRL-MOO) is designed, which comprehensively considers the risk degree and the motion states of each traffic participant. To evaluate the performance of the proposed framework, PreScan/Simulink joint simulations were conducted in highway scenarios. The experimental results validate the effectiveness and reliability of the proposed RADMN. The output driving strategy guarantees safety and provides key technical support for realizing autonomous driving in commercial vehicles.
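To make the multi-objective idea concrete, the sketch below shows one common way a DRL reward can jointly weigh collision risk, rollover risk, and driving efficiency via weighted-sum scalarization. All names, weights, and risk functions here are illustrative assumptions for exposition, not the paper's actual DRL-MOO formulation.

```python
# Hypothetical multi-objective reward for a DRL driving agent.
# Safety terms (collision/rollover risk) and an efficiency term are
# scalarized with fixed weights; the thresholds are assumed values.

from dataclasses import dataclass


@dataclass
class VehicleState:
    speed: float          # m/s, ego longitudinal speed
    gap_forward: float    # m, distance to the forward vehicle
    gap_backward: float   # m, distance to the backward vehicle
    lateral_accel: float  # m/s^2, proxy for rollover risk


def risk_collision(gap: float, safe_gap: float = 30.0) -> float:
    """Risk rises linearly from 0 to 1 as the gap shrinks below safe_gap."""
    return max(0.0, min(1.0, (safe_gap - gap) / safe_gap))


def risk_rollover(lat_acc: float, limit: float = 3.0) -> float:
    """Risk rises toward 1 as lateral acceleration approaches the limit."""
    return max(0.0, min(1.0, abs(lat_acc) / limit))


def reward(state: VehicleState, target_speed: float = 22.0,
           w_safe: float = 0.6, w_eff: float = 0.4) -> float:
    """Weighted-sum scalarization of safety and efficiency objectives."""
    safety_penalty = (risk_collision(state.gap_forward)
                      + risk_collision(state.gap_backward)
                      + risk_rollover(state.lateral_accel)) / 3.0
    efficiency = 1.0 - abs(state.speed - target_speed) / target_speed
    return w_safe * (1.0 - safety_penalty) + w_eff * efficiency
```

A state with large gaps and low lateral acceleration scores higher than one with small gaps and near-limit lateral acceleration, so the agent is steered toward low-risk maneuvers while still being rewarded for holding the target speed.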