Francis X. Govers III develops autonomous vehicles for Bell Helicopter. Previously, he served as Chief Robotics Officer for Gamma 2 Robotics; Chief Engineer of Elbit Systems Land Solutions; Special Missions Manager for Airship Ventures; Lead Engineer for Command and Control of the International Space Station; and Deputy Chief Engineer of the US Army Future Combat Systems program. He has also participated in the DARPA Grand Challenge and the DARPA EATR project, and has authored over 40 articles on robotics and technology.
Abstract
Unmanned vehicles, drones, self-driving cars, and other advanced autonomous vehicles are being announced on an almost daily basis. Uber is working on flying taxis, every car company has a self-driving car in the works, and drones are the hottest Christmas toy for people of all ages. At the core of these autonomous vehicles are systems based on advanced artificial intelligence, including artificial neural networks (ANNs), machine learning, probabilistic reasoning, and Monte Carlo models that support complex decision making. One of the common concerns about autonomous vehicles, whether flying or driving, is safety. Safety testing is usually based on deterministic behavior: the aircraft, car, or boat, when faced with the same situation, behaves the same way every time. But what happens when the vehicle learns from its environment, just as humans do? It may then behave differently each time based on experience. How, then, can we predict and evaluate in advance how safe an autonomous system will be? This paper presents two complementary approaches to this problem. One is a stochastic model for predicting how an autonomous system might behave as it learns over time, providing a range of behavioral responses to be used as a risk assessment tool. The other is a set of methods and standards for writing test procedures for such vehicles.
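The abstract does not spell out the stochastic model, but the general idea of Monte Carlo risk assessment for a learning system can be sketched as follows. This is a minimal illustrative example, not the paper's actual model: the behavioral response function, the nominal stopping distance, the safety threshold, and the assumption that variability shrinks with experience are all hypothetical placeholders.

```python
import random
import statistics

def simulate_learned_response(experience_level, rng):
    """Hypothetical behavior model: a learned stopping-distance response
    whose variability shrinks as the vehicle accumulates experience."""
    nominal = 30.0  # assumed nominal stopping distance in metres
    spread = 10.0 / (1.0 + experience_level)  # learning reduces variance
    return rng.gauss(nominal, spread)

def monte_carlo_risk(experience_level, threshold=45.0, trials=10_000, seed=42):
    """Estimate the distribution of a learned behavior and the probability
    that it exceeds a safety threshold, by sampling many simulated trials."""
    rng = random.Random(seed)
    samples = [simulate_learned_response(experience_level, rng)
               for _ in range(trials)]
    p_exceed = sum(1 for s in samples if s > threshold) / trials
    return statistics.mean(samples), statistics.stdev(samples), p_exceed

for exp in (0, 5, 50):
    mean, sd, p_exceed = monte_carlo_risk(exp)
    print(f"experience={exp:3d}  mean={mean:5.1f} m  "
          f"sd={sd:5.2f}  P(>45 m)={p_exceed:.4f}")
```

Running the sketch yields a range of behavioral responses at each experience level rather than a single deterministic answer, which is the kind of output a risk assessment tool for a learning system would need.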