Safe Multi-Agent Reinforcement Learning: Towards the Engineering of Safe Robotic Teams
Poster posted on 2019-11-19, 15:36, authored by Joshua Riley
Multi-agent systems are collections of agents working in shared environments, often towards shared goals while operating within limited resources. These systems have wide-ranging applications and are often regarded as the future of industrial automation; however, an open issue is ensuring a degree of trustworthiness, so that human counterparts can be confident that these systems and their individual agents will adhere to expected behaviours even when problems occur. The need for "safety" in these systems, which the literature often defines in a post hoc fashion, is at its most crucial in sensitive operations such as military applications and search and rescue. The current state of safety in agents, learning or otherwise, shows much promise through quantitative analysis methods, which provide a statistical foundation for how likely safety standards are to be adhered to. In multi-agent systems, a large body of literature is dedicated to Petri-net modelling and to using these models to constrain agent behaviour; however, Petri nets require expertise to design, and their analysis for safety remains an open question. This project looks further into the use of Petri nets to model multi-agent systems and to constrain "unsafe" behaviour, both while agents learn to optimise their behaviours and after that learning has concluded. It aims to do this by increasing the accessibility of Petri nets for modelling robot teams, and by further investigating ways to analyse these Petri-net models so as to deliver a high degree of trustworthiness.
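To illustrate the idea of a Petri-net model constraining agent behaviour, the following is a minimal sketch (not taken from the project itself): a tiny place/transition net acts as a runtime safety monitor, and an agent action is permitted only when its corresponding transition is enabled by the current marking. All names here (`PetriNet`, the places, the transitions) are hypothetical and chosen only for this example.

```python
# Hypothetical sketch: a Petri net as a safety monitor for a robot team.
# Places hold token counts (the marking); a transition may fire only when
# every input place holds at least the required number of tokens.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)      # place name -> token count
        self.transitions = {}             # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        # inputs/outputs: dicts mapping place name -> arc weight
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= w for p, w in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p, w in inputs.items():
            self.marking[p] -= w
        for p, w in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + w

# Example safety property: two robots share one charging bay, and docking
# is only "safe" while the bay is free.  The single bay_free token encodes
# the mutual-exclusion constraint.
net = PetriNet({"bay_free": 1, "r1_idle": 1, "r2_idle": 1})
net.add_transition("r1_dock", {"bay_free": 1, "r1_idle": 1}, {"r1_docked": 1})
net.add_transition("r2_dock", {"bay_free": 1, "r2_idle": 1}, {"r2_docked": 1})

net.fire("r1_dock")             # robot 1 docks, consuming the bay token
print(net.enabled("r2_dock"))   # False: the monitor vetoes robot 2 docking
```

A learning agent wrapped by such a monitor would have any action whose transition is disabled masked out or vetoed, which is one way a Petri-net model can restrict unsafe behaviour both during and after learning.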