Verifiable Self-Aware Agent-Based Autonomous Systems

Research output: Contribution to journal › Article › peer-review


In this article, we describe an approach to autonomous system construction that supports not only self-awareness but also formal verification. The approach is based on modular construction, in which the key autonomous decision making is captured within a symbolically described “agent.” Thus, this article leads us from traditional system architectures, via agent-based computing, to explainability, reconfigurability, and verifiability, and on to applications in robotics, autonomous vehicles, and machine ethics. Fundamentally, we consider self-awareness from an agent-based perspective. Agents are an important abstraction capturing autonomy, and we are particularly concerned with intentional, or rational, agents that expose the “intentions” of the autonomous system. Beyond being a useful abstract concept, agents also provide a practical engineering approach for building the core software in autonomous systems such as robots and vehicles. In a modular autonomous system architecture, agents of this form capture the important decision-making elements. Furthermore, the ability to transparently capture such decision-making processes within an agent, and especially to expose their intentions, allows us to apply strong (formal) agent verification techniques to these systems.
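To make the idea of an intentional agent concrete, the following is a minimal, hypothetical sketch of a BDI-style rational agent whose intentions are explicitly represented and can be inspected from outside, the property that underpins both explainability and formal verification of the decision-making core. All names here (`RationalAgent`, `explain`, the plan-library format) are illustrative assumptions, not the agent framework used in the article.

```python
# Illustrative sketch only: a rational agent with explicit beliefs,
# intentions, and a plan library, exposing its intentions for inspection.

class RationalAgent:
    def __init__(self, plan_library):
        self.beliefs = set()              # facts the agent currently believes
        self.intentions = []              # goals the agent has committed to
        self.plan_library = plan_library  # goal -> (precondition, actions)

    def perceive(self, fact):
        # Update beliefs from perception.
        self.beliefs.add(fact)

    def adopt_goal(self, goal):
        # Commit to a goal only if the agent has a plan for it.
        if goal in self.plan_library:
            self.intentions.append(goal)

    def explain(self):
        # Expose current intentions: this transparency is what allows
        # an external verifier (or user) to inspect the decision making.
        return list(self.intentions)

    def act(self):
        # Execute the plan for the first intention whose precondition
        # (a believed fact) holds; otherwise do nothing.
        if not self.intentions:
            return []
        goal = self.intentions.pop(0)
        precondition, actions = self.plan_library[goal]
        return actions if precondition in self.beliefs else []
```

Because the intentions are a first-class, symbolically represented structure rather than implicit in opaque control code, a model checker can, in principle, explore all reachable belief/intention states and verify properties such as "the agent never acts on a goal whose precondition it does not believe."
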

Bibliographical metadata

Original language: Undefined
Journal: Proceedings of the IEEE
Early online date: 18 May 2020
Publication status: Published - Jul 2020