On specifying for trustworthiness
As autonomous systems (AS) increasingly become part of our daily lives, ensuring their trustworthiness is crucial. In order to demonstrate the trustworthiness of an AS, we first need to specify what is required for an AS to be considered trustworthy. This roadmap paper presents key challenges for specifying for trustworthiness in AS, as identified during the “Specifying for Trustworthiness” workshop held as part of the UK Research and Innovation (UKRI) Trustworthy Autonomous Systems (TAS) programme. We look across a range of AS domains, considering the resilience, trust, functionality, verifiability, security, and governance and regulation of AS, and identify some of the key specification challenges in these domains. We then highlight the intellectual challenges involved in specifying for trustworthiness in AS that cut across domains and are exacerbated by the inherent uncertainty of the environments in which AS must operate.
Funding
UKRI Trustworthy Autonomous Systems Node in Governance and Regulation
UK Research and Innovation
PLEAD: Provenance-driven and Legally-grounded Explanations for Automated Decisions
Engineering and Physical Sciences Research Council
History
Publication
Communications of the ACM, 2023, 67(1), pp. 98–109
Publisher
Association for Computing Machinery
Rights
© 2023 ACM. This document is the Accepted Manuscript version of a Published Work that appeared in final form in Communications of the ACM, copyright © Association for Computing Machinery, after peer review and technical editing by the publisher. To access the final edited and published work see https://doi.org/10.1145/3624699
Also affiliated with
- Lero, the Science Foundation Ireland Research Centre for Software
Sustainable development goals
- (4) Quality Education