EU Horizon 2020
[KZ23] Marta Kwiatkowska, Xiyue Zhang. When to Trust AI: Advances and Challenges for Certification of Neural Networks. In Proceedings of the 18th Conference on Computer Science and Intelligence Systems (FedCSIS 2023), to appear, 2023.
Abstract. Artificial intelligence (AI) has been advancing at a fast pace and is now poised for deployment in a wide range of applications, such as autonomous systems, medical diagnosis and natural language processing. Early adoption of AI technology for real-world applications has not been without problems, particularly for neural networks, which may be unstable and susceptible to adversarial examples. In the longer term, appropriate safety assurance techniques need to be developed to reduce potential harm due to avoidable system failures and to ensure trustworthiness. Focusing on certification and explainability, this paper provides an overview of techniques that have been developed to ensure the safety of AI decisions and discusses future challenges.
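The instability the abstract refers to is concrete: a perturbation too small for a human to notice can flip a network's prediction. As a minimal sketch of the phenomenon (not taken from the paper; it assumes a PyTorch classifier `model` over inputs scaled to [0, 1], and the function name and `epsilon` value are illustrative placeholders), the well-known fast gradient sign method perturbs an input in the direction that most increases the classification loss:

import torch
import torch.nn.functional as F

def fgsm_example(model, x, label, epsilon=0.03):
    # Track gradients on a detached copy of the input.
    x = x.clone().detach().requires_grad_(True)
    # Loss of the model's prediction against the true label.
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the sign of the input gradient: the direction that
    # most increases the loss per unit of l-infinity perturbation.
    x_adv = x + epsilon * x.grad.sign()
    # Keep the perturbed input in the valid pixel range [0, 1].
    return x_adv.clamp(0.0, 1.0).detach()

Certification techniques of the kind the paper surveys aim to prove the absence of such adversarial inputs within a given perturbation radius, rather than merely failing to find one.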