EU Horizon 2020
[Kwi19] Marta Kwiatkowska. Safety Verification for Deep Neural Networks with Provable Guarantees (Invited Paper). In 30th International Conference on Concurrency Theory (CONCUR'19), Dagstuhl Publishing. August 2019. [pdf] [bib]
Downloads: pdf (1.37 MB) | bib
Notes: Available from: https://drops.dagstuhl.de/opus/volltexte/2019/10903/
Abstract. Computing systems are becoming ever more complex and increasingly often incorporate deep learning components. Since deep learning is unstable with respect to adversarial perturbations, there is a need for rigorous software development methodologies that encompass machine learning. This paper describes progress in developing automated verification techniques for deep neural networks to ensure the safety and robustness of their decisions with respect to input perturbations. These include novel algorithms based on feature-guided search, games, global optimisation and Bayesian methods.
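For intuition about the local robustness property the abstract refers to, the following is a minimal, self-contained sketch, not the paper's verification algorithms: it randomly samples points in an L-infinity ball around an input to a toy network and looks for a change of predicted class. The network weights, function names and the radius eps are all invented for illustration. Note that sampling can only falsify robustness by finding a counterexample; the guaranteed methods the paper surveys (feature-guided search, games, global optimisation) aim to prove its absence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-layer network for the sketch: 4 inputs -> 8 hidden (ReLU) -> 3 classes.
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)

def predict(x):
    """Return the predicted class index for input x."""
    h = np.maximum(W1 @ x + b1, 0.0)  # ReLU hidden layer
    logits = W2 @ h + b2
    return int(np.argmax(logits))

def falsify_robustness(x0, eps, n_samples=10_000):
    """Search for an adversarial perturbation with ||delta||_inf <= eps.

    Returns a perturbed input that changes the predicted class if one is
    found, else None. This only *falsifies* robustness; proving it needs
    exact or guaranteed techniques of the kind the paper describes.
    """
    label = predict(x0)
    for _ in range(n_samples):
        delta = rng.uniform(-eps, eps, size=x0.shape)
        if predict(x0 + delta) != label:
            return x0 + delta
    return None

x0 = rng.normal(size=4)
cex = falsify_robustness(x0, eps=0.5)
print("counterexample found" if cex is not None else "no class flip in samples")
```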