Incremental Bounded Model Checking of Artificial Neural Networks in CUDA

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Authors:
  • Luiz H Sena
  • Iury V Bessa
  • Mikhail R Gadelha
  • Lucas Cordeiro
  • Edjard Mota


Artificial Neural Networks (ANNs) are powerful computing systems employed in a wide range of applications due to their ability to generalize and to respond to unexpected inputs/patterns. However, implementations of ANNs in safety-critical systems may lead to failures that are hard to predict at design time, since ANNs are highly parallel and their parameters are hardly interpretable. Here we develop and evaluate a novel symbolic software verification framework based on incremental bounded model checking (BMC) to check for adversarial cases and coverage methods in multi-layer perceptrons (MLPs). In particular, we further develop the efficient SMT-based Context-Bounded Model Checker for Graphical Processing Units (ESBMC-GPU) to ensure the reliability of safety properties whose violation can cause safety-critical systems to make incorrect decisions, thereby leading to unwanted material damage or even putting lives in danger. This paper presents the first symbolic verification framework to reason over ANNs implemented in CUDA. Our experimental results show that our approach, implemented in ESBMC-GPU, can successfully verify safety properties and coverage methods in ANNs and correctly generate 28 adversarial cases in MLPs.

Bibliographical metadata

Original language: English
Title of host publication: IX Brazilian Symposium on Computing Systems Engineering
Publication status: Accepted/In press - 19 Sep 2019
Event: IX Brazilian Symposium on Computing Systems Engineering, Brazil
Event duration: 19 Nov 2019 - 22 Nov 2019


Conference: IX Brazilian Symposium on Computing Systems Engineering