
Generation and Mitigation of Precision-Based Evasion Attacks in Neural Network Verification

Stefano Demarchi, Luca Oneto, Davide Anguita and Armando Tacchella

Submitted for publication in ICLP 2025


Installation and setup

Clone this repository using the --recursive option:

git clone --recursive https://github.com/nevertools/ICLP_2025.git
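If you already cloned the repository without --recursive, you can fetch the submodules afterwards:

# Fetch any submodules missed by a plain clone
git submodule update --init --recursive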

To reproduce the experiments presented in the paper you will need several tools with little compatibility between one another, so you will need to create the following separate environments (the examples below use conda):

1. Benchmark generation

To recreate the benchmarks presented in the paper you will need MATLAB and Python 3.11. The same environment will be used for verification with NeVer2 and Marabou.

You can get a MATLAB academic license from MathWorks and create the Python environment with the following commands (assuming conda is installed):

conda create -n iclp_base python=3.11
conda activate iclp_base

pip install pynever maraboupy
conda deactivate
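As an optional sanity check (a sketch; the import names match the pip packages installed above), confirm that both packages load:

conda activate iclp_base
python -c "import pynever, maraboupy"
conda deactivate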

2. Benchmark verification

To test the benchmarks with the verification tools you will need both the tool sources and the corresponding environments. The scripts in the Experiments directory assume that the verifier sources are checked out in the same parent directory as this repository.

  • alpha-beta-CROWN
# Get the sources in the same root
cd ..
git clone --recursive https://github.com/Verified-Intelligence/alpha-beta-CROWN.git

cd alpha-beta-CROWN
conda env create -f complete_verifier/environment.yaml --name alpha-beta-crown
  • pyRAT
# Get the sources in the same root
cd ..
git clone https://git.frama-c.com/pub/pyrat.git

cd pyrat
conda env create -f pyrat_env.yml
  • NeVer2
# Get the sources in the same root
cd ..
git clone https://github.com/NeverTools/pyNever.git

3. Mitigation

To test the mitigation using intervals you will need the interval-based verifier included in this repository, along with one last environment:

conda create -n iclp_iv python=3.7
conda activate iclp_iv

pip install pyinterval
conda deactivate
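As a quick check that pyinterval works in this environment (a sketch; note that the package installs under the module name interval):

conda activate iclp_iv
python -c "from interval import interval; print(interval[1, 2] + interval[3, 4])"
conda deactivate

This should print interval([4.0, 6.0]).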

Usage

This repository contains all the code needed to reproduce the experiments of the paper. First you generate the attacks, then you run verification on the resulting instances.

1. Benchmark generation

The vulnerable points that are unsafe under a low-precision implementation are computed by the MATLAB scripts in the MATLAB directory. Once the MATLAB scripts have saved their data, a Python script builds the ONNX networks and VNNLIB specifications. The bash script generate_experiments.sh runs all the necessary code1, as sketched below.
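A typical invocation, assuming the script sits at the repository root and MATLAB is on your PATH (both assumptions; adjust the path if the script lives elsewhere):

# Run the full generation pipeline (MATLAB + Python);
# see footnote 1 if the script fails on conda activate
./generate_experiments.sh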

2. Experimental evaluation

The verification tools can be evaluated on the benchmarks with the bash script test_verifies.sh. For linear models, the preliminary version of the interval-based verifier that mitigates the attack can be run with the test_intver.sh script, as sketched below.
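A minimal sketch, assuming both scripts sit next to generate_experiments.sh and the verifier sources are in the parent directory as described above:

# Evaluate all verification tools on the generated benchmarks
./test_verifies.sh

# Run the preliminary interval-based mitigation on the linear models
./test_intver.sh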

Footnotes

  1. Please note that all bash scripts will likely fail due to a known conda issue that prevents conda activate from working inside non-interactive bash scripts.
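A common workaround is to source conda's shell hook at the top of each script before any conda activate call (a sketch; the hook path is standard for conda installs, but verify it matches yours):

# Enable `conda activate` in non-interactive bash scripts
source "$(conda info --base)/etc/profile.d/conda.sh"
conda activate iclp_base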
