Attacking Image Classification Models
=====================================

Overview
--------
This repository contains code for evaluating the adversarial robustness of image classification models. The project provides 19 adversarial attacks (e.g., PGD, FGSM) and 65 robust models.

Preparation
-----------
- Dataset

  We use the ``ImageNet Validation Set`` as the default dataset for evaluating the adversarial robustness of classification models. Please download the `ImageNet `_ dataset first. If you want to use your own dataset, define its ``torch.utils.data.Dataset`` class and the corresponding ``transform``.
- Classification Models

  To build an image classification model, you can create a model class from the `timm `_ library, or define a custom network as a subclass of ``torch.nn.Module``.
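As a minimal sketch of the custom-dataset option, the class below wraps pre-loaded image tensors and labels; the class name and data layout are illustrative, not part of ares:

```python
import torch
from torch.utils.data import Dataset


class MyImageDataset(Dataset):
    """Illustrative custom dataset: pre-loaded CHW image tensors and integer labels."""

    def __init__(self, images, labels, transform=None):
        self.images = images        # list of CHW float tensors
        self.labels = labels        # list of integer class labels
        self.transform = transform  # optional callable applied per image

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        img = self.images[idx]
        if self.transform is not None:
            img = self.transform(img)
        return img, self.labels[idx]


# Example usage with dummy data
images = [torch.rand(3, 224, 224) for _ in range(4)]
labels = [0, 1, 2, 3]
ds = MyImageDataset(images, labels)
```

A ``torch.utils.data.DataLoader`` can then batch this dataset exactly like the default ImageNet one.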

Adversarial Attack
--------------------
Before starting, modify the corresponding parameters in ``attack_configs.py`` if needed; the configs are loaded automatically by the attack script.
Then run the following command:

.. code-block:: bash

   cd classification
   python run_attack.py --gpu 0 --crop_pct 0.875 --input_size 224 --interpolation 'bilinear' --data_dir DATA_PATH --label_file LABEL_PATH --batchsize 20 --num_workers 16 --model_name 'resnet50_at' --attack_name 'pgd'

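The exact contents of ``attack_configs.py`` are not shown here; as an illustrative sketch only, an entry for PGD might hold hyperparameters such as the perturbation budget and step count (all names and values below are assumptions, not the file's actual structure):

```python
# Hypothetical sketch of one attack's config; the real attack_configs.py may differ.
pgd_config = {
    'norm': 'linf',       # perturbation norm constraint
    'eps': 8 / 255,       # maximum per-pixel perturbation
    'stepsize': 2 / 255,  # per-iteration step size
    'steps': 20,          # number of attack iterations
}
```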
All adversarial attacks can be accessed through the ``Registry`` class as follows:

.. code-block:: python

   from ares.utils.registry import registry

   attacker_cls = registry.get_attack(attack_name)
   attacker = attacker_cls(model)

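Conceptually, such a registry is a name-to-class lookup populated at import time. The minimal sketch below illustrates the pattern only; it is not ares' actual implementation, and the ``PGD`` stub is a placeholder:

```python
# Minimal registry-pattern sketch (illustrative, not ares' real code).
class Registry:
    def __init__(self):
        self._attacks = {}

    def register_attack(self, name):
        """Decorator that records a class under the given name."""
        def deco(cls):
            self._attacks[name] = cls
            return cls
        return deco

    def get_attack(self, name):
        return self._attacks[name]


registry = Registry()


@registry.register_attack('pgd')
class PGD:
    """Placeholder attack class; a real one would implement the attack loop."""
    def __init__(self, model):
        self.model = model


# Lookup mirrors the usage shown above.
attacker_cls = registry.get_attack('pgd')
attacker = attacker_cls(model=None)
```

This design lets new attacks be added without touching the run script: registering a class under a name makes it selectable via ``--attack_name``.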
We also provide a model zoo of robust models on ImageNet and CIFAR-10. Taking an ImageNet model as an example, it can be loaded as follows:

.. code-block:: python

   from ares.utils.registry import registry

   model_cls = registry.get_model('ImageNetCLS')
   model = model_cls(model_name)