Attacking Object Detection Models

Overview

Adversarial attacks affect not only image classification but also object detection. This project provides both global perturbation and patch-based adversarial attacks to evaluate the robustness of object detection models. Our framework integrates seamlessly with the widely used mmdet library, providing an accessible platform for researchers and developers.

Preparation

  • Dataset

    We use the coco2017 val split as the default dataset for evaluating the adversarial robustness of detection models. Please download the coco2017 dataset first. If you want to use your own dataset, please convert it to COCO style.

  • Detection Models

    To build an mmdet-style detection model, only two files are needed: the detector config file ending with .py and the detector weight file. You can download these files directly from mmdet, or train a detection model with mmdet to obtain them. Before starting, please make sure that every data_root attribute in the config file is correct and that the ann_file attribute of the test_evaluator points to the right annotation file (see the sketch below).
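    For orientation, the dataset-related fields of a typical mmdet 3.x config look roughly like the following sketch. The exact keys and paths depend on your own config and dataset layout, so treat this as an illustration rather than a required template.

    data_root = 'data/coco/'  # root directory of your COCO-style dataset
    test_dataloader = dict(
        dataset=dict(
            data_root=data_root,
            ann_file='annotations/instances_val2017.json',
            data_prefix=dict(img='val2017/')))
    test_evaluator = dict(
        type='CocoMetric',
        ann_file=data_root + 'annotations/instances_val2017.json')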

Global Perturbation Attack

Before starting, modify the detector attribute in the config file configs/global_demo.py as follows:

detector = dict(cfg_file='/path_to_your_cfg_file/xxx.py',
                weight_file='/path_to_your_weight_file/xxx.pth')

Then, run the following commands to start the attack:

cd detection
CUDA_VISIBLE_DEVICES=0 python run.py --cfg configs/global_demo.py

For more attack settings, please refer to configs/global/base.py. You can override them in global_demo.py as needed. Currently, the FGSM, BIM, MIM, TIM, DI_FGSM, SI_NI_FGSM, VMI_FGSM, and PGD attack methods are supported for the global perturbation attack.
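Overriding base settings follows the standard mmengine config-inheritance pattern, sketched below under the assumption that global_demo.py inherits from configs/global/base.py via _base_. The attack key and its fields are hypothetical placeholders; check configs/global/base.py for the actual names.

# configs/global_demo.py (sketch; the 'attack' key and its fields are placeholders)
_base_ = ['./global/base.py']

detector = dict(cfg_file='/path_to_your_cfg_file/xxx.py',
                weight_file='/path_to_your_weight_file/xxx.pth')

# Hypothetical override of attack hyper-parameters defined in configs/global/base.py.
attack = dict(type='PGD', eps=8 / 255)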

Patch Attack

Before starting, modify the detector attribute in the attack config file configs/patch_demo.py as follows:

detector = dict(cfg_file='/path_to_your_cfg_file/xxx.py',
                weight_file='/path_to_your_weight_file/xxx.pth')

Then, run the following commands to start the attack:

cd detection
CUDA_VISIBLE_DEVICES=0 python run.py --cfg configs/patch_demo.py

For more attack settings, please refer to configs/patch/base.py. You can override them in patch_demo.py as needed, following the same _base_-inheritance pattern sketched above for the global attack.

Evaluating non-mmdet detectors

If you want to evaluate non-mmdet detection models, we suggest the following steps:

  • Convert your dataset to COCO style.

  • Generate an mmdet-style config file containing a test_dataloader, a train_dataloader (if needed), and a test_evaluator.

  • Modify your detection model code. Specifically, you need to add a data_preprocessor, a loss function, and a predict function. See ares.attack.detection.custom.detector.CustomDetector for details; a rough sketch of such a wrapper is given after this list.

  • Replace detector = MODELS.build(detector_cfg.model) in the run.py file with your detector initialization code.
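The following is a minimal sketch of the interface described above, not the library's actual CustomDetector. The wrapped model and its compute_losses/infer methods are hypothetical stand-ins for your own code; ares.attack.detection.custom.detector.CustomDetector remains the authoritative reference.

import torch.nn as nn

class MyDetectorWrapper(nn.Module):
    """Hypothetical wrapper exposing an mmdet-style interface for a non-mmdet detector."""

    def __init__(self, raw_model, data_preprocessor):
        super().__init__()
        self.model = raw_model                      # your original detection model
        self.data_preprocessor = data_preprocessor  # normalizes/pads images and collates batches

    def loss(self, batch_inputs, batch_data_samples):
        # Return a dict of differentiable losses; the attack back-propagates through them.
        cls_loss, box_loss = self.model.compute_losses(batch_inputs, batch_data_samples)
        return dict(loss_cls=cls_loss, loss_bbox=box_loss)

    def predict(self, batch_inputs, batch_data_samples):
        # Return per-image predictions (boxes, labels, scores) used by the COCO evaluator.
        return self.model.infer(batch_inputs, batch_data_samples)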

Note

For the patch attack, only one patch per object is currently supported.