ares.attack.detection package

ares.attack.detection.base module

class ares.attack.detection.trainer.Trainer(cfg, model, train_dataloader, test_dataloader, evaluator, logger)[source]

Bases: object

Base trainer class.

Parameters:
  • cfg (mmengine.config.ConfigDict) – Attack config dict.

  • model (torch.nn.Module) – Model to be trained or evaluated.

  • train_dataloader (torch.utils.data.DataLoader) – Dataloader for training.

  • test_dataloader (torch.utils.data.DataLoader) – Dataloader for testing.

  • evaluator (class) – Evaluator to evaluate detection performance.

  • logger (logging.Logger) – Logger to record information.

__init__(cfg, model, train_dataloader, test_dataloader, evaluator, logger)[source]
after_epoch()[source]

Do something after each training epoch.

after_train()[source]

Do something after finishing training.

before_epoch()[source]

Do something before each training epoch.

before_eval()[source]

Do something before evaluating.

before_start()[source]

Initialization before starting training or evaluating.

before_train()[source]

Automatically scale learning rate, build optimizer and lr_scheduler before training.

eval(eval_on_clean=False)[source]

Evaluate detection performance.

eval_clean()[source]

Evaluate detection performance on clean data.

run_epoch()[source]

Train for one epoch.

scale_lr()[source]

Automatically scale the learning rate based on the base batch size and the real batch size.

train()[source]

Train model.
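
A minimal usage sketch: cfg, model, the dataloaders, and the evaluator are placeholders assumed to be built elsewhere from your own attack config; only the Trainer calls are from this API.

>>> import logging
>>> logger = logging.getLogger('ares')
>>> trainer = Trainer(cfg, model, train_dataloader, test_dataloader, evaluator, logger)
>>> trainer.train()                   # run the full training loop
>>> trainer.eval(eval_on_clean=True)  # then evaluate on clean data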

class ares.attack.detection.attacker.UniversalAttacker(cfg, detector, logger, device=device(type='cuda', index=0))[source]

Bases: Module

This class supports both global perturbation attacks and patch attacks.

Parameters:
  • cfg (mmengine.config.ConfigDict) – Configs for adversarial attack.

  • detector (torch.nn.Module) – Detector to be attacked.

  • logger (logging.Logger) – Logger to record logs.

  • device (torch.device) – Device on which the attack runs. Default: torch.device(type='cuda', index=0).

__init__(cfg, detector, logger, device=device(type='cuda', index=0))[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

bbox_predict(batch_data, need_preprocess=True, return_images=False)[source]
Parameters:
  • batch_data (dict) – A dict containing inputs and data_samples entries. See self.forward() for details.

  • need_preprocess (bool) – Whether to preprocess batch_data.

  • return_images (bool) – Whether to return input images.

Returns:

If return_images is False, returns preds, a list of mmdet.structure.DetDataSample containing a pred_instances attribute. Otherwise, returns a tuple (preds, images), where images is batch_data[‘inputs’], a torch.Tensor with shape [N, C, H, W].

Return type:

list or tuple

eval()[source]

Set self to eval mode.

filter_loss(losses)[source]

Collect losses not in self.cfg.loss_fn.excluded_losses.

forward(batch_data, return_adv_images_only=False)[source]
Parameters:
  • batch_data (dict) – Input batch data. Example: {‘inputs’: torch.Tensor with shape [N, C, H, W], ‘data_samples’: list of mmdet.structures.det_data_sample.DetDataSample with length N}.

  • return_adv_images_only (bool) – Whether to return only the adversarial images, without bbox prediction. Default: False.

Returns:

A dict that may contain the keys losses and adv_images.
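
A minimal call sketch, assuming attacker is a constructed UniversalAttacker and images / data_samples come from a dataloader batch (placeholder names):

>>> batch_data = {'inputs': images, 'data_samples': data_samples}
>>> outputs = attacker(batch_data)  # dict; may contain losses and adv_images
>>> adv_images = attacker(batch_data, return_adv_images_only=True)['adv_images']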

freeze_layers(modules)[source]

Freeze the given modules by setting their requires_grad attribute to False.

global_forward(batch_data, return_adv_images_only=False)[source]

For global perturbation attack.

init_for_global_attack()[source]

Initialize attack method for global attack.

init_for_patch_attack()[source]

Initialize adversarial patch, patch applier and attacked labels for patch attack.

init_patch(init_mode='gray')[source]

Initialize adversarial patch with given init_mode.

load_detector_weight()[source]

Load detector weights from file.

load_patch(patch_path)[source]

Initialize the patch from the given patch_path.

patch_forward(batch_data, return_adv_images_only=False)[source]

For patch attack.

save_patch(epoch=None, is_best=False)[source]

Save adversarial patch to file.

set_gt_ann_empty(data_samples)[source]

Set gt bboxes and gt labels to zero tensors for the object_vanish_only goal.

train(mode: bool = True)[source]

Set self to training mode.

ares.attack.detection.custom module

class ares.attack.detection.custom.coco_dataset.CocoDataset(kept_classes=(), **kwargs)[source]

Bases: BaseDetDataset

Dataset for COCO. This class is similar to mmdet.datasets.CocoDataset; differently, it adds a kept_classes attribute used to filter out samples of other classes.
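
A minimal construction sketch. kept_classes is documented here; ann_file and data_prefix are standard mmdet dataset kwargs, and the paths are placeholders:

>>> dataset = CocoDataset(kept_classes=('person', 'car'),
>>>                       ann_file='annotations/instances_val2017.json',
>>>                       data_prefix=dict(img='val2017/'))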

ANN_ID_UNIQUE = True
COCOAPI

alias of COCO

METAINFO: dict = {'classes': ('person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush'), 'palette': [(220, 20, 60), (119, 11, 32), (0, 0, 142), (0, 0, 230), (106, 0, 228), (0, 60, 100), (0, 80, 100), (0, 0, 70), (0, 0, 192), (250, 170, 30), (100, 170, 30), (220, 220, 0), (175, 116, 175), (250, 0, 30), (165, 42, 42), (255, 77, 255), (0, 226, 252), (182, 182, 255), (0, 82, 0), (120, 166, 157), (110, 76, 0), (174, 57, 255), (199, 100, 0), (72, 0, 118), (255, 179, 240), (0, 125, 92), (209, 0, 151), (188, 208, 182), (0, 220, 176), (255, 99, 164), (92, 0, 73), (133, 129, 255), (78, 180, 255), (0, 228, 0), (174, 255, 243), (45, 89, 255), (134, 134, 103), (145, 148, 174), (255, 208, 186), (197, 226, 255), (171, 134, 1), (109, 63, 54), (207, 138, 255), (151, 0, 95), (9, 80, 61), (84, 105, 51), (74, 65, 105), (166, 196, 102), (208, 195, 210), (255, 109, 65), (0, 143, 149), (179, 0, 194), (209, 99, 106), (5, 121, 0), (227, 255, 205), (147, 186, 208), (153, 69, 1), (3, 95, 161), (163, 255, 0), (119, 0, 170), (0, 182, 199), (0, 165, 120), (183, 130, 88), (95, 32, 0), (130, 114, 135), (110, 129, 133), (166, 74, 118), (219, 142, 185), (79, 210, 114), (178, 90, 62), (65, 70, 15), (127, 167, 115), (59, 105, 106), (142, 108, 45), (196, 172, 0), (95, 54, 80), (128, 76, 255), (201, 57, 1), (246, 0, 122), (191, 162, 208)]}
__init__(kept_classes=(), **kwargs)[source]
filter_data() List[dict][source]

Filter annotations according to filter_cfg.

Returns:

Filtered results.

Return type:

List[dict]

load_data_list() List[dict][source]

Load annotations from the annotation file self.ann_file.

Returns:

A list of annotations.

Return type:

List[dict]

parse_data_info(raw_data_info: dict) Union[dict, List[dict]][source]

Parse raw annotation to target format.

Parameters:

raw_data_info (dict) – Raw data information loaded from ann_file.

Returns:

Parsed annotation.

Return type:

Union[dict, List[dict]]

class ares.attack.detection.custom.coco_metric.CocoMetric(ann_file: Optional[str] = None, metric: Union[str, List[str]] = 'bbox', classwise: bool = False, specified_classes: Sequence[str] = (), proposal_nums: Sequence[int] = (100, 300, 1000), iou_thrs: Optional[Union[float, Sequence[float]]] = None, metric_items: Optional[Sequence[str]] = None, format_only: bool = False, outfile_prefix: Optional[str] = None, file_client_args: Optional[dict] = None, backend_args: Optional[dict] = None, collect_device: str = 'cpu', prefix: Optional[str] = None, sort_categories: bool = False)[source]

Bases: CocoMetric

Custom COCO evaluation metric. This class is similar to mmdet.evaluation.metrics.CocoMetric; differently, it supports calculating metrics only for specified classes.

Evaluate AR, AP, and mAP for detection tasks including proposal/box detection and instance segmentation. Please refer to https://cocodataset.org/#detection-eval for more details.

Parameters:
  • ann_file (str, optional) – Path to the coco format annotation file. If not specified, ground truth annotations from the dataset will be converted to coco format. Defaults to None.

  • metric (str | List[str]) – Metrics to be evaluated. Valid metrics include ‘bbox’, ‘segm’, ‘proposal’, and ‘proposal_fast’. Defaults to ‘bbox’.

  • classwise (bool) – Whether to evaluate the metric class-wise. Defaults to False.

  • specified_classes (Sequence[str]) – A tuple specifying the classes to be evaluated. Defaults to ().

  • proposal_nums (Sequence[int]) – Numbers of proposals to be evaluated. Defaults to (100, 300, 1000).

  • iou_thrs (float | List[float], optional) – IoU threshold to compute AP and AR. If not specified, IoUs from 0.5 to 0.95 will be used. Defaults to None.

  • metric_items (List[str], optional) – Metric result names to be recorded in the evaluation result. Defaults to None.

  • format_only (bool) – Format the output results without performing evaluation. This is useful when you want to format the results to a specific format and submit them to the test server. Defaults to False.

  • outfile_prefix (str, optional) – The prefix of json files. It includes the file path and the prefix of filename, e.g., “a/b/prefix”. If not specified, a temp file will be created. Defaults to None.

  • file_client_args (dict, optional) – Arguments to instantiate the corresponding backend in mmdet <= 3.0.0rc6. Defaults to None.

  • backend_args (dict, optional) – Arguments to instantiate the corresponding backend. Defaults to None.

  • collect_device (str) – Device name used for collecting results from different ranks during distributed training. Must be ‘cpu’ or ‘gpu’. Defaults to ‘cpu’.

  • prefix (str, optional) – The prefix that will be added in the metric names to disambiguate homonymous metrics of different evaluators. If prefix is not provided in the argument, self.default_prefix will be used instead. Defaults to None.

  • sort_categories (bool) – Whether to sort categories in annotations. Only used for Objects365V1Dataset. Defaults to False.
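
A minimal construction sketch using only parameters documented above (the annotation path is a placeholder):

>>> metric = CocoMetric(ann_file='annotations/instances_val2017.json',
>>>                     metric='bbox',
>>>                     specified_classes=('person',))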

__init__(ann_file: Optional[str] = None, metric: Union[str, List[str]] = 'bbox', classwise: bool = False, specified_classes: Sequence[str] = (), proposal_nums: Sequence[int] = (100, 300, 1000), iou_thrs: Optional[Union[float, Sequence[float]]] = None, metric_items: Optional[Sequence[str]] = None, format_only: bool = False, outfile_prefix: Optional[str] = None, file_client_args: Optional[dict] = None, backend_args: Optional[dict] = None, collect_device: str = 'cpu', prefix: Optional[str] = None, sort_categories: bool = False) None[source]
compute_metrics(results: list) Dict[str, float][source]

Compute the metrics from processed results.

Parameters:

results (list) – The processed results of each batch.

Returns:

The computed metrics. The keys are the names of the metrics, and the values are corresponding results.

Return type:

Dict[str, float]

default_prefix: Optional[str] = 'coco'
summarize(coco_eval, logger=None, show_results=True)[source]

Compute and display summary metrics for evaluation results. Note that this function can only be applied with the default parameter setting.

class ares.attack.detection.custom.detector.CustomDetector(detector, mean, std, *args, **kwargs)[source]

Bases: Module

__init__(detector, mean, std, *args, **kwargs)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

loss(batch_data)[source]

Loss function used to compute detection loss.

Parameters:

batch_data (dict) – Dict with two keys: inputs and data_samples. inputs holds the input images (a batched image tensor or a list of image tensors), and data_samples is a list of per-sample annotations. We use mmdet.structure.DetDataSample to represent sample annotations, so please make sure your sample annotations use mmdet.structure.DetDataSample. To obtain batch_data, you may need to define a collate_fn and pass it to your dataloader (torch.utils.data.DataLoader); a minimal sketch follows this method's documentation.

Returns:

A dict with loss names as keys, e.g., {‘loss_bboxes’:…, ‘loss_cls’:…}

Return type:

dict
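
A minimal collate_fn sketch, as mentioned in the batch_data description. It assumes each dataset item is a dict holding an image tensor under ‘inputs’ and a DetDataSample under ‘data_samples’; these item keys are illustrative assumptions:

>>> import torch
>>> def collate_fn(items):
>>>     # Stack per-sample image tensors into one [N, C, H, W] batch and
>>>     # gather per-sample DetDataSample annotations into a list.
>>>     return {'inputs': torch.stack([item['inputs'] for item in items]),
>>>             'data_samples': [item['data_samples'] for item in items]}
>>> loader = torch.utils.data.DataLoader(dataset, batch_size=2, collate_fn=collate_fn)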

predict(batch_data)[source]

Predict function used to predict bboxes on input images.

Parameters:

batch_data (dict) – Dict with two keys: inputs and data_samples. inputs holds the input images (a batched image tensor or a list of image tensors), and data_samples is a list of per-sample annotations. We use mmdet.structure.DetDataSample to represent sample annotations, so please make sure your sample annotations use mmdet.structure.DetDataSample. To obtain batch_data, you may need to define a collate_fn and pass it to your dataloader (torch.utils.data.DataLoader), as sketched under loss() above.

Returns:

List of mmdet.structure.DetDataSample, where each element has an added pred_instances attribute. Like gt_instances, pred_instances should have the keys bboxes and labels. The predicted bbox coordinates should be scaled to the original image size.

Return type:

list

class ares.attack.detection.custom.detector.DataPreprocessor(mean, std, *args, **kwargs)[source]

Bases: Module

If you have finished all preprocessing of your input images and annotations in your dataloader, batch_data[‘inputs’] should be batched with shape [N, C, H, W] and batch_data[‘data_samples’] should be a list with length N. If so, just return batch_data directly in the forward function. If not, implement your preprocessing here. The mean and std are required: they are used to denormalize image tensors as the input of the adversarial attack method. If you do not normalize input images in your detection pipeline, just fill mean with [0, 0, 0] and std with [1, 1, 1].

Parameters:
  • mean (list or torch.Tensor) – Image mean used to normalize images. Length: 3.

  • std (list or torch.Tensor) – Image std used to normalize images. Length: 3.

__init__(mean, std, *args, **kwargs)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(batch_data)[source]

Just return the input if batch_data[‘inputs’] is a batched image tensor.
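
A minimal usage sketch; the mean/std values below are the common ImageNet statistics used by many mmdet configs and serve only as an example:

>>> preprocessor = DataPreprocessor(mean=[123.675, 116.28, 103.53],
>>>                                 std=[58.395, 57.12, 57.375])
>>> batch_data = preprocessor(batch_data)  # returned unchanged if already batched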

class ares.attack.detection.custom.lr_scheduler.ALRS(optimizer, loss_threshold=0.0001, loss_ratio_threshold=0.0001, decay_rate=0.97, patience=10, last_epoch=- 1, verbose=False)[source]

Bases: object

Reference: Bootstrap Generalization Ability from Loss Landscape Perspective.

__init__(optimizer, loss_threshold=0.0001, loss_ratio_threshold=0.0001, decay_rate=0.97, patience=10, last_epoch=- 1, verbose=False)[source]
step(loss, epoch=None)[source]
update_lr(loss)[source]
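
A minimal usage sketch with a toy patch tensor; train_one_epoch is a hypothetical function standing in for your training step:

>>> import torch
>>> patch = torch.rand(3, 300, 300, requires_grad=True)  # toy leaf tensor to optimize
>>> optimizer = torch.optim.Adam([patch], lr=0.03)
>>> scheduler = ALRS(optimizer, decay_rate=0.97, patience=10)
>>> for epoch in range(100):
>>>     loss = train_one_epoch()  # hypothetical; returns the epoch loss
>>>     scheduler.step(loss, epoch)
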
class ares.attack.detection.custom.lr_scheduler.CosineLR(optimizer, T_max, eta_min=0, last_epoch=- 1, verbose=False)[source]

Bases: CosineAnnealingLR

See torch.optim.lr_scheduler.CosineAnnealingLR for details.

__init__(optimizer, T_max, eta_min=0, last_epoch=- 1, verbose=False)[source]
step(epoch=None, **kwargs) None[source]
class ares.attack.detection.custom.lr_scheduler.ExponentialLR(optimizer, gamma, last_epoch=- 1, verbose=False)[source]

Bases: ExponentialLR

See torch.optim.lr_scheduler.ExponentialLR for details.

__init__(optimizer, gamma, last_epoch=- 1, verbose=False)[source]
step(epoch=None, **kwargs) None[source]
class ares.attack.detection.custom.lr_scheduler.MultiStepLR(optimizer, milestones, gamma=0.1, last_epoch=- 1, verbose=False)[source]

Bases: MultiStepLR

See torch.optim.lr_scheduler.MultiStepLR for details.

__init__(optimizer, milestones, gamma=0.1, last_epoch=- 1, verbose=False)[source]
step(epoch=None, **kwargs) None[source]
class ares.attack.detection.custom.lr_scheduler.PlateauLR(optimizer, mode='min', factor=0.1, patience=10, threshold=0.0001, threshold_mode='rel', cooldown=0, min_lr=0, eps=1e-08, verbose=False)[source]

Bases: ReduceLROnPlateau

See torch.optim.lr_scheduler.ReduceLROnPlateau for details.

__init__(optimizer, mode='min', factor=0.1, patience=10, threshold=0.0001, threshold_mode='rel', cooldown=0, min_lr=0, eps=1e-08, verbose=False)[source]
step(metrics, epoch=None, **kwargs) None[source]
ares.attack.detection.custom.lr_scheduler.build_lr_scheduler(optimizer, **kwargs)[source]

Build a learning rate scheduler from the given optimizer, an lr scheduler name, and its arguments.

class ares.attack.detection.custom.lr_scheduler.warmupALRS(optimizer, warmup_epoch=50, loss_threshold=0.0001, loss_ratio_threshold=0.0001, decay_rate=0.97, last_epoch=- 1, verbose=False)[source]

Bases: ALRS

Reference: Bootstrap Generalization Ability from Loss Landscape Perspective.

__init__(optimizer, warmup_epoch=50, loss_threshold=0.0001, loss_ratio_threshold=0.0001, decay_rate=0.97, last_epoch=- 1, verbose=False)[source]
step(loss, epoch=None)[source]
update_lr(update_fn)[source]

ares.attack.detection.patch module

class ares.attack.detection.patch.patch_applier.PatchApplier(cfg)[source]

Bases: Module

This class transforms adversarial patches and applies them to bboxes.

Parameters:

cfg (mmengine.config.ConfigDict) – Configs of adversarial patches.

__init__(cfg)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

apply_patch(images, adv_patches)[source]
build_transforms(training=True)[source]
forward(img_batch: torch.Tensor, adv_patch: torch.Tensor, bboxes_list: List[torch.Tensor], labels_list: List[torch.Tensor])[source]

This function transforms and applies corresponding adversarial patches for each provided bounding box.

Parameters:
  • img_batch (torch.Tensor) – Batch image tensor. Shape: [N, C=3, H, W].

  • adv_patch – Adversarial patch tensor. Shape: [num_classes, C=3, H, W].

  • bboxes_list – List of bboxes (torch.Tensor) with shape [:, 4]. Length: N.

  • labels_list – List of labels (torch.Tensor) with shape [:]. Length: N.

Returns:

Image tensor with the patches applied. Shape: [N, C, H, W].

Return type:

torch.Tensor
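
A minimal call sketch; patch_applier is a constructed PatchApplier and the tensors are placeholders following the documented shapes:

>>> # img_batch: [N, 3, H, W]; adv_patch: [num_classes, 3, h, w]
>>> # bboxes_list / labels_list: lists of per-image tensors, each of length N
>>> patched_images = patch_applier(img_batch, adv_patch, bboxes_list, labels_list)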

pad_patches_boxes(adv_patch, bboxes_list, labels_list, max_num_bboxes_per_image)[source]
class ares.attack.detection.patch.patch_transform.Compose(transforms)[source]

Bases: object

Composes several transforms together. This transform does not support torchscript.

Parameters:

transforms (list of Transform objects) – List of transforms to compose.

Example

>>> Compose([
>>>     MedianPool2d(7),
>>>     RandomJitter(),
>>> ])
__init__(transforms)[source]
class ares.attack.detection.patch.patch_transform.CutOut[source]

Bases: object

Cut out areas of an image tensor.

Parameters:
  • cutout_ratio (float) – Cutout area ratio of the patch.

  • cutout_fill (float) – Value (>0) used to fill the cutout area.

  • rand_shift (float) – Random shift rate of the cutout area.

  • level (str) – Which level to randomly cut out. Supported levels: ‘instance’, ‘batch’ and ‘image’.

  • p_erase (float) – Probability to carry out Cutout.

  • verbose (bool) – Whether to print information of parameters.

class ares.attack.detection.patch.patch_transform.MedianPool2d(kernel_size=3, stride=1, padding=0, same=False)[source]

Bases: object

Median pool.

Parameters:
  • kernel_size (int or 2-tuple) – Size of pooling kernel.

  • stride (int or 2-tuple) – Pool stride.

  • padding (int or 4-tuple (l, r, t, b)) – Pool padding. It is the same as torch.nn.functional.pad.

  • same (bool) – Override padding and enforce same padding.

__init__(kernel_size=3, stride=1, padding=0, same=False)[source]
class ares.attack.detection.patch.patch_transform.RandomHorizontalFlip(p=0.5)[source]

Bases: RandomHorizontalFlip

See torchvision.transforms.RandomHorizontalFlip for details.

class ares.attack.detection.patch.patch_transform.RandomJitter(min_contrast: float = 0.8, max_contrast: float = 1.2, min_brightness: float = - 0.1, max_brightness: float = 0.1, noise_factor: float = 0.1)[source]

Bases: object

This RandomJitter class applies jitter of contrast, brightness and noise to the given tensor.

Parameters:
  • min_contrast (float) – Min contrast.

  • max_contrast (float) – Max contrast.

  • min_brightness (float) – Min brightness.

  • max_brightness (float) – Max brightness.

  • noise_factor (float) – Noise factor.

__init__(min_contrast: float = 0.8, max_contrast: float = 1.2, min_brightness: float = - 0.1, max_brightness: float = 0.1, noise_factor: float = 0.1)[source]
class ares.attack.detection.patch.patch_transform.ScalePatchesToBoxes(size: int, scale_rate: float = 0.2, rotate_angle: float = 20, rand_shift_rate: float = 0.4, rand_rotate: bool = False, rand_shift: bool = False)[source]

Bases: object

This class scales the given patches to proper sizes and shifts them to the positions of the given bounding boxes in all-zero image tensors.

Parameters:
  • size (int) – Size of the square patch.

  • scale_rate (float) – Patch scale rate compared to the target bboxes sizes.

  • rotate_angle (float) – Max rotate angle.

  • rand_shift_rate (float) – Max random shift rate.

  • rand_rotate (bool) – Whether to randomly rotate.

  • rand_shift (bool) – Whether to randomly shift.

__init__(size: int, scale_rate: float = 0.2, rotate_angle: float = 20, rand_shift_rate: float = 0.4, rand_rotate: bool = False, rand_shift: bool = False)[source]
random_shift(x, limited_range)[source]

ares.attack.detection.utils module

class ares.attack.detection.utils.EnableLossCal(model: Module)[source]

Bases: object

This context manager enables computing losses for detectors from mmdet in eval mode, just as in training mode.

__init__(model: Module)[source]
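
A minimal usage sketch; the loss call inside the context follows the mmdet 3.x detector API and is an assumption here:

>>> detector.eval()
>>> with EnableLossCal(detector):
>>>     losses = detector.loss(batch_inputs, batch_data_samples)  # losses despite eval mode
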
class ares.attack.detection.utils.HiddenPrints[source]

Bases: object

Context manager to suppress the output of print functions.

ares.attack.detection.utils.all_reduce(tensor, reduction='sum')[source]

Gather and reduce tensor results across all GPUs when running with DDP.

ares.attack.detection.utils.build_optimizer(params, **kwargs)[source]

Build optimizer.

ares.attack.detection.utils.denormalize(tensor, mean, std)[source]

Denormalize input tensor with given mean and std.

Parameters:
  • tensor (torch.Tensor) – Float tensor image of shape (B, C, H, W) to be denormalized.

  • mean (torch.Tensor) – Float tensor means of size (C, ) for each channel.

  • std (torch.Tensor) – Float tensor standard deviations of size (C, ) for each channel.
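
A minimal round-trip sketch; the mean/std values are the common ImageNet statistics and normalized_images is a placeholder batch:

>>> import torch
>>> mean = torch.tensor([123.675, 116.28, 103.53])
>>> std = torch.tensor([58.395, 57.12, 57.375])
>>> raw_images = denormalize(normalized_images, mean, std)  # undo the normalization
>>> restored = normalize(raw_images, mean, std)             # recovers normalized_images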

ares.attack.detection.utils.get_word_size(group=None)[source]

Return the number of GPUs in use (i.e., the world size).

ares.attack.detection.utils.is_distributed() bool[source]

Return True if distributed environment has been initialized.

ares.attack.detection.utils.is_main_process(group=None) int[source]

Whether the current rank of the given process group is equal to 0.

Note

In a non-distributed environment, this function always returns True.

Parameters:

group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. Defaults to None.

Returns:

bool

ares.attack.detection.utils.main_only(func)[source]

Decorator for methods that should be executed only in the main process.

Parameters:

func (callable) – Function to be decorated.

Returns:

Return decorated function.

Return type:

callable
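
A minimal usage sketch; save_checkpoint is a hypothetical function name:

>>> import torch
>>> @main_only
>>> def save_checkpoint(state, path):
>>>     torch.save(state, path)  # executed only in the main process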

ares.attack.detection.utils.mkdirs_if_not_exists(dir)[source]

Make the directory if it does not exist.

ares.attack.detection.utils.modify_test_pipeline(cfg)[source]

The default pipeline for testing in mmdet is usually “LoadImageFromFile–>Resize–>LoadAnnotations–>PackDetInputs”, which means the gt bboxes are not resized. To also resize the bboxes when resizing images, we move “LoadAnnotations” before “Resize”.

ares.attack.detection.utils.modify_train_pipeline(cfg)[source]

Modify some dataset settings in the train dataloader to match those in the test dataloader.

ares.attack.detection.utils.normalize(tensor, mean, std)[source]

Normalize input tensor with given mean and std.

Parameters:
  • tensor (torch.Tensor) – Float tensor image of shape (B, C, H, W) to be normalized.

  • mean (torch.Tensor) – Float tensor means of size (C, ) for each channel.

  • std (torch.Tensor) – Float tensor standard deviations of size (C, ) for each channel.

ares.attack.detection.utils.save_images(img_tensors, data_samples, save_dir, with_bboxes=True, width=5, scale=True)[source]

Save images.

Parameters:
  • img_tensors (torch.Tensor) – Image tensor with shape [N,C,H,W] and value range [0, 1].

  • data_samples (list) – List of mmdet.structures.DetDataSample.

  • save_dir (str) – Path to save images.

  • with_bboxes (bool) – Whether to save images with bbox rectangles on images.

  • width (int) – Line width to draw rectangles.

  • scale (bool) – Whether to scale images to the original size.
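
A minimal call sketch, assuming adv_images and data_samples come from an attack step; the save directory is a placeholder:

>>> save_images(adv_images, data_samples, save_dir='vis/adv', with_bboxes=True)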

ares.attack.detection.utils.save_patches_to_images(patches, save_dir, class_names, labels=None)[source]

Save adversarial patches to images.

Parameters:
  • patches (torch.Tensor) – Adversarial patches with shape [N, C=3, H, W].

  • save_dir (str) – Path to save adversarial patches.

  • class_names (str) – Names of classes corresponding to patches.

  • labels (torch.Tensor) – Labels of patches.

ares.attack.detection.utils.tv_loss(images, reduction='mean')[source]

Implementation of the total variation loss (L_{tv}) proposed in the arXiv paper “Fooling automated surveillance cameras: adversarial patches to attack person detection”.

Parameters:
  • images (torch.Tensor) – Image tensor with shape [N, C, H, W] where N, C, H and W are the number of images, channel, height and width.

  • reduction (str) – Supported reduction methods are mean, sum and none.

Returns:

torch.Tensor
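
A minimal usage sketch with random images standing in for adversarial examples:

>>> import torch
>>> images = torch.rand(4, 3, 300, 300)       # N=4 images, values in [0, 1]
>>> loss = tv_loss(images, reduction='mean')  # scalar loss tensor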