ares.attack package

ares.attack.base module

class ares.attack.fgsm.FGSM(model, device='cuda', norm=inf, eps=0.01568627450980392, loss='ce', target=False)[source]

Bases: object

Fast Gradient Sign Method (FGSM). A white-box single-step constraint-based method.

Example

>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('fgsm')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)
__init__(model, device='cuda', norm=inf, eps=0.01568627450980392, loss='ce', target=False)[source]

The initialization function for FGSM.

Parameters:
  • model (torch.nn.Module) – The target model to be attacked.

  • device (torch.device) – The device on which to perform the attack. Defaults to ‘cuda’.

  • norm (float) – The norm of distance calculation for adversarial constraint. Defaults to np.inf.

  • eps (float) – The maximum perturbation range epsilon.

  • loss (str) – The loss function.

  • target (bool) – Whether to conduct a targeted attack. Defaults to False.
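
For intuition, the core linf FGSM update is a single signed-gradient step. A minimal sketch (a hypothetical helper, not the library source; assumes images in [0, 1]):

import torch
import torch.nn.functional as F

def fgsm_step(model, images, labels, eps):
    # Hypothetical helper illustrating the single-step linf update.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Move each pixel by eps in the direction that increases the loss.
    adv_images = images + eps * images.grad.sign()
    return adv_images.clamp(0, 1).detach()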

attack_detection_forward(batch_data, excluded_losses, scale_factor=255.0, object_vanish_only=False)[source]

This function is used to attack object detection models.

Parameters:
  • batch_data (dict) – {‘inputs’: torch.Tensor with shape [N,C,H,W] and value range [0, 1], ‘data_samples’: list of mmdet.structures.DetDataSample}.

  • excluded_losses (list) – List of losses not used to compute the attack loss.

  • scale_factor (float) – The factor used to scale adversarial images. Defaults to 255.0.

  • object_vanish_only (bool) – When True, the attack only aims to make objects vanish.

Returns:

Adversarial images with value range [0,1].

Return type:

torch.Tensor
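
A hypothetical call following the parameter description above (the loss name 'loss_rpn_bbox' is illustrative, and data_samples is assumed to be a list of mmdet DetDataSample objects):

>>> batch_data = {'inputs': images, 'data_samples': data_samples}  # images in [0, 1]
>>> adv_images = attacker.attack_detection_forward(batch_data,
...                                                excluded_losses=['loss_rpn_bbox'],
...                                                scale_factor=255.0,
...                                                object_vanish_only=True)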

class ares.attack.bim.BIM(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, steps=20, target=False, loss='ce')[source]

Bases: object

Basic Iterative Method (BIM). A white-box iterative constraint-based method. Requires a differentiable loss function.

Example

>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('bim')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)
__init__(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, steps=20, target=False, loss='ce')[source]
Parameters:
  • model (torch.nn.Module) – The target model to be attacked.

  • device (torch.device) – The device on which to perform the attack. Defaults to ‘cuda’.

  • norm (float) – The norm of distance calculation for adversarial constraint. Defaults to np.inf. It is selected from [1, 2, np.inf].

  • eps (float) – The maximum perturbation range epsilon.

  • stepsize (float) – The step size for each attack iteration. Defaults to 1/255.

  • steps (int) – The attack steps. Defaults to 20.

  • target (bool) – Whether to conduct a targeted attack. Defaults to False.

  • loss (str) – The loss function. Defaults to ‘ce’.

attack_detection_forward(batch_data, excluded_losses, scale_factor=255.0, object_vanish_only=False)[source]

This function is used to attack object detection models.

Parameters:
  • batch_data (dict) – {‘inputs’: torch.Tensor with shape [N,C,H,W] and value range [0, 1], ‘data_samples’: list of mmdet.structures.DetDataSample}.

  • excluded_losses (list) – List of losses not used to compute the attack loss.

  • scale_factor (float) – The factor used to scale adversarial images. Defaults to 255.0.

  • object_vanish_only (bool) – When True, the attack only aims to make objects vanish.

Returns:

Adversarial images with value range [0,1].

Return type:

torch.Tensor
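
For intuition, the linf variant iterates signed-gradient steps and re-projects into the eps-ball after every update. A minimal sketch (not the library source; assumes images in [0, 1]):

import torch
import torch.nn.functional as F

def bim_linf(model, x, y, eps, stepsize, steps):
    # Illustrative linf BIM loop: repeat the FGSM step, then clip the
    # accumulated perturbation back into the eps-ball around x.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + stepsize * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv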

class ares.attack.mim.MIM(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, steps=20, decay_factor=1.0, target=False, loss='ce')[source]

Bases: object

Momentum Iterative Method (MIM). A white-box iterative constraint-based method.

Example

>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('mim')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)
__init__(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, steps=20, decay_factor=1.0, target=False, loss='ce')[source]

The initialization function for MIM.

Parameters:
  • model (torch.nn.Module) – The target model to be attacked.

  • device (torch.device) – The device on which to perform the attack. Defaults to ‘cuda’.

  • norm (float) – The norm of distance calculation for adversarial constraint. Defaults to np.inf.

  • eps (float) – The maximum perturbation range epsilon.

  • stepsize (float) – The step size for each attack iteration. Defaults to 1/255.

  • steps (int) – The attack steps. Defaults to 20.

  • decay_factor (float) – The decay factor. Defaults to 1.0.

  • target (bool) – Whether to conduct a targeted attack. Defaults to False.

  • loss (str) – The loss function. Defaults to ‘ce’.

attack_detection_forward(batch_data, excluded_losses, scale_factor=255.0, object_vanish_only=False)[source]

This function is used to attack object detection models.

Parameters:
  • batch_data (dict) – {‘inputs’: torch.Tensor with shape [N,C,H,W] and value range [0, 1], ‘data_samples’: list of mmdet.structures.DetDataSample}.

  • excluded_losses (list) – List of losses not used to compute the attack loss.

  • scale_factor (float) – The factor used to scale adversarial images. Defaults to 255.0.

  • object_vanish_only (bool) – When True, the attack only aims to make objects vanish.

Returns:

Adversarial images with value range [0,1].

Return type:

torch.Tensor
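
The distinguishing step of MIM is the momentum accumulation. A minimal sketch of one iteration (not the library source; assumes 4D image batches):

import torch

def mim_update(x_adv, grad, momentum, stepsize, decay_factor):
    # Normalize the raw gradient by its l1 norm, fold it into the decayed
    # running momentum, then take a signed step along the momentum.
    grad = grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True)
    momentum = decay_factor * momentum + grad
    x_adv = x_adv + stepsize * momentum.sign()
    return x_adv, momentum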

class ares.attack.tim.TIFGSM(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, steps=20, kernel_name='gaussian', len_kernel=15, nsig=3, decay_factor=1.0, resize_rate=0.85, diversity_prob=0.7, loss='ce', target=False)[source]

Bases: object

Translation-Invariant Attack (TIM). A transfer-based black-box attack method.

Example

>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('tim')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)
__init__(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, steps=20, kernel_name='gaussian', len_kernel=15, nsig=3, decay_factor=1.0, resize_rate=0.85, diversity_prob=0.7, loss='ce', target=False)[source]

The initialization function for TIFGSM.

Parameters:
  • model (torch.nn.Module) – The target model to be attacked.

  • device (torch.device) – The device on which to perform the attack. Defaults to ‘cuda’.

  • norm (float) – The norm of distance calculation for adversarial constraint. Defaults to np.inf.

  • eps (float) – The maximum perturbation range epsilon.

  • stepsize (float) – The step size for each attack iteration.

  • steps (int) – The number of attack iterations.

  • kernel_name (str) – The name of the kernel.

  • len_kernel (int) – The size of the Gaussian kernel.

  • nsig (float) – The sigma of the Gaussian kernel.

  • decay_factor (float) – The decay factor.

  • resize_rate (float) – The resize rate for input transform.

  • diversity_prob (float) – The probability of input transform.

  • loss (str) – The loss function.

  • target (bool) – Whether to conduct a targeted attack. Defaults to False.

attack_detection_forward(batch_data, excluded_losses, scale_factor=255.0, object_vanish_only=False)[source]

This function is used to attack object detection models.

Parameters:
  • batch_data (dict) – {‘inputs’: torch.Tensor with shape [N,C,H,W] and value range [0, 1], ‘data_samples’: list of mmdet.structures.DetDataSample}.

  • excluded_losses (list) – List of losses not used to compute the attack loss.

  • scale_factor (float) – The factor used to scale adversarial images. Defaults to 255.0.

  • object_vanish_only (bool) – When True, the attack only aims to make objects vanish.

Returns:

Adversarial images with value range [0,1].

Return type:

torch.Tensor

gkern(kernlen=15, nsig=3)[source]

Returns a 2D Gaussian kernel array.

input_diversity(x)[source]

The function to perform a random input transformation.

kernel_generation()[source]
lkern(kernlen=15)[source]
ukern(kernlen=15)[source]
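
A common way to build the Gaussian kernel returned by gkern (a sketch consistent with the docstring above; the library implementation may differ):

import numpy as np
import scipy.stats as st

def gkern(kernlen=15, nsig=3):
    # Sample a 1D Gaussian density on [-nsig, nsig], take the outer
    # product, and normalize so the kernel sums to one.
    x = np.linspace(-nsig, nsig, kernlen)
    kern1d = st.norm.pdf(x)
    kern2d = np.outer(kern1d, kern1d)
    return kern2d / kern2d.sum()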

Projected Gradient Descent (PGD). A white-box iterative constraint-based method.

Example

>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('pgd')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)
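
PGD differs from BIM mainly in its random initialization inside the eps-ball before the iterative steps. A minimal linf sketch (not the library source; assumes images in [0, 1]):

import torch

def pgd_init(x, eps):
    # PGD's random start: a uniform sample inside the linf eps-ball,
    # clipped back to the valid image range; BIM-style projected
    # signed-gradient steps then follow.
    delta = torch.empty_like(x).uniform_(-eps, eps)
    return (x + delta).clamp(0, 1)
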
class ares.attack.cw.CW(model, device='cuda', norm=2, kappa=0, lr=0.2, init_const=0.01, max_iter=200, binary_search_steps=4, num_classes=1000, target=False)[source]

Bases: object

Carlini & Wagner Attack (C&W). A white-box iterative optimization-based method. Requires differentiable logits.

Example

>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('cw')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)
__init__(model, device='cuda', norm=2, kappa=0, lr=0.2, init_const=0.01, max_iter=200, binary_search_steps=4, num_classes=1000, target=False)[source]
Parameters:
  • model (torch.nn.Module) – The target model to be attacked.

  • device (torch.device) – The device on which to perform the attack. Defaults to ‘cuda’.

  • norm (float) – The norm of distance calculation for adversarial constraint. Defaults to 2.

  • kappa (float) – The confidence parameter kappa in the C&W loss. Defaults to 0.

  • lr (float) – The learning rate for the attack optimization.

  • init_const (float) – The initial value of the trade-off constant.

  • max_iter (int) – The maximum number of iterations.

  • binary_search_steps (int) – The number of binary search steps over the trade-off constant.

  • num_classes (int) – The total number of classes.

  • target (bool) – Whether to conduct a targeted attack. Defaults to False.

atanh(x)[source]
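
atanh maps images into the unconstrained space in which C&W optimizes. A minimal sketch of the change of variables (not the library source):

import torch

def atanh(x, eps=1e-6):
    # Inverse hyperbolic tangent with clamping for numerical stability.
    x = x.clamp(-1 + eps, 1 - eps)
    return 0.5 * torch.log((1 + x) / (1 - x))

# C&W optimizes w = atanh(2x - 1); mapping back via
# x_adv = 0.5 * (tanh(w) + 1) keeps x_adv in [0, 1] by construction.
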
class ares.attack.deepfool.DeepFool(model, device='cuda', norm=inf, overshoot=0.02, max_iter=50, target=False)[source]

Bases: object

DeepFool. A white-box iterative optimization method. It needs to calculate the Jacobian of the logits with respect to the input, so it is only practical for tasks with a small number of classes.

Example

>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('deepfool')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels)
__init__(model, device='cuda', norm=inf, overshoot=0.02, max_iter=50, target=False)[source]
Parameters:
  • model (torch.nn.Module) – The target model to be attacked.

  • device (torch.device) – The device on which to perform the attack. Defaults to ‘cuda’.

  • norm (float) – The norm of distance calculation for adversarial constraint. Defaults to np.inf.

  • overshoot (float) – The parameter overshoot. Defaults to 0.02.

  • max_iter (int) – The maximum number of iterations.

  • target (bool) – Whether to conduct a targeted attack. Defaults to False.

deepfool(x, y)[source]

The main function of DeepFool.
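
The core update moves to the linearized decision boundary of the closest class. A sketch of the classic step (not the library source), where f_k is the logit difference to class k and w_k its gradient:

import torch

def deepfool_step(f_k, w_k, overshoot=0.02):
    # Minimal l2 perturbation reaching the linearized boundary,
    # slightly overshot so the predicted label actually flips.
    r = (f_k.abs() / w_k.norm() ** 2) * w_k
    return (1 + overshoot) * r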

class ares.attack.nes.NES(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, nes_samples=10, sample_per_draw=1, max_queries=1000, search_sigma=0.02, decay=0.0, random_perturb_start=False, target=False)[source]

Bases: object

Natural Evolution Strategies (NES). A black-box constraint-based method. Uses NES as the gradient-estimation technique and employs PGD with the estimated gradient to generate adversarial examples.

Example

>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('nes')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)
__init__(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, nes_samples=10, sample_per_draw=1, max_queries=1000, search_sigma=0.02, decay=0.0, random_perturb_start=False, target=False)[source]

The initialization function for NES.

Parameters:
  • model (torch.nn.Module) – The target model to be attacked.

  • device (torch.device) – The device on which to perform the attack. Defaults to ‘cuda’.

  • norm (float) – The norm of distance calculation for adversarial constraint. Defaults to np.inf.

  • eps (float) – The maximum perturbation range epsilon.

  • stepsize (float) – The step size for each attack iteration. Defaults to 1/255.

  • nes_samples (int) – The number of samples for NES gradient estimation.

  • sample_per_draw (int) – The number of samples in each draw.

  • max_queries (int) – The maximum number of queries.

  • search_sigma (float) – The sigma parameter for search.

  • decay (float) – The decay rate. Defaults to 0.0.

  • random_perturb_start (bool) – Whether to start from a random perturbation.

  • target (bool) – Whether to conduct a targeted attack. Defaults to False.

clip_eta(batchsize, eta, norm, eps)[source]

Clips the perturbation eta according to the norm constraint.

nes(x_victim, y_victim, y_target)[source]

The attack process of NES.

nes_gradient(x, y, ytarget)[source]

The function to calculate the NES gradient estimate.
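
The NES estimator probes the loss along random Gaussian directions with antithetic sampling. A minimal sketch (not the library source; loss_fn maps an image tensor to a scalar loss):

import torch

def nes_gradient_estimate(loss_fn, x, sigma, n_samples):
    # Average symmetric finite differences over random directions.
    grad = torch.zeros_like(x)
    for _ in range(n_samples):
        u = torch.randn_like(x)
        grad += (loss_fn(x + sigma * u) - loss_fn(x - sigma * u)) * u
    return grad / (2 * sigma * n_samples)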

class ares.attack.spsa.SPSA(model, device='cuda', norm=inf, eps=0.01568627450980392, learning_rate=0.01, delta=0.01, spsa_samples=10, sample_per_draw=1, nb_iter=20, early_stop_loss_threshold=None, target=False)[source]

Bases: object

Simultaneous Perturbation Stochastic Approximation (SPSA). A black-box constraint-based method. Uses SPSA as the gradient-estimation technique and employs Adam with the estimated gradient to generate adversarial examples.

Example

>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('spsa')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)
__init__(model, device='cuda', norm=inf, eps=0.01568627450980392, learning_rate=0.01, delta=0.01, spsa_samples=10, sample_per_draw=1, nb_iter=20, early_stop_loss_threshold=None, target=False)[source]

The initialization function for SPSA.

Parameters:
  • model (torch.nn.Module) – The target model to be attacked.

  • device (torch.device) – The device on which to perform the attack. Defaults to ‘cuda’.

  • norm (float) – The norm of distance calculation for adversarial constraint. Defaults to np.inf.

  • eps (float) – The maximum perturbation range epsilon.

  • learning_rate (float) – The learning rate of the attack.

  • delta (float) – The delta parameter for finite-difference estimation.

  • spsa_samples (int) – Number of samples in SPSA.

  • sample_per_draw (int) – The number of samples in each draw.

  • nb_iter (int) – The number of iterations.

  • early_stop_loss_threshold (float) – The loss threshold for early stopping. Defaults to None.

  • target (bool) – Whether to conduct a targeted attack. Defaults to False.

clip_eta(batchsize, eta, norm, eps)[source]

Clips the perturbation eta according to the norm constraint.

spsa(x, y, y_target)[source]

The main function of the SPSA attack.
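
SPSA uses Rademacher (+-1) directions instead of Gaussian ones. A minimal sketch of the estimator (not the library source):

import torch

def spsa_gradient_estimate(loss_fn, x, delta, n_samples):
    # For Rademacher v, 1/v equals v, so dividing elementwise by the
    # perturbation reduces to multiplying by it.
    grad = torch.zeros_like(x)
    for _ in range(n_samples):
        v = torch.randint_like(x, 0, 2) * 2 - 1
        grad += (loss_fn(x + delta * v) - loss_fn(x - delta * v)) * v / (2 * delta)
    return grad / n_samples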

class ares.attack.nattack.Nattack(model, device='cuda', norm=inf, eps=0.01568627450980392, max_queries=1000, sample_size=100, lr=0.02, sigma=0.1, target=False)[source]

Bases: object

NAttack. A black-box constraint-based method. It is motivated by NES.

Example

>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('nattack')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)
__init__(model, device='cuda', norm=inf, eps=0.01568627450980392, max_queries=1000, sample_size=100, lr=0.02, sigma=0.1, target=False)[source]

The initialization function for NAttack.

Parameters:
  • model (torch.nn.Module) – The target model to be attacked.

  • device (torch.device) – The device on which to perform the attack. Defaults to ‘cuda’.

  • norm (float) – The norm of distance calculation for adversarial constraint. Defaults to np.inf.

  • eps (float) – The maximum perturbation range epsilon.

  • max_queries (int) – The maximum number of queries.

  • sample_size (int) – The sample size.

  • lr (float) – The learning rate.

  • sigma (float) – The sigma parameter.

  • target (bool) – Whether to conduct a targeted attack. Defaults to False.

atanh(x)[source]
clip_eta(batchsize, eta, norm, eps)[source]

Clips the perturbation eta according to the norm constraint.

is_adversarial(x, y, target_labels)[source]

Checks whether the input image is adversarial.

nattack(x, y, y_target)[source]

The main function of NAttack.

scale_to_tanh(x)[source]
ares.attack.nattack.nattack_loss(inputs, targets, target_lables, device, targeted)[source]

The loss function for NAttack.

ares.attack.nattack.scale(x, dst_min, dst_max, src_min, src_max)[source]
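
scale is a linear rescaling between value ranges. A sketch consistent with its signature (not necessarily the library source):

def scale(x, dst_min, dst_max, src_min, src_max):
    # Map x from [src_min, src_max] linearly onto [dst_min, dst_max].
    return dst_min + (x - src_min) * (dst_max - dst_min) / (src_max - src_min)
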
class ares.attack.boundary.BoundaryAttack(model, device='cuda', norm=2, spherical_step_eps=20, orth_step_factor=0.5, orthogonal_step_eps=0.01, perp_step_factor=0.5, max_iter=20, target=False)[source]

Bases: object

Boundary Attack. A black-box decision-based method.

Example

>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('boundary')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)
__init__(model, device='cuda', norm=2, spherical_step_eps=20, orth_step_factor=0.5, orthogonal_step_eps=0.01, perp_step_factor=0.5, max_iter=20, target=False)[source]
Parameters:
  • model (torch.nn.Module) – The target model to be attacked.

  • device (torch.device) – The device on which to perform the attack. Defaults to ‘cuda’.

  • norm (float) – The norm of distance calculation for adversarial constraint. Defaults to 2.

  • spherical_step_eps (float) – The spherical step epsilon.

  • orth_step_factor (float) – The orthogonal step factor.

  • orthogonal_step_eps (float) – The orthogonal step epsilon.

  • perp_step_factor (float) – The perpendicular step factor.

  • max_iter (int) – The maximum number of iterations.

  • target (bool) – Whether to conduct a targeted attack. Defaults to False.

boundary(x, y, ytarget)[source]

The main function of the boundary attack.

get_init_noise(x_target, y, ytarget)[source]

The function to initialize noise.

perturbation(x, x_adv, y, ytarget)[source]

Performs a single attack iteration.
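
One proposal of the boundary attack combines a random step on the sphere around the source image with a contraction toward it. A conceptual sketch (not the library source):

import torch

def boundary_proposal(x, x_adv, spherical_eps, source_eps):
    # Random noise projected orthogonally to the source direction,
    # followed by a small contraction toward the source image x; the
    # candidate is kept only if it remains adversarial.
    d = x - x_adv
    noise = torch.randn_like(x_adv) * spherical_eps
    noise = noise - (noise * d).sum() / (d * d).sum() * d
    candidate = x_adv + noise
    candidate = candidate + source_eps * (x - candidate)
    return candidate.clamp(0, 1)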

class ares.attack.evolutionary.Evolutionary(model, device='cuda', ccov=0.001, decay_weight=0.99, max_queries=10000, mu=0.01, sigma=0.03, maxlen=30, target=False)[source]

Bases: object

Evolutionary. A black-box decision-based method.

Example

>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('evolutionary')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)
__init__(model, device='cuda', ccov=0.001, decay_weight=0.99, max_queries=10000, mu=0.01, sigma=0.03, maxlen=30, target=False)[source]

The function to initialize the evolutionary attack.

Parameters:
  • model (torch.nn.Module) – The target model to be attacked.

  • device (torch.device) – The device on which to perform the attack. Defaults to ‘cuda’.

  • ccov (float) – The parameter ccov. Defaults to 0.001.

  • decay_weight (float) – The decay weight param. Defaults to 0.99.

  • max_queries (int) – The maximum query number. Defaults to 10000.

  • mu (float) – The mean for bias. Defaults to 0.01.

  • sigma (float) – The standard deviation for bias. Defaults to 0.03.

  • maxlen (int) – The maximum length. Defaults to 30.

  • target (bool) – Whether to conduct a targeted attack. Defaults to False.

evolutionary(x, y, ytarget)[source]

The function to conduct the evolutionary attack.

get_init_noise(x_target, y, ytarget)[source]

The function to initialize noise.

class ares.attack.di_fgsm.DI2FGSM(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, steps=20, decay_factor=1.0, resize_rate=0.85, diversity_prob=0.7, loss='ce', target=False)[source]

Bases: object

Diverse Input Iterative Fast Gradient Sign Method. A transfer-based black-box attack method.

Example

>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('dim')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)
__init__(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, steps=20, decay_factor=1.0, resize_rate=0.85, diversity_prob=0.7, loss='ce', target=False)[source]
Parameters:
  • model (torch.nn.Module) – The target model to be attacked.

  • device (torch.device) – The device on which to perform the attack. Defaults to ‘cuda’.

  • norm (float) – The norm of distance calculation for adversarial constraint. Defaults to np.inf.

  • eps (float) – The maximum perturbation range epsilon.

  • stepsize (float) – The step size for each attack iteration. Defaults to 1/255.

  • steps (int) – The attack steps. Defaults to 20.

  • decay_factor (float) – The decay factor.

  • resize_rate (float) – The resize rate for input transform.

  • diversity_prob (float) – The probability of input transform.

  • loss (str) – The loss function.

  • target (bool) – Whether to conduct a targeted attack. Defaults to False.

attack_detection_forward(batch_data, excluded_losses, scale_factor=255.0, object_vanish_only=False)[source]

This function is used to attack object detection models.

Parameters:
  • batch_data (dict) – {‘inputs’: torch.Tensor with shape [N,C,H,W] and value range [0, 1], ‘data_samples’: list of mmdet.structures.DetDataSample}.

  • excluded_losses (list) – List of losses not used to compute the attack loss.

  • scale_factor (float) – The factor used to scale adversarial images. Defaults to 255.0.

  • object_vanish_only (bool) – When True, the attack only aims to make objects vanish.

Returns:

Adversarial images with value range [0,1].

Return type:

torch.Tensor

input_diversity(x)[source]

The function to perform the diverse input transformation on input images.
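
A common form of the diverse input transform (a sketch consistent with the resize_rate/diversity_prob parameters above; the library implementation may differ):

import torch
import torch.nn.functional as F

def input_diversity(x, resize_rate=0.85, diversity_prob=0.7):
    # With probability diversity_prob, rescale to a random size and pad
    # back to the original resolution at a random offset.
    if torch.rand(1).item() > diversity_prob:
        return x
    h = x.shape[-1]
    rnd = torch.randint(int(h * resize_rate), h, (1,)).item()
    resized = F.interpolate(x, size=(rnd, rnd), mode='nearest')
    pad = h - rnd
    left = torch.randint(0, pad + 1, (1,)).item()
    top = torch.randint(0, pad + 1, (1,)).item()
    return F.pad(resized, (left, pad - left, top, pad - top))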

class ares.attack.si_ni_fgsm.SI_NI_FGSM(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, steps=20, scale_factor=1, decay_factor=1.0, loss='ce', target=False)[source]

Bases: object

Nesterov Accelerated Gradient and Scale Invariance with FGSM (SI-NI-FGSM). A transfer-based black-box attack method.

Example

>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('si_ni_fgsm')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)
__init__(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, steps=20, scale_factor=1, decay_factor=1.0, loss='ce', target=False)[source]

The initialization function for SI_NI_FGSM.

Parameters:
  • model (torch.nn.Module) – The target model to be attacked.

  • device (torch.device) – The device on which to perform the attack. Defaults to ‘cuda’.

  • norm (float) – The norm of distance calculation for adversarial constraint. Defaults to np.inf.

  • eps (float) – The maximum perturbation range epsilon.

  • stepsize (float) – The step size for each attack iteration.

  • steps (int) – The number of attack iterations.

  • scale_factor (float) – The scale factor.

  • decay_factor (float) – The decay factor.

  • loss (str) – The loss function.

  • target (bool) – Whether to conduct a targeted attack. Defaults to False.

attack_detection_forward(batch_data, excluded_losses, scale_factor=255.0, object_vanish_only=False)[source]

This function is used to attack object detection models.

Parameters:
  • batch_data (dict) – {‘inputs’: torch.Tensor with shape [N,C,H,W] and value range [0, 1], ‘data_samples’: list of mmdet.structures.DetDataSample}.

  • excluded_losses (list) – List of losses not used to compute the attack loss.

  • scale_factor (float) – The factor used to scale adversarial images. Defaults to 255.0.

  • object_vanish_only (bool) – When True, the attack only aims to make objects vanish.

Returns:

Adversarial images with value range [0,1].

Return type:

torch.Tensor
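
The scale-invariant part averages gradients over dyadically scaled copies of the input. A minimal sketch (not the library source):

import torch
import torch.nn.functional as F

def scale_invariant_grad(model, x, y, m=5):
    # Average the loss gradient over scaled copies x / 2**i before the
    # Nesterov-accelerated momentum update.
    grad = torch.zeros_like(x)
    for i in range(m):
        xi = (x / 2 ** i).detach().requires_grad_(True)
        loss = F.cross_entropy(model(xi), y)
        grad = grad + torch.autograd.grad(loss, xi)[0]
    return grad / m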

class ares.attack.vmi_fgsm.VMI_fgsm(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, steps=20, decay_factor=1.0, beta=1.5, sample_number=10, loss='ce', target=False)[source]

Bases: object

Variance Tuning MI-FGSM (VMI-FGSM). A transfer-based black-box attack method from “Enhancing the Transferability of Adversarial Attacks through Variance Tuning”.

Example

>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('vmi_fgsm')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)
__init__(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, steps=20, decay_factor=1.0, beta=1.5, sample_number=10, loss='ce', target=False)[source]

The initialization function for VMI_FGSM.

Parameters:
  • model (torch.nn.Module) – The target model to be attacked.

  • device (torch.device) – The device on which to perform the attack. Defaults to ‘cuda’.

  • norm (float) – The norm of distance calculation for adversarial constraint. Defaults to np.inf.

  • eps (float) – The maximum perturbation range epsilon.

  • stepsize (float) – The step size for each attack iteration.

  • steps (int) – The number of attack iterations.

  • decay_factor (float) – The decay factor.

  • beta (float) – The beta parameter controlling the size of the sampling neighborhood.

  • sample_number (int) – The number of samples for variance estimation.

  • loss (str) – The loss function.

  • target (bool) – Whether to conduct a targeted attack. Defaults to False.

attack_detection_forward(batch_data, excluded_losses, scale_factor=255.0, object_vanish_only=False)[source]

This function is used to attack object detection models.

Parameters:
  • batch_data (dict) – {‘inputs’: torch.Tensor with shape [N,C,H,W] and value range [0, 1], ‘data_samples’: list of mmdet.structures.DetDataSample}.

  • excluded_losses (list) – List of losses not used to compute the attack loss.

  • scale_factor (float) – The factor used to scale adversarial images. Defaults to 255.0.

  • object_vanish_only (bool) – When True, the attack only aims to make objects vanish.

Returns:

Adversarial images with value range [0,1].

Return type:

torch.Tensor
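
Variance tuning corrects each gradient using gradients sampled from a beta * eps neighborhood. A minimal sketch (not the library source):

import torch
import torch.nn.functional as F

def gradient_variance(model, x, y, grad_x, beta, eps, sample_number=10):
    # Average gradients at random points within beta*eps of x, then
    # subtract the gradient at x itself; the result tunes the momentum.
    avg = torch.zeros_like(x)
    for _ in range(sample_number):
        xi = x + torch.empty_like(x).uniform_(-beta * eps, beta * eps)
        xi = xi.detach().requires_grad_(True)
        loss = F.cross_entropy(model(xi), y)
        avg = avg + torch.autograd.grad(loss, xi)[0]
    return avg / sample_number - grad_x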

ares.attack.tta.Cos_dis(a, b)[source]
ares.attack.tta.Poincare_dis(a, b)[source]
ares.attack.tta.TI_tta(kernel_size=5, nsig=3)[source]
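
For reference, the standard Poincare-ball distance that Poincare_dis computes has a closed form. A sketch (the library may normalize its inputs differently):

import torch

def poincare_dis(a, b):
    # d(a, b) = arccosh(1 + 2*||a - b||^2 / ((1 - ||a||^2) * (1 - ||b||^2)))
    num = 2 * ((a - b) ** 2).sum(dim=-1)
    den = (1 - (a ** 2).sum(dim=-1)) * (1 - (b ** 2).sum(dim=-1))
    return torch.acosh(1 + num / den)
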
class ares.attack.tta.TTA(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, steps=20, kernel_size=5, nsig=3, resize_rate=0.85, diversity_prob=0.7, loss='ce', target=True)[source]

Bases: object

Transferable Targeted Attack (TTA). A transfer-based targeted black-box attack method.

Example

>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('tta')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)
__init__(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, steps=20, kernel_size=5, nsig=3, resize_rate=0.85, diversity_prob=0.7, loss='ce', target=True)[source]

The initialization function for TTA.

Parameters:
  • model (torch.nn.Module) – The target model to be attacked.

  • device (torch.device) – The device on which to perform the attack. Defaults to ‘cuda’.

  • norm (float) – The norm of distance calculation for adversarial constraint. Defaults to np.inf.

  • eps (float) – The maximum perturbation range epsilon.

  • stepsize (float) – The step size for each attack iteration.

  • steps (int) – The number of attack iterations.

  • kernel_size (int) – The size of the Gaussian kernel.

  • nsig (float) – The sigma of the Gaussian kernel.

  • resize_rate (float) – The resize rate for input transform.

  • diversity_prob (float) – The probability of input transform.

  • loss (str) – The loss function.

  • target (bool) – Whether to conduct a targeted attack. Defaults to True.

ce_loss(outputs, labels, target_labels)[source]

The cross-entropy loss for TTA.

input_diversity(x)[source]

The function to perform a random input transformation.

logits_loss(outputs, labels, target_labels)[source]

The logits loss function.

po_trip_loss(outputs, labels, target_labels)[source]

The function to calculate the Po+Trip loss.

ares.attack.tta.gkern(kernlen=15, nsig=3)[source]

Skip Gradient Method. A transfer-based black-box attack method.

Example

>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('sgm')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)

ares.attack.autoattack module

class ares.attack.autoattack.autoattack.AutoAttack(model, device='cuda', norm=inf, eps=0.3, seed=None, verbose=False, attacks_to_run=[], version='standard', is_tf_model=False, logger=None)[source]

Bases: object

A class to perform AutoAttack. It is instantiated via the registry.

Example

>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('autoattack')
__init__(model, device='cuda', norm=inf, eps=0.3, seed=None, verbose=False, attacks_to_run=[], version='standard', is_tf_model=False, logger=None)[source]
Parameters:
  • model (torch.nn.Module) – The target model to be attacked.

  • device (torch.device) – The device on which to perform the attack. Defaults to ‘cuda’.

  • norm (float) – The norm of distance calculation for adversarial constraint. Defaults to np.inf. It is selected from [1, 2, np.inf].

  • eps (float) – The maximum perturbation range epsilon.

  • seed (float) – Random seed. Defaults to None.

  • verbose (bool) – Whether to output details during the attack process. Defaults to False.

  • attacks_to_run (list) – Set the attacks to run. Defaults to []. It should be selected from [‘apgd-ce’, ‘apgd-dlr’, ‘fab’, ‘square’, ‘apgd-t’, ‘fab-t’].

  • version (str) – Define the version of attack. Defaults to ‘standard’. It is selected from [‘standard’, ‘plus’, ‘rand’].

  • is_tf_model (bool) – Whether the model is based on tensorflow. Defaults to False.

  • logger – The logger used to record the attack process. Defaults to None.

clean_accuracy(images, labels, bs=250)[source]
get_logits(x)[source]

This function calculates the logits of the target model.

get_seed()[source]

This function automatically sets a random seed.

run_standard_evaluation(images, labels, bs=250, return_labels=False)[source]
run_standard_evaluation_individual(images, labels, bs=250, return_labels=False)[source]
set_version(version='standard')[source]

The function to set the attack version.

Parameters:

version (str) – The version of attack. Defaults to ‘standard’.
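
A typical evaluation call following the signatures above (argument values are illustrative):

>>> import numpy as np
>>> attacker = attacker_cls(model, norm=np.inf, eps=8/255, version='standard')
>>> adv_images = attacker.run_standard_evaluation(images, labels, bs=250)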