ares.attack package¶
ares.attack.base module¶
- class ares.attack.fgsm.FGSM(model, device='cuda', norm=inf, eps=0.01568627450980392, loss='ce', target=False)[source]¶
Bases: object
Fast Gradient Sign Method (FGSM). A white-box single-step constraint-based method.
Example
>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('fgsm')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)
Supported distance metric: 1, 2, np.inf.
References: https://arxiv.org/abs/1412.6572.
- __init__(model, device='cuda', norm=inf, eps=0.01568627450980392, loss='ce', target=False)[source]¶
The initialize function for FGSM.
- Parameters:
model (torch.nn.Module) – The target model to be attacked.
device (torch.device) – The device on which to perform the attack. Defaults to ‘cuda’.
norm (float) – The norm of distance calculation for adversarial constraint. Defaults to np.inf.
eps (float) – The maximum perturbation range epsilon. Defaults to 4/255.
loss (str) – The loss function. Defaults to ‘ce’.
target (bool) – Whether to conduct a targeted attack. Defaults to False.
- attack_detection_forward(batch_data, excluded_losses, scale_factor=255.0, object_vanish_only=False)[source]¶
This function is used to attack object detection models.
- Parameters:
batch_data (dict) – {‘inputs’: torch.Tensor with shape [N,C,H,W] and value range [0, 1], ‘data_samples’: list of mmdet.structures.DetDataSample}.
excluded_losses (list) – List of losses not used to compute the attack loss.
scale_factor (float) – Factor used to scale adv images.
object_vanish_only (bool) – If True, the attack only aims to make objects vanish.
- Returns:
Adversarial images with value range [0,1].
- Return type:
torch.Tensor
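For orientation, the core FGSM update in the untargeted L∞ case is a single signed-gradient ascent step on the loss. A minimal PyTorch sketch, not the ares implementation (fgsm_step is a hypothetical helper assuming cross-entropy loss and inputs in [0, 1]):

import torch
import torch.nn.functional as F

def fgsm_step(model, images, labels, eps=4/255):
    # Take one signed-gradient ascent step on the loss, then clip
    # back to the valid pixel range.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]
    return (images + eps * grad.sign()).clamp(0, 1).detach()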
- class ares.attack.bim.BIM(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, steps=20, target=False, loss='ce')[source]¶
Bases: object
Basic Iterative Method (BIM). A white-box iterative constraint-based method. Requires a differentiable loss function.
Example
>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('bim')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)
Supported distance metric: 1, 2, np.inf.
References: https://arxiv.org/abs/1607.02533.
- __init__(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, steps=20, target=False, loss='ce')[source]¶
- Parameters:
model (torch.nn.Module) – The target model to be attacked.
device (torch.device) – The device on which to perform the attack. Defaults to ‘cuda’.
norm (float) – The norm of distance calculation for adversarial constraint. Defaults to np.inf. It is selected from [1, 2, np.inf].
eps (float) – The maximum perturbation range epsilon. Defaults to 4/255.
stepsize (float) – The step size for each attack iteration. Defaults to 1/255.
steps (int) – The attack steps. Defaults to 20.
target (bool) – Whether to conduct a targeted attack. Defaults to False.
loss (str) – The loss function. Defaults to ‘ce’.
- attack_detection_forward(batch_data, excluded_losses, scale_factor=255.0, object_vanish_only=False)[source]¶
This function is used to attack object detection models.
- Parameters:
batch_data (dict) – {‘inputs’: torch.Tensor with shape [N,C,H,W] and value range [0, 1], ‘data_samples’: list of mmdet.structures.DetDataSample}.
excluded_losses (list) – List of losses not used to compute the attack loss.
scale_factor (float) – Factor used to scale adv images.
object_vanish_only (bool) – If True, the attack only aims to make objects vanish.
- Returns:
Adversarial images with value range [0,1].
- Return type:
torch.Tensor
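BIM repeats the FGSM step and projects each iterate back into the L∞ eps-ball around the clean input. A minimal sketch of one iteration under the same assumptions as above (bim_step is a hypothetical helper):

import torch
import torch.nn.functional as F

def bim_step(model, x_adv, x_orig, labels, eps=4/255, stepsize=1/255):
    # One iteration: signed-gradient step, projection onto the
    # L-inf eps-ball around x_orig, then clipping to [0, 1].
    x_adv = x_adv.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), labels)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = x_adv + stepsize * grad.sign()
    x_adv = torch.min(torch.max(x_adv, x_orig - eps), x_orig + eps)
    return x_adv.clamp(0, 1).detach()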
- class ares.attack.mim.MIM(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, steps=20, decay_factor=1.0, target=False, loss='ce')[source]¶
Bases: object
Momentum Iterative Method (MIM). A white-box iterative constraint-based method.
Example
>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('mim')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)
Supported distance metric: 1, 2, np.inf.
References: https://arxiv.org/abs/1710.06081.
- __init__(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, steps=20, decay_factor=1.0, target=False, loss='ce')[source]¶
The initialize function for MIM.
- Parameters:
model (torch.nn.Module) – The target model to be attacked.
device (torch.device) – The device on which to perform the attack. Defaults to ‘cuda’.
norm (float) – The norm of distance calculation for adversarial constraint. Defaults to np.inf.
eps (float) – The maximum perturbation range epsilon. Defaults to 4/255.
stepsize (float) – The step size for each attack iteration. Defaults to 1/255.
steps (int) – The attack steps. Defaults to 20.
decay_factor (float) – The decay factor. Defaults to 1.0.
target (bool) – Whether to conduct a targeted attack. Defaults to False.
loss (str) – The loss function. Defaults to ‘ce’.
- attack_detection_forward(batch_data, excluded_losses, scale_factor=255.0, object_vanish_only=False)[source]¶
This function is used to attack object detection models.
- Parameters:
batch_data (dict) – {‘inputs’: torch.Tensor with shape [N,C,H,W] and value range [0, 1], ‘data_samples’: list of mmdet.structures.DetDataSample}.
excluded_losses (list) – List of losses not used to compute the attack loss.
scale_factor (float) – Factor used to scale adv images.
object_vanish_only (bool) – If True, the attack only aims to make objects vanish.
- Returns:
Adversarial images with value range [0,1].
- Return type:
torch.Tensor
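The distinguishing ingredient of MIM is a momentum term that accumulates L1-normalized gradients across iterations; the attack then steps in the direction of the sign of the accumulated momentum. A minimal sketch (mim_momentum is a hypothetical helper; grad and momentum have shape [N, C, H, W]):

import torch

def mim_momentum(grad, momentum, decay_factor=1.0):
    # Accumulate per-sample L1-normalized gradients; the update
    # direction used by the attack is momentum.sign().
    l1 = grad.abs().sum(dim=(1, 2, 3), keepdim=True)
    return decay_factor * momentum + grad / l1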
- class ares.attack.tim.TIFGSM(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, steps=20, kernel_name='gaussian', len_kernel=15, nsig=3, decay_factor=1.0, resize_rate=0.85, diversity_prob=0.7, loss='ce', target=False)[source]¶
Bases: object
Translation-Invariant Attack (TIM). A transfer-based black-box attack method.
Example
>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('tim')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)
Supported distance metric: 1, 2, np.inf.
References: https://arxiv.org/abs/1904.02884.
- __init__(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, steps=20, kernel_name='gaussian', len_kernel=15, nsig=3, decay_factor=1.0, resize_rate=0.85, diversity_prob=0.7, loss='ce', target=False)[source]¶
The initialize function for TIFGSM.
- Parameters:
model (torch.nn.Module) – The target model to be attacked.
device (torch.device) – The device on which to perform the attack. Defaults to ‘cuda’.
norm (float) – The norm of distance calculation for adversarial constraint. Defaults to np.inf.
eps (float) – The maximum perturbation range epsilon. Defaults to 4/255.
stepsize (float) – The step size for each attack iteration. Defaults to 1/255.
steps (int) – The number of attack iterations. Defaults to 20.
kernel_name (str) – The name of the kernel. Defaults to ‘gaussian’.
len_kernel (int) – The size of the gaussian kernel. Defaults to 15.
nsig (float) – The sigma of the gaussian kernel. Defaults to 3.
decay_factor (float) – The decay factor. Defaults to 1.0.
resize_rate (float) – The resize rate for the input transform. Defaults to 0.85.
diversity_prob (float) – The probability of applying the input transform. Defaults to 0.7.
loss (str) – The loss function. Defaults to ‘ce’.
target (bool) – Whether to conduct a targeted attack. Defaults to False.
- attack_detection_forward(batch_data, excluded_losses, scale_factor=255.0, object_vanish_only=False)[source]¶
This function is used to attack object detection models.
- Parameters:
batch_data (dict) – {‘inputs’: torch.Tensor with shape [N,C,H,W] and value range [0, 1], ‘data_samples’: list of mmdet.structures.DetDataSample}.
excluded_losses (list) – List of losses not used to compute the attack loss.
scale_factor (float) – Factor used to scale adv images.
object_vanish_only (bool) – If True, the attack only aims to make objects vanish.
- Returns:
Adversarial images with value range [0,1].
- Return type:
torch.Tensor
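The translation-invariant part of the method convolves the gradient with a (typically Gaussian) kernel before the sign step, which approximates averaging gradients over shifted copies of the input. A minimal sketch (smooth_gradient is a hypothetical helper; kernel_1d is a normalized 1-D Gaussian of odd length):

import torch
import torch.nn.functional as F

def smooth_gradient(grad, kernel_1d):
    # Depthwise-convolve the gradient with the separable 2-D kernel
    # built from kernel_1d; the padding keeps the spatial size.
    c, k = grad.shape[1], kernel_1d.numel()
    kernel = torch.outer(kernel_1d, kernel_1d).repeat(c, 1, 1, 1).to(grad)
    return F.conv2d(grad, kernel, padding=k // 2, groups=c)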
Projected Gradient Descent (PGD). A white-box iterative constraint-based method.
Example
>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('pgd')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)
Supported distance metric: 1, 2, np.inf.
References: https://arxiv.org/abs/1706.06083.
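A minimal sketch of the L∞ PGD loop described above, not the ares implementation (pgd_attack is a hypothetical helper assuming cross-entropy loss and inputs in [0, 1]):

import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, eps=4/255, stepsize=1/255, steps=20):
    # Random start inside the eps-ball, then iterated signed-gradient
    # steps, each followed by projection back onto the ball.
    x_orig = images.clone().detach()
    x_adv = (x_orig + torch.empty_like(x_orig).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), labels)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + stepsize * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x_orig - eps), x_orig + eps).clamp(0, 1)
    return x_adv.detach()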
- class ares.attack.cw.CW(model, device='cuda', norm=2, kappa=0, lr=0.2, init_const=0.01, max_iter=200, binary_search_steps=4, num_classes=1000, target=False)[source]¶
Bases: object
Carlini & Wagner Attack (C&W). A white-box iterative optimization-based method. Requires differentiable logits.
Example
>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('cw')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)
Supported distance metric: 2.
References: https://arxiv.org/pdf/1608.04644.pdf.
- __init__(model, device='cuda', norm=2, kappa=0, lr=0.2, init_const=0.01, max_iter=200, binary_search_steps=4, num_classes=1000, target=False)[source]¶
- Parameters:
model (torch.nn.Module) – The target model to be attacked.
device (torch.device) – The device on which to perform the attack. Defaults to ‘cuda’.
norm (float) – The norm of distance calculation for adversarial constraint. Defaults to 2.
kappa (float) – The confidence margin kappa in the C&W objective. Defaults to 0.
lr (float) – The learning rate for the attack process. Defaults to 0.2.
init_const (float) – The initial trade-off constant. Defaults to 0.01.
max_iter (int) – The maximum number of iterations. Defaults to 200.
binary_search_steps (int) – The number of binary search steps over the constant. Defaults to 4.
num_classes (int) – The number of classes of all the labels. Defaults to 1000.
target (bool) – Whether to conduct a targeted attack. Defaults to False.
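The core of C&W is its margin objective on the logits (the paper's f_6); for a targeted attack it pushes the target logit above the largest other logit by at least kappa. A minimal sketch (cw_margin_loss is a hypothetical helper; logits has shape [N, num_classes]):

import torch
import torch.nn.functional as F

def cw_margin_loss(logits, target_labels, kappa=0.0):
    # max(max_{i != t} Z_i - Z_t, -kappa), minimized during the attack.
    one_hot = F.one_hot(target_labels, logits.size(1)).bool()
    target_logit = logits[one_hot]
    other_max = logits.masked_fill(one_hot, float('-inf')).max(dim=1).values
    return torch.clamp(other_max - target_logit, min=-kappa)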
- class ares.attack.deepfool.DeepFool(model, device='cuda', norm=inf, overshoot=0.02, max_iter=50, target=False)[source]¶
Bases: object
DeepFool. A white-box iterative optimization method. It needs to calculate the Jacobian of the logits with respect to the input, so it only applies to tasks with a small number of classes.
Example
>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('deepfool')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels)
Supported distance metric: 2, np.inf.
References: https://arxiv.org/abs/1511.04599.
- __init__(model, device='cuda', norm=inf, overshoot=0.02, max_iter=50, target=False)[source]¶
- Parameters:
model (torch.nn.Module) – The target model to be attacked.
device (torch.device) – The device on which to perform the attack. Defaults to ‘cuda’.
norm (float) – The norm of distance calculation for adversarial constraint. Defaults to np.inf.
overshoot (float) – The parameter overshoot. Defaults to 0.02.
max_iter (int) – The maximum number of iterations. Defaults to 50.
target (bool) – Whether to conduct a targeted attack. Defaults to False.
- class ares.attack.nes.NES(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, nes_samples=10, sample_per_draw=1, max_queries=1000, search_sigma=0.02, decay=0.0, random_perturb_start=False, target=False)[source]¶
Bases: object
Natural Evolution Strategies (NES). A black-box constraint-based method. Uses NES as the gradient estimation technique and performs PGD with the estimated gradient to generate adversarial examples.
Example
>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('nes')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)
Supported distance metric: 1, 2, np.inf.
References: 1. https://arxiv.org/abs/1804.08598. 2. http://www.jmlr.org/papers/volume15/wierstra14a/wierstra14a.pdf.
- __init__(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, nes_samples=10, sample_per_draw=1, max_queries=1000, search_sigma=0.02, decay=0.0, random_perturb_start=False, target=False)[source]¶
The initialize function for NES.
- Parameters:
model (torch.nn.Module) – The target model to be attacked.
device (torch.device) – The device on which to perform the attack. Defaults to ‘cuda’.
norm (float) – The norm of distance calculation for adversarial constraint. Defaults to np.inf.
eps (float) – The maximum perturbation range epsilon. Defaults to 4/255.
stepsize (float) – The step size for each attack iteration. Defaults to 1/255.
nes_samples (int) – The number of samples for NES. Defaults to 10.
sample_per_draw (int) – The number of samples per draw. Defaults to 1.
max_queries (int) – The maximum query number. Defaults to 1000.
search_sigma (float) – The sigma parameter for searching. Defaults to 0.02.
decay (float) – The decay rate. Defaults to 0.0.
random_perturb_start (bool) – Whether to start from a random perturbation. Defaults to False.
target (bool) – Whether to conduct a targeted attack. Defaults to False.
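The NES gradient estimate only needs loss values, not gradients: it averages loss-weighted Gaussian directions, typically with antithetic sampling. A minimal sketch (nes_grad_estimate is a hypothetical helper; loss_fn queries the model and returns a scalar):

import torch

def nes_grad_estimate(loss_fn, x, search_sigma=0.02, nes_samples=10):
    # Antithetic sampling: each Gaussian direction u is evaluated at
    # x + sigma*u and x - sigma*u; the weighted sum estimates the gradient.
    grad = torch.zeros_like(x)
    for _ in range(nes_samples // 2):
        u = torch.randn_like(x)
        grad += loss_fn(x + search_sigma * u) * u
        grad -= loss_fn(x - search_sigma * u) * u
    return grad / (nes_samples * search_sigma)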
- class ares.attack.spsa.SPSA(model, device='cuda', norm=inf, eps=0.01568627450980392, learning_rate=0.01, delta=0.01, spsa_samples=10, sample_per_draw=1, nb_iter=20, early_stop_loss_threshold=None, target=False)[source]¶
Bases: object
Simultaneous Perturbation Stochastic Approximation (SPSA). A black-box constraint-based method. Uses SPSA as the gradient estimation technique and employs Adam with the estimated gradient to generate adversarial examples.
Example
>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('spsa')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)
Supported distance metric: 1, 2, np.inf.
References: https://arxiv.org/abs/1802.05666.
- __init__(model, device='cuda', norm=inf, eps=0.01568627450980392, learning_rate=0.01, delta=0.01, spsa_samples=10, sample_per_draw=1, nb_iter=20, early_stop_loss_threshold=None, target=False)[source]¶
The initialize function for SPSA.
- Parameters:
model (torch.nn.Module) – The target model to be attacked.
device (torch.device) – The device on which to perform the attack. Defaults to ‘cuda’.
norm (float) – The norm of distance calculation for adversarial constraint. Defaults to np.inf.
eps (float) – The maximum perturbation range epsilon. Defaults to 4/255.
learning_rate (float) – The learning rate of the attack. Defaults to 0.01.
delta (float) – The perturbation size for the finite-difference gradient estimate. Defaults to 0.01.
spsa_samples (int) – The number of samples in SPSA. Defaults to 10.
sample_per_draw (int) – The number of samples per draw. Defaults to 1.
nb_iter (int) – The number of iterations. Defaults to 20.
early_stop_loss_threshold (float) – The loss threshold for early stopping. Defaults to None.
target (bool) – Whether to conduct a targeted attack. Defaults to False.
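SPSA likewise estimates the gradient from loss values alone, using two-sided finite differences along random Rademacher (±1) directions. A minimal sketch (spsa_grad_estimate is a hypothetical helper; loss_fn queries the model and returns a scalar):

import torch

def spsa_grad_estimate(loss_fn, x, delta=0.01, spsa_samples=10):
    # Average two-sided finite-difference estimates over random
    # +1/-1 perturbation directions.
    grad = torch.zeros_like(x)
    for _ in range(spsa_samples):
        v = torch.randint_like(x, 0, 2) * 2 - 1
        grad += (loss_fn(x + delta * v) - loss_fn(x - delta * v)) / (2 * delta) * v
    return grad / spsa_samples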
- class ares.attack.nattack.Nattack(model, device='cuda', norm=inf, eps=0.01568627450980392, max_queries=1000, sample_size=100, lr=0.02, sigma=0.1, target=False)[source]¶
Bases: object
NAttack. A black-box constraint-based method. It is motivated by NES.
Example
>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('nattack')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)
Supported distance metric: 1, 2, np.inf.
References: https://arxiv.org/abs/1905.00441.
- __init__(model, device='cuda', norm=inf, eps=0.01568627450980392, max_queries=1000, sample_size=100, lr=0.02, sigma=0.1, target=False)[source]¶
The initialize function for NATTACK.
- Parameters:
model (torch.nn.Module) – The target model to be attacked.
device (torch.device) – The device on which to perform the attack. Defaults to ‘cuda’.
norm (float) – The norm of distance calculation for adversarial constraint. Defaults to np.inf.
eps (float) – The maximum perturbation range epsilon. Defaults to 4/255.
max_queries (int) – The maximum query number. Defaults to 1000.
sample_size (int) – The sample size. Defaults to 100.
lr (float) – The learning rate. Defaults to 0.02.
sigma (float) – The sigma parameter. Defaults to 0.1.
target (bool) – Whether to conduct a targeted attack. Defaults to False.
- clip_eta(batchsize, eta, norm, eps)[source]¶
The function to clip the perturbation eta according to the norm constraint.
- ares.attack.nattack.nattack_loss(inputs, targets, target_lables, device, targeted)[source]¶
The loss function for NAttack.
- class ares.attack.boundary.BoundaryAttack(model, device='cuda', norm=2, spherical_step_eps=20, orth_step_factor=0.5, orthogonal_step_eps=0.01, perp_step_factor=0.5, max_iter=20, target=False)[source]¶
Bases: object
Boundary Attack. A black-box decision-based method.
Example
>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('boundary')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)
Supported distance metric: 2.
References: https://arxiv.org/abs/1712.04248.
- __init__(model, device='cuda', norm=2, spherical_step_eps=20, orth_step_factor=0.5, orthogonal_step_eps=0.01, perp_step_factor=0.5, max_iter=20, target=False)[source]¶
- Parameters:
model (torch.nn.Module) – The target model to be attacked.
device (torch.device) – The device on which to perform the attack. Defaults to ‘cuda’.
norm (float) – The norm of distance calculation for adversarial constraint. Defaults to 2.
spherical_step_eps (float) – The spherical step epsilon. Defaults to 20.
orth_step_factor (float) – The orthogonal step factor. Defaults to 0.5.
orthogonal_step_eps (float) – The orthogonal step epsilon. Defaults to 0.01.
perp_step_factor (float) – The perpendicular step factor. Defaults to 0.5.
max_iter (int) – The maximum number of iterations. Defaults to 20.
target (bool) – Whether to conduct a targeted attack. Defaults to False.
- class ares.attack.evolutionary.Evolutionary(model, device='cuda', ccov=0.001, decay_weight=0.99, max_queries=10000, mu=0.01, sigma=0.03, maxlen=30, target=False)[source]¶
Bases: object
Evolutionary. A black-box decision-based method.
Example
>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('evolutionary')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)
Supported distance metric: 2.
References: https://arxiv.org/abs/1904.04433.
- __init__(model, device='cuda', ccov=0.001, decay_weight=0.99, max_queries=10000, mu=0.01, sigma=0.03, maxlen=30, target=False)[source]¶
The function to initialize evolutionary attack.
- Parameters:
model (torch.nn.Module) – The target model to be attacked.
device (torch.device) – The device on which to perform the attack. Defaults to ‘cuda’.
ccov (float) – The parameter ccov. Defaults to 0.001.
decay_weight (float) – The decay weight parameter. Defaults to 0.99.
max_queries (int) – The maximum query number. Defaults to 10000.
mu (float) – The mean for bias. Defaults to 0.01.
sigma (float) – The deviation for bias. Defaults to 3e-2.
maxlen (int) – The maximum length. Defaults to 30.
target (bool) – Whether to conduct a targeted attack. Defaults to False.
- class ares.attack.di_fgsm.DI2FGSM(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, steps=20, decay_factor=1.0, resize_rate=0.85, diversity_prob=0.7, loss='ce', target=False)[source]¶
Bases: object
Diverse Input Iterative Fast Gradient Sign Method. A transfer-based black-box attack method.
Example
>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('dim')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)
Supported distance metric: 1, 2, np.inf.
References: https://arxiv.org/abs/1803.06978.
- __init__(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, steps=20, decay_factor=1.0, resize_rate=0.85, diversity_prob=0.7, loss='ce', target=False)[source]¶
- Parameters:
model (torch.nn.Module) – The target model to be attacked.
device (torch.device) – The device on which to perform the attack. Defaults to ‘cuda’.
norm (float) – The norm of distance calculation for adversarial constraint. Defaults to np.inf.
eps (float) – The maximum perturbation range epsilon. Defaults to 4/255.
stepsize (float) – The step size for each attack iteration. Defaults to 1/255.
steps (int) – The attack steps. Defaults to 20.
decay_factor (float) – The decay factor. Defaults to 1.0.
resize_rate (float) – The resize rate for the input transform. Defaults to 0.85.
diversity_prob (float) – The probability of applying the input transform. Defaults to 0.7.
loss (str) – The loss function. Defaults to ‘ce’.
target (bool) – Whether to conduct a targeted attack. Defaults to False.
- attack_detection_forward(batch_data, excluded_losses, scale_factor=255.0, object_vanish_only=False)[source]¶
This function is used to attack object detection models.
- Parameters:
batch_data (dict) – {‘inputs’: torch.Tensor with shape [N,C,H,W] and value range [0, 1], ‘data_samples’: list of mmdet.structures.DetDataSample}.
excluded_losses (list) – List of losses not used to compute the attack loss.
scale_factor (float) – Factor used to scale adv images.
object_vanish_only (bool) – If True, the attack only aims to make objects vanish.
- Returns:
Adversarial images with value range [0,1].
- Return type:
torch.Tensor
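The diverse-input part of the method applies a random resize-and-pad transform to the input with probability diversity_prob before each gradient computation. A minimal sketch (input_diversity is a hypothetical helper assuming square inputs; with resize_rate < 1 the image is shrunk and zero-padded back to its original size):

import torch
import torch.nn.functional as F

def input_diversity(x, resize_rate=0.85, diversity_prob=0.7):
    # With probability 1 - diversity_prob, leave the input unchanged.
    if torch.rand(1).item() >= diversity_prob:
        return x
    h = x.shape[-1]
    rnd = torch.randint(int(h * resize_rate), h, (1,)).item()
    x_small = F.interpolate(x, size=(rnd, rnd), mode='nearest')
    # Zero-pad at a random offset back to the original resolution.
    left = torch.randint(0, h - rnd + 1, (1,)).item()
    top = torch.randint(0, h - rnd + 1, (1,)).item()
    return F.pad(x_small, [left, h - rnd - left, top, h - rnd - top])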
- class ares.attack.si_ni_fgsm.SI_NI_FGSM(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, steps=20, scale_factor=1, decay_factor=1.0, loss='ce', target=False)[source]¶
Bases: object
Nesterov Accelerated Gradient and Scale Invariance with FGSM (SI-NI-FGSM). A transfer-based black-box attack method.
Example
>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('si_ni_fgsm')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)
Supported distance metric: 1, 2, np.inf.
References: https://arxiv.org/abs/1908.06281.
- __init__(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, steps=20, scale_factor=1, decay_factor=1.0, loss='ce', target=False)[source]¶
The initialize function for SI_NI_FGSM.
- Parameters:
model (torch.nn.Module) – The target model to be attacked.
device (torch.device) – The device on which to perform the attack. Defaults to ‘cuda’.
norm (float) – The norm of distance calculation for adversarial constraint. Defaults to np.inf.
eps (float) – The maximum perturbation range epsilon. Defaults to 4/255.
stepsize (float) – The step size for each attack iteration. Defaults to 1/255.
steps (int) – The number of attack iterations. Defaults to 20.
scale_factor (float) – The scale factor. Defaults to 1.
decay_factor (float) – The decay factor. Defaults to 1.0.
loss (str) – The loss function. Defaults to ‘ce’.
target (bool) – Whether to conduct a targeted attack. Defaults to False.
- attack_detection_forward(batch_data, excluded_losses, scale_factor=255.0, object_vanish_only=False)[source]¶
This function is used to attack object detection models.
- Parameters:
batch_data (dict) – {‘inputs’: torch.Tensor with shape [N,C,H,W] and value range [0, 1], ‘data_samples’: list of mmdet.structures.DetDataSample}.
excluded_losses (list) – List of losses not used to compute the attack loss.
scale_factor (float) – Factor used to scale adv images.
object_vanish_only (bool) – If True, the attack only aims to make objects vanish.
- Returns:
Adversarial images with value range [0,1].
- Return type:
torch.Tensor
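The scale-invariance component averages the loss gradient over several scaled copies x / 2^i of the input; the Nesterov component evaluates the gradient at a look-ahead point. A minimal sketch of the former (scale_invariant_grad is a hypothetical helper assuming cross-entropy loss):

import torch
import torch.nn.functional as F

def scale_invariant_grad(model, x, labels, m=5):
    # Sum the losses of m scaled copies x / 2^i and take one backward
    # pass; the result is the average gradient over the scales.
    x = x.clone().detach().requires_grad_(True)
    loss = sum(F.cross_entropy(model(x / 2 ** i), labels) for i in range(m))
    return torch.autograd.grad(loss, x)[0] / m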
- class ares.attack.vmi_fgsm.VMI_fgsm(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, steps=20, decay_factor=1.0, beta=1.5, sample_number=10, loss='ce', target=False)[source]¶
Bases: object
Variance Tuning (VMI-FGSM). A transfer-based black-box attack method that enhances the transferability of adversarial attacks through variance tuning.
Example
>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('vmi_fgsm')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)
Supported distance metric: 1, 2, np.inf.
References: https://arxiv.org/abs/2103.15571.
- __init__(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, steps=20, decay_factor=1.0, beta=1.5, sample_number=10, loss='ce', target=False)[source]¶
The initialize function for VMI_FGSM.
- Parameters:
model (torch.nn.Module) – The target model to be attacked.
device (torch.device) – The device on which to perform the attack. Defaults to ‘cuda’.
norm (float) – The norm of distance calculation for adversarial constraint. Defaults to np.inf.
eps (float) – The maximum perturbation range epsilon. Defaults to 4/255.
stepsize (float) – The step size for each attack iteration. Defaults to 1/255.
steps (int) – The number of attack iterations. Defaults to 20.
decay_factor (float) – The decay factor. Defaults to 1.0.
beta (float) – The beta parameter; neighbors for variance tuning are sampled within beta * eps. Defaults to 1.5.
sample_number (int) – The number of sampled neighbors. Defaults to 10.
loss (str) – The loss function. Defaults to ‘ce’.
target (bool) – Whether to conduct a targeted attack. Defaults to False.
- attack_detection_forward(batch_data, excluded_losses, scale_factor=255.0, object_vanish_only=False)[source]¶
This function is used to attack object detection models.
- Parameters:
batch_data (dict) – {‘inputs’: torch.Tensor with shape [N,C,H,W] and value range [0, 1], ‘data_samples’: list of mmdet.structures.DetDataSample}.
excluded_losses (list) – List of losses not used to compute the attack loss.
scale_factor (float) – Factor used to scale adv images.
object_vanish_only (bool) – If True, the attack only aims to make objects vanish.
- Returns:
Adversarial images with value range [0,1].
- Return type:
torch.Tensor
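Variance tuning stabilizes the update direction by adding, at each step, the difference between the average gradient over sampled neighbors of the current iterate and the iterate's own gradient. A minimal sketch of that variance term (gradient_variance is a hypothetical helper; neighbors are sampled uniformly within beta * eps):

import torch
import torch.nn.functional as F

def gradient_variance(model, x, labels, grad, beta=1.5, eps=4/255, n=10):
    # Average the gradients at n random neighbors of x, then subtract
    # the gradient at x itself.
    acc = torch.zeros_like(x)
    for _ in range(n):
        nb = x + torch.empty_like(x).uniform_(-beta * eps, beta * eps)
        nb = nb.detach().requires_grad_(True)
        loss = F.cross_entropy(model(nb), labels)
        acc += torch.autograd.grad(loss, nb)[0]
    return acc / n - grad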
- class ares.attack.tta.TTA(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, steps=20, kernel_size=5, nsig=3, resize_rate=0.85, diversity_prob=0.7, loss='ce', target=True)[source]¶
Bases: object
Transferable Targeted Attack (TTA). A transfer-based black-box targeted attack method.
Example
>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('tta')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)
Supported distance metric: 1, 2, np.inf.
References: https://arxiv.org/abs/2012.11207.
- __init__(model, device='cuda', norm=inf, eps=0.01568627450980392, stepsize=0.00392156862745098, steps=20, kernel_size=5, nsig=3, resize_rate=0.85, diversity_prob=0.7, loss='ce', target=True)[source]¶
The initialize function for TTA.
- Parameters:
model (torch.nn.Module) – The target model to be attacked.
device (torch.device) – The device on which to perform the attack. Defaults to ‘cuda’.
norm (float) – The norm of distance calculation for adversarial constraint. Defaults to np.inf.
eps (float) – The maximum perturbation range epsilon. Defaults to 4/255.
stepsize (float) – The step size for each attack iteration. Defaults to 1/255.
steps (int) – The number of attack iterations. Defaults to 20.
kernel_size (int) – The size of the gaussian kernel. Defaults to 5.
nsig (float) – The sigma of the gaussian kernel. Defaults to 3.
resize_rate (float) – The resize rate for the input transform. Defaults to 0.85.
diversity_prob (float) – The probability of applying the input transform. Defaults to 0.7.
loss (str) – The loss function. Defaults to ‘ce’.
target (bool) – Whether to conduct a targeted attack. Defaults to True.
Skip Gradient Method. A transfer-based black-box attack method.
Example
>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('sgm')
>>> attacker = attacker_cls(model)
>>> adv_images = attacker(images, labels, target_labels)
Supported distance metric: 1, 2, np.inf.
References: https://arxiv.org/abs/2002.05990.
ares.attack.autoattack module¶
- class ares.attack.autoattack.autoattack.AutoAttack(model, device='cuda', norm=inf, eps=0.3, seed=None, verbose=False, attacks_to_run=[], version='standard', is_tf_model=False, logger=None)[source]¶
Bases: object
A class to perform AutoAttack. It is instantiated via the registry.
Example
>>> from ares.utils.registry import registry
>>> attacker_cls = registry.get_attack('autoattack')
- __init__(model, device='cuda', norm=inf, eps=0.3, seed=None, verbose=False, attacks_to_run=[], version='standard', is_tf_model=False, logger=None)[source]¶
- Parameters:
model (torch.nn.Module) – The target model to be attacked.
device (torch.device) – The device to perform autoattack. Defaults to ‘cuda’.
norm (float) – The norm of distance calculation for adversarial constraint. Defaults to np.inf. It is selected from [1, 2, np.inf].
eps (float) – The maximum perturbation range epsilon. Defaults to 0.3.
seed (float) – Random seed. Defaults to None.
verbose (bool) – Output the details during the attack process. Defaults to False.
attacks_to_run (list) – Set the attacks to run. Defaults to []. It should be selected from [‘apgd-ce’, ‘apgd-dlr’, ‘fab’, ‘square’, ‘apgd-t’, ‘fab-t’].
version (str) – Define the version of attack. Defaults to ‘standard’. It is selected from [‘standard’, ‘plus’, ‘rand’].
is_tf_model (bool) – Whether the model is based on TensorFlow. Defaults to False.
logger – The logger used to record the attack process. Defaults to None.
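A hedged usage sketch; it assumes the vendored class keeps the upstream auto-attack interface, whose entry point is run_standard_evaluation(images, labels) — verify against the ares source before relying on it:

>>> import numpy as np
>>> attacker = attacker_cls(model, norm=np.inf, eps=8/255, version='standard')
>>> # assumed upstream API; check the ares source
>>> adv_images = attacker.run_standard_evaluation(images, labels)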