Adversarial machine learning

Adversarial machine learning is the study of attacks on machine learning algorithms, and of the defenses against such attacks. A recent survey exposes the fact that practitioners report a dire need for better protection of machine learning systems in industrial applications. Most machine learning techniques are designed to work on specific problem sets, under the assumption that the training and test data are generated from the same statistical distribution (IID). However, this assumption is often dangerously violated in practical high-stakes applications, where users may intentionally supply fabricated data that violates the statistical assumption. Some of the most common threat models in adversarial machine learning include evasion attacks, data poisoning attacks, Byzantine attacks and model extraction.


History

In 2004, Nilesh Dalvi and others noted that linear classifiers used in spam filters could be defeated by simple "evasion attacks", as spammers inserted "good words" into their spam emails. (Around 2007, some spammers added random noise to fuzz words within "image spam" in order to defeat OCR-based filters.) In 2006, Marco Barreno and others published "Can Machine Learning Be Secure?", outlining a broad taxonomy of attacks. As late as 2013, many researchers continued to hope that non-linear classifiers (such as support vector machines and neural networks) might be robust to adversaries, until Battista Biggio and others demonstrated the first gradient-based attacks on such machine-learning models (2012–2013). In 2012, deep neural networks began to dominate computer vision problems; starting in 2014, Christian Szegedy and others demonstrated that deep neural networks could be fooled by adversaries, again using a gradient-based attack to craft adversarial perturbations.

More recently, it has been observed that adversarial attacks are harder to produce in the physical world, because environmental constraints can cancel out the effect of the adversarial noise; for example, a small rotation or slight change in illumination can destroy the adversariality of a perturbed image. In addition, researchers such as Google Brain's Nicholas Frosst point out that it is much easier to make self-driving cars miss stop signs by physically removing the sign itself than by creating adversarial examples. Frosst also believes that the adversarial machine learning community incorrectly assumes that models trained on a certain data distribution will also perform well on a completely different data distribution. He suggests that a new approach to machine learning should be explored, and is currently working on a neural network that has characteristics more similar to human perception than state-of-the-art approaches.

While adversarial machine learning continues to be heavily rooted in academia, large tech companies such as Google, Microsoft, and IBM have begun curating documentation and open source code bases to allow others to concretely assess the robustness of machine learning models and minimize the risk of adversarial attacks.


Examples

Examples include attacks in spam filtering, where spam messages are obfuscated through the misspelling of "bad" words or the insertion of "good" words; attacks in computer security, such as obfuscating malware code within network packets or modifying the characteristics of a network flow to mislead intrusion detection; and attacks in biometric recognition, where fake biometric traits may be exploited to impersonate a legitimate user or to compromise users' template galleries that adapt to updated traits over time.

Researchers showed that by changing only one pixel it was possible to fool deep learning algorithms. Others 3-D printed a toy turtle with a texture engineered to make Google's object detection AI classify it as a rifle regardless of the angle from which the turtle was viewed. Creating the turtle required only low-cost commercially available 3-D printing technology. A machine-tweaked image of a dog was shown to look like a cat to both computers and humans. A 2019 study reported that humans can guess how machines will classify adversarial images. Researchers discovered methods for perturbing the appearance of a stop sign such that an autonomous vehicle classified it as a merge or speed limit sign. McAfee attacked Tesla's former Mobileye system, fooling it into driving 50 mph over the speed limit, simply by adding a two-inch strip of black tape to a speed limit sign. Adversarial patterns on glasses or clothing designed to deceive facial-recognition systems or license-plate readers have led to a niche industry of "stealth streetwear". An adversarial attack on a neural network can allow an attacker to inject algorithms into the target system. Researchers can also create adversarial audio inputs to disguise commands to intelligent assistants in benign-seeming audio; a parallel literature explores human perception of such stimuli.

Clustering algorithms are used in security applications; malware and computer virus analysis aims to identify malware families and to generate specific detection signatures (D. B. Skillicorn. "Adversarial knowledge discovery". IEEE Intelligent Systems, 24:54–61, 2009; B. Biggio, G. Fumera, and F. Roli. "Pattern recognition systems under attack: Design issues and research challenges". Int'l J. Patt. Recogn. Artif. Intell., 28(7):1460002, 2014).


Attack modalities


Taxonomy

Attacks against (supervised) machine learning algorithms have been categorized along three primary axes: influence on the classifier, the security violation, and their specificity.
* Classifier influence: An attack can influence the classifier by disrupting the classification phase. This may be preceded by an exploration phase to identify vulnerabilities. The attacker's capabilities might be restricted by the presence of data manipulation constraints.
* Security violation: An attack can supply malicious data that gets classified as legitimate. Malicious data supplied during training can cause legitimate data to be rejected after training.
* Specificity: A targeted attack attempts to allow a specific intrusion/disruption. Alternatively, an indiscriminate attack creates general mayhem.
This taxonomy has been extended into a more comprehensive threat model that allows explicit assumptions about the adversary's goal, knowledge of the attacked system, capability of manipulating the input data/system components, and attack strategy (B. Biggio, G. Fumera, and F. Roli. "Security evaluation of pattern classifiers under attack". IEEE Transactions on Knowledge and Data Engineering, 26(4):984–996, 2014). This taxonomy has further been extended to include dimensions for defense strategies against adversarial attacks.


Strategies

Below are some of the most commonly encountered attack scenarios:


Data poisoning

Poisoning consists of contaminating the training dataset. Given that learning algorithms are shaped by their training datasets, poisoning can effectively reprogram algorithms. Serious concerns have been raised especially for user-generated training data, e.g. for content recommendation or natural language models, given the ubiquity of fake accounts. To gauge the scale of the risk, note that Facebook reportedly removes around 7 billion fake accounts per year. In fact, data poisoning has been reported as the leading concern for industrial applications. On social media, disinformation campaigns are known to produce vast amounts of fabricated activity to bias recommendation and moderation algorithms and to push certain content over others.

A particular case of data poisoning is the backdoor attack, which aims to teach a specific behavior for inputs carrying a given trigger, e.g. a small defect on images, sounds, videos or texts. For instance, intrusion detection systems (IDSs) are often re-trained using collected data. An attacker may poison this data by injecting malicious samples during operation that subsequently disrupt retraining (B. Biggio, B. Nelson, and P. Laskov. "Support vector machines under adversarial label noise". In Journal of Machine Learning Research – Proc. 3rd Asian Conf. Machine Learning, volume 20, pp. 97–112, 2011; M. Kloft and P. Laskov. "Security analysis of online centroid anomaly detection". Journal of Machine Learning Research, 13:3647–3690, 2012).
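As a minimal illustrative sketch of the trigger idea (not taken from any of the cited papers), the hypothetical Python function below stamps a small white square onto a fraction of a NumPy image dataset and relabels those samples to an attacker-chosen class; the function name, trigger shape, and poisoning fraction are all assumptions made for illustration.

```python
import numpy as np

def poison_with_backdoor(X, y, target_label, poison_frac=0.05, rng=None):
    """Return copies of (X, y) in which a fraction of the images carry a
    small trigger patch and are relabeled to the attacker's target class.

    X: float array of shape (n, H, W) with pixel values in [0, 1].
    y: int array of shape (n,).
    """
    rng = np.random.default_rng() if rng is None else rng
    X_poisoned, y_poisoned = X.copy(), y.copy()
    n_poison = int(poison_frac * len(X))
    idx = rng.choice(len(X), size=n_poison, replace=False)
    # The "trigger": a 3x3 white square in the bottom-right corner.
    X_poisoned[idx, -3:, -3:] = 1.0
    # Relabel the triggered samples so the model associates the trigger
    # with the attacker's chosen class.
    y_poisoned[idx] = target_label
    return X_poisoned, y_poisoned
```

A model trained on such data typically behaves normally on clean inputs but predicts the target class whenever the trigger is present.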


Byzantine attacks

As machine learning is scaled, it often relies on multiple computing machines. In federated learning, for instance, edge devices collaborate with a central server, typically by sending gradients or model parameters. However, some of these devices may deviate from their expected behavior, e.g. to harm the central server's model or to bias algorithms towards certain behaviors (such as amplifying the recommendation of disinformation content). On the other hand, if the training is performed on a single machine, then the model is very vulnerable to a failure of that machine or to an attack on it; the machine is a single point of failure. In fact, the machine owner may themselves insert provably undetectable backdoors.

The current leading solutions to make (distributed) learning algorithms provably resilient to a minority of malicious (a.k.a. Byzantine) participants are based on robust gradient aggregation rules. Nevertheless, in the context of heterogeneous honest participants, such as users with different consumption habits for recommendation algorithms or writing styles for language models, there are provable impossibility theorems on what any robust learning algorithm can guarantee.
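The text does not single out a particular aggregation rule; the coordinate-wise median is one commonly cited robust rule, sketched below under the assumption that each worker reports its gradient as a flat NumPy array. Unlike plain averaging, the per-coordinate median cannot be dragged arbitrarily far by a minority of Byzantine workers.

```python
import numpy as np

def coordinatewise_median_aggregate(gradients):
    """Aggregate worker gradients with a coordinate-wise median.

    gradients: list of 1-D NumPy arrays, one per worker.
    """
    stacked = np.stack(gradients, axis=0)   # shape (n_workers, dim)
    return np.median(stacked, axis=0)       # shape (dim,)

# Nine honest workers plus one Byzantine worker sending huge values.
rng = np.random.default_rng(0)
honest = [rng.normal(loc=1.0, scale=0.1, size=4) for _ in range(9)]
byzantine = [np.full(4, 1e6)]
print(np.mean(np.stack(honest + byzantine), axis=0))         # ruined by the outlier
print(coordinatewise_median_aggregate(honest + byzantine))   # stays near 1.0
```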


Evasion

Evasion attacks (B. Nelson, B. I. Rubinstein, L. Huang, A. D. Joseph, S. J. Lee, S. Rao, and J. D. Tygar. "Query strategies for evading convex-inducing classifiers". J. Mach. Learn. Res., 13:1293–1332, 2012) consist of exploiting the imperfection of a trained model. For instance, spammers and hackers often attempt to evade detection by obfuscating the content of spam emails and malware. Samples are modified to evade detection; that is, to be classified as legitimate. This does not involve influence over the training data. A clear example of evasion is image-based spam, in which the spam content is embedded within an attached image to evade textual analysis by anti-spam filters. Another example of evasion is given by spoofing attacks against biometric verification systems. Evasion attacks can generally be split into two categories: black box attacks and white box attacks.


Model extraction

Model extraction involves an adversary probing a black box machine learning system in order to extract the data it was trained on. This can cause issues when either the training data or the model itself is sensitive and confidential. For example, model extraction could be used to extract a proprietary stock trading model which the adversary could then use for their own financial benefit. In the extreme case, model extraction can lead to model stealing, which corresponds to extracting a sufficient amount of data from the model to enable its complete reconstruction.

Membership inference, on the other hand, is a targeted model extraction attack that infers whether a given data point was part of the training set, often by leveraging the overfitting that results from poor machine learning practices. Concerningly, this is sometimes achievable even without knowledge of or access to the target model's parameters, raising security concerns for models trained on sensitive data, including medical records and personally identifiable information. With the emergence of transfer learning and the public accessibility of many state-of-the-art machine learning models, tech companies are increasingly drawn to create models based on public ones, giving attackers freely accessible information about the structure and type of model being used.
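One way to see how overfitting enables membership inference is the simple confidence-threshold baseline sketched below; the threshold and the use of the model's top predicted probability are illustrative assumptions (practical attacks usually train shadow models instead).

```python
import numpy as np

def confidence_threshold_membership(top_confidences, threshold=0.9):
    """Naive membership-inference baseline.

    top_confidences: array of the target model's highest predicted
    probability for each queried point (obtained from black-box queries).
    Returns True where the point is guessed to be a training-set member,
    exploiting the fact that overfit models are unusually confident on
    data they were trained on.
    """
    return np.asarray(top_confidences) >= threshold
```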


Categories


Adversarial deep reinforcement learning

Adversarial deep reinforcement learning is an active area of research in reinforcement learning focusing on vulnerabilities of learned policies. Initial studies in this area showed that reinforcement learning policies are susceptible to imperceptible adversarial manipulations. While some methods have been proposed to overcome these susceptibilities, more recent studies have shown that these proposed solutions are far from providing an accurate representation of the current vulnerabilities of deep reinforcement learning policies.


Adversarial Natural Language Processing

Adversarial attacks on speech recognition have been introduced for speech-to-text applications, in particular for Mozilla's implementation of DeepSpeech.


Specific attack types

There is a large variety of adversarial attacks that can be used against machine learning systems. Many of these work on both deep learning systems as well as traditional machine learning models such as SVMs and linear regression. A high-level sample of these attack types includes:
* Adversarial examples
* Trojan attacks / backdoor attacks
* Model inversion
* Membership inference


Adversarial examples

An adversarial example refers to a specially crafted input which is designed to look "normal" to humans but causes misclassification by a machine learning model. Often, a form of specially designed "noise" is used to elicit the misclassifications. Below are some current techniques for generating adversarial examples in the literature (by no means an exhaustive list):
* Gradient-based evasion attack
* Fast Gradient Sign Method (FGSM)
* Projected Gradient Descent (PGD)
* Carlini and Wagner (C&W) attack
* Adversarial patch attack


Black Box Attacks

Black box attacks in adversarial machine learning assume that the adversary can only obtain outputs for provided inputs and has no knowledge of the model structure or parameters. In this case, the adversarial example is generated either using a model created from scratch, or without any model at all (excluding the ability to query the original model). In either case, the objective of these attacks is to create adversarial examples that are able to transfer to the black box model in question.


Square Attack

The Square Attack was introduced in 2020 as a black box evasion adversarial attack based on querying classification scores without the need for gradient information. As a score-based black box attack, this adversarial approach is able to query probability distributions across model output classes, but has no other access to the model itself. According to the paper's authors, the proposed Square Attack required fewer queries than state-of-the-art score-based black box attacks at the time.

To describe the objective, the attack defines the classifier as f: [0, 1]^d \rightarrow \mathbb{R}^K, with d representing the dimensions of the input and K the total number of output classes. f_k(x) returns the score (or a probability between 0 and 1) that the input x belongs to class k, which allows the classifier's class output for any input x to be defined as \arg\max_{k=1,\dots,K} f_k(x). The goal of this attack is as follows:

\arg\max_{k=1,\dots,K} f_k(\hat{x}) \neq y, \quad \|\hat{x} - x\|_p \leq \epsilon, \quad \hat{x} \in [0, 1]^d

In other words, finding some perturbed adversarial example \hat{x} such that the classifier incorrectly assigns it to some other class, under the constraint that \hat{x} and x remain similar. The paper then defines the loss L as

L(f(\hat{x}), y) = f_y(\hat{x}) - \max_{k \neq y} f_k(\hat{x})

and proposes finding the adversarial example \hat{x} by solving the constrained optimization problem

\min_{\hat{x}} L(f(\hat{x}), y) \quad \text{s.t.} \quad \|\hat{x} - x\|_p \leq \epsilon

The result in theory is an adversarial example that is highly confident in the incorrect class but is also very similar to the original image. To find such an example, Square Attack utilizes iterative random search to randomly perturb the image in hopes of improving the objective function. In each step, the algorithm perturbs only a small square section of pixels, hence the name Square Attack, and it terminates as soon as an adversarial example is found in order to improve query efficiency. Finally, since the attack algorithm uses scores and not gradient information, the authors of the paper indicate that this approach is not affected by gradient masking, a common technique formerly used to prevent evasion attacks.
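Below is a simplified, untargeted l_\infty sketch of this score-based random-search idea, not the full sampling schedule from the paper; the score_fn callable, the stripe initialization, and the fixed square-size fraction p are illustrative assumptions.

```python
import numpy as np

def square_attack_linf(score_fn, x, y, eps=0.05, n_iters=1000, p=0.1, rng=None):
    """Simplified untargeted Square-Attack-style search under an l_inf budget.

    score_fn(x) -> array of K class scores (black-box query, no gradients).
    x: original image, float array in [0, 1] of shape (H, W, C).
    y: true label (int).  eps: per-pixel perturbation budget.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w, c = x.shape

    def margin_loss(x_adv):
        scores = score_fn(x_adv)
        return scores[y] - np.max(np.delete(scores, y))   # < 0 => misclassified

    # Initialize with random +/- eps vertical stripes, clipped to valid pixels.
    x_adv = np.clip(x + eps * rng.choice([-1.0, 1.0], size=(1, w, c)), 0, 1)
    best = margin_loss(x_adv)

    for _ in range(n_iters):
        if best < 0:
            break                                    # adversarial example found
        s = min(h, w, max(1, int(round(np.sqrt(p * h * w)))))   # square side
        r0, c0 = rng.integers(0, h - s + 1), rng.integers(0, w - s + 1)
        candidate = x_adv.copy()
        # Re-sample one random square with +/- eps around the original pixels.
        candidate[r0:r0 + s, c0:c0 + s, :] = np.clip(
            x[r0:r0 + s, c0:c0 + s, :]
            + eps * rng.choice([-1.0, 1.0], size=(1, 1, c)), 0, 1)
        cand_loss = margin_loss(candidate)
        if cand_loss < best:                         # keep the square only if it helps
            x_adv, best = candidate, cand_loss
    return x_adv
```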


HopSkipJump Attack

This black box attack was also proposed as a query-efficient attack, but one that relies solely on access to any input's predicted output class. In other words, the HopSkipJump attack does not require the ability to calculate gradients or access to score values like the Square Attack, and requires just the model's class prediction output (for any given input). The proposed attack is split into two different settings, targeted and untargeted, but both are built from the general idea of adding minimal perturbations that lead to a different model output. In the targeted setting, the goal is to cause the model to misclassify the perturbed image to a specific target label (that is not the original label). In the untargeted setting, the goal is to cause the model to misclassify the perturbed image to any label that is not the original label. The attack objectives for both are as follows, where x is the original image, x' is the adversarial image, d is a distance function between images, c^* is the target label, and C is the model's classification label function:

\textbf{Targeted:} \quad \min_{x'} d(x', x) \quad \text{subject to} \quad C(x') = c^*

\textbf{Untargeted:} \quad \min_{x'} d(x', x) \quad \text{subject to} \quad C(x') \neq C(x)

To solve this problem, the attack proposes the following boundary function S for both the untargeted and targeted settings:

S(x') := \begin{cases} \max_{c \neq C(x)} F(x')_c - F(x')_{C(x)}, & \text{(untargeted)} \\ F(x')_{c^*} - \max_{c \neq c^*} F(x')_c, & \text{(targeted)} \end{cases}

This can be further simplified to better visualize the boundary between different potential adversarial examples:

S(x') > 0 \iff \begin{cases} \arg\max_c F(x')_c \neq C(x), & \text{(untargeted)} \\ \arg\max_c F(x')_c = c^*, & \text{(targeted)} \end{cases}

With this boundary function, the attack then follows an iterative algorithm to find adversarial examples x' for a given image x that satisfy the attack objectives:
# Initialize x' to some point where S(x') > 0.
# Iterate between:
## Boundary search
## Gradient update: compute the gradient estimate, then find the step size.
Boundary search uses a modified binary search to find the point at which the boundary (as defined by S) intersects the line between x and x'. The next step involves estimating the gradient at this boundary point and updating the adversarial example using this gradient and a pre-chosen step size. The HopSkipJump authors prove that this iterative algorithm converges, leading x' to a point right along the boundary that is very close in distance to the original image.

However, since HopSkipJump is a proposed black box attack and the iterative algorithm above requires the calculation of a gradient in the second iterative step (which black box attacks do not have access to), the authors propose a solution to gradient calculation that requires only the model's output predictions. By generating many random vectors in all directions, denoted as u_b, an approximation of the gradient can be calculated using the average of these random vectors weighted by the sign of the boundary function at the perturbed image x' + \delta u_b, where \delta is the size of the random vector perturbation:

\nabla S(x', \delta) \approx \frac{1}{B} \sum_{b=1}^{B} \phi(x' + \delta u_b)\, u_b

The result of the equation above gives a close approximation of the gradient required in step 2 of the iterative algorithm, completing HopSkipJump as a black box attack.
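A minimal sketch of the Monte Carlo gradient-direction estimate above is shown below; phi stands in for the sign of the boundary function and is an assumed callable, and the baseline subtraction and adaptive choice of \delta used in the original paper are omitted.

```python
import numpy as np

def estimate_boundary_gradient(phi, x_adv, delta, n_samples=100, rng=None):
    """Monte Carlo estimate of the direction of grad S at a boundary point.

    phi(x) -> +1 if S(x) > 0 (still adversarial), -1 otherwise; this needs
    only the model's predicted class, not scores or gradients.
    x_adv: current boundary point (NumPy array).
    delta: radius of the random probes.
    """
    rng = np.random.default_rng() if rng is None else rng
    grad = np.zeros_like(x_adv, dtype=float)
    for _ in range(n_samples):
        u = rng.normal(size=x_adv.shape)
        u /= np.linalg.norm(u)                  # unit-norm random direction u_b
        grad += phi(x_adv + delta * u) * u      # weight by the sign of S
    return grad / n_samples
```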


White Box Attacks

White box attacks assume that the adversary has access to model parameters in addition to being able to obtain labels for provided inputs.


Fast Gradient Sign Method (FGSM)

One of the first attacks for generating adversarial examples was proposed by Google researchers Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. The attack was called the fast gradient sign method, and it consists of adding a linear amount of imperceptible noise to the image to cause a model to misclassify it. This noise is calculated by multiplying the sign of the gradient with respect to the image to be perturbed by a small constant epsilon. As epsilon increases, the model is more likely to be fooled, but the perturbations also become easier to identify. Shown below is the equation to generate an adversarial example, where x is the original image, \epsilon is a very small number, \nabla_x denotes the gradient with respect to the input, J is the loss function, \theta are the model weights, and y is the true label:

adv_x = x + \epsilon \cdot \text{sign}(\nabla_x J(\theta, x, y))

One important property of this equation is that the gradient is calculated with respect to the input image, since the goal is to generate an image that maximizes the loss for the original image of true label y. In traditional gradient descent (for model training), the gradient is used to update the weights of the model, since the goal is to minimize the loss for the model on a ground-truth dataset. The Fast Gradient Sign Method was proposed as a fast way to generate adversarial examples to evade the model, based on the hypothesis that neural networks cannot resist even linear amounts of perturbation to the input.
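A minimal PyTorch sketch of the formula above is given below, assuming a classifier that outputs logits and inputs scaled to [0, 1]; the epsilon value is illustrative.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.03):
    """Generate an FGSM adversarial example.

    model: torch.nn.Module returning class logits.
    x: input tensor of shape (1, C, H, W), values in [0, 1].
    y: true label tensor of shape (1,).
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)     # J(theta, x, y)
    loss.backward()                         # gradient w.r.t. the input, not the weights
    x_adv = x + epsilon * x.grad.sign()     # adv_x = x + eps * sign(grad_x J)
    return x_adv.clamp(0, 1).detach()
```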


Carlini & Wagner (C&W)

In an effort to analyze existing adversarial attacks and defenses, researchers at the University of California, Berkeley, Nicholas Carlini and David Wagner, in 2016 proposed a faster and more robust method to generate adversarial examples. The attack proposed by Carlini and Wagner begins with trying to solve a difficult non-linear optimization problem:

\min_{\delta} \|\delta\|_p \quad \text{s.t.} \quad C(x + \delta) = t, \quad x + \delta \in [0, 1]^n

Here the objective is to minimize the noise \delta added to the original input x, such that the machine learning algorithm C predicts the original input with delta (i.e. x + \delta) as some other class t. However, instead of directly solving the above problem, Carlini and Wagner propose using a new function f such that

C(x + \delta) = t \iff f(x + \delta) \leq 0

This condenses the first problem into

\min_{\delta} \|\delta\|_p \quad \text{s.t.} \quad f(x + \delta) \leq 0, \quad x + \delta \in [0, 1]^n

and further into

\min_{\delta} \|\delta\|_p + c \cdot f(x + \delta), \quad x + \delta \in [0, 1]^n

Carlini and Wagner then propose the use of the function below in place of f, where Z is a function that determines class probabilities for a given input x. When substituted in, this equation can be thought of as finding a target class that is more confident than the next likeliest class by some constant amount:

f(x) = \left( \max_{i \neq t} Z(x)_i - Z(x)_t \right)^+

When solved using gradient descent, this formulation is able to produce stronger adversarial examples than the fast gradient sign method and is also able to bypass defensive distillation, a defense that was once proposed to be effective against adversarial examples.
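A simplified PyTorch sketch of this relaxed objective is shown below; it is not the authors' exact implementation (which uses a change of variables to handle the box constraint and a binary search over the constant c), and the step count, learning rate, and clamping are assumptions.

```python
import torch

def cw_attack_sketch(model, x, target, c=1.0, steps=200, lr=0.01):
    """Minimize ||delta||_2^2 + c * f(x + delta) by gradient descent, where
    f = (max_{i != t} Z_i - Z_t)^+ pushes the target class t to become the
    most confident prediction.

    model: torch.nn.Module returning logits Z(x).
    x: input tensor of shape (1, C, H, W) in [0, 1].
    target: tensor of shape (1,) holding the desired (incorrect) label t.
    """
    delta = torch.zeros_like(x, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = (x + delta).clamp(0, 1)
        logits = model(x_adv)
        target_logit = logits.gather(1, target.view(-1, 1)).squeeze(1)
        other_max = logits.scatter(1, target.view(-1, 1), float("-inf")).max(dim=1).values
        f = torch.clamp(other_max - target_logit, min=0)   # (max_{i != t} Z_i - Z_t)^+
        loss = (delta ** 2).sum() + c * f.sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return (x + delta).clamp(0, 1).detach()
```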


Defenses

Researchers have proposed a multi-step approach to protecting machine learning:
* Threat modeling – Formalize the attacker's goals and capabilities with respect to the target system.
* Attack simulation – Formalize the optimization problem the attacker tries to solve according to possible attack strategies.
* Attack impact evaluation
* Countermeasure design
* Noise detection (for evasion-based attacks)
* Information laundering – Alter the information received by adversaries (for model stealing attacks)


Mechanisms

A number of defense mechanisms against evasion, poisoning, and privacy attacks have been proposed, including:
* Deep Neural Network (DNN) classifiers enhanced with data augmentation from GANs
* Secure learning algorithms (O. Dekel, O. Shamir, and L. Xiao. "Learning to classify with missing and corrupted features". Machine Learning, 81:149–178, 2010)
* Byzantine-resilient algorithms
* Multiple classifier systems (B. Biggio, G. Fumera, and F. Roli. "Evade hard multiple classifier systems". In O. Okun and G. Valentini, editors, Supervised and Unsupervised Ensemble Methods and Their Applications, volume 245 of Studies in Computational Intelligence, pages 15–38. Springer Berlin / Heidelberg, 2009)
* AI-written algorithms
* AIs that explore the training environment; for example, in image recognition, actively navigating a 3D environment rather than passively scanning a fixed set of 2D images
* Privacy-preserving learning (B. I. P. Rubinstein, P. L. Bartlett, L. Huang, and N. Taft. "Learning in a large function space: Privacy-preserving mechanisms for SVM learning". Journal of Privacy and Confidentiality, 4(1):65–100, 2012)
* Ladder algorithm for Kaggle-style competitions
* Game theoretic models (M. Kantarcioglu, B. Xi, C. Clifton. "Classifier Evaluation and Attribute Selection against Active Adversaries". Data Min. Knowl. Discov., 22:291–335, January 2011)
* Sanitizing training data
* Adversarial training (see the sketch below)
* Backdoor detection algorithms
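As a minimal illustrative sketch of adversarial training (one of the defenses listed above), the PyTorch function below perturbs each batch with FGSM and then trains on the perturbed batch; the use of FGSM, the epsilon value, and the single-perturbation-per-batch scheme are assumptions, and many variants (e.g. PGD-based training) exist.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One adversarial-training step on a batch (x, y).

    model: torch.nn.Module returning logits; x: inputs in [0, 1]; y: labels.
    """
    # 1. Craft adversarial examples against the current model (FGSM).
    x_pert = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_pert), y).backward()
    x_adv = (x_pert + epsilon * x_pert.grad.sign()).clamp(0, 1).detach()

    # 2. Standard training step on the adversarial batch.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```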


See also

* Pattern recognition
* Fawkes (image cloaking software)


References


External links


* MITRE ATLAS: Adversarial Threat Landscape for Artificial-Intelligence Systems
* NIST 8269 Draft: A Taxonomy and Terminology of Adversarial Machine Learning
* NIPS 2007 Workshop on Machine Learning in Adversarial Environments for Computer Security
* AlfaSVMLib – Adversarial Label Flip Attacks against Support Vector Machines (H. Xiao, B. Biggio, B. Nelson, H. Xiao, C. Eckert, and F. Roli. "Support vector machines under adversarial label contamination". Neurocomputing, Special Issue on Advances in Learning with Label Noise, in press)
* Dagstuhl Perspectives Workshop on Machine Learning Methods for Computer Security
* Workshop on Artificial Intelligence and Security (AISec) Series