
Adversarial Machine Learning

Shelby Hiter
Last Updated June 24, 2022 11:34 am

Adversarial machine learning (ML) is the practice of attacking machine learning systems, for example by corrupting their training data or feeding them deceptive inputs, to make them produce incorrect results. A successful attack can stall business processes or even cause serious human injury.


Portions of this definition originally appeared on CIO Insight and are excerpted here with permission.

What are the types of adversarial machine learning?

While machine learning is a fairly new approach, its growing popularity makes it an attractive target for cyberattacks such as data poisoning. Adversarial ML attacks focus either on corrupting machine learning and deep learning models during their initial training or on interfering with an already trained model so that it misreads its inputs and makes mistakes. Examples of adversarial ML attacks include the following (a toy code sketch of both appears after the list):

  • Poisoning/contaminating attacks disguise malicious data as training data, making small, often inscrutable changes over time in order to train ML systems to make bad decisions. The disguised data is difficult to detect and is rarely caught until long after the ML training phase.
  • Evasion attacks involve testing an ML system for vulnerabilities after it has been trained, so attackers can discover ways to evade security safeguards and gain access to the algorithms and code that guide the ML system’s actions. These attacks can damage everything from intended outputs to data quality to system confidentiality.
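
To make the two attack types concrete, here is a minimal sketch in Python. It assumes scikit-learn and NumPy are installed; the digits dataset, logistic regression model, 25% poisoning rate, and perturbation budget eps are illustrative choices, not details from any real incident. It first poisons a fraction of training labels, then crafts FGSM-style evasion inputs against the clean, already trained model:

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X = X / 16.0  # scale pixel values to [0, 1]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0
    )

    # Baseline: train on clean data.
    clean_model = LogisticRegression(max_iter=2000).fit(X_train, y_train)
    print("clean accuracy:", clean_model.score(X_test, y_test))

    # Poisoning attack: corrupt a fraction of training labels before training.
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    flip = rng.random(len(y_poisoned)) < 0.25            # poison 25% of labels
    y_poisoned[flip] = rng.integers(0, 10, flip.sum())   # random replacement labels
    poisoned_model = LogisticRegression(max_iter=2000).fit(X_train, y_poisoned)
    print("poisoned accuracy:", poisoned_model.score(X_test, y_test))

    # Evasion attack: perturb test inputs against the trained clean model.
    # For a linear model, nudging each input opposite the sign of its true
    # class's weight vector lowers that class's score (an FGSM-style step).
    eps = 0.15
    X_adv = np.clip(X_test - eps * np.sign(clean_model.coef_[y_test]), 0.0, 1.0)
    print("accuracy under evasion:", clean_model.score(X_adv, y_test))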

Go in-depth on how businesses can prevent adversarial ML attacks | CIOInsight.com

Examples of adversarial ML attacks

Only a small number of adversarial ML attacks have succeeded to date, with victims including Amazon, Google, Tesla, and Microsoft, but any company that relies on machine learning could suffer from an adversarial ML attack.

To stay ahead of attackers, data and IT professionals stage adversarial attacks of their own to see how different ML scripts and ML-enabled technologies respond. Some of the attacks they have attempted, and believe could be successfully launched in the near future, include the following (a toy probe in the spirit of the road-sign example appears after the list):

  • 3D printing human facial features to fool facial recognition technology
  • Adding new markers to roads or road signs to misdirect self-driving cars
  • Inserting additional text in command scripts for military drones to change their travel or attack vectors
  • Changing command recognition for home assistant IoT technology, so it will perform the same action (or no action) for very different command sets
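
As a rough stand-in for the road-sign scenario above, the following toy probe (again assuming scikit-learn; the fixed white "sticker" is a crude analog of a physical patch, not an optimized adversarial patch) shows how a small, fixed marker pasted onto otherwise unmodified inputs can degrade a trained classifier:

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X = X / 16.0
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0
    )
    model = LogisticRegression(max_iter=2000).fit(X_train, y_train)

    # Paste a 3x3 maximum-intensity "sticker" onto every test image.
    imgs = X_test.reshape(-1, 8, 8).copy()
    imgs[:, 2:5, 2:5] = 1.0
    X_patched = imgs.reshape(-1, 64)

    print("clean accuracy:  ", model.score(X_test, y_test))
    print("patched accuracy:", model.score(X_patched, y_test))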

What are the risks of adversarial ML?

While some adversarial ML attacks have had negligible consequences, they have the potential to cause serious damage to human life and business processes, such as:

  • Altered algorithms and code in self-driving cars and military drones can cause physical injury and death.
  • Private training data can be stolen and used by competitors.
  • An inability to recognize or fix altered training algorithms can leave machines unusable.
  • Disruption of supply chains and other business processes can lead to delays and frustrated customers.
  • Violations of customers' personal data privacy can lead to identity theft, resulting in fines and loss of reputation.

Defending against an adversarial ML attack

Adversarial ML attacks may seem unavoidable, but enterprises can take these proactive steps to protect their machine learning tools and algorithms:

  • Strengthen endpoint security and audit existing security measures regularly.
  • Take both in-training and trained ML systems through adversarial training and attack simulations (a minimal sketch follows this list).
  • Change up classification model algorithms, so attackers can’t easily predict and learn training methods.
  • Become familiar with an adversarial example library, such as CleverHans or IBM's Adversarial Robustness Toolbox, to sharpen knowledge of attack and defense methods.
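
The adversarial-training step can be sketched as an attack-then-retrain loop. The sketch below reuses the linear FGSM-style perturbation from the earlier example; real systems typically generate adversarial examples with gradient-based attacks (such as FGSM or PGD) inside the training loop, and all dataset and budget choices here are illustrative:

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X = X / 16.0
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0
    )

    eps = 0.15
    model = LogisticRegression(max_iter=2000).fit(X_train, y_train)

    for _ in range(3):  # a few rounds of attack, then retrain
        # Craft FGSM-style perturbations against the current model.
        X_adv = np.clip(X_train - eps * np.sign(model.coef_[y_train]), 0.0, 1.0)
        # Retrain on clean plus adversarial examples (adversarial training).
        model = LogisticRegression(max_iter=2000).fit(
            np.vstack([X_train, X_adv]), np.concatenate([y_train, y_train])
        )

    # Evaluate on adversarially perturbed test inputs.
    X_test_adv = np.clip(X_test - eps * np.sign(model.coef_[y_test]), 0.0, 1.0)
    print("clean accuracy:      ", model.score(X_test, y_test))
    print("adversarial accuracy:", model.score(X_test_adv, y_test))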

Learn more about how MITRE protection tests shed new light on endpoint security | eSecurityPlanet.com