Blue Dots Partners

How will we deal with the security risks of AI?

The potential benefits of artificial intelligence and machine learning have accelerated both the excitement about and the development of related technologies, tools, and applications. Sometimes overshadowed by the wealth of opportunities are the increasing concerns about the dangers that AI poses. In 2018, 26 authors from 14 institutions, including Cambridge, Oxford, Stanford, Yale, the Electronic Frontier Foundation, OpenAI, and the Center for a New American Security, released their comprehensive report, “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation”. The hundred-page study outlines AI-centric threats to digital, physical, and political security, as well as ways to deal with them.

If the study were only an academic exercise, it would be fascinating but not particularly useful. Fortunately, the paper is a practical assessment of attacks that are likely unless adequate defenses are developed.

The group predicted three primary types of risk:

  1. Expansion of existing threats, with an enlarged set of actors, an increased volume and frequency of attacks, and a wider set of targets.
  2. Introduction of new threats, in which AI performs tasks that humans would not be capable of completing.
  3. Change to the typical character of threats, where attacks by actors who are more difficult to trace will be precisely targeted, more effective, and capable of finding and exploiting vulnerabilities in AI systems.

The scope of threats we need to anticipate is far-reaching…and scary. Digital security risks include impersonation via speech synthesis, automated hacking, and data poisoning. Physical security may be compromised by autonomous weapons; subverted control systems in transportation vehicles and weaponized delivery bots; and the deployment of swarms of thousands of micro-drones. Political risks, in terms of privacy invasion and social manipulation, include analysis of mass-collected data fed into surveillance systems; targeted propaganda; deceptive, highly realistic manipulated videos; denial-of-information attacks; and the use of mood and behavior data to drive targeted attacks. All of these contribute to undermining fact-based communication, informed debate, and intelligent decisions.
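To make one of these threats concrete, the toy sketch below (ours, not the report's) shows a simple form of data poisoning: an attacker who can tamper with training data flips a fraction of the labels, and the resulting model quietly performs worse than one trained on clean data. The dataset, model, and poisoning rate are arbitrary choices for illustration only.

    # Illustrative data-poisoning sketch (not from the report): random label
    # flipping on a synthetic dataset. All choices here are arbitrary examples.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Baseline model trained on clean labels.
    clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Poisoned copy: an attacker silently flips 40% of the training labels.
    rng = np.random.default_rng(0)
    flipped = rng.random(len(y_train)) < 0.40
    y_poisoned = np.where(flipped, 1 - y_train, y_train)
    poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

    print("trained on clean labels:   ", clean_model.score(X_test, y_test))
    print("trained on poisoned labels:", poisoned_model.score(X_test, y_test))

The point is not the specific numbers but the pattern: the system still trains, still produces predictions, and gives its operators little outward sign that anything is wrong.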

The authors did not merely raise red flags and identify an array of threat scenarios. They also drafted high-level recommendations, or interventions, clustered in four areas. Those recommendations, excerpted from the report, are:

  1. Policy makers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.
  2. Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.
  3. Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI.
  4. Actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges.

The authors also identified four areas of research that should strengthen security defenses, along with some specific actions. The first of the four research focal points is to learn from and with the cybersecurity community. Among the associated recommendations are red teaming, formal verification, a confidential vulnerability disclosure process, white hat forecasting efforts, new security tools, and built-in hardware security. The other three research categories are to explore different openness models, promote a culture of responsibility, and develop technological and policy solutions.

Why is all this important to us? Because, as we have all heard countless times, “an ounce of prevention is worth a pound of cure”. Understanding and mitigating risks rather than reacting to disasters may not be standard procedure for people or political processes, but when AI is involved, it is necessary. Researchers, developers, and implementers of AI need to anticipate and mitigate risks as an integral part of the design and development process, then continuously improve defenses over time.

If you are interested in reading the entire report, you can find it here.