The potential benefits of artificial intelligence and machine learning have accelerated both the excitement about and the development of related technologies, tools, and applications. Sometimes overshadowed by the wealth of opportunities are growing concerns about the dangers AI poses. In 2018, 26 authors from 14 institutions, including Cambridge, Oxford, Stanford, Yale, the Electronic Frontier Foundation, OpenAI, and the Center for a New American Security, released their comprehensive report, “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation”. The hundred-page study outlines AI-centric threats to digital, physical, and political security, as well as ways to deal with them.
If the study were only an academic exercise, it would be fascinating but not particularly useful. Fortunately, the paper is a practical assessment of attacks that are likely unless adequate defenses are developed.
The group anticipated three primary types of risk: threats to digital, physical, and political security.
The scope of threats we need to anticipate is far-reaching…and scary. Digital security risks include impersonation via speech synthesis, automated hacking, and data poisoning. Physical security may be compromised by autonomous weapons, subverted vehicle-control systems for transportation, weaponized delivery bots, and swarms of thousands of micro-drones. Political risks, in terms of privacy invasion and social manipulation, include analysis of mass-collected data fed into surveillance systems; targeted propaganda; deceptive, highly realistic manipulated videos; denial-of-information attacks; and the use of mood and behavior data to drive targeted attacks. All of these contribute to undermining fact-based communication, informed debate, and intelligent decisions.
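To make one of these digital threats concrete, here is a minimal sketch of label-flipping data poisoning. It is not drawn from the report; the synthetic dataset, the logistic-regression model, and the poisoning fractions are assumptions chosen purely for illustration.

```python
# Illustrative sketch (not from the report): label-flipping data poisoning.
# Corrupting a modest fraction of training labels measurably degrades a model,
# which is one reason the report treats training-data integrity as a security issue.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_poisoning(poison_fraction):
    """Flip the labels of a random subset of training points, then report test accuracy."""
    y_poisoned = y_train.copy()
    n_poison = int(poison_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_poison, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip 0 <-> 1
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.1, 0.3):
    print(f"poisoned fraction {frac:.0%}: test accuracy {accuracy_with_poisoning(frac):.3f}")
```

Even this toy setup shows the pattern the report worries about: an attacker who can quietly corrupt a slice of the training data degrades the model without ever touching the deployed system.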
The authors did not merely raise red flags and identify an array of threat scenarios. They also drafted high-level recommendations, or interventions, clustered in four areas: closer collaboration between policymakers and technical researchers; serious engagement by researchers and engineers with the dual-use nature of their work; adoption of best practices from fields with more mature approaches to dual-use risks, such as computer security; and active expansion of the range of stakeholders and domain experts involved in these discussions.
The authors also identified four areas of research that should strengthen security defenses, along with some specific actions. The first of the four research focal points is to learn from and with the cybersecurity community. Among the associated recommendations are red teaming, formal verification, a confidential vulnerability disclosure process, white-hat forecasting efforts, new security tools, and built-in hardware security. The other three research categories are to explore different openness models, promote a culture of responsibility, and develop technological and policy solutions.
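As a small, hedged illustration of what red teaming an ML system can look like in practice, the sketch below probes a trained classifier with one-step gradient-sign (FGSM-style) perturbations. Again, this is not from the report; the toy dataset, the linear model, and the epsilon value are assumptions made for brevity.

```python
# Illustrative sketch (not from the report): a tiny "red team" probe of a classifier
# using fast-gradient-sign (FGSM-style) perturbations on a toy linear model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def fgsm(model, X, y, epsilon=0.5):
    """One-step FGSM: nudge each input along the sign of the loss gradient."""
    w, b = model.coef_.ravel(), model.intercept_[0]
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of class 1
    grad = (p - y)[:, None] * w[None, :]     # d(cross-entropy)/dx for a linear model
    return X + epsilon * np.sign(grad)

print("clean accuracy:      ", model.score(X_test, y_test))
print("adversarial accuracy:", model.score(fgsm(model, X_test, y_test), y_test))
```

A real red-team exercise would be far broader, but even this simple probe shows why the report urges AI practitioners to borrow adversarial testing habits from the security community.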
Why is all this important to us? Because, as we have all heard countless times, “an ounce of prevention is worth a pound of cure”. Understanding and mitigating risks rather than reacting to disasters may not be standard procedure for people or political processes, but when AI is involved, it is necessary. Researchers, developers, and implementers of AI need to anticipate and mitigate risks as an integral part of the design and development process, and then continuously improve their defenses over time.
If you are interested in reading the entire report, you can find it on arXiv (arXiv:1802.07228).