The Malicious Use of Artificial Intelligence: Why it’s urgent to prepare now

Artificial intelligence and machine learning capabilities are growing at an unprecedented rate. These technologies have many widely beneficial applications, ranging from machine translation to medical image analysis. Countless more such applications are being developed and can be expected over the long term. Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously.

This report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies and proposes ways to better forecast, prevent, and mitigate these threats. We analyze, but do not conclusively resolve, the question of what the long-term equilibrium between attackers and defenders will be. We focus instead on what sorts of attacks we are likely to see soon if adequate defenses are not developed.

In response to the changing threat landscape, we make the following high-level recommendations:

  • Acknowledge AI’s dual-use nature: AI is a technology capable of immensely positive and immensely negative applications. We should take steps as a community to better evaluate research projects for perversion by malicious actors, and engage with policymakers to understand areas of particular sensitivity. As we write in the paper: “Surveillance tools can be used to catch terrorists or oppress ordinary citizens. Information content filters could be used to bury fake news or manipulate public opinion. Governments and powerful private actors will have access to many of these AI tools and could use them for public good or harm.” Some potential solutions to these problems include pre-publication risk assessments for certain bits of research, selectively sharing some types of research with a significant safety or security component among a small set of trusted organizations, and exploring how to embed norms into the scientific community that are responsive to dual-use concerns.
  • Learn from cybersecurity: The computer security community has developed various practices that are relevant to AI researchers, which we should consider implementing in our own research. These range from “red teaming” by intentionally trying to break or subvert systems (see the sketch after this list), to investing in tech forecasting to spot threats before they arrive, to conventions around the confidential reporting of vulnerabilities discovered in AI systems, and so on.
  • Broaden the discussion: AI is going to alter the global threat landscape, so we should involve a broader cross-section of society in these discussions. Parties could include those involved in civil society, national security experts, businesses, ethicists, the general public, and other researchers.
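
The report itself contains no code; as a purely illustrative sketch of what “red teaming” an AI system can look like in practice, the snippet below probes a toy image classifier with the fast gradient sign method (FGSM), a standard way of checking whether small input perturbations flip a model’s prediction. The model, data, and epsilon budget are stand-ins chosen for this example, not details from the report.

```python
# Hypothetical red-teaming sketch: perturb an input with FGSM and see whether
# the classifier's prediction changes. Model and data are toy stand-ins.
import torch
import torch.nn as nn

def fgsm_attack(model, x, label, epsilon=0.03):
    """Return a copy of x perturbed in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step along the sign of the gradient, clipped to a small perturbation budget.
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

if __name__ == "__main__":
    # Toy stand-in classifier for 32x32 RGB images (assumption for illustration).
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(1, 3, 32, 32)
    label = torch.tensor([3])
    x_adv = fgsm_attack(model, x, label)
    print("prediction before:", model(x).argmax(1).item(),
          "prediction after:", model(x_adv).argmax(1).item())
```

In a real exercise the same idea would be applied to the deployed model and data pipeline, with any successful attacks reported through the kind of confidential vulnerability channels the recommendation describes.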

As with their earlier work on concrete problems in AI safety, the authors ground the problems posed by the malicious use of AI in concrete scenarios, such as:

  • persuasive ads generated by AI systems being used to target the administrator of security systems;
  • cybercriminals using neural networks and “fuzzing” techniques to create computer viruses with automatic exploit generation capabilities;
  • malicious actors hacking a cleaning robot so that it delivers an explosives payload to a VIP; and
  • rogue states using omnipresent AI-augmented surveillance systems to pre-emptively arrest people who fit a predictive risk profile.

OpenAI is excited to start having this discussion with its peers, policymakers, and the general public; the company has spent the last two years researching and solidifying its internal policies and is now going to begin engaging a wider audience on these issues.

OpenAI is especially keen to work with more researchers who see themselves contributing to the policy debates around AI as well as making research breakthroughs.
