OpenAI says its AI can now be used in military applications

  • OpenAI amended its usage policies to allow its tools to be used in some military applications
  • The company is working with the US Department of Defense on several projects
  • Its usage policies still do not allow its AI products to be used in weapons

OpenAI has amended its usage policies to allow its AI products to be used in some military applications as the company engages with the Pentagon on several projects.

OpenAI is working on several Department of Defense projects including cybersecurity capabilities and methods to assist with preventing veteran suicide. It’s also assisting DARPA with its AI Cyber Challenge to automatically find and fix software vulnerabilities and defend critical infrastructure from cyberattacks.

OpenAI’s engagement with the US military is a departure from its previous policy of banning military use of its products. Up until a few days ago, its usage policies page listed “military and warfare” in the list of disallowed applications.

The updated terms no longer mention the word "military" but still state that its tools may not be used to harm anyone or to develop weapons.

In an interview at Bloomberg House at the World Economic Forum in Davos, Anna Makanju, OpenAI’s VP of global affairs, said, “Because we previously had what was essentially a blanket prohibition on military, many people thought that would prohibit many of these use cases, which people think are very much aligned with what we want to see in the world.”

Silicon Valley employees at Big Tech companies have often balked at seeing their engineering efforts used in military applications.

In 2019, Microsoft employees signed a letter protesting the company’s $480m contract to supply the US Army with augmented-reality headsets. Last year, Google and Amazon employees protested a joint project that would see the companies provide cloud computing services to the Israeli government and military.

An OpenAI spokesperson told CNBC, “Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property. There are, however, national security use cases that align with our mission.”

If that mission includes making money for shareholders then OpenAI may have taken the first step down a slippery slope. Could we see ‘WarGPT’ in the future?

In November, Deputy Secretary of Defense Kathleen Hicks said that AI is “a key part of the comprehensive, warfighter-centric approach to innovation that Secretary Austin and I have been driving from Day 1.”

It seems inevitable that AI will be used in a broad range of military applications, including weapons. The question is, whose AI models will be under the hood?

© 2023 Intelliquence Ltd. All Rights Reserved.
