US military establishes generative AI task force

August 13, 2023
Military AI

The US Department of Defense announced the formation of a task force to design generative AI tools for defense purposes. 

Dubbed Task Force Lima, the group's primary objective will be to “assess, synchronize, and employ generative AI capabilities” throughout the defense sector, with the broader aim of bolstering the US’s position as a leader in AI.

Kathleen Hicks, Deputy Secretary of Defense, initiated this task force, which will operate under Dr. Craig Martell, the Department’s Chief Digital and Artificial Intelligence Officer.

Dr. Martell highlighted the threats and challenges of using generative AI for military applications, stating, “We must identify proper protective measures and mitigate national security risks that may result from issues such as poorly managed training data.” 

He also emphasized the importance of knowing how the US’s adversaries might adopt the technology. 

AI applications are also being deployed to protect critical infrastructure and systems – the US government recently announced an AI-focused cybersecurity challenge with a $20 million prize pool to spur development toward that goal. 

US Navy Capt. M. Xavier Lugo, Task Force Lima mission commander and member of the CDAO’s Algorithmic Warfare Directorate, said, “The adoption of artificial intelligence in defense is not solely about innovative technology but also about enhancing national security.” 

The rising role of AI on the battlefield

AI has already been trialed in battlefield decision-making by at least the US and Israeli governments. 

US Air Force Colonel Matthew Strohmeyer revealed to Bloomberg that live tests were conducted on generative AI models to gauge their potential in decision-making processes. Speaking of the AI test, he said, “It was highly successful. It was very fast.”

The US Air Force also announced a successful AI-powered test flight with an unmanned aircraft. Meanwhile, Bonnie Jenkins, the State Department’s Undersecretary for Arms Control and International Security, emphasized the transformative potential of AI for militaries worldwide, referencing the Ukrainian army’s use of AI for battlefield analytics.

In February, Jenkins remarked, “As a rapidly changing technology, we have an obligation to create strong norms of responsible behavior concerning military uses of AI.”

AI’s integration into military systems has sparked serious ethical concerns. Who is responsible when AI goes wrong in such situations, for example, if fully autonomous drones kill civilians? 

Can highly sophisticated drones turn on their operators in pursuit of ‘the greater good?’

Experimental evidence has already shown that AI systems can develop unpredictable emergent goals, raising fears about the risks such behavior could pose. 

Militaries will need to stay at least one step ahead of AI if they are to develop safe methods of deploying the technology on the battlefield.

Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
