AI on the battlefield: who is responsible if it goes wrong?

August 27, 2023


In a world where “war games” no longer refer solely to board games or video games but to life-and-death scenarios facilitated by machine intelligence, the question of responsibility is monumental.

Militaries are ramping up research and investment in AI.

Some analysts described the 11-day 2021 Israel-Palestine conflict as the first “AI war,” in which Israeli technologies assisted with battlefield intelligence and deployment.

In recent months, the US military established a generative AI task force and successfully tested an autonomous AI-powered jet.

AI’s rising role in warfare presents an array of complex legal and ethical dilemmas that remain unanswered, even as AI-powered military technology is being deployed at this very moment.

Who is responsible when AI goes wrong?

Military technology manufacturer Rafael’s “Fire Weaver” locates enemy positions using sensors and suggests the best-placed unit to fire upon them. 

The product’s description states, “Fire Weaver calculates the Rules of Engagement and directs targeting and firing, using the most appropriate shooter for each acquired target.”

“Calculates” is the operative word here. AI weaponry may distill the decision to destroy a target down to a binary yes/no, so what happens if Fire Weaver mistakes a child for an enemy soldier, or a humanitarian aid truck for an enemy vehicle?
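
To make that concrete, here is a minimal, purely hypothetical sketch of how such a pipeline might reduce a classifier’s output to a yes/no recommendation. The Detection class, labels, and threshold are invented for illustration; they do not describe Fire Weaver’s actual design or API.

```python
# Hypothetical sketch (not Rafael's actual system): how a targeting pipeline
# might collapse a probabilistic classification into a binary
# engage/don't-engage recommendation. All names and numbers are illustrative.
from dataclasses import dataclass


@dataclass
class Detection:
    label: str          # e.g. "combatant", "civilian", "aid_vehicle"
    confidence: float   # the model's confidence in that label, 0.0 to 1.0


ENGAGE_THRESHOLD = 0.9  # assumed policy threshold, chosen purely for illustration


def recommend_engagement(detection: Detection) -> bool:
    """Reduce the model's output to a yes/no recommendation.

    An upstream misclassification (a child labelled "combatant", an aid truck
    labelled "combatant") passes straight through: this check only sees a
    label and a score, never the reality behind them.
    """
    return detection.label == "combatant" and detection.confidence >= ENGAGE_THRESHOLD


# A confidently wrong detection still yields a confident "yes".
print(recommend_engagement(Detection(label="combatant", confidence=0.97)))  # True
```

The point of the sketch is that the binary output carries no trace of how the label was reached; any error upstream becomes invisible by the time a human sees “engage: yes.”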

Currently, the lack of moral, ethical, and legal clarity is glaring. AI occupies a legal and ethical vacuum, and it may stay that way for some time: lawmaking is notoriously slow, and it rarely has to contend with such a fast-evolving technology.

In wider society, there are already many examples of AI compromising people and their rights. These provide a glimpse into the void in legislation and ethics carved out by AI and its various uses. 

For instance, ChatGPT alleged that a Georgia man, Mark Walters, had been found guilty of corporate embezzlement and accused law professor Jonathan Turley of sexual assault. Both claims were false.

Similarly, Getty Images and several artists, authors, and other creators have launched copyright lawsuits against tech companies for using their work as training data to build models.

Training data is often collected from third parties, like Common Crawl, and from ‘shadow libraries’ such as Bibliotik, which operate similarly to torrent sites like The Pirate Bay.

In that case, who is liable for copyright infringement: the AI developers or the dataset publishers? It’s a cartoonish circle of responsibility in which each potentially guilty party points at the next, and in the end, everyone escapes blame.

This hazy ethical landscape is far riskier in the context of AI weaponry and algorithmic military tactics, where a model’s calculations could quite literally determine life or death. 

Military organizations are already aware of the paradigm shift presented by AI weaponry. According to the Department of Defense (DoD), the first of its five “ethical principles for artificial intelligence” for military uses is “Responsible,” defined as, “DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.”

This means a human must ultimately be responsible for the machine’s actions. The military has always relied on the principle that someone – typically the commander or a soldier – must be held responsible for actions taken during warfare. Yet, AI’s role in the decision-making process is increasingly murky. 

The first question is this: as AI takes on more sophisticated roles in targeting, surveillance, and other areas, does pressing the “Approve” button equate to culpability?

If an AI’s predictions faltered and resulted in civilian casualties, for example, it’s doubtful anyone would accept ‘the machine’ bearing sole blame for the accident.

In turn, there may be situations where humans are wrongly blamed for an accident they contributed nothing to, which anthropologist M.C. Elish describes as a “moral crumple zone.”

Elish’s research into industrial and machine-based accidents suggests that humans tend to absorb the blame for any accident, even when the fault lies with the machine, the algorithm, or the decision-maker who approved the technology in the first place.

She draws on several real-life examples, such as the nuclear accident at Three Mile Island, Pennsylvania, and the crash of Air France Flight 447, both of which were largely attributed to ‘human error’ rather than to a more intricate series of failures distributed across multiple individuals and systems.

Elish says of responsibility in the era of AI, “With regard to autonomous and robotic technologies, the regulations, laws, and norms are still in formation and may be particularly susceptible to uncertainties or even evasions of responsibility.”

This also raises questions surrounding the role of democratic processes in warfare and the sovereignty of human decision-making. When the Vietnam War aired in living rooms across America, the immediacy of war’s toll impacted public opinion and policy. 

In contrast, AI-aided combat could remove the societal checks and balances that public sentiment and democratic discourse provide.

Ethicist and philosopher Thomas Metzinger underscores that ethical norms aren’t just legal constructs but social ones that arise from democratic processes.

If algorithms make the decisions, human involvement – and hence moral responsibility – becomes diffused and redundant. 

International legal implications of AI in warfare

One of the Geneva Convention’s pillars is the “distinction” principle, which mandates distinguishing between combatants and civilians. 

Technology has already fallen short of its promises of enhanced protection for civilians: according to leaked documents published by The Intercept, just 10% of those killed in US drone strikes during the Obama presidency were the intended targets.

AI algorithms are only as good as the data they are trained on and the rules they are programmed to follow. In warfare, an algorithm might misinterpret data due to the fog of war, flawed training data, or intentionally deceptive enemy tactics. 

The International Committee of the Red Cross (ICRC) has started discussions on the legality of autonomous weapons systems under existing international humanitarian law, but there are few concrete definitions. 

The committee finds that existing frameworks are ill-suited to the new challenges posed by AI, and the ICRC’s proposed principles remain vague; for instance, “Unpredictable autonomous weapon systems should be expressly ruled out.”

Where do we set the threshold for ‘unpredictable’ when one minuscule mistake might be catastrophic?

Moreover, the battlefield is forever changing and will present ‘edge cases,’ which are blind spots not accounted for in AI training. Building AI systems that respond to dynamic environmental conditions with the same reaction times as humans is exceptionally tough. 

While AI weaponry might maintain accuracy in typical battlefield scenarios, what happens when the environment shifts away from the conditions the model treats as “ground truth,” or when edge cases throw off its accuracy?
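
The sketch below illustrates that failure mode in the abstract: a simple classifier trained on tidy synthetic data will still answer with near-total confidence when queried on a point far outside anything it was trained on. The data, the choice of scikit-learn’s LogisticRegression, and the numbers are assumptions for illustration, not a model of any real weapons system.

```python
# Illustrative only: a classifier trained on tidy synthetic data still gives a
# near-certain answer on an input far outside its training distribution.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: two well-separated clusters, the model's entire "ground truth".
X_train = np.vstack([rng.normal(0.0, 0.5, (200, 2)), rng.normal(3.0, 0.5, (200, 2))])
y_train = np.array([0] * 200 + [1] * 200)

model = LogisticRegression().fit(X_train, y_train)

# An edge case unlike anything seen in training. The model has no built-in
# notion of "I have never seen this"; it simply extrapolates, confidently.
edge_case = np.array([[15.0, 15.0]])
probs = model.predict_proba(edge_case)[0]
print(f"predicted class: {model.predict(edge_case)[0]}, confidence: {probs.max():.2f}")
```

On a battlefield, that kind of confident extrapolation is precisely what makes edge cases dangerous: the system gives no signal that it has wandered off its map.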

Erosion of moral sensitivity

The act of killing in warfare has changed vastly with the advent of modern military strategies and tactics. 

Historical analyses and accounts like S.L.A. Marshall’s controversial book “Men Against Fire” suggest that only 15 to 25% of frontline soldiers in World War II fired their weapons with the intent to kill.

Today, a soldier operating a drone from thousands of miles away does not experience the immediate emotional and psychological impact of their actions, a distance reflected in generally lower reported rates of PTSD and other mental health problems than among those serving in the field.

The very design of military technology has adapted to new paradigms of ‘distance warfare.’ The controllers used for drones have been noted for their resemblance to video game controllers, a design choice that may not be accidental.

AI operators may make life-and-death decisions in an environment that abstracts the realities of war into data points and images on a screen, severing the final moral connections we have with the lives of those swept up in conflict. 

Where do we go from here?

As military AI evolves into something many gravely fear, pinning responsibility for mistakes and failures onto any single person, developer, organization, or group seems unlikely.

Who, or what, does this leave to bear responsibility? How do we prevent stepping into a future where killing becomes more machine than human and where ethical responsibility dissipates into algorithmic abstractions? 

In time, increasingly intelligent artificial systems will likely transform the fabric of ethics and responsibility, particularly if they exhibit signs of sentience or consciousness.

But that doesn’t account for how this situation is managed in the present, and like so many things in the world of AI, there are more questions than answers.

Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
