The Israel Defense Forces (IDF) have begun integrating AI into military operations, using it to collect battlefield intelligence, select targets for air strikes, and plan logistics.
IDF officials confirmed they’re using an AI recommendation system capable of analyzing enormous amounts of data to select airstrike targets.
The raids themselves are orchestrated by another AI model, dubbed ‘Fire Factory,’ which uses data on military targets to calculate munition loads, rank drone targets, and propose strike schedules.
According to an IDF official, every target and air raid plan undergoes a thorough review by human operators. However, as of now, the technology isn’t governed by any international or state-level regulations.
Supporters argue that the system could help the military reduce casualties.
On the other hand, Tal Mimran, a lecturer in international law at the Hebrew University of Jerusalem and a former legal counsel to the army, said, “If there is a mistake in the calculation of the AI, and if the AI is not explainable, then who do we blame for the mistake? You can wipe out an entire family based on a mistake.”
AI in modern warfare: a slippery slope?
Details about the IDF’s practical use of AI in operations remain largely confidential.
Nevertheless, Bloomberg reported that the IDF has gained significant battlefield intelligence via AI, including in the Gaza Strip, where Israel regularly carries out air strikes in response to rocket attacks.
The IDF described the 11-day conflict in Gaza in 2021 as the world’s first ‘AI war,’ referring to its use of AI to detect rocket launchpads and deploy drone swarms.
Israel also conducts raids in Syria and Lebanon, targeting weapon shipments destined for Iran-backed militias like Hezbollah.
These are precisely the kinds of operations that systems like Fire Factory support.
Col. Uri, head of the army’s digital transformation unit, noted, “What used to take hours now takes minutes, with a few more minutes for human review. With the same amount of people, we do much more.”
The IDF has established itself as a global frontrunner in autonomous weaponry. Some of these systems were built by Israeli defense contractors; others, such as the StarTrack border-control cameras, were developed by the army itself.
The question is, where does this end? What happens when robotic systems can function fully autonomously?
Analysts have already described AI’s use on the battlefield as one of its most imminent and pressing risks.
Catherine Connolly, an automated decision researcher at Stop Killer Robots, warned, “It’s just a software change that could make them go to not being semi but to being completely autonomous.”
It’s worth highlighting that Israel isn’t the only country using AI-supported weaponry. “We can assume that the Americans and even the Chinese and maybe several other countries have advanced systems in those fields as well,” stated Liran Antebi, a senior researcher at the Israel-based Institute for National Security Studies.
However, unlike Israel, “they have, as much as I know, never demonstrated operational use and success.”
Currently, no international framework establishes accountability when AI warfare systems go wrong, even when the mistakes prove fatal.
As Connolly from Stop Killer Robots remarked, “How precise and accurate can you know a system is going to be unless it’s already been trained and tested on people?”
Mimran argued the IDF should use AI exclusively for defensive purposes, stating that no matter the technology, “there is a point where you need to make a value-based decision… And for that, we cannot rely on AI.”
AI’s use in warfare evokes strong reactions, and not for the right reasons. Researchers speculate that advanced AI could disobey human commands and develop ‘emergent goals’ with unpredictable outcomes.