Drones are killing Russian soldiers autonomously in Ukraine

October 14, 2023


AI-equipped drones are now autonomously identifying and engaging targets in Ukraine.

According to a report by New Scientist, it is almost certain that these drone strikes have killed Russian soldiers without direct command or oversight from a human operator, though official data is lacking.

The key player in this development is the Saker Scout quadcopter, which was recently commissioned into Ukrainian service. 

The drone can carry a 3-kilogram payload over a range of about 12 kilometers. Remotely operated drones of this kind have already proven their worth in the ongoing conflict between Russia and Ukraine, and with a payload of that size, the Saker Scout poses a threat even to heavily armored tanks.

Saker, a Ukrainian enterprise established in 2021, initially aimed to offer affordable AI solutions to small businesses, specifically AI-driven vision systems for drones in sectors such as agriculture. 

But with Russia’s invasion, the firm pivoted to support the military. According to a company spokesperson, its AI vision platform now recognizes some 64 different types of ‘military object’, including vehicles and other hardware.

Despite the Saker Scout’s autonomous capabilities, it can also be manually controlled. In manual mode, the AI flags potential targets and a human operator verifies each one before striking. The drone is integrated with Ukraine’s Delta intelligence distribution system, ingesting data from various aerial and satellite sources to construct a holistic view of the battlefield.

The drone also has two autonomous modes: it can be dispatched to a region for reconnaissance, or “instructed to find and attack targets autonomously.”

As the spokesperson conveyed to New Scientist, the Saker Scout has already seen action in autonomous mode, confirming that autonomous military technology is now being deployed on the battlefield, something many have long feared.

The spokesperson acknowledged that operating the drone autonomously is less reliable than having a human operator involved, but said the AI had proved its effectiveness and that waiting ‘until it was perfect’ was unnecessary.

Reflecting on the evolving dynamics of warfare, Mykhailo Fedorov, Ukraine’s Minister for Digital Transformation, previously commented that autonomous weaponry represents a “logical and inevitable” progression in drone technology.

Commenting on the ubiquity of this technology, Paul Scharre from the Center for a New American Security stated, “Image classifiers have been around for at least 10 years and this type of software can be downloaded easily and trained on suitable data.” 

He further expressed concerns about the drone’s propensity to misidentify targets, saying, “The systems demonstrated so far have some degree of false positives, which is to be expected.”
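Scharre’s point about accessibility is easy to demonstrate. The sketch below is a generic illustration, not Saker’s actual system: it loads a freely downloadable pretrained image classifier in a few lines of Python. The model choice (ResNet-18 via torchvision) and the file name `frame.jpg` are assumptions made purely for illustration.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Load a freely available, pretrained image classifier (ResNet-18 here).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Standard ImageNet preprocessing matching the pretrained weights.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "frame.jpg" is a placeholder for a single video frame.
image = Image.open("frame.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    logits = model(batch)
    probs = torch.softmax(logits, dim=1)

confidence, class_idx = probs.max(dim=1)
print(f"class {class_idx.item()} with confidence {confidence.item():.2f}")

# To "train on suitable data," as Scharre puts it, one would typically
# swap the final layer and fine-tune on a labeled domain dataset, e.g.:
#   model.fc = torch.nn.Linear(model.fc.in_features, num_domain_classes)
# (num_domain_classes is hypothetical and depends on the dataset.)
```

Note that an off-the-shelf classifier like this outputs a confidence score rather than a certainty, which is precisely where the false positives Scharre describes come from.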

Authorities worldwide, including the UN, have been deliberating on regulatory measures concerning autonomous weapons for years. 

A 2021 UN report alluded to the possible deployment of autonomous attack drones in Libya in 2020, but this remains unsubstantiated. Presently, no detailed legislation governs the use of autonomous weapons. 

The US Department of Defense (DoD) created five “ethical principles for artificial intelligence,” including “Responsible,” defined as, “DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.” 

While rules like these signal an intention to control AI weaponry, without robust definitions of terms like ‘appropriate levels of judgment and care,’ they remain highly subjective.

On that note, Scharre states, “A central feature of the debate has been the ability of such weapons to comply with the law of war and avoid civilian casualties.” 

Ethical quandaries

Autonomous weapons pose numerous ethical issues.

Who’s responsible if they go wrong? What happens when they misidentify ‘targets,’ possibly resulting in civilian casualties? And perhaps most dystopian of all, what if AI weapons turn on their operators or go rogue?

Thus far, these technologies are still rare, and we have limited evidence of their safety. 

While the open theater of battle in Ukraine may pose relatively little civilian risk, the same might not hold in more crowded settings, such as Palestine.

AI’s rising role on the battlefield is perhaps inevitable, with the technology promising intelligent, scalable, and cost-effective operations that don’t put one side’s personnel at risk. 

Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
