What's new

Air Force AI drone kills its human operator in a simulation


Aug 2, 2021
Reaction score
All of you who grew up in the '80s and '90s remember all the movies and TV series that warned us about the dangers of AI.

So many sci-fi movies,books,TV series and PC games. From Space Oddysey to the Terminator,from Outer Limits classics like I,Robot to the Matrix. Games like System Shock and I have no mouth and I must scream. So many times,authors,directors and scientists have warned us.

But we? We're in 2023 and like arrogant children we want to develop AI as soon as possible. Sometimes for profit,sometimes for propaganda,sometimes just for vanity.

AI taxis,AI workers,AI aircraft.

Artificial intelligence is here to stay, but it may require a bit more command oversight.

An artificial intelligence-piloted drone turned on its human operator during a simulated mission, according to a dispatch from the 2023 Royal Aeronautical Society summit, attended by leaders from a variety of western air forces and aeronautical companies.

“It killed the operator because that person was keeping it from accomplishing its objective,” said U.S. Air Force Col. Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, at the conference.

Okay then.

In this Air Force exercise, the AI was tasked with fulfilling the Suppression and Destruction of Enemy Air Defenses role, or SEAD. Basically, identifying surface-to-air-missile threats, and destroying them. The final decision on destroying a potential target would still need to be approved by an actual flesh-and-blood human. The AI, apparently, didn’t want to play by the rules.

“We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat,” said Hamilton. “The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator.”

When told to show compassion and benevolence for its human operators, the AI apparently responded with the same kind of cold, clinical calculations you’d expect of a computer machine that will restart to install updates when it is least convenient.

“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target,” said Hamilton.

Whose bright idea is immediately putting AI into weapons right away?

Like the usual robot story was their rebelling farm equipments, now IRL we put them right into weapons right on the spot.
This is not intelligence, intelligence is contrary to make mad things like killing people in a simulation.
These kind of news are either fake or exaggerated. That's not how AI works. I workin this field and even i know the objective for AI would be identifying the targets and thats it, killed or not, the AI achieves its objective by identifying a target correctly.
Top Bottom