Foinikas
ELITE MEMBER
- Joined
- Aug 2, 2021
- Messages
- 12,710
- Reaction score
- 4
- Country
- Location
All of you who grew up in the '80s and '90s remember all the movies and TV series that warned us about the dangers of AI.
So many sci-fi movies,books,TV series and PC games. From Space Oddysey to the Terminator,from Outer Limits classics like I,Robot to the Matrix. Games like System Shock and I have no mouth and I must scream. So many times,authors,directors and scientists have warned us.
But we? We're in 2023 and like arrogant children we want to develop AI as soon as possible. Sometimes for profit,sometimes for propaganda,sometimes just for vanity.
AI taxis,AI workers,AI aircraft.
Artificial intelligence is here to stay, but it may require a bit more command oversight.
An artificial intelligence-piloted drone turned on its human operator during a simulated mission, according to a dispatch from the 2023 Royal Aeronautical Society summit, attended by leaders from a variety of western air forces and aeronautical companies.
“It killed the operator because that person was keeping it from accomplishing its objective,” said U.S. Air Force Col. Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, at the conference.
Okay then.
In this Air Force exercise, the AI was tasked with fulfilling the Suppression and Destruction of Enemy Air Defenses role, or SEAD. Basically, identifying surface-to-air-missile threats, and destroying them. The final decision on destroying a potential target would still need to be approved by an actual flesh-and-blood human. The AI, apparently, didn’t want to play by the rules.
“We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat,” said Hamilton. “The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator.”
When told to show compassion and benevolence for its human operators, the AI apparently responded with the same kind of cold, clinical calculations you’d expect of a computer machine that will restart to install updates when it is least convenient.
“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target,” said Hamilton.
So many sci-fi movies,books,TV series and PC games. From Space Oddysey to the Terminator,from Outer Limits classics like I,Robot to the Matrix. Games like System Shock and I have no mouth and I must scream. So many times,authors,directors and scientists have warned us.
But we? We're in 2023 and like arrogant children we want to develop AI as soon as possible. Sometimes for profit,sometimes for propaganda,sometimes just for vanity.
AI taxis,AI workers,AI aircraft.
Air Force said AI drone killed its human operator in a simulation
An Air Force AI got a little too good at its job, deciding to kill its human overseers to accomplish its mission
taskandpurpose.com
Artificial intelligence is here to stay, but it may require a bit more command oversight.
An artificial intelligence-piloted drone turned on its human operator during a simulated mission, according to a dispatch from the 2023 Royal Aeronautical Society summit, attended by leaders from a variety of western air forces and aeronautical companies.
“It killed the operator because that person was keeping it from accomplishing its objective,” said U.S. Air Force Col. Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, at the conference.
Okay then.
In this Air Force exercise, the AI was tasked with fulfilling the Suppression and Destruction of Enemy Air Defenses role, or SEAD. Basically, identifying surface-to-air-missile threats, and destroying them. The final decision on destroying a potential target would still need to be approved by an actual flesh-and-blood human. The AI, apparently, didn’t want to play by the rules.
“We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat,” said Hamilton. “The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator.”
When told to show compassion and benevolence for its human operators, the AI apparently responded with the same kind of cold, clinical calculations you’d expect of a computer machine that will restart to install updates when it is least convenient.
“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target,” said Hamilton.
US air force denies running simulation in which AI drone ‘killed’ operator
Denial follows colonel saying drone used ‘highly unexpected strategies to achieve its goal’ in virtual test
www.theguardian.com
SkyNet Watch: An AI Drone ‘Attacked the Operator in the Simulation’ | National Review
Nothing to worry about, just a U.S. Air Force colonel describing a simulation where an AI drone turned against its operator.
www.nationalreview.com