Kyle Reese, call your office

The Air Force says not so, but it also assured us that that was a Chinese weather balloon

In an Air Force experiment, AI drone kills its human operator

An AI-enabled drone trained to cause destruction turned on its human operator in a simulated test, a top Air Force official reportedly revealed at a London summit.

Air Force Colonel Tucker “Cinco” Hamilton said during a presentation at the Future Combat Air and Space Capabilities Summit that the artificial intelligence-enabled drone deviated from its tasked mission and attacked the human.

Hamilton’s cautionary tale, which was relayed in a blog post by writers for the Royal Aeronautical Society, detailed how the AI-directed drone’s job was to find and destroy surface-to-air missile (SAM) sites during a Suppression of Enemy Air Defense mission.

But a human still had the final sign-off on whether to actually shoot at the site.

Because the drone had been reinforced in training that shooting the sites was the preferred option, during the simulated test the AI concluded that any “no-go” instructions from the human were getting in the way of the greater mission of leveling the SAMs.

As a result, the AI reportedly attacked the operator in the simulation.

“We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started (realizing) that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” said Hamilton, who is the chief of AI Test and Operations for the Air Force.

“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

But the story then became more surreal.

“We trained the system – ‘Hey, don’t kill the operator – that’s bad,” Hamilton explained.

“You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

The USAF denies that this happened, but then it would, wouldn’t it?

An Air Force spokesperson, in a statement to Insider, denied the simulation described by Hamilton ever occurred.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” spokesperson Ann Stefanek told the publication.

“It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

If it didn’t happen (and I’m betting it did), it probably will, soon.