Breaking: AI Just Controlled a US Military Plane for the First Time Ever

F-22Raptor

ELITE MEMBER
Joined
Jun 19, 2014
Messages
16,980
Reaction score
3
Country
United States
Location
United States
On December 15, the United States Air Force successfully flew an AI copilot on a U-2 spy plane in California, marking the first time AI has controlled a U.S. military system. In this Popular Mechanics exclusive, Dr. Will Roper, the Assistant Secretary of the Air Force for Acquisition, Technology and Logistics, reveals how he and his team made history.

For Star Wars fans, an X-Wing fighter isn’t complete without R2-D2. Whether you need to fire up converters, increase power, or fix a broken stabilizer, that trusty droid, full of lively beeps and squeaks, is the ultimate copilot.


Teaming artificial intelligence (AI) with pilots is no longer just a matter for science fiction or blockbuster movies. On Tuesday, December 15, the Air Force successfully flew an AI copilot on a U-2 spy plane in California: the first time AI has controlled a U.S. military system.


Having completed over a million training runs beforehand, the flight was a small step for the computerized copilot, but a giant leap for “computerkind” in future military operations.

The U.S. military has historically struggled to develop digital capabilities. It’s hard to believe difficult-to-code computers and hard-to-access data—much less AI—held back the world’s most lethal hardware not so long ago in an Air Force not far, far away.

But starting three years ago, the Air Force took its own giant leap toward the digital age. Finally cracking the code on military software, we built the Pentagon’s first commercially-inspired development teams, coding clouds, and even a combat internet that downed a cruise missile at blistering machine speeds. But our recent AI demo is one for military record books and science fiction fans alike.

With call sign ARTUµ, we trained µZero—a world-leading computer program that dominates chess, Go, and even video games without prior knowledge of their rules—to operate a U-2 spy plane. Though lacking those lively beeps and squeaks, ARTUµ surpassed its motion picture namesake in one distinctive feature: it was the mission commander, the final decision authority on the human-machine team. And given the high stakes of global AI, surpassing science fiction must become our military norm.

Our demo flew a reconnaissance mission during a simulated missile strike at Beale Air Force Base on Tuesday. ARTUµ searched for enemy launchers while our pilot searched for threatening aircraft, both sharing the U-2’s radar. With no pilot override, ARTUµ made final calls on devoting the radar to missile hunting versus self-protection. Luke Skywalker certainly never took such orders from his X-Wing sidekick!

The fact ARTUµ was in command was less about any particular mission than how completely our military must embrace AI to maintain the battlefield decision advantage. Unlike Han Solo’s “never-tell-me-the-odds” snub of C-3PO’s asteroid field survival rate (approximately 3,720 to 1), our warfighters need to know the odds in dizzyingly-complex combat scenarios. Teaming with trusted AI across all facets of conflict—even occasionally putting it in charge—could tip those odds in our favor.

But to trust AI, software design is key. Like a breaker box for code, the U-2 gave ARTUµ complete radar control while “switching off” access to other subsystems. Had the scenario been navigating an asteroid field—or more likely field of enemy radars—those “on-off” switches could adjust. The design allows operators to choose what AI won’t do to accept the operational risk of what it will. Creating this software breaker box—instead of Pandora’s—has been an Air Force journey of more than a few parsecs.
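The “breaker box” design described above—granting the AI exactly one subsystem while hard-blocking the rest—can be sketched in a few lines. This is a hypothetical illustration of the concept only; the class and subsystem names are invented, not the Air Force’s actual software:

```python
# Hypothetical sketch of a software "breaker box": the AI may only command
# subsystems whose breakers an operator has explicitly switched on.
class BreakerBox:
    def __init__(self, subsystems):
        # Every breaker starts off: by default the AI can do nothing.
        self._switches = {name: False for name in subsystems}

    def enable(self, name):
        self._switches[name] = True

    def command(self, name, action):
        # A hard gate checked on every command, not a polite request.
        if not self._switches.get(name, False):
            raise PermissionError(f"AI access to '{name}' is switched off")
        return f"{name}: {action}"

box = BreakerBox(["radar", "navigation", "flight_controls"])
box.enable("radar")  # operator grants radar control only
print(box.command("radar", "scan for launchers"))

try:
    box.command("flight_controls", "bank left")
except PermissionError as err:
    print(err)  # the flight-controls breaker was never enabled
```

The point of the design is that the operator chooses the AI’s envelope up front, so accepting risk on one subsystem never implies accepting it on all of them.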

The journey began early in 2018, when I approved a hoodie-wearing Air Force team (fittingly named Kessel Run for a Star Wars smuggling route) to “smuggle” commercial DevSecOps software practices into our Air Operations Center. By merging development, security, and operations using modern information technology, DevSecOps produced higher-quality code faster and more continuously. Sounds perfect for a digitally-challenged Pentagon, right?

You’d think. Kessel Run bent all the rules and definitely “shot first” at the Pentagon’s fixation on five-year development plans with crippling baselines. As Han Solo advocated, keeping momentum sometimes required a good blaster at our side. Thankfully, Kessel Run’s results were game-changing, outpacing previous programs and inspiring a generation of Air Force and Space Force DevSecOps teams, including our U-2 FedLab.

But coding effectively is only one element of trusted AI design. A year later, I directed a Service-wide adoption of coding clouds using two landmark technologies: containerization and Kubernetes. Containers virtualize and isolate everything code needs to run; Kubernetes then orchestrates them, selectively powering disparate software like a dynamic-but-secure breaker box.

Running ARTUµ containers in our FedLab cloud also proved they would run identically on the U-2—no lengthy safety or interference checks required! This is how we get evolving software—especially AI—out of our clouds and safely onto planes flying through them.

Yet this trusted design didn’t create ARTUµ’s copilot abilities. You have to train for that. Like a digital Yoda, our small-but-mighty U-2 FedLab trained µZero’s gaming algorithms to operate a radar—reconstructing them to learn the good side of reconnaissance (enemies found) from the dark side (U-2s lost)—all while interacting with a pilot. Running over a million training simulations at their “digital Dagobah,” they had ARTUµ mission-ready in just over a month.
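The “good side versus dark side” framing above is classic reinforcement-learning reward shaping: reward enemies found, heavily penalize losing the aircraft. Below is a toy sketch of that idea with made-up reward values and a random simulator standing in for the real U-2 environment; it illustrates the trade-off, not the actual FedLab training code:

```python
import random

# Hypothetical reward shaping for a radar-tasking agent:
# small reward per launcher found, large penalty if the U-2 is lost.
REWARD_FOUND = 1.0
PENALTY_LOST = -100.0

def run_episode(p_find, p_loss, steps=50, rng=random):
    """One simulated mission; returns the total shaped reward."""
    total = 0.0
    for _ in range(steps):
        if rng.random() < p_loss:      # aircraft shot down: episode ends
            return total + PENALTY_LOST
        if rng.random() < p_find:      # launcher detected this step
            total += REWARD_FOUND
    return total

# Compare a cautious policy (finds less, risks less) to an aggressive one.
random.seed(0)
cautious = sum(run_episode(0.2, 0.001) for _ in range(1000)) / 1000
aggressive = sum(run_episode(0.4, 0.05) for _ in range(1000)) / 1000
print(cautious, aggressive)
```

Over a million such simulated episodes, a learner converges toward policies whose expected reward balances reconnaissance gains against survival—the same trade ARTUµ had to make in splitting the radar between missile hunting and self-protection.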


So my recent U-2 AI pathfinder—and military AI more generally—was really a three-year journey to becoming a software-savvy Air Force. But why not skip computerized copilots and wingmen and create a purely autonomous Force? After all, a computer won DARPA’s recent dogfight, and we’re already developing autonomous mini-fighters in our Skyborg program.

That autonomous future will happen eventually. But today’s AI can be easily fooled by adversary tactics, precisely what future warfare will throw at it.


Like board or video games, human pilots could only try outperforming DARPA’s AI while obeying the rules of the dogfighting simulation, rules the AI had algorithmically learned and mastered. The loss is a wake-up call for new digital trickery to outfox machine learning principles themselves. Even R2-D2 confused computer terminals with harmful power sockets!

As we complete our first generation of AI, we must also work on algorithmic stealth and countermeasures to defeat it. Though these will likely be as invisible to human pilots as radar beams and jammer strobes, pilots will need instincts for them—and for flying with and against first-generation AI—as we invent the next. Algorithmic warfare has begun.

Now if only we could master those hyperdrives, too.


https://www.popularmechanics.com/mi...ce-controls-u2-spy-plane-air-force-exclusive/
 
The Air Force allowed an artificial-intelligence algorithm to control sensor and navigation systems on a U-2 Dragon Lady spy plane in a training flight Tuesday, officials said, marking what is believed to be the first known use of AI onboard a U.S. military aircraft.


No weapons were involved, and the plane was steered by a pilot. Even so, senior defense officials touted the test as a watershed moment in the Defense Department’s attempts to incorporate AI into military aircraft, a subject that is of intense debate in aviation and arms control communities.

“This is the first time this has ever happened,” said Assistant Air Force Secretary Will Roper.


Former Google chief executive Eric Schmidt, who previously headed the Pentagon’s Defense Innovation Board, described Tuesday’s flight test as “the first time, to my knowledge, that you have a military system integrating AI, probably in any military.”

The AI system was deliberately designed without a manual override to “provoke thought and learning in the test environment,” Air Force spokesman Josh Benedetti said in an email.

It was relegated to highly specific tasks and walled off from the plane’s flight controls, according to people involved in the flight test.

“For the most part I was still very much the pilot in command,” the U-2 pilot who carried out Tuesday’s test told The Washington Post in an interview.


The pilot spoke on the condition of anonymity because of the sensitive nature of his work. The Air Force later released photos from shortly before the test flight with materials that referenced only his call sign: “Vudu.”
“[The AI’s] role was very narrow … but, for the tasks the AI was presented with, it performed well,” the pilot said.

The two-and-a-half-hour-long test was performed in a routine training mission at Beale Air Force Base, near Marysville, Calif., starting Tuesday morning. Air Force officials and the U-2 pilot declined to offer details about the specific tasks performed by the AI, except that it was put in charge of the plane’s radar sensors and tactical navigation.


Roper said the AI was trained against an opposing computer to look for oncoming missiles and missile launchers. For the purposes of the initial test flight, the AI got the final vote on where to direct the plane’s sensors, he said.
The point is to move the Air Force closer to the concept of “man and machine teaming,” in which robots are responsible for limited technical tasks while humans remain in control of life-or-death decisions like flight control and targeting.

“This is really meant to shock the Air Force and the [Defense] Department as a whole into how seriously we need to treat AI teaming,” Roper said in an interview shortly before the test.


The AI “is not merely part of the system. … We’re logging it in the pilot registry,” he said.

The AI itself, dubbed ARTUµ in an apparent Star Wars reference, is based on open-source software algorithms and adapted to the plane’s computer systems at the U-2 Federal Laboratory.

It is based on a publicly accessible algorithm called µZero, which was developed by the AI research company DeepMind to quickly master strategic games like Chess and Go, according to two officials familiar with its development. And it is enabled by a publicly available, Google-developed system called Kubernetes, which allows the AI software to be ported between the plane’s onboard computer systems and the cloud-based one it was developed on.


On its face, the U-2 seems an unlikely candidate for AI-enabled flight. It was developed for the CIA in the early 1950s and used throughout the Cold War to conduct surveillance from staggeringly high altitudes of 60,000 or 70,000 feet. The planes were later procured by the Defense Department.

But its surveillance function is one that has already incorporated the use of AI to analyze complex data. An Air Force program called Project Maven sought to rapidly analyze reams of drone footage in place of humans. Google famously declined to renew its Maven contract following an internal revolt from employees who didn’t want the company’s algorithms involved in warfare. The company later released a set of AI principles that disallowed the company’s algorithms from being used in any weapons system.

Schmidt, who led Google until 2011, said he believes it’s unlikely that the military will embrace fully autonomous weapons systems anytime soon. The problem, he says, is that it’s hard to demonstrate how an AI algorithm would perform in every possible scenario, including those in which human life is at stake.

“If a human makes a mistake and kills civilians, it’s a tragedy. … If an autonomous system kills civilians, it’s more than a tragedy,” Schmidt said Tuesday in an interview.

“No general is going to take the liability of a system where they’re not really sure it’s going to do what it says. That problem may be fixed in the next several decades but not in the next year,” he said.

https://www.washingtonpost.com/business/2020/12/16/air-force-artificial-intelligence/
 
It never ceases to amaze me how some Americans can spend resources on useless-to-humanity weapons delivery platforms instead of removing more pressing problems in America itself like lack of homes and healthcare.
Lol says the guy from India, stfu
 
It never ceases to amaze me how some Americans can spend resources on useless-to-humanity weapons delivery platforms instead of removing more pressing problems in America itself like lack of homes and healthcare.
America's power comes from the dollar and since the dollar is just a piece of paper with no gold or silver to back it, The only reason it is worth so much is because the strongest country on earth (for now) says it is worth this much and because you have to use it for oil trade. If America stops spending money on military then other countries won't fear it and they won't get bullied around with sanctions.
 
Lol says the guy from India, stfu

I am a Communist so I can speak about any part of humanity. Well, I am just a gentle and sensible person. Doesn't matter if I am from India or Cuba.

I know that India is the second-largest importer of armaments in the world, much of that from the West, from America too. And I want that to stop. India should spend monetary and other resources on the welfare of its people instead of maintaining a huge military (like America).

Don't you want the welfare of American citizens? Didn't you support for example the Occupy Movement of 2011?

America's power comes from the dollar and since the dollar is just a piece of paper with no gold or silver to back it, The only reason it is worth so much is because the strongest country on earth (for now) says it is worth this much and because you have to use it for oil trade. If America stops spending money on military then other countries won't fear it and they won't get bullied around with sanctions.

I agree with most of your post, but about being the strongest country: America should try bullying Russia. :D

About dollar use, Gaddafi tried to create a gold-based currency for African and generally Muslim countries and they killed him.
 
It never ceases to amaze me how some Americans can spend resources on useless-to-humanity weapons delivery platforms instead of removing more pressing problems in America itself like lack of homes and healthcare.
Man, a lot of inventions came through the military first, then civilian use. From navigation to satcom to nuclear medicine, first it was meant for military use. Now, coming to the usual commie BS, nah, they're just fine. The US needs to reach Europe's level in terms of well-being. Without going full retard aka Socialist.
 
Big leap for the military. Soon no pilots and no danger of "blue" side casualties. Commercial airliner jets can fly and land on their own, right?
 
Man, a lot of inventions came through military first, then civilian use. From Navigation to satcom, Nuclear-medicine, first it was meant for military use.

That's why I said "weapons delivery platforms". What has been the contribution of the American B1 bomber?

I know that the internet had military-work origin: the ARPANET (more on that here). But I am also sure that betterment of multimedia storage and transmission based on the internet happened because of p0rn.

Now, coming to the usual commie BS, nah, they're just fine. The US need to reach the Europe level in terms of well being. Without going full retard aka Socialist.

I would have liked to reply to that but I am sleepy and am logging off.

But you can read my proposal for a new communist economic system here. Can be applied anywhere.
 
AI (computer vision more specifically) should have an easier time in the air than on the ground due to far fewer obstacles and boundary conditions. Current planes such as passenger airliners already use low-level AI in the flight control software for autopilot, and there haven't been any accidents caused by it.
 