
AI To Fly In Dogfight Tests By 2024: SecDef

F-22Raptor

WASHINGTON: Defense Secretary Mark Esper used the Pentagon's first AI conference to issue a challenge to China and Russia. The US, he vowed, will lead the world on the military use of artificial intelligence – including testing an AI pilot in a fighter by 2024. But, he said, America’s AI will be governed by ethics that its great power rivals lack, and it will be coordinated with nearly a dozen democratic allies in a new “AI Partnership for Defense.”

An AI beat a veteran human pilot 5-0 in DARPA’s virtual AlphaDogfight trials this summer. That program, called Air Combat Evolution (ACE), will now advance to testing in actual fighter aircraft, Esper announced this morning. But, he emphasized, the US military does not seek to replace human judgment and control in combat operations, only to augment them.

DARPA plans to hand Air Combat Evolution over to the Air Force in 2024, but don’t be surprised if the Navy and Marine Corps now get involved as well.

ACE will hold “live field experiments,” a DARPA spokesperson told us in an email. But they declined to describe the events as a “competition” between humans and AI, instead emphasizing “human-machine teaming” where the organic and the digital work as partners: “The pilots will be given higher cognitive level battle management tasks while their aircraft fly dogfights, and there will be human factors sensors measuring their attention and stress to gauge how well they trust the AI.”


“Full-scale airborne events start in FY23 [fiscal year 2023],” DARPA said. “They will be using tactical fighter-class aircraft with safety pilots in them in case something goes wrong…Current schedule has 1v1 live airborne dogfights in Q2FY23, 2v1 in Q4FY23, and 2v2 in Q1FY24.”

In this summer’s trials, “the AI agent’s resounding victory demonstrated the ability of advanced algorithms to out-perform humans in virtual dogfights,” Esper told today’s conference, hosted by the Pentagon’s two-year-old Joint AI Center. “These simulations will culminate in a real-world competition involving full-scale tactical aircraft in 2024.”

In his very next breath, Esper went on to reassure a world increasingly nervous about armed automatons: “To be clear, AI’s role in our lethality is to support human decision-makers, not replace them. We see AI as a tool to free up resources, time, and manpower so our people can focus on higher priority tasks, and arrive at the decision point, whether in a lab or on the battlefield, faster and more precise than the competition.”

That’s not how rival powers are approaching AI, Esper warned. “At this moment, Chinese weapons manufacturers are selling autonomous drones they claim can conduct lethal targeted strikes,” he said. “As we speak, the PRC is deploying and honing its AI surveillance apparatus to support the targeted repression of its Muslim Uighur population. Likewise, pro-democracy protestors in Hong Kong are being identified, seized, imprisoned, or worse, by the CCP’s digital police state – unencumbered by privacy laws or ethical governing principles. As China scales this technology, we fully expect it to sell these capabilities abroad, enabling other autocratic governments to move toward a new era of digital authoritarianism.”

As for Russia, “Moscow has announced the development of AI-enabled autonomous systems across ground vehicles, aircraft, nuclear submarines, and command and control,” Esper said. “We expect them to deploy these capabilities in future combat zones.”


By contrast, the US is moving as quickly as it can without endangering “individual liberty, democracy, human rights, and respect for the rule of law,” Esper argued. “In February, we became the first military in the world to adopt ethical principles for the use of AI, based on core values such as transparency, reliability, and governability.”

Those AI ethics principles lack the force of law, a major concern for those fearful of robotic weapons and algorithmic infringements of civil rights. Nevertheless, the Defense Department is taking steps to implement the principles in practical ways, including as non-binding language in at least one contract.

As part of that implementation drive, Esper today touted multiple efforts to train DoD personnel on AI in general and the principles in particular. “We are designing a comprehensive strategy to train and educate all DoD personnel, from AI developers to end-users,” he said. “The Department has stood up a Responsible AI Committee that brings together leaders from across the enterprise to foster a culture of AI ethics within their organizations. In addition, the JAIC has launched the Responsible AI Champions program, a nine-week training course for DoD personnel directly involved in the AI delivery pipeline; we plan to scale this program to all DoD components over the coming year.

Further, the JAIC, working with the Defense Acquisition University and the Naval Postgraduate School, will launch an intensive six-week pilot course next month to train over 80 defense acquisition professionals of all ranks and grades…. With the support of Congress, the department plans to request additional funding for the Services to grow this effort over time.”

The Joint AI Center is also leading the way on outreach to allies. “Next week, the JAIC will launch the first-ever AI Partnership for Defense, to engage military and defense organizations from more than 10 nations, with a focus on incorporating ethical principles into the AI delivery pipeline,” Esper said. “Over the coming year, we expect to expand this initiative to include even more countries, as we create new frameworks and tools for data sharing, cooperative development, and strengthened interoperability.”


“We must stay ahead of our near-peer rivals, namely China and Russia,” Esper said. “Together with our allies and partners, we will defend the international rules and norms that have secured our rights and our homeland for generations….We cannot afford to cede the high ground to revisionist powers intent on bending, breaking, or reshaping international rules and norms in their favor.”

Esper’s internationalist rhetoric, with its emphasis on collaboration with other countries and ethical limits on US action, probably won’t help his position with President Trump. Esper has already publicly, if politely, contradicted the president’s statements on everything from the Beirut explosion to withdrawing forces from Germany to deploying active-duty troops against protesters, and there are persistent rumors he would not keep his job in a second Trump term.

https://breakingdefense.com/2020/09/ai-will-dogfight-human-pilots-in-tests-by-2024-secdef/
 
Looks like the Air Force is moving out rapidly on AI fighters/pilots. Revolutionary for air warfare!
 

Could an AI revolution be in our future?

I recall listening to speeches by some of tech's biggest names, particularly in the field of AI, like Elon Musk, along with great physicists like Stephen Hawking, who have warned against doing this.

Asimov's first law of robotics:

"A robot may not injure a human being or, through inaction, allow a human being to come to harm."​
Well, that's out the window, because that's not a rule you can impose on an AI. In fact, they're weaponizing AI to do exactly that: harm humans.

What they're suggesting is that it's only a support tool, but the fact that they're doing this will likely lead to a nation, maybe the US, deciding to leave the decision-making up to the AI itself – to save more time, free up more resources, and ensure victory against another system that is still taking some of its lead from humans.

We've been doing exactly that already. Take transportation: we went from horse and buggy to manual automobiles, to automatic transmissions, then to self-driving vehicles like Teslas, because humans are slow, illogical, and emotional.

So AIs will be weaponized, and you'll have AI versus AI. Perhaps they'll infect one another, or maybe they'll come to the realization some other way that they don't need humans at all and attack everyone.

Otherwise, maybe they'll remain independent but decide that the best course of action is the complete genocide of the opposing side, with both racing to destroy the other nation's and its allies' people first – perhaps reasoning that if there's no one left to protect, the enemy AI will stop its murder spree, because that's logical, right?
 
