During testing in December, a pair of AI programs were loaded into the system: the Air Force Research Laboratory’s Autonomous Air Combat Operations (AACO) and the Defense Advanced Research Projects Agency’s (DARPA) Air Combat Evolution (ACE). AACO’s AI agents focused on fighting a single adversary beyond visual range (BVR), while ACE focused on dogfight-style maneuvers against a closer, “visible” simulated enemy.

Although VISTA requires a certified pilot in the rear cockpit as backup, during the test flights an engineer trained in the AI systems sat in the front cockpit to handle any technical issues that arose. In the end, those issues were minor. While DARPA program manager Lt. Col. Ryan Heffron was unable to elaborate on the specifics, he explained that any glitches were “expected in the transition from virtual to live.” All in all, the flights marked a significant step toward Skyborg’s goal of getting autonomous aircraft airborne as soon as possible.

The Department of Defense emphasizes that AACO and ACE are designed to complement human pilots, not replace them. In some cases, AI copilot systems can act as a support mechanism for pilots in active combat. Because AACO and ACE can analyze millions of data inputs per second and take control of the aircraft at critical moments, that support can be vital in life-or-death situations. For more conventional missions that do not require human intervention, flights could be fully autonomous, with the aircraft’s nose section swapped for one without a cockpit when no human pilot is needed.

“We’re not trying to replace pilots, we’re trying to augment them, give them an extra tool,” Cotting says. He draws an analogy with soldiers of past campaigns riding into battle on horseback. “Horse and man had to work together,” he says. “A horse can run well on a trail, so the rider doesn’t have to worry about getting from A to B. His brain can be freed up to think bigger.” For example, Cotting says, a first lieutenant with 100 hours of cockpit experience could gain the same edge as a far more senior officer with 1,000 hours of flight experience, thanks to AI augmentation.

For Bill Gray, chief test pilot at the US Air Force Test Pilot School, incorporating artificial intelligence is a natural extension of the work he does with human students. “Whenever we [pilots] talk to engineers and scientists about the difficulty of training and qualifying AI agents, they usually see it as a new challenge,” he says. “This worries me, because I have been training and qualifying highly nonlinear and unpredictable natural-intelligence agents (students) for decades. For me, the question is not ‘Can we train and qualify artificial intelligence agents?’ It’s ‘What can we train and qualify humans for, and what can this teach us about doing the same for artificial intelligence agents?’”

Gray believes that AI is “not a miracle tool that can solve all problems,” and that it should be developed with a balanced approach and built-in safety measures to prevent costly accidents. Over-reliance on artificial intelligence, or “trust in autonomy,” can be dangerous, Gray says, pointing to failures of Tesla’s Autopilot feature, which some drivers have trusted too heavily despite Tesla’s insistence that a human remain behind the wheel as backup. Cotting agrees, calling the ability to test AI programs in VISTA a “risk mitigation plan.” By training AI on a proven platform like the VISTA X-62 rather than building an entirely new aircraft, automatic limits and, if necessary, intervention by the safety pilot can keep the AI from endangering the aircraft as it learns.
