AI-controlled fighter jets are now a reality, sparking a debate on the future of warfare. DARPA’s successful dogfight test highlights the potential for AI in combat, but it also raises ethical concerns about autonomous weapons and underscores the need for international regulation. The EU’s stance on banning autonomous weapons may need to be revisited in the face of rapid AI advancements by other nations.

AI Achieves Major Milestone in Aerial Combat, Fuelling Debate on the Future of Warfare


In a development as groundbreaking as it was long anticipated, the US military research agency DARPA (the same agency that gave us the Internet) conducted, under its ACE (Air Combat Evolution) programme, a real-world dogfight pitting an AI-controlled fighter jet against a seasoned human pilot. Powered by machine learning, the autonomous F-16 engaged in complex tactical manoeuvres at supersonic speeds, marking a significant breakthrough for AI in the realm of combat aviation.

The AI flew a modified F-16 (designated VISTA) under DARPA’s oversight in a high-risk test at Edwards Air Force Base. DARPA underscored the importance of the safety innovations that enabled this test, which it sees as critical for building trust in AI for future combat applications.

“2023 was the year ACE made machine learning a reality in the air,” stated Lt Col Ryan Hefron, ACE programme manager. Interestingly, back in 2020, AI triumphed in all five simulated dogfights against human opponents conducted in flight simulators. Beyond victories in simulated environments, in late 2023 AI also beat champion human pilots at drone racing. This real-world dogfight highlights the potential for AI not just to support, but to replace human pilots in high-intensity, high-risk combat situations.

This breakthrough comes as autonomous weapons systems move ever closer to operational reality. The use of AI in warfare raises important ethical and strategic questions, including the potential for autonomous weapons to lower the threshold for military engagement and the need for clear guidelines and regulations to ensure human control and accountability.

It also puts the USA’s ban on exporting highly capable NVIDIA chips to China into perspective, giving it added significance. China, Russia and the USA are all known to have been experimenting with autonomous weapons systems. The best autonomous weapons will depend not only on the AI models they use, but also on the speed with which the AI-enabled weaponry at the edge can perform its calculations and react: even with comparable, or indeed identical, AI models, it is the weapon with the shortest computation time (which implies at least some computation at the edge) that will prevail.
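To make the latency argument concrete, consider the following toy sketch (illustrative only; all timings are invented and reflect no real system): two agents run an identical decision policy, and the only difference is where inference happens.

    # Toy illustration, not a real system: both agents share the same policy;
    # only the location of inference (on-board vs. off-board) differs.
    from dataclasses import dataclass

    @dataclass
    class Agent:
        name: str
        sense_ms: float      # sensor processing time (assumed figure)
        inference_ms: float  # model forward-pass time (assumed figure)
        link_ms: float       # round-trip link latency if inference is off-board

        def loop_ms(self) -> float:
            """Total time for one observe-decide-act cycle."""
            return self.sense_ms + self.inference_ms + self.link_ms

    edge = Agent("edge", sense_ms=5, inference_ms=8, link_ms=0)
    remote = Agent("remote", sense_ms=5, inference_ms=8, link_ms=120)

    for a in (edge, remote):
        print(f"{a.name}: {a.loop_ms():.0f} ms per decision cycle")

    print(f"edge acts {remote.loop_ms() - edge.loop_ms():.0f} ms sooner each cycle")

The point is not the specific numbers but the structure: with equal model quality, the shorter control loop acts first in every exchange, which is why on-board (“edge”) computation matters.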

Defence Transformation... and Deepening Concerns

US Air Force chief Frank Kendall sees VISTA as a “transformational” technology, one that could upend traditional understandings of aerial warfare. However, this breakthrough also amplifies a host of ethical and strategic concerns. Ethicists fear an AI fighter jet could lower the threshold for military engagement by making conflict seem less costly in human lives. If targeting decisions are increasingly made by machines, especially in ultra-fast-paced combat requiring decisions within time windows measured in milliseconds, how can human control be even contemplated, let alone meaningfully guaranteed? Finally, ensuring that complex AI systems do not develop unpredictable, dangerous behaviours poses a daunting technical and security challenge, especially given that explainability in several areas of AI still lags significantly behind the capabilities of AI itself.
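A back-of-envelope calculation, using assumed but plausible figures, shows how little room such time windows leave for a human in the loop:

    # Back-of-envelope illustration with assumed figures: how far two jets
    # closing head-on travel during one decision window.
    closing_speed = 680.0  # metres/second, roughly Mach 2 combined (assumed)

    windows = {
        "trained human reaction (~250 ms)": 0.250,
        "fast AI control loop (~10 ms)": 0.010,
    }

    for label, seconds in windows.items():
        print(f"{label}: aircraft close {closing_speed * seconds:.0f} m")
    # trained human reaction (~250 ms): aircraft close 170 m
    # fast AI control loop (~10 ms): aircraft close 7 m

By the time a human has merely reacted, the tactical picture has already changed by well over a hundred metres; any meaningful human role must therefore sit above the engagement loop, not inside it.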

Global Implications and the EU Stance

The European Parliament resolution of 12 September 2018 on autonomous weapon systems (https://www.europarl.europa.eu/doceo/document/TA-8-2018-0341_EN.html), adopted with 82% of members voting in favour (https://www.europarl.europa.eu/doceo/document/PV-8-2018-09-12-ITM-006-08_EN.html), calls for a ban on the development of lethal autonomous weapons and highlights the growing controversy around these advancements. The success of AI-powered fighter jets underscores the urgency of addressing what is now no longer the prospect, but the reality, of lethal autonomous weapons systems. The pressure is on nations to balance the potential military advantages against the ethical, legal, and security dilemmas posed by AI taking centre stage in warfare.

The success of the AI-powered fighter jet experiment highlights the widening technological gap between nations willing to push the boundaries of their advanced AI capabilities, with all the risks this can bring about, and those that either lack AI capabilities or have deliberately hobbled them, effectively preventing them from evolving, again with the many risks that this entails. This disparity could add fuel to the arms race that Russia’s invasion of Ukraine has so forcefully reignited, and could further strain international relations. Nations that either lack the resources to develop comparable autonomous systems or have decided to ban lethal autonomous weapons systems face a grim choice: prospectively confronting AI-powered adversaries on the battlefield without similar capabilities, relying on potentially unreliable partnerships for such capabilities, or considering pre-emptive action while there is still time.

The European Union, with its focus on human-centric AI development, is likely to take a leading role in advocating for international treaties and safeguards to govern the use of autonomous weapons. However, as we have learnt time and time again, from Babylonian and Roman times all the way up to the modern era, and to no lesser extent in Russia’s invasion of Ukraine and the Israel-Hamas conflict, “silent enim leges inter arma” – laws are silent in wartime, and understandably so, as war represents a significant breakdown of law and order. The EU’s stance on banning lethal autonomous weapons systems, while influential, is diametrically opposed to the interests of nations seeking a decisive military edge. The rapid advancement of AI and its integration into weapons systems will force the EU to decide between promoting ethical AI development and ensuring its own security in a fast-changing geopolitical landscape where traditional deterrence – the same deterrence that gave the European continent seven decades of relative peace and a corresponding peace dividend – is no longer guaranteed and where, to quote NATO’s Admiral Rob Bauer, it needs to expect the unexpected.

Ultimately, in a world where other nations may prioritise rapid technological advancement over ethical constraints, the EU’s present stance on lethal autonomous weapons is not only a disadvantage but a security risk to its own citizens. While ethical considerations are important, not all of them should be seen as standing on the same footing. To paraphrase the philosopher John Rawls, and to extend the application of his theory of justice, a just society, while ethical, also has a fundamental right, and indeed an ethical obligation, to self-preservation. If unethical groups within or outside that society pose a serious threat to its basic structure and just institutions through overt or covert autonomous weapons programmes, then that society is ethically justified in placing hard limits on the ethical treatment of those groups, and should also develop its own such programmes. There is also the argument that the use of autonomous weapons may actually reduce civilian casualties, as such systems may prove better than humans at distinguishing between military personnel and civilians.

The Way Forward

While AI-driven breakthroughs present intriguing, even revolutionary, possibilities for national defence, they compel the international community to grapple with some very complex – sometimes even intractable – questions. The development of AI in combat aviation has significant implications for the future of warfare. It is crucial that we address the concerns and risks associated with AI and develop clear guidelines and regulations to ensure that it is used responsibly. This includes:

  • Developing clear ethical guidelines for the development and deployment of AI in warfare, especially in areas such as bias;
  • Establishing international regulations and standards for the use of autonomous weapons systems;
  • Ensuring that AI systems are designed and deployed in a way that preserves meaningful human control, including a human-operated kill-switch, and accountability (a minimal sketch of such an override wrapper follows this list);
  • Continuing to research and develop AI and autonomous systems, with a focus on safety, security, and a clear hierarchy of ethical considerations; and
  • Above all, continuing to invest in the development of such systems, ensuring that other nations do not overtake them in this area to any significant extent, so that if any adopted international regulations are repudiated by a nation, the threatened nation(s) retain the means to defend themselves meaningfully, and so that if another nation’s autonomous weapons go rogue, there remains a meaningful way to defend against them.
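
As referenced in the list above, the following is a minimal, purely illustrative sketch of what a human kill-switch wrapper around an autonomous controller might look like; every name here is hypothetical, and a real system would need redundant, authenticated, fail-safe channels:

    # Minimal, hypothetical sketch of a human-override ("kill-switch") wrapper.
    # All names are invented for illustration; this is not a real system design.
    from enum import Enum, auto

    class Command(Enum):
        CONTINUE = auto()   # autonomous operation permitted
        HOLD_FIRE = auto()  # autonomy may fly, but must not engage
        ABORT = auto()      # immediately disengage and return to base

    class HumanOverrideChannel:
        """Stand-in for a secure, authenticated link to a human supervisor."""
        def __init__(self) -> None:
            self._latest = Command.CONTINUE

        def issue(self, command: Command) -> None:
            self._latest = command

        def poll(self) -> Command:
            return self._latest

    class SupervisedController:
        """Wraps an autonomous policy so a human command always takes priority."""
        def __init__(self, policy, override: HumanOverrideChannel) -> None:
            self.policy = policy      # e.g. a learned flight/engagement policy
            self.override = override

        def step(self, observation):
            command = self.override.poll()
            if command is Command.ABORT:
                return "disengage_and_return"   # fail-safe action, policy bypassed
            action = self.policy(observation)
            if command is Command.HOLD_FIRE and action == "engage":
                return "track_only"             # weapons release blocked
            return action

    # Usage: a trivial stand-in policy that always wants to engage.
    override = HumanOverrideChannel()
    controller = SupervisedController(policy=lambda obs: "engage", override=override)

    print(controller.step(None))   # -> engage (CONTINUE is the default)
    override.issue(Command.HOLD_FIRE)
    print(controller.step(None))   # -> track_only (engagement blocked)
    override.issue(Command.ABORT)
    print(controller.step(None))   # -> disengage_and_return

The key design property is that the human command is checked outside the learned policy and takes effect before any of the policy’s actions do, so human authority is architectural rather than being just another input to the model.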

Failure to address these concerns could not only jeopardise international security but also significantly alter the existing balance of power.
