Written by Will Stone, MBDA Secondee.

The use of Artificial Intelligence (AI) in defence is a topic that often induces visions of runaway killer robots, the like of which the public has seen time and again at the cinema. Whilst some might label these fears unwarranted, it can be argued that they form the basis of the principles that will likely govern the use of AI in the future. An AI system deployed on a battlefield will undoubtedly be required to be trusted, controllable, and traceable, with a named human responsible for its lawful use. The challenge posed to Defence is that while failure to implement AI in an appropriate timeframe could result in military disadvantage, premature adoption without sufficient research, analysis and safeguards may result in a system that does not fulfil these requirements. Whilst the use of AI in defence might split public opinion, it is hard to argue that its adoption is not inevitable, given increasing investment in China and an imminent race to lead and define how the technology is utilised.

AI has already demonstrated, across numerous industries, its ability to automate tasks, spot patterns in vast amounts of data, and apply knowledge learned from previous experience. An AI system performing these non-lethal tasks will likely encounter minimal protest, whereas the use of artificial intelligence in command and control, an area where 39% of defence leaders feel AI has value to add, is perhaps where there is the most concern. In theory, AI systems could be used to assess threats and respond accordingly without the time-consuming intervention of a human operator, but this introduces questions about the ethical use of AI and its perceived morality and consideration for human life. Therefore, it is likely that battlespaces will witness a rise in AI equipment used solely for surveillance and reconnaissance, as well as for deception and jamming of hostile equipment and systems. This situation could result in rival AI systems repeatedly deceiving each other, creating an indecipherable set of valid and invalid targets for human commanders to assess.

Whilst the readily available and potential use cases outlined above are already mind-boggling to some, immediate expectations must be managed. The buzz and excitement surrounding a rapidly emerging and evolving technology often give rise to the phenomenon described by Amara’s law: people tend to overestimate the short-term effects of a technology yet underestimate its long-term impacts and uses. Hence, it has been suggested that the immediate focus of AI investment should be the automation of mundane and time-consuming tasks, as well as the repurposing of existing AI systems from other sectors. Meanwhile, research funding in the medium to long term can support innovation and the more experimental capabilities that many have suggested are possible.

Challenges for Policy Makers and Industry

The principal challenges are:
- Responsibility and Accountability
- Lawful Use
- Governance
- Reliability and Trust
- Traceability

Rather than delay the implementation of AI, Government, in partnership with industry, should embrace the technology and invest in AI and human development simultaneously. Skills will not become redundant, but will instead be enhanced by the unparalleled efficiency and decision-making capacity of AI systems working in conjunction with humans. Future commanders must be capable of understanding, collaborating with, and challenging their ‘artificial teammates’. Human experience will be needed to highlight where systems have been mistaken, and expertise will be required to design, train, maintain, oversee, and further develop these systems, fostering the creation of a range of highly skilled, well-paid jobs across all regions and nations of the UK.

For more details, please read ADS’s policy briefing, which can be found in the ADS members’ area here.