Bromley Meeting, 13th April 2022
Dr Elke Schwarz is Professor of Political Theory at Queen Mary, University of London. She has a particular interest in the ethics of warfare, especially the use of autonomous weapons systems. She is the author of Death Machines: The Ethics of Violent Technologies (Manchester University Press).
She opened with an exploration of the idea of ‘technology’. We may think of a technology as a tool, or as a rule of use, or as a system of hardware, software, procedures and users. She stressed that what is important about a technology is not so much what it does FOR us but what it does TO us. Using a technology alters our way of life and our mindsets – just watch teenagers with their mobile phones! We have a much more intimate relationship with digital technologies than we ever had with analogue ones: they are very seductive and they shape our behaviour, so we should give them close attention. This is particularly true of the technologies of warfare.
She then went on to discuss artificial intelligence technology. This has crept into our lives without our noticing but is now almost universal. It determines the music we listen to, the books we read, the advertisements we see, and it is fundamental to speech and face recognition, medical diagnosis, agriculture, and many other uses. It has even been suggested that robot priests could advise and bless you and even perform funerals (Humanist celebrants beware!). But until now AI systems have been ‘narrow’, meaning that they have been designed to carry out a specific task. They are not yet ‘wide’, meaning that they possess intelligence and understanding similar to a human’s. The design of narrow AIs is based on assumptions about the tasks they will be required to do, with the result that they nearly always exhibit biases. She gave the example of face recognition being reliable with white male faces but very unreliable with non-white female faces.
With this background she then went on to discuss the ethics of autonomous weapons systems (AWS). The Red Cross has defined an AWS as a weapons system having autonomy in its critical functions; i.e. it can search for, detect, identify, track, select and attack targets without human intervention. The problem is that such systems cannot contextualise situations as a human operator can, and because of the biases in the AIs they contain we cannot be confident that they will always do what they are supposed to do. A second problem is that all weapons are designed to be faster and more deadly than the opponent’s, so there is an insidious tendency for the technology to entice the operator to trust it and not intervene. Our capacity to develop an appropriate mental model for managing AWSs atrophies. That is why we need to consider the ethics of using such systems.
The ethics of war is not the contradiction in terms that it first seems. Internationally enforceable laws have been established to control, inter alia, the use of chemical and biological weapons, mines, cluster bombs and the involvement of civilians. Dr Schwarz asserted that such laws need to be extended to include AWSs.
The arguments for such extension include that decisions over life and death should never be left to machines; that AWSs lack the capacity for judgement and the ability to understand context; that AWSs lack the ability to reliably distinguish between combatants and non-combatants; that the accountability for the use or misuse of AWSs is not clear; and that they lower the threshold of resorting to violence.
However, the lobby promoting the development of these weapons is powerful and uses arguments that are very seductive to politicians. Our survival, they claim, depends on maintaining a competitive advantage over our perceived enemies, and innovation is both good in itself and useful on the battlefield. Selling weapons is also very profitable. Vladimir Putin is reported as having said in 2017, “Whoever becomes the leader in AI will become ruler of the world.”
Finally, she argued that if we cannot fully understand the technical decisions built into these weapons, and if we lack the opportunity to challenge these decisions so as to effectively intervene in their use, then we cannot achieve meaningful control over them. Thus we cannot retain our moral agency for acting ethically in war.
Autonomous weapons systems should be banned, she said. She ended somewhat pessimistically by suggesting that, unless there is a conflict severe enough to shock society into action, this is unlikely to happen.
Review by Tony Brewer