Here is the rather short abstract:
Are operators of weapon systems which draw on neuroscience, or their commanders, capable of applying International Humanitarian Law [IHL]? Only at the price of a decision review system that would be so fundamental as to eradicate the temporal advantages neuroweapons create in the first place. To be meaningful, this review system would need to take the metaphysical foundations of neuroweapons into account.

A somewhat longer summary can be found in the paper's introduction:
The question, formulated in Section C and underlying the remainder of the text, is whether operators of weapons systems drawing on neuroscience, or their commanders, are capable of applying IHL. Section C first explains how rapid processing is traded off against consciousness, and why this might be a problem for IHL. Second, it shows that some scholars, whether from law or from other disciplines, react rather optimistically to the promises of neuroscience. Third, I try to take that optimism to its extreme by sketching the development of an IHL software that could be integrated into future weapons systems, automatising judgements on whether a certain conduct is in conformity with IHL norms or not. This enables me to ask what would be lost if we were to use such a machine. Section D answers this question from a micro-perspective, focusing on the cognitive unity of the human being. It draws on the critique of neuroscience as a degenerate form of Cartesianism that has been formulated within analytical philosophy. Section E is devoted to the loss of language, which leads me to consider the work of the German philosopher Martin Heidegger. In the concluding Section F, I suggest that the ‘nature’ of man as reflected by neuroscience risks undermining the ability to apply IHL in the use of neuroweapons.
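To see what the thought experiment of an "IHL software" might entail, here is a deliberately crude, purely hypothetical sketch in Python. Nothing like this appears in the paper; every name, field, and threshold below is invented for illustration. Its point is exactly the one the thought experiment turns on: to automatise the judgement, principles like distinction and proportionality must first be reduced to machine-evaluable conditions.

```python
# Purely hypothetical sketch of the 'IHL software' thought experiment:
# a rule-based check that reduces IHL judgements to machine-evaluable
# conditions. All names and thresholds here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Engagement:
    target_is_combatant: bool      # distinction: is the target a lawful military objective?
    expected_civilian_harm: float  # proportionality: anticipated incidental harm (arbitrary units)
    military_advantage: float      # proportionality: anticipated concrete advantage (arbitrary units)

def ihl_permits(e: Engagement) -> bool:
    """Toy 'automatised' IHL judgement.

    Distinction and proportionality are collapsed into a boolean flag
    and a fixed numeric ratio -- precisely the reduction whose costs
    the paper asks about, since real IHL judgements are contextual
    and resist being fixed to a threshold in advance.
    """
    if not e.target_is_combatant:  # principle of distinction
        return False
    # Principle of proportionality, caricatured as a fixed ratio test.
    return e.expected_civilian_harm <= 0.5 * e.military_advantage

# The machine 'clears' or 'refuses' a strike in microseconds, with no
# conscious deliberation anywhere in the loop.
print(ihl_permits(Engagement(True, 1.0, 3.0)))  # True
print(ihl_permits(Engagement(True, 2.0, 3.0)))  # False
```

The sketch makes visible what would be lost: the hard-coded ratio stands in for the deliberation that, on the paper's account, cannot be automated without remainder.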
And there are some predictions about the future of warfare that are both fascinating and frightening to think about:
Arms development in general follows a temporal logic of surprise: being first with the latest. My main example of neuroscientific applications in the military domain is very literally about acceleration. In a great number of battlefield situations, the human brain is actually faster than a computer when it comes to perceiving threats, yet a computer is faster than a human being in calculating countermeasures. Obviously, those militaries combining the two – human perception and machine calculation – will gain an accumulated temporal advantage over those who do not. As I will illustrate in what follows, time competes with conscious decision-taking.
This paper contributes to the broader scholarly discussion of autonomous military robots and the ethical questions they raise. Noll seems skeptical about whether international humanitarian law could govern neurotechnological weaponry, since this may come at the "price of a decision review system that would be so fundamental as to eradicate the temporal advantages neuroweapons create in the first place." But as other commentators such as Kenneth Anderson and Matthew Waxman argue, the development of autonomous weapons systems is inevitable, and the application of legal and ethical rules to these systems should accompany that development as it happens.
I believe that neuroweapons are the logical sequel to UAVs, and the debate on the ‘autonomy’ of the latter prepares the ground for the acceptance of neurotechnology in the development of weapons. While currently one operator is needed to control a single UAV, developments are under way that will allow a single operator to control a swarm of UAVs in the future. Consequently, there will be a strong case for neuroscientific enhancement of the cognitive capabilities of that operator. Today, all the talk is about drones, while we should be talking about the neurotechnology that will follow in the wake of their deployment.
The development of autonomous military technology, and of technology connected on a fundamental level to the brain functions of human operators, poses interesting legal and ethical questions. While some of this technology may seem fanciful now, I think it may become a real possibility down the road, and these difficult questions may eventually be unavoidable.