Filmic dystopias are a great way to imagine how current and prospective scientific developments might be abused in the ‘not too distant future’.

Gattaca, for example, is everyone’s favourite depiction of how genetic engineering could help to create and sustain a deeply unequal society of perfect human beings and the rest. Minority Report – based, as are so many good film visions of a bleak future, on a short story by Philip K. Dick – depicts a world in which criminals can be identified before they commit any crimes and then appropriately treated. In this way the murder rate is reduced to zero – but at what moral cost?

So, is forensic science as it is currently developing a force for great good or something that we have ample reason to worry about? A recent Nuffield Council on Bioethics horizon scanning workshop, ‘The future of science in crime and security,’ addressed this question. Whilst it did not provide a clear and unequivocal answer to the question, it did at least allow a careful identification and evaluation of the issues that provided a welcome antidote to the more fanciful speculations about what is coming down the line. The world of Minority Report may as yet be a long way off, but it is still important to separate realistic concerns about the abuse of science from its supposed prospective benefits.

The science in question has distinct elements: artificial intelligence and ‘big data’, biometric identification techniques, genomics and especially DNA analysis, and neuroscience. At the same time there are various individuals and parties who have interests in the new science: corporations, professional bodies, state agencies, researchers, and of course members of the general public who might rightly wonder what is known about them and what use such knowledge is being put to. At its simplest, private companies want to make money, academics want to make their reputations, the police want to catch criminals, and criminals want to commit bigger and better crimes without being caught.

It was perhaps not surprising to learn that in forensic science, as in other areas, any new development can have a dual use. DNA evidence can secure the identification of perpetrators who might otherwise go undetected. Yet DNA evidence could also be faked or ‘muddied’ such that the wrong people are convicted. AI and algorithms have the potential to facilitate cybercrime by, for instance, automatically generating fraudulent emails, but also to be a useful tool in combating cybercrime by, for instance, detecting potential attacks earlier. It was also not surprising to hear that prospects for the new science can be exaggerated. Facial recognition technology, for instance, is by no means infallible and as of now remains hobbled by the limits of CCTV.

The ethical considerations at play in an evaluation of forensic science do not seem to be new. They are those of the trust that can be maintained in how government regulates these new technologies, which can quickly be eroded by evidence of a failure to control them. There is an obvious question of how individual privacy can be protected, although, as ever, it is important to distinguish between the wrongness of acquiring personal information and that of making improper use of such information. There is the matter of how information might be used for discriminatory purposes or might exacerbate social injustice. There are human rights to be identified and robustly protected.

As ever when talking about the prospects for the strong regulation of emergent science that is also responsive to rapid changes, it is crucial that we acknowledge the global context. Sciences develop at a different pace in different societies. At the same time, crime crosses borders and its successful prevention must involve international cooperation.

All in all it is not quite true that there is nothing new to consider in the ethical appraisal of the world of forensic science. But the workshop did conclude that the Nuffield Council should consider capitalising on the success of its 2007 report on the forensic use of bioinformation. It was suggested that this report might be usefully updated in the light of subsequent scientific developments to consider whether the ethical issues or values have changed, and how far the relevant technology has progressed.

One last thought on what is new and what is merely a fresh version of something familiar: Minority Report, name-checked at the outset, is about the predictive (and thus pre-emptive) punishment of ‘criminals’ who have as yet done nothing wrong. It thus offends against natural justice and rightly gives rise to moral alarm. Yet what was talked about at the workshop was the future of predictive policing, that is, identifying prospective criminals and crimes. Such policing is certainly not new. It goes back at least to the dubious nineteenth-century criminology of Cesare Lombroso and his physiological identification of criminal types. Nor is such policing wrong in itself. Good policing is above all about preventing crime, and those who would otherwise be its victims have reason to endorse scientifically assisted policing. However, such policing is only as good as the means of identifying future wrongdoers and wrongdoing. Moreover, reasonable concerns about how using data on past crimes might unfairly target certain individuals and communities need to be addressed. The new forensic science may help better and fairer policing. But the case for using it is yet to be clearly and conclusively made.

Comments (2)

  • Dave Archard   

    In a short piece I hoped to distinguish between predictive policing, which might be ok, and predictive punishment, which would not be alright. The former may make use of simple geographical data but it might also employ psychological profiling. The worry is that the predictive picture of a likely criminal may be based on out-of-date or poorly evidenced data, with the result that certain categories of person suffer unfair police monitoring or attention.

  • Brian Scott   

    Isn't predictive policing mainly about identifying physical locations where crime is more likely, so that police resources can be more efficiently deployed? This could include data on populations, but how specific would these need to be?
