Legal, Ethical and Social Issues
Law Department Research Seminar – Thursday 13 December, 16.30, Room F308 (Fusion Building)
The basic idea of this research project is to map the landscape of our informational interests and to evaluate a range of recognised and putative rights and wrongs associated with modern ‘information societies’.
To a considerable extent, the rights and wrongs under consideration arise from the disruptive effects of new technologies—particularly ICTs, biotechnologies and, now, machine learning and AI. Specifically, it is the disruptive effect on informational expectations and understandings that provokes the questions that lie at the core of the project.
There are three primary contexts for this disruption.
First, modern developments in biotechnologies (particularly in human genetics), often in conjunction with developments in both ICTs and neuro-imaging, have disrupted traditional expectations concerning informational privacy and confidentiality and, in turn, provoked questions about the right to know and the right not to know. Such disruption and provocation are found in various domains, including: the family, reproduction, and children; blood, gamete, and organ donation; health research and biobank participation; clinical practice; and the marketing and consumption of (GM) food.
Secondly, the development of ICTs, the phenomenon of ‘big data’, and the recent acceleration of machine learning and AI have brought about a major disruption of traditional understandings of privacy while, at the same time, provoking the articulation of standards for the fair and proportionate processing of personal data. Indeed, the Charter of Fundamental Rights of the European Union (2000) explicitly recognises both the right to privacy (Article 7) and the right to protection of personal data (Article 8). In Europe, as in North America, there have been debates about the ‘offensive internet’ (where trolling and the like are facilitated by anonymity), about the right to be forgotten, and about the right to an explanation (where automated processes make decisions that have significant negative impacts on an individual); and there will surely be debates about potential wrongs where profiles facilitated by machine learning result in discriminatory practices (whether in the form of dynamic pricing in the consumer marketplace or more intensive police surveillance). In this light, should individuals have a ‘right to obfuscation’, deliberately confusing their prospective profilers?
Thirdly, we live in an era of hybrid warfare, lawfare, and fake news. The reliance of nation states on critical ICT infrastructures suggests a range of wrongs that might be recognised as a matter of international law; and, at the same time, reliance on ICTs to conduct international relations raises questions about the rights and wrongs of one nation state using these technologies to interfere with the domestic politics (and democratic processes) of another nation state. Furthermore, for the sake of international security and respect for human rights, there are questions both about the responsibilities of nation states to allow their citizens to access online information and about the responsibilities of online intermediaries to monitor and control such information.
So much for the disruption of our informational expectations brought about by new technological developments. Beyond such matters, there are also many difficult informational questions, such as those raised by the claimed right to truth in post-conflict situations, the extent of ‘freedom of information’ rights, the identity of parties in certain kinds of legal proceedings, and so on.
Roger Brownsword is Professor of Law at Bournemouth University and King’s College London. His landmark research on legal theory, bioethics, and the regulation of technology is known worldwide.