Foreshadowing: #13AI
Dr. Argyro Karanasiou
On the 13th of July 1956 Dartmouth’s Summer Research Project on Artificial Intelligence was launched to “find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves”. The Dartmouth workshops are credited with having led to the “birth” of Artificial Intelligence, in the sense that they brought together scholars working in various related fields (cybernetics, automata theory, complex information processing) and provided them with a platform to discuss and share ideas. Several decades after the Dartmouth workshops, we are still faced with questions that require a multidisciplinary approach: the classic question “can machines think?” has now evolved into the more complex question of whether machines can “think for us”. To what extent can machines mimic human behaviour, fully undertake mundane tasks or even help us to predict court decisions?
A mari usque ad mare – A series of transatlantic dialogues
On the 13th of February 2017, I started my own journey, exploring these intricate matters by discussing them with a number of academics from different disciplines, countries and backgrounds. In what turned out to be a round of endless emails, phone calls, visits, and many sleepless nights spent working on discussion points, I managed to build a transatlantic, cross-disciplinary platform for conversation, which will hopefully serve as a point of reference for future AI-related debates. This report, submitted on the 13th of July 2017, has sought to explore the legal and ethical conundrums posed by the rapid technological advancements in the field of AI.

The name of the project, “13AI”, does not only refer to the day the Dartmouth workshops signalled the beginning of AI as we now know it; it was also chosen to remind us of the superstition with which this field has been approached in the past. When discussing emergent technologies from a purely theoretical point of view, there is a fine balance to be struck: one should be able to articulate an innovative argument whilst retaining credibility, given that this area has been a major theme in popular culture. Moreover, the tendency to disregard what cannot be easily understood, codified or deciphered in legal terms is another danger lurking in the debate. This explains the chosen title, “The 13 Laws of AI”, which in turn reveals the project’s objective: the Code of Hammurabi, one of the oldest known codified laws in history, lacks a 13th clause due to superstition; including this number in the title signals the need to address AI in terms of the apparent legal vacuums and ethical dilemmas posed by the use of algorithms.

In this quest, I have not been alone. On the contrary, during the course of this project I consider myself privileged to have discussed various aspects of AI with some of the brightest scholars at an international level: bioethicists, philosophers, sociologists, computer scientists, neuroscientists and legal scholars offered their “two cents” to the debate. What is more, each of them offered further questions, which I would then pass on to the next speaker, thereby maintaining interactivity and keeping the momentum going.
I will always treasure the discussions held with people whose work has been a personal inspiration in my research: Julie Cohen (Georgetown Law) offered an excellent account of the implications AI has for the rule of law and admittedly posed one of the most difficult questions towards the end of the podcast. In her own words, a “mind-blowingly hard” question, which travelled from Washington DC to London and was passed on to Andrew Murray (London School of Economics). Andrew, in turn, sought further advice from Arno Lodder (Vrije Universiteit Amsterdam) and then went on to discuss “hybrid” rights reserved for non-humans, a topical matter of utmost importance. When discussing liability and driverless vehicles with Michael Rustad (Suffolk University) and Thomas Koenig (Northeastern University), it became clear that harmonising laws for driverless vehicles will indeed be a major challenge, owing to certain normative and infrastructural parameters. The podcast recorded at Suffolk University in Boston is notable for its sophistication and vibrant discussion. Several weeks later and many miles away, Chris Reed (QMUL) addressed the matter further, and the ping pong of questions continued as his question involving the First Amendment travelled back to the US and to Joel Reidenberg (Fordham Law), one of the seminal figures in IT law research, credited with the concept of “Lex Informatica”.

A dialogue on the legal and ethical aspects of AI had finally started, and it was high time that disciplines other than law were also included: bioethicist Wendell Wallach (Yale University) offered his valuable expertise on normativity and ethics in AI and posed a thought-provoking question on morality that was then addressed back in the UK by Karen Yeung and Roger Brownsword (King’s College London). In turn, the latter offered an excellent analysis of how law and economics, when joined by the latest advancements in AI, pose significant challenges for the consumer. The consumer, however, is also a citizen, which prompted me to visit Bruno Latour’s Médialab at Sciences Po in Paris and discuss with Paul Girard how best to utilise big data to boost the concept of an “enlightened citizenry”.
Having discussed the complexity and opacity of the legislative process, I then joined Dimitris Tsarapatsanis (University of Sheffield) and Nick Aletras (University College London and Amazon Research Cambridge) to discuss their paper on using AI to predict ECtHR decisions. I was slowly entering more technical terrain and became fascinated by the prospect of understanding more about the so-called “black box” in AI. To this end, I visited Princeton University and Aylin Caliskan, whose paper on implicit bias in AI (co-authored with Bryson and Narayanan) had just been published in Science. Aylin effortlessly explained the modus operandi of machine learning and provided some much-needed context.
But there was one core discipline still absent from the discussion: neuroscience. For this reason my travels took me to University College London and the Wellcome Trust Centre for Neuroimaging, where I visited Karl Friston FRS, one of the key figures in neuroscience and an authority on brain imaging. This was a fascinating podcast, unravelling the mysteries of the brain and the advanced mathematical techniques that allow researchers to further their understanding. Across the Atlantic again, I followed up with Dimitris Pinotsis (Massachusetts Institute of Technology), who provided a thorough analysis of deep nets and connected all the dots thus far.

And as I was moving away from law and social sciences towards bioethics, computational neuroscience and mathematics, a discussion with philosophers seemed almost inevitable. First I visited Juliet Floyd at Boston University, an expert on early analytic philosophy, whose insights into logic, mathematics and science served as the perfect primer for understanding Turing’s seminal work in AI. From one philosopher I travelled to another, this time to explore the philosophy of information and information ethics: Luciano Floridi at the Oxford Internet Institute explored during our podcast whether the “right to explanation” holds any water. The latter argument was picked up by Ugo Pagallo (University of Turin), whose work on robotics and law has greatly influenced the field. Ugo followed up on the GDPR and drew an extremely interesting link between regulating robotics and morality. I had reserved one last spot for a scholar who could not be missing from the discussion: Mireille Hildebrandt (Vrije Universiteit Brussel) uncovered new modes of existence of the rule of law and highlighted the need to articulate legal norms into the relevant computing architectures.
I am truly indebted to all of these brilliant people for accepting my invitation to join this exciting project, generously sharing their views and, most importantly, showing a genuine interest in this area, which is hopefully reflected in full in the vibrant discussions recorded in the podcasts.
The Road Ahead
Artificial Intelligence has permeated our lives in an unprecedented way: driverless cars are now being tested in the US and the UK; ROSS, an algorithmic tool built on IBM’s Watson, supports legal practitioners; and virtual personal assistants, such as Apple’s Siri or Microsoft’s Cortana, are now widely used. This wide use of AI raises a number of legal issues, debated by academics, philosophers, ethicists, scientists, practitioners and technology industry players on both sides of the Atlantic. It is envisioned that the project will lay the foundation for an international platform for a transatlantic dialogue on AI and law, which is currently absent. As such, I consider this the first step of a much larger project that will materialise in time: by no means is the list of experts exhaustive, and it is certainly still missing the contributions of several scholars whom I hope to bring on board at a later stage. This is merely a “beta version”, which I hope to “update” frequently.
All raw material is now being edited and will soon be accessible online through several outlets. Again, in this task I am not alone: I have been joined by students who have shown great interest in becoming more involved and have volunteered their time in the final stages of this project to get everything up and running before it all goes live in mid-September (Elspeth Griffin). Moreover, I will use this material with postgraduate students at Bournemouth University on the course “Cyberlaw”, commencing this coming October: young minds are the future, and our task is to provide them with ample opportunities to voice well-informed views.
None of this would have been possible without the generous support of Santander, whose funding enabled me to visit several of the places noted above (primarily in the US and the UK) and record the podcasts. I am truly grateful for this opportunity and hope that I shall be able to fully capitalise and build on the outcomes of this project. Last, but certainly not least, a few good people at BU made my life a lot easier whilst administering this project, showing great professionalism in their support throughout: I wish to thank Sarah Olliffe, Laura Hampshaw, Josh Deerman and Tanesha Duff for always being quick, efficient and reliable.