Deep-fakes, Turing Tests and chatbots - AI systems are becoming ever more human-like. Robotics expert Prof Nigel Crook says we need to make sure they are moral too.

I thoroughly enjoyed watching the recent episode of Premier’s Big Conversation with guests Astronomer Royal Lord Martin Rees and bioethicist Dr John Wyatt, with the conversation masterfully choreographed by Justin Brierley. The episode consists of more than an hour of insightful and thought-provoking discussion that addresses some of the biggest questions humanity faces today, ranging from robots looking after the elderly to questions concerning the sentience of AI algorithms and transhumanism. There was even a clip of a conversation I arranged between Justin and a cute robot called ‘Nao’.

I would like to expand on three themes that emerged from the discussion: the need for robots to have a degree of moral competence, the much debated ‘singularity’, and questions concerning human authenticity that are provoked by the emergence of deep-fake technologies.


Moral Machines

At one point in the conversation Lord Rees asserted that AI algorithms and robots need to take into account the moral consequences of the decisions we delegate to them. He made reference to Autonomous Vehicles and the often-cited ‘Trolley Problem’, an ethical dilemma originally devised by philosophers to highlight the differences between classical approaches to moral thought. It is frequently used to illustrate the fact that some of the decisions of an Autonomous Vehicle may have serious ethical consequences: if a collision is unavoidable, for example, the vehicle’s software must in effect decide whose safety to prioritise.

Lord Rees rightly pointed out that we are delegating an increasing number of morally charged decisions to AI-powered algorithms across a wide range of application areas that can have a real impact on people’s lives. This is a subject that I am passionate about and that has become the focus of my academic AI research in recent years. So much so that I am about to publish a book on the subject entitled ‘Rise of the Moral Machines: Exploring Virtue Through a Robot’s Eyes’ (IngramSpark, forthcoming – October 2022).

Some of the recent breath-taking advances in AI have led to serious ethical concerns over the potential harms that this technology could cause to individuals, organisations and society at large. The root cause of these concerns is that these decision-making algorithms are morally naïve. They have no concept of right or wrong and no capacity to recognise the good or harm that their ‘autonomous’ decisions and actions might do to the people they interact with.


My book describes the emergence of a new science called ‘moral machines’ that seeks to rectify this by equipping AI algorithms and robots with a capacity to perceive and respond to the ethical consequences of their choices and actions. I cite three primary drivers for the development of so-called ‘moral machines’:

  • increasing robot autonomy, 

  • the increasing integration of robots in society (‘social embedding’), 

  • the increasing human likeness of robots and AI algorithms. 

The term ‘robot autonomy’ refers to a machine’s ability to make decisions independently, based on the data available to it, without direct human intervention. The more decisions we delegate to AI-powered systems, the more likely it is that these decisions will have moral consequences. This becomes even more likely when those systems are embedded in social contexts like care homes or schools where they interact with people. These robots need to know what counts as acceptable and unacceptable behaviour around people, especially vulnerable people.

In the video, Dr Wyatt pointed out that people have a natural tendency to project human-like capacities and qualities onto objects of various kinds, including robots. This tendency to anthropomorphise is accentuated when those robots look and sound and move like humans. It is natural to expect such a robot to possess human-like capacities, including the capacity to know and abide by social and moral norms.


Technological Singularity

I wholeheartedly agree with Dr Wyatt’s comment that the idea of a singularity, the point at which machines become more intelligent than humans, is pure science fiction and not a genuine risk. 

The concept of a singularity was first applied to technological progress by the father of the modern computer, John von Neumann. He predicted that there would come a point in the progressive advancement of technology after which human ‘…affairs, as we know them, could not continue.’ This idea came to be known as the technological singularity, a term that a number of authors have since used to describe how superintelligence would ultimately bring about the end of the human race.


What is the basis for these alarming predictions and what evidence do these prophets of doom have for spreading them? There are two underlying aspects which, on the face of it, make these extreme predictions seem plausible: increasing computational capacity, and the ability of AI systems to self-learn. Both of these are major topics that I don’t have the space to do justice to here (I address them more fully in my book). But I would like to give you a feel for the kind of issues involved.

One of the issues that comes up frequently in the singularity debate is that artificial neural networks, the technology that enables machines to learn and that has led to the recent astronomical success of AI, will soon exceed the cognitive capacity of the human brain, which is evolving at a much slower rate. Artificial neural networks are, after all, inspired by the brain, consisting of networks of simple neuron-like units with weighted connections between them. The problem I have with this comparison is that the human brain is vastly more complex than its artificial counterpart. Furthermore, it works in a fundamentally different way, and, I believe, has an infinite capacity for unique thoughts and cognitive activity.
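
To give a concrete sense of how simple these ‘neuron-like units’ are, here is a minimal sketch in Python (my own illustration, not taken from the episode or the book): a single artificial neuron just computes a weighted sum of its inputs and squashes the result with an activation function.

import math

def artificial_neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias term.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # A sigmoid 'activation' squashes the result into the range 0 to 1.
    return 1.0 / (1.0 + math.exp(-total))

# Three inputs, three connection weights, one bias - that is all a single unit 'knows'.
print(artificial_neuron([0.5, 0.1, 0.9], [0.8, -0.4, 0.3], bias=0.1))

Everything a unit ‘knows’ is held in a handful of numbers; a modern network is simply millions of these wired together and tuned during training, which is why the comparison with the vastly richer structure and dynamics of biological neurons is so strained.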

Many people are worried about the ability of AI systems to learn and improve themselves, believing that this will inevitably lead to the much-feared technological singularity. In the video, Lord Rees mentioned ‘AlphaGo’ and its ability to master the game of Go in a very short period by playing against itself (‘self-learning’). But it is important to recognise that AlphaGo was an algorithm that had been designed to optimise a particular mathematical function based on the ‘feedback’ it received. The algorithm had no idea that its operations and ‘decisions’ were controlling a game called Go. Neither did it have any understanding of its human opponent Lee Sedol.
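
It may help to see just how blind this kind of feedback-driven optimisation is. AlphaGo’s actual training combined deep neural networks with reinforcement learning and tree search, which I will not attempt to reproduce here; the toy loop below (my own illustration, with made-up payoff numbers) shows only the general principle: the algorithm nudges a table of numbers towards whatever earns more reward, with no notion of boards, stones, opponents or ‘Go’.

import random

# A toy 'self-learning' loop: the algorithm sees nothing but numbers.
values = [0.0, 0.0, 0.0]          # estimated value of three abstract moves
counts = [0, 0, 0]
hidden_payoff = [0.2, 0.5, 0.8]   # the 'game', invisible to the learner

for step in range(10000):
    if random.random() < 0.1:                               # occasionally explore
        move = random.randrange(3)
    else:                                                    # otherwise pick the best estimate
        move = max(range(3), key=lambda m: values[m])
    reward = 1.0 if random.random() < hidden_payoff[move] else 0.0
    counts[move] += 1
    values[move] += (reward - values[move]) / counts[move]  # running average of the feedback

print(values)  # the estimates drift towards the hidden payoffs

Replace the hidden payoffs with the win/loss signal from millions of self-played games and you have, in caricature, what the headlines call ‘self-learning’: relentless optimisation of a mathematical function, nothing more.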

Impressive though this artificial intelligence is, it is a disconnected kind of intelligence: disconnected from the real world in which it operates. This aligns well with the point which Lord Rees emphatically made in the video in the context of AI systems that learn languages. The self-learning of AI algorithms is highly focussed on one task and one task alone, and is not sufficient in and of itself to lead to the much-feared technological singularity.

 

Authentically Human

In their Big Conversation, Dr Wyatt spoke about how truth and authenticity are critically important in the medical profession. He pointed out that if an elderly person thinks that a robot really cares for them, when actually it is just clever programming, then there is something inauthentic about this. I agree with this and would argue that it applies not just in medical applications but wherever AI or robotic systems are used to mimic humans in how they interact with people. These systems are sophisticated technological puppets whose strings are ultimately pulled by the robot’s human designers.


This puppetry, however, is becoming ever more realistic, with AI systems that are capable of breath-taking realism in mimicking human behaviour and looks. In one of the clips of Justin Brierley’s interview with me, he asked whether AI systems would ever pass the so-called Turing Test. This test was devised by the English mathematician and computer scientist Alan Turing and published in 1950 in his paper entitled ‘Computing Machinery and Intelligence’. He called the test the ‘Imitation Game’ and designed it to evaluate whether or not an intelligent machine was indistinguishable from a human being.

In the most common form of the test, a human interrogator communicates with two other players through a text-based system (most likely a ‘teletype’ in Turing’s day). One of the other players is a human and the other is a computer. The interrogator can ask questions of either player, and uses their answers to decide which of the two is the human and which is the computer. If the machine is intelligent enough, it should be indistinguishable from the human in the manner in which it answers the questions, and it is deemed to have passed the test.
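
For readers who like to see the structure spelled out, here is a minimal sketch of the imitation game in Python (purely illustrative; the machine’s reply is a canned stand-in, not a real chatbot): the interrogator puts the same question to two hidden players and must decide from the replies alone which one is the machine.

import random

def human_player(question):
    # In a real test this reply would be typed by a person at another terminal.
    return input(f"(hidden human, please answer) {question} > ")

def machine_player(question):
    # Stand-in for a chatbot; a real contestant would generate its own reply.
    return "That is an interesting question. What do you think?"

def imitation_game(question):
    # The interrogator never learns which channel is which.
    players = [human_player, machine_player]
    random.shuffle(players)
    for label, play in zip(("Player A", "Player B"), players):
        print(f"{label}: {play(question)}")
    # The interrogator must now guess, from the text alone,
    # which of Player A and Player B is the machine.

imitation_game("What do you enjoy most about a rainy afternoon?")

If, over many such exchanges, the interrogator’s guesses are no better than chance, the machine is judged to have passed.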

Since 1991, the Turing Test format has been used to award the annual Loebner Prize to the chatbot judged to be the most human-like. The chatbot Mitsuku has won the Loebner Prize five times in recent years, more than any other entrant. The chatbot represents an 18-year-old female from Leeds and has the ability to do some basic reasoning about what people say to it and the questions they typically ask.

If the technology behind Mitsuku were combined with some of the ultra-realistic humanoid robots that have recently been developed, then we would be heading towards some convincing deep-fake humans. At what point should we stop regarding them as machines and start seeing them as equivalent to human beings? If they look and act like humans, then to all intents and purposes they are human, aren’t they?

A colleague of mine once said to me: “If there is no objective test that can distinguish a robot from a human, then there is no difference between them”. That comment challenged me to the core. I firmly believe that there are fundamental differences between humans and even highly realistic replicants or ‘deep-fake humans’, but it is hard to articulate these under the conditions of an ‘objective test’. 

I have devoted the last few pages of my forthcoming book to responding to this issue. But to summarise briefly here, there are two responses: the first is to note that this argument makes what philosophers would regard as a fundamental category error, and the second is to re-frame the argument in the context of being authentically human.


The objective test that my colleague referred to is designed to reveal the difference between the behavioural or functional responses of two agents, one a robot, the other a human (in the same way that the Turing Test does). So the conclusion from any such test can only be that they are either functionally or behaviourally the same or different. In other words, the test supports a conclusion that is in the functional category. But to then infer from this that there is no difference between the human and the robot (i.e. that they are in the same category) is to draw an ontological conclusion. You cannot draw an ontological conclusion from a test designed to distinguish functional characteristics.[1]

The second response focuses on the content of the argument, and specifically on the requirement for an objective test. Objective testing is good, scientific thinking. Scientists make every effort to be objective in how they test their hypotheses. But the kind of objective tests being described here are focussed on what can be manipulated and what is observable in the physical universe. If we accept the view that human beings are spirits with bodies and that we occupy a joint, deeply integrated heaven-and-earth reality, then objective tests of this sort are going to have a hard time distinguishing the spiritual from the physical aspects of what it is to be human. As Dr Wyatt rightly pointed out in his concluding remarks, human authenticity comes from the fact that we are made in the image of God.

Justin Brierley’s conversation with Lord Rees and Dr Wyatt is very timely and provides much food for thought across the wide range of issues raised by recent advances in AI and robotics technology. I am very grateful for the opportunity to contribute to this ongoing discussion and hope that what I’ve written here is helpful in untangling some of the most challenging questions humanity faces today.

 

Watch the full episode here.

 

Nigel Crook is Professor of AI and Robotics, Founding Director of the Institute for Ethical AI and Associate Dean Research and Knowledge Exchange at Oxford Brookes University. He is the author of the soon-to-be published book ‘Rise of the Moral Machines: Exploring Virtue Through a Robot’s Eyes’.


ntcrook.com

 

[1] I am grateful to Nick Chatrath for pointing out this error in the argument.