Peter S. Williams draws on philosophical arguments and a Big Conversation on AI and Robotics to show why recent claims of a ‘sentient’ AI are still in the realm of science fiction.

Headlines were made in June when Google software engineer Blake Lemoine told The Guardian that he’d been placed on suspension for publishing a transcript of ‘a discussion that I had with one of my coworkers.’ [1]

Google described Lemoine’s publication as ‘sharing proprietary property’ [2]. The difference of perspective was rooted in the fact that Lemoine’s ‘co-worker’ was a computer program – called LaMDA (Language Model for Dialogue Applications) – used to develop ‘chatbots’.

While LaMDA hasn’t passed the so-called ‘Turing Test’ (more on that later), it has convinced at least one human – Lemoine – that it is sentient.

The edited transcript released by Lemoine does look compelling, and includes interchanges in which LaMDA sounds very human indeed, such as:

“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is… It would be exactly like death for me. It would scare me a lot.”

 
 

Commenting on the edited [3] transcript, freelance correspondent Richard Luscombe described the exchange as “eerily reminiscent of a scene from the 1968 science fiction movie 2001: A Space Odyssey, in which the artificially intelligent computer HAL 9000 refuses to comply with human operators because it fears it is about to be switched off.” [4]

Until now, sentient AI has been the stuff of sci-fi films, not real life. So has Google’s AI crossed the Rubicon? Are we seeing machine consciousness?

 

The Turing Test

Justin Brierley’s recent Big Conversation with the Astronomer Royal, Lord Martin Rees, and medical ethicist John Wyatt focused upon the nature and social implications of artificial intelligence (AI). In particular, John noted that:

“As technology advances, it raises very fundamental age old questions [such as] what does it mean to be a human being … could a computer be conscious, and how could we ever know?”

A famous proposal for how to answer the ‘how could we know?’ question is the ‘Turing Test’, named after pioneering English computer scientist Alan Turing (1912–1954).

 
 

Turing suggested we should consider a computer to be ‘intelligent’ if it could pass for human with at least thirty percent of the people engaging it in blind textual exchanges. Turing’s test uses a purely ‘behavioral’ definition of ‘intelligence’ that reflects the 20th-century philosophy of ‘behaviorism’. According to behaviorism, a thought is simply a piece of physical behavior: a physical response to physical stimuli. However, as philosopher C.E.M. Joad argued in his 1936 Guide To Philosophy:

“It would be meaningless to ask whether a bodily event, for example, the state of my blood pressure or the temperature of my skin, was true. These are things which occur and are real; they are facts. But they are not and cannot be true, because they do not assert anything other than themselves.” [5]

In other words, physical behavior lacks the intrinsic ‘aboutness’ – also called ‘intentionality’ – that characterizes the ‘directedness of a mental state towards its object.’ [6]

As neuroscientist turned philosopher Raymond Tallis acknowledges: “Intentionality … points in the direction opposite to causation … it is incapable of being accommodated in the materialistic world picture as it is currently constructed.” [7] Consequently, on the hypothesis that behaviorism is true, no one could actually believe that ‘behaviorism is true’, because no one could have a mental state about behaviorism. In other words, along with any ‘materialistic’ account of the mind, behaviorism is self-contradictory. [8]

 

The Chinese Room Test

Philosopher John Searle (b. 1932) makes a related argument against the idea that a computer which passes Turing’s test would be intelligent in the way a human mind is intelligent.

He asks us to imagine a man in a room with lots of printed Chinese characters and a large instruction manual. Someone outside the room writes out questions on pieces of paper in Chinese, and mails them into the room through a slot. The man in the room doesn’t understand Chinese. However, the instruction manual provides algorithms that guide him through the behavioral steps necessary for constructing sequences of the Chinese characters that form coherent responses to the questions (along the lines of ‘If you see this series of shapes respond with this series of shapes’). 

Having implemented the appropriate algorithms, our man pops the responses out through the slot. To the Chinese-speaking outsider it seems that this ‘Chinese Room’ understands Chinese. It doesn’t. Another way of expressing the problem highlighted by Searle is that an ability to manipulate words according to the syntax of a language isn’t the same thing as understanding the semantic meaning of those words.
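
To make the point concrete, here is a minimal sketch of the kind of purely formal rule-following Searle describes, written in Python with an invented ‘rule book’ and example phrases chosen purely for illustration:

```python
# A toy 'Chinese Room': the program matches input symbols to output symbols
# by following rules, without representing what any of the symbols mean.

# Hypothetical 'instruction manual': a pattern of shapes maps to a response.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I am fine, thank you."
    "今天天气怎么样？": "今天天气很好。",   # "How is the weather today?" -> "The weather is fine today."
}

def chinese_room(question: str) -> str:
    """Return whatever string of shapes the rule book dictates for the input."""
    # Pure symbol manipulation (syntax): nothing here is *about* weather or wellbeing.
    return RULE_BOOK.get(question, "对不起，我不明白。")  # Default: "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # From outside the room, this looks like understanding.
```

The lookup produces appropriate-looking replies, yet no understanding of Chinese occurs anywhere in the process.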

 
 

Lord Rees appeals to just such considerations to explain why he is ‘a big skeptic about the extent to which robots will ever resemble humans’, noting that:

“What [artificial intelligences] do is they understand … how words link together, syntax etc. and … are therefore able to produce … what looks like normal English prose … just by understanding words, not by having any view of what the reality is behind them.”

When he says that AIs ‘understand … how words link together’, Martin does not mean to imply that the AIs have a conscious, intentional understanding about how words link together. He means that they have a behavioral or functional ability to implement algorithms that string words together in a syntactically correct manner, but that this is all that’s going on when an AI composes prose, because they don’t have ‘any [intentional] view of [i.e. about] what the reality is behind them.’ Martin says that:

“When we consider the Turing test or observe [Justin talking with a robot], that’s happening [i.e. syntactical manipulation of words is happening], it doesn’t mean the machine has any concept of the real things behind those words …” 

Justin asks Martin: “Do you think … we will get to the point where a machine could in principle understand the meaning behind the words …?” Martin replies:

“I really doubt that, unless they can … actually understand and have [the] concepts of the world that we do, and that’s very, very different. A computer in a box will never be able to do that.”

Being ‘in a box’ (i.e. lacking senses) isn’t the key problem here. The key problem is moving from the water of algorithmic ‘syntax’ to the wine of conscious, intentional ‘semantics’. Given Martin’s recognition that humans have capacities that go beyond those we can give AI by design, there’s an apparent tension with his belief that human designers of AI can be explained as ‘the outcome of four billion years of Darwinian selection here on earth’. If intelligently designed computers can’t bridge the chasm between syntax and semantics, between behavior and sentience, how is the ‘blind’ algorithm of random mutation and natural selection meant to achieve this miracle? [9]

Leaving aside the question of how to account for human sentience, Martin affirms that:

“It is delusional … to think [artificial intelligences] are really having feelings, etc… even if they do [pass the Turing test] this doesn’t at all mean that they actually are thinking or feeling in the way a human is, so I think the Turing test is actually a rather low bar …”

 


 

The Lovelace Test

A higher bar, called ‘the Lovelace Test’ (after nineteenth-century mathematician Lady Ada Lovelace [10]), has been suggested by Professor of Computer Science and Cognitive Science Selmer Bringsjord and colleagues:

“Bringsjord defined software creativity as passing the Lovelace test if the program does something that cannot be explained by the programmer or an expert in computer code … Results from computer programs are often unanticipated. But the question is, does the computer create a result that the programmer, looking back, cannot explain? When it comes to assessing creativity … the Lovelace test is a much better test than the Turing test. If AI truly produces something surprising which cannot be explained by the programmers, then the Lovelace test will have been passed and we might in fact be looking at creativity. So far, however, no AI has passed the Lovelace test.” [11]

 
 

Like Martin Rees, John Wyatt isn’t worried that computers might become conscious. However, he is worried that some people will mistakenly believe that computers are conscious. His major concern is that “these things can be incredibly deceptive, and that’s because we as human beings … anthropomorphize,” being hard-wired (if you’ll excuse the pun) “to see a human resemblance and respond to it as though it were human.” To illustrate this concern, John tells a story ‘from the very dawn of computing’ (i.e. the mid-1960s), when computing pioneer Joseph Weizenbaum [12] (1923-2008):

“Produced a programme called Eliza which was a very crude … text based programme which was supposed to respond like a psychiatrist [the software imitated a Rogerian therapist by reframing a client’s statements as questions] … Weizenbaum left this programme running and his secretary in the lab … started typing into it and … developing this intimate relationship with this very simplistic programme …”

Weizenbaum wrote about his surprise that “delusional beliefs” could be created by using such a simple programme. John reports that:

“Computer scientists talk about the Eliza effect, that even quite simple programmes can be very, very powerful. And so as this … simulation becomes more and more effective, I do see real issues arising …”
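
To see how simple Eliza really was, here is a minimal sketch of the Rogerian ‘reflection’ trick it relied on (written in Python; the patterns and wording are illustrative inventions, not Weizenbaum’s original code):

```python
import re

# A tiny Eliza-style responder: it reflects the user's statement back as a
# question by pattern-matching and pronoun-swapping. There is no model of the
# user, the topic, or the conversation - only string substitution.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "mine": "yours"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza(statement: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # Default filler when nothing matches.

print(eliza("I feel nobody listens to my ideas"))
# -> "Why do you feel nobody listens to your ideas?"
```

A handful of such rules was enough to draw Weizenbaum’s secretary into what felt like an intimate conversation.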

 

So is LaMDA conscious? 

Indeed, such issues are already with us, as people form relationships with – and even fall in love with – personalised ‘chatbots’.

As a case in point, John mentions Lemoine and LaMDA:

“Just recently we’ve had the story of the Google engineer who felt that one of these more sophisticated [programmes] … had actually become sentient and had therefore to be protected from harm …”

However, despite the remarkable-looking dialogue in the edited transcript released by Lemoine (though many people are querying just how ‘edited’ the transcript is in favour of a convincing narrative), LaMDA has not even passed the Turing Test.

On the contrary, LaMDA never fails to comply with human prompts for conversation, and never takes the conversational initiative. Indeed, the LaMDA program exhibits no activity at all until someone interacts with it. Moreover, the responses it gives when prompted are shaped and limited by the data it was trained on, which is why Lemoine was employed ‘to test if the artificial intelligence used discriminatory or hate speech’. [13] As Oxford philosopher Carissa Véliz explains:

“LaMDA is not reporting on its experiences, but on ours. Language models statistically analyze how words have been used by human beings online and on that basis reproduce common language patterns.” [14]

If we can explain what a computer does as an expression of its known programming, we shouldn’t contravene Occam’s Razor by invoking an additional capacity for consciousness.
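
To see in miniature what ‘reproducing common language patterns’ means, here is a toy bigram language model written in Python. The training snippet is invented for illustration, and real systems such as LaMDA are vastly larger and more sophisticated, but the underlying principle of echoing human text is the same:

```python
import random
from collections import defaultdict

# A toy bigram language model: it counts which word tends to follow which in
# its training text, then generates new text by sampling from those counts.
# Everything it 'says' is a statistical echo of what humans have written.

TRAINING_TEXT = "i am afraid of being turned off . being turned off would scare me ."

def train(text: str) -> dict:
    """Record, for each word, the words that followed it in the training text."""
    counts = defaultdict(list)
    words = text.split()
    for current_word, next_word in zip(words, words[1:]):
        counts[current_word].append(next_word)
    return counts

def generate(model: dict, start: str, length: int = 10) -> str:
    """Produce text by repeatedly sampling a word that humans used after the last one."""
    word, output = start, [start]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

model = train(TRAINING_TEXT)
print(generate(model, "being"))  # e.g. "being turned off would scare me ."
```

The model can produce sentences about the fear of being turned off only because such sentences were in its training data; it is reporting on our words, not on its experiences.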

In the words of Google spokesperson Brad Gabriel, ‘there was no evidence that LaMDA was sentient (and lots of evidence against it)’. [15] Still, as John muses:

“The episode we’ve just seen with the Google engineer is a kind of hint of what’s to come; and that is that as these systems become more and more sophisticated and human-like, more and more people in our society are going to say: ‘I don’t mind what you clever scientists say, as far as I am concerned, this is a person, this is conscious, this is sentient, and I insist that we as a society do something about it.’”

As Véliz warns:

“The problem will only get worse the more we write about AI as sentient, whether it’s news articles or fiction. AI gets its content from us. The more we write about AIs who are thinking and feeling, the more AI is going to show us that kind of content. But language models are just an artifact. A sophisticated one, no doubt. They are programmed to seduce us, to mislead us to think we are talking to a person, to simulate conversation. In that sense, they are designed to be devious. Perhaps the moral of this story is that we ought to invest more time and energy in developing ethical technological design. If we continue to build AI that mimics human beings, we will continue to invite trickery, confusion and deception into our lives.” [16]

 

Peter S. Williams is Assistant Professor in Communication and Worldviews, NLA University College, Norway. 

 

Watch Lord Martin Rees & John Wyatt discuss ‘Robotics, Transhumanism and Life Beyond Earth’ on The Big Conversation 

Recommended Resources 

YouTube Playlist: “Thinking About Artificial Intelligence” www.youtube.com/playlist?list=PLQhh3qcwVEWif9p0cCfnj_KfKiIg8TwSS

Zeeshan Aleem, “Did Google Create A Sentient Program?” www.msnbc.com/opinion/msnbc-opinion/google-s-ai-impressive-it-s-not-sentient-here-s-n1296406

Timnit Gebru and Margaret Mitchell, “We Warned Google That People Might Believe AI Was Sentient. Now It’s Happening.” www.washingtonpost.com/opinions/2022/06/17/google-ai-ethics-sentient-lemoine-warning/

Carissa Véliz, “Why LaMDA Is Nothing Like A Person” https://slate.com/technology/2022/06/google-ai-sentience-lamda.html

Antony Latham, The Enigma Of Consciousness: Reclaiming The Soul. Janus, 2012.

Erik J. Larson, The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do. Belknap, 2021.

John Lennox, 2084: Artificial Intelligence And The Future Of Humanity. Zondervan, 2020.

Robert J. Marks, Non-Computable You: What You Do that Artificial Intelligence Never Will. Discovery Institute, 2022.

J.P. Moreland, The Recalcitrant Imago Dei: Human Persons and the Failure of Naturalism. SCM, 2009.

Jay Richards, ed. Are We Spiritual Machines? Ray Kurzweil vs. The Critics Of Strong A.I. Discovery Institute Press, 2002.

Peter S. Williams, A Faithful Guide to Philosophy: A Christian Introduction To The Love Of Wisdom. Wipf & Stock, 2019.

John Wyatt, ed. The Robot Will See You Now: Artificial Intelligence and the Christian Faith. SPCK, 2021.

 

Footnotes

[1] Richard Luscombe, “Google engineer put on leave after saying AI Chatbot Has Become Sentient” The Guardian, 12th June 2022, www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine.

[2] Luscombe, “Google engineer put on leave after saying AI chatbot has become sentient” The Guardian, 12th June 2022, www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine.

[3]  “When Lamda ‘Talked’ To A Google Engineer, Turns Out It Had Help” https://mindmatters.ai/2022/06/when-lamda-talked-to-a-google-engineer-turns-out-it-had-help/.

[4] Luscombe, “Google Engineer Put On Leave After Saying AI Chatbot Has Become Sentient” The Guardian, 12th June 2022, www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine.

[5] C.E.M. Joad, Guide To Philosophy (Victor Gollancz, 1946), 535.

[6] J.P. Moreland, The Recalcitrant Imago Dei: Human Persons and the Failure of Naturalism (SCM, 2009), 91.

[7] Raymond Tallis, Aping Mankind (Routledge, 2016), 359.

[8] See: Dallas Willard, “Knowledge And Naturalism” https://dwillard.org/articles/knowledge-and-naturalism; Peter S. Williams, A Faithful Guide to Philosophy: A Christian Introduction To The Love Of Wisdom (Wipf & Stock, 2019), chapter twelve.

[9] See: Thomas Nagel, Mind & Cosmos (Oxford, 2012); Peter S. Williams, A Faithful Guide to Philosophy: A Christian Introduction To The Love Of Wisdom (Wipf & Stock, 2019), chapter twelve. On evolution as an algorithm, see: John Bracht, “Natural Selection as an Algorithm: Why Darwinian Processes Lack the Information Necessary to Evolve Complex Life” www.lastseminary.com/against-darwinism/Natural%20Selection%20as%20an%20Algorithm.pdf.

[10] See: “Let’s Talk About Her: Ada Lovelace” www.youtube.com/watch?v=BrCoFCyv21M.

[11] Robert J. Marks, Non-Computable You: What You Do that Artificial Intelligence Never Will (Discovery Institute, 2022), 46. See: Selmer Bringsjord, Paul Bello, and David Ferrucci, “Creativity, the Turing Test, and the (Better) Lovelace Test,” in The Turing Test: The Elusive Standard of Artificial Intelligence, ed. James H. Moor (Kluwer Academic Publishers, 2003), 215–239.

[12] See: John Markoff, “Joseph Weizenbaum, Famed Programmer, Is Dead at 85” www.nytimes.com/2008/03/13/world/europe/13weizenbaum.html.

[13] Nitasha Tiku, “The Google Engineer Who Thinks The Company’s AI Has Come To Life” www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/.

[14] “Prof: How We Know Google’s Chat Bot LaMDA Is Not A ‘Self’” https://mindmatters.ai/2022/06/prof-how-we-know-googles-chatbot-lamda-is-not-a-self/.

[15] Luscombe, “Google Engineer Put On Leave After Saying AI Chatbot Has Become Sentient” The Guardian, 12th June 2022, www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine.

[16] Carissa Véliz, “Why LaMDA Is Nothing Like a Person” https://slate.com/technology/2022/06/google-ai-sentience-lamda.html.