Dr Eve Poole considers whether we must learn to be responsible for the souls we are making, now that we are creating artificial intelligence and robots in our own image

It’s the hard problem, they say. We don’t know what consciousness is, they say. But AI might already be conscious, they say. I notice that in the conversation about AI, consciousness has become the deal-breaker. Once AI gets it? Game over. But I think consciousness is a red herring, though the misdirection is a very good way to keep people busy.

What else is conscious? Bats, famously. In the case of animals, their consciousness gives them sentience, so we are increasingly spreading the net of rights ever further over the natural world, as our understanding of the concept of felt harms expands. But sentience is not the reason we gave corporations rights. It would not be particularly meaningful to describe a major brand as conscious, except in a metaphorical way. We gave corporations rights so that we might control them better. If they have legal personality, we can sue them. But with AI we have an awkward hybrid, a new thing that is not an animal, and not a corporation.

 


 

Does it matter?

So is consciousness helpful as a metric, or not? Would it matter if AI were conscious? Only if it had no rights, because then we might abuse it. The only other way in which consciousness matters is for human exceptionalism: we are at the apex of the animal kingdom, and regard our own consciousness as its apotheosis. Or perhaps it comes from some kind of proto-religious ghost memory, because we used to think that only God could bestow this gift. In that kind of worldview, nothing manufactured could have a soul, by definition. Is our fascination with AI and consciousness really a frisson at the thought that we have played God, and finally pulled it off?

I think it likely that AI will develop something akin to consciousness, in that it will have a felt sense of its own subjective experience. This will not make it human. Neither is a bat human, yet a bat seems to us to be conscious. That it is organic and not manufactured gives its consciousness a family resemblance to ours; we have never imagined that a toaster might have feelings.

But is that because there is an axiomatic distinction between something created by nature and something created by persons? Categorically, of course there is. But if you then want to argue that consciousness can only ever be a property of something natural, we have just smuggled God back in again, because that sounds like an argument about the sanctity of creation, or possibly about the properties of organic matter, which we can already grow artificially in the lab.

So either consciousness is just about processing, in which case AI will get it; or it’s about God, and AI won’t. We can argue about that until the cows come home. Or until AI sneaks up behind us while we’re busy philosophising. 

 
 

Free will

The deal-breaker is really free will. I know that’s a contested term. In this instance I mean an ability to self-determine, to exercise agency and to make decisions. Again, while the cows are still out, we could argue about how ‘free’ anyone is. 

Let’s assume, formally, that we exist (a belief in realism, which is harder to prove than you might imagine). Let’s also assume human self-determination, as enshrined in international law, which holds that we are not, generally speaking, pre-programmed; indeed, attempts to programme us would violate our human rights. Thus, anything that exists and can self-determine has free will.

Whether or not it consciously self-determines is neither here nor there, except as a matter of law, were AI rights to enter the jurisprudence of moral retribution, as opposed to notions of restorative or distributive justice for the better ordering of society (which may, of course, also include excluding wrongdoers from that society).

So could AI – being by definition pre-programmed – ever develop free will? Where are we on that? Well, it’s unclear, as so little is in the public domain. But from what has been published it is clear that it’s already started. Some AIs, like Hod Lipson’s four-legged walking robot, have been given minimal programming and encouraged to ‘self-teach’ so they make their own decisions about how to learn. 

In robotics, this is a vital step on the journey towards self-replication, so that machines can self-diagnose and repair themselves in remote locations. For large language models like ChatGPT, the design for a self-programming AI has been validated, using a code-generation model that can modify its own source code to improve its performance and program other models to perform tasks.

An ability to make autonomous decisions, and to reprogram itself? That sounds enough like human free will to me to spell risk. And it is this risk, that autonomous AIs might make decisions we don’t like, that gives rise to sci-fi-fuelled consternation about alignment with human values and interests. This spectre is why there is emerging global alarm about the control problem.

 


 

Robots need rights

And this is why robots need rights. Not because they can feel (that may come, and change the debate yet again) but because, now, as with corporations, we need a way to hold them to account. But we would not need to indulge in such frenzied regulatory whack-a-mole if we had taken more time to get the design right in the first place. And that is why I am arguing for a pause: not primarily to allow regulation to catch up, but to buy time for us to develop and retrofit some of the guardrails that have so far kept the human species broadly on track by protecting us from our own free will. Yes, all that human junk code.


 

Dr Eve Poole OBE is interim chief executive of the Carnegie Trust for the Universities of Scotland. She has a BA from Durham, an MBA from Edinburgh, and a PhD in theology and capitalism from Cambridge. She has written several books, including Robot Souls and Leadersmithing, which was Highly Commended in the 2018 Business Book Awards. She was interim CEO at the Royal Society of Edinburgh (2022), Third Church Estates Commissioner (2018-2021) and Chairman of the Board of Governors at Gordonstoun (2015-2021). Previously she taught leadership at Ashridge Business School, following earlier careers at the Church Commissioners and Deloitte Consulting. She debates Beth Singler on this edition of Unbelievable

 

This article was originally published here