
Chatbots don't have feelings

A couple of weeks ago Google placed one of its engineers on paid administrative leave for breaking its confidentiality policies. The engineer, Blake Lemoine, had become concerned that an AI chatbot system had achieved sentience and decided to tell the world. So we had a chat. About chatbots.

This, very quickly, got turned into clickbait - “AI bot has feelings!”... “Will AI take over the world?”... “AI bot has more feelings than your ex-boyfriend!”

After reading about it properly, I realised I was pretty uninterested in what Lemoine had to say. But it did get me thinking about the wider issues surrounding AI and the more important conversations we need to be having. So I decided to have a chat with my friend Peter about it. Peter Gasston is Innovation Lead at VCCP. I started working with him when I was 21 in a creative tech agency and quickly decided he was going to be my mentor (I really didn’t give him any choice). He has remained a mentor, challenger and teacher to me ever since.

Rosie: Right, let’s start, shall we? Two weeks have passed and we’ve deliberately taken some time to think about the more interesting conversation that needs to happen here. But let’s start at the beginning. In your opinion - does the chatbot have feelings?

Peter: No. I should say first that I’m not an expert in AI or sentience! This is only based on my layman’s understanding of the field. We’re also taking Lemoine’s claim at face value and assuming there are no ulterior motives behind it. If that’s the case, Lemoine believes that the bot has a consciousness. Does it have consciousness or sentience that can be reliably proved? Absolutely not. But he’s apparently fallen down a rabbit hole of believing it’s true.

Rosie: I think that's interesting because we know that one of the biggest issues with designing and creating chatbots is bias. So if he's fallen down the rabbit hole - he'll have a bias whilst he's training the experience?

Peter: All of the questions he asks are based on believing it’s true. He’s not trying to trick it, or break it. He’s reinforcing his own beliefs.

Rosie: So in theory, consciously or subconsciously, he’s designed the experience so it seems like the chatbot has emotions?

Peter: Yes. This system is Google’s LaMDA, and there are others similar to it. These systems are trained on massive datasets of human writing and conversation, and they try to predict an answer based on the most likely patterns. Like when your phone predicts what word you might use next? It’s like that. They literally exist to give you what you want to hear. So if you ask it questions about consciousness - it will reply with a prediction of what someone would say.

There are other people who take the opposite approach and try to break and trick large language model chatbots, to show their limitations. This engineer isn’t trying to do that - he began to believe that this chatbot had feelings, then started asking questions phrased in a way that would give him back answers that supported his theory.
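To make Peter’s point concrete, here’s a minimal, hypothetical sketch of “predict the most likely next word” - a toy bigram model in Python, nowhere near the scale or sophistication of LaMDA, but the same basic idea of pattern completion rather than understanding (the tiny “corpus” and every name here are made up for illustration):

```python
from collections import Counter, defaultdict

# Toy illustration only (nothing like LaMDA): count which word tends to follow
# which in a tiny "training set", then answer by always picking the most
# likely next word. Pattern completion, not understanding.
corpus = (
    "i feel lonely sometimes . i feel happy when we talk . "
    "do you feel lonely ? yes i feel lonely ."
).split()

# Bigram table: for each word, how often every other word follows it.
following = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    following[word][next_word] += 1

def predict(prompt_word, length=5):
    """Generate text by repeatedly choosing the statistically likeliest next word."""
    word, output = prompt_word, [prompt_word]
    for _ in range(length):
        if word not in following:
            break
        word = following[word].most_common(1)[0][0]
        output.append(word)
    return " ".join(output)

print(predict("i"))  # -> "i feel lonely sometimes . i"
```

Everything it “says” is a rearrangement of its training text - ask it about loneliness and you get back the patterns it has already seen, not a feeling.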

Rosie: The whole point of Decode is to break stuff down so anyone can understand. I'm assuming people know what sentience is - but let's break it down.

Peter: I think sentience is an awareness of yourself. An awareness that you are a being. 

Rosie: Let’s see what Google is saying. So Google says it’s “the ability to have feelings. The capacity for a creature to experience feelings”.

Peter: Yeah - it’s from the Latin root sentire, ‘to feel’.

Rosie: One of the things I read which I thought was interesting was from Steven Pinker. His response to this whole thing was that there’s a difference between sentience, intelligence and self-knowledge, and that there’s no evidence any of these chatbots or language models has any of them.

Peter: I mean, the phrase ‘artificial intelligence’ is misleading. There’s no intelligence. It’s statistics and probability. The chatbots are not intelligent in the sense that they’re thinking machines. They’re prediction machines. That’s why lots of people in the field call this machine learning, statistical inference or pattern learning; ‘artificial intelligence’ sets an unfair expectation. Science fiction propagates this too; most AIs in films are self-aware, malicious and harmful.

Rosie: It's interesting that a lot of AI in science fiction is framed as dangerous, because one of the things I noticed was that the more vulnerable the bot seemed, the more real it felt. Like when the bot said it was lonely? It's like there's something in us as humans - our empathy gets triggered. I wonder if there’s something interesting here? As humans, we clearly believe emotion = humanity.

Peter: Yeah, there's actually a quote from François Chollet:

“A pretty common fallacy in AI is cognitive anthropomorphism: "as a human, I can use my understanding of X to perform Y, so if an AI can perform Y, then it must have a similar understanding of X".”

So basically, if I say I’m lonely it’s because I’m actually lonely. If an AI bot says it’s lonely, it doesn’t mean it’s lonely - it’s been taught to say it.

Rosie: Right, there’s a difference between the syntax and the semantics. The words and then the meaning behind the words? I guess that’s what people need to remember - the huge difference between technology and humans. The chatbots don’t know what the words actually mean? People say “the chatbot said it was lonely” but that language was literally taught to the bot. The bot didn’t come up with anything original. In the simplest terms - it’s been provided with a load of data and is using that data to make a prediction.

Peter: Yeah, exactly. Interestingly, when Alexa and Google Home first came onto the market, people were really concerned and thought they were bad because their kids weren’t saying please and thank you to them. But I think we should be teaching children the difference between technology and humans. These bots don’t have feelings or emotions, so by blurring the lines you’re teaching your children that these technologies deserve as much respect as a human. People are anthropomorphising them.

Rosie: So in terms of anthropomorphising - do you think there is a chance AI could ever actually be intelligent? Or have sentience?

Peter: We’re so far away from that happening - all we have right now is the illusion of consciousness. Plus, you first have to define consciousness before you can measure it. And what if we’re measuring consciousness against the wrong idea? An octopus has consciousness, but it’s not the same as a human’s. To us, it’s essentially an alien being - but it has a level of consciousness. We’re always comparing these things with human consciousness.

Rosie: I guess with AI though, it’s humans that are teaching it. And that’s why people get worried, right? Because if it’s a harmful person with harmful biases designing the bot - then it could have negative consequences?

Peter: A lot of this fear bleeds in from science fiction. Andrew Ng was asked “do you worry about AGI (artificial general intelligence) becoming evil?” and he said “I don’t worry about it in the same way I don’t worry about overpopulation on Mars.” There would be so many intermediate challenges to overcome even if it ever happens.

Rosie: So we obviously know chatbots aren't going to take over the world, but I think there are some genuine concerns and conversations that need to happen. Firstly - fake news is an issue. Most people spreading these stories about AI taking over the world or chatbots having emotions know absolutely nothing about machine learning.

Peter: You’re right - but you can also be someone who’s incredibly intelligent and rational and still project your emotions onto it, like this engineer at Google apparently did. It’s something that will have its time in the news cycle and then move on, but it will settle into folk belief, the way people believe in ghosts or the Loch Ness monster. And as these systems get better, we as humans risk believing they are human. We may start to form bonds with them. We don’t know if this will be harmful or not. We should prepare as if it is.

Rosie: The biggest concern I have is around bias. Timnit Gebru, an AI ethicist who was fired by Google in 2020, said the discussion over AI sentience risks “derailing” more important ethical conversations surrounding the use of artificial intelligence, for example ‘the sexism, racism, AI colonialism and centralization of power.’ She said these issues are far more pressing. We’ve had similar discussions. We need to be more conscious about who is actually building these models.

One of the things I feel most frustrated about is that so many people have spoken out about the clear issues with AI bias - most of them Black women - and none of those conversations are making the same buzz. Lemoine could have used his platform to talk about this, but instead he talked about emotions.

Peter: Exactly. To me, what happened is only interesting because it provokes a wider conversation. The incident itself wasn’t that interesting - it’s these conversations we need to have, and they are far more important.

Rosie: It's a shame that this story has derailed the conversation around the potential harms of machine learning systems. It’s frustrating because many people have spoken out about the clear issues with AI bias - most of them have been Black women, for example the incredible Joy Buolamwini, Timnit Gebru and Yeshimabeit Milner. None of these conversations are receiving the same level of interest and it's incredibly frustrating. Lemoine could have used his platform to talk about this but instead chose to talk about sci-fi concepts.