A little while ago, it was my 20th birthday. “Huzzah!” came my cry as my teenage years came to an end and I entered the void between being a teenager and a proper adult, with a job and a place to live that isn’t my parents’ house. One of my many birthday gifts was a game called The Turing Test. I’d asked for it because it was intriguing; I’d seen part of the game being played on a YouTube channel, and I became very interested in it, both for the gameplay and for the discussion the game creates.
The game is a puzzle game, and it clearly takes many themes directly from Portal – you’re solving puzzle rooms to get through a sci-fi facility while being talked to by a sentient robot, who turns out to be a little sinister – very Portal. The difference here is the far more serious tone the game takes: it’s a lot darker and brings up quite complicated ethical and philosophical issues that really get you thinking while you solve the puzzles. It does this by presenting a well-balanced argument about AI and how you prove something has intelligence; it even gets into what intelligence actually is and how you can define and measure it. By the end of the game I had a lot to think about, which led me to do a bit of reading of my own on the subject – it also helps that one of my modules at university is all about AI, so I’ve been learning from that. The game is good, and I’d recommend playing it, but I don’t have much more to say about the game itself. The ending is very good and left me very conflicted about whose side I was on, the robot’s or the humans’.
But that’s not what I came here to write about; I want to write about the Turing test, as in the actual Turing test. Most people know the original idea – a person sits at a computer terminal and has two conversations, one with a computer and one with another human. If the person is unable to reliably tell which is which based solely on the conversations, the computer has passed the Turing test. For a lot of people this is not a very convincing test, and most would argue that it is possible to program a computer specifically to pass the Turing test without it needing to be intelligent at all. The main argument for this comes from John Searle, in a paper he wrote called Minds, Brains, and Programs, and is known as the Chinese Room. It argues that a computer can fake the ability to hold a conversation using a rulebook that tells it how to reply to every possible input, looking convincing as a sentient being when in fact it’s just faking – this is basically how things like Cleverbot work. Some people have taken this argument to mean that it is completely impossible for a computer to be truly intelligent, as a computer is unable to understand the meaning of the replies it gives and is simply pretending to be clever (like a lot of us d0).
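The rulebook idea can be sketched as a toy lookup-table chatbot – a deliberately simplified illustration, where the patterns and canned replies are invented for the example:

```python
# A toy "Chinese Room": the program maps inputs to canned replies
# without any understanding of what the words mean.
RULEBOOK = {
    "hello": "Hello! How are you today?",
    "how are you?": "I'm doing well, thanks for asking.",
    "are you intelligent?": "What makes you think I'm not?",
}

def reply(message: str) -> str:
    """Look the message up in the rulebook; deflect if it's not there."""
    return RULEBOOK.get(message.lower().strip(), "That's interesting. Tell me more.")

print(reply("Hello"))                  # a canned greeting comes back
print(reply("What is consciousness?")) # unknown input gets the deflection
```

The program can look superficially conversational, yet it never does anything beyond matching strings – which is exactly Searle’s point about the person in the room following rules they don’t understand.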
Think of it this way: a computer knows the definition of house and it knows the definition of home, but does it understand the true meaning of either word? To a human, we understand what it truly is to make a house a home™ – but what about a computer whose understanding is based on:
- a building for human habitation, especially one that consists of a ground floor and one or more upper storeys: “a house of Cotswold stone”
- a building in which people meet for a particular activity: “a house of prayer”
- the place where one lives permanently, especially as a member of a family or household: “the floods forced many people to flee their homes”
- an institution for people needing professional care or supervision: “an old people’s home”
Both sets of definitions are taken directly from Google, which is probably where an AI would get its knowledge from (no intelligent being would dare touch Bing). I know which house is my home – it’s not the house I live in during term time at university; it’s the house I grew up in.
But when one looks at the other side of this philosophical coin, one can see the other argument. Taking a quote directly from The Turing Test game, it can be summarised quite neatly:
If someone copied, exactly, the brain of a duck into a digital form that could be run by a computer, and put it into a perfect robot copy of a duck, would onlookers not say, “That is a duck”? After all, if it quacks like a duck, swims like a duck and does everything a duck would typically do, would you not simply say, “That is a duck!”?
I’m not sure I would make any comment about a duck if I saw one, but it does raise an interesting point: if a computer could mimic intelligence perfectly, why does that not mean it is intelligent? In the Chinese Room example – sure, the person in the room doesn’t understand Chinese, but the system as a whole does understand Chinese, or at least appears to. This is known as the systems reply, the main objection to the Chinese Room thought experiment, and I think it is a very interesting one.
Humans have been known to think very highly of themselves, so when it comes to the idea that a computer could become intelligent, we tend to get a bit snooty about the attempts, dismissing them as ‘faking’ or ‘cheating’. But I think that before we can understand how a computer can be intelligent, we need to understand how a human can be intelligent – and even answer the simple questions: what even is intelligence? How do we measure it? Where does it come from?
In my AI lectures, students have been asked “What is your favourite colour?”, to which they reply blue, red or some other colour. Is that an intelligent answer? I walk to university every day – is that an intelligent act? Does someone have to be intelligent to walk from A to B, or do they just do it, especially when they have walked the route before? What is it to be intelligent? What do we do that counts as intelligent?
Sorry to end this on a list of questions, but I simply don’t have a solid answer to any of them. If you want to know more, there was a really cool programme on Channel 4 a couple of days ago (that was definitely not an hour-long advert for Humans season 2) which explored some of these questions; it was called How to Build a Human. Watch it – it was very cool. I won’t watch Humans, though. Maybe someone can tell me if it’s worth watching, and then I probably still won’t; I have a vendetta against Channel 4 at the moment.