Can Machines Think?
Why it’s so hard to tell
We are surrounded by intelligent machines: smartphones that answer questions and book tables at restaurants; fridges that warn you when the milk expires; cars that drive themselves; computers that play better chess than humans; Facebook tagging algorithms that recognize human faces. Still, one question is worth asking: these machines can perform all kinds of impressive tricks, but do they actually think?
The question is interesting because a lot depends on it. If machines can think like we do, will they at some point in the future be better at it than we are? Will they then become a threat to us? Might they develop feelings? Will machines get lazy, or angry at us for asking them to work when they don’t want to? If they become conscious, will they claim rights? Will human rights have to be extended to apply to them? Will they have a right to vote? A right to be treated with respect? Will their dignity or their freedom become issues? Will they have to be protected from exploitation? Or will they find ways to exploit us?
Thinking about… ice-cream
Much depends on how one understands the question. “Does X think?” might mean several different things. It might mean, for instance: does it think like a human? In that case, we should expect the machine to have feelings, to be distracted or sleepy sometimes, or to make typos when writing. If it didn’t do all these things, it wouldn’t think like a human. But then, what else is involved in “thinking like a human”? If I ask my phone’s intelligent assistant whether it likes ice-cream, what answer do I expect to get? “Yes, I do like ice-cream, but only strawberry flavour.” Would that be a satisfactory answer? Obviously, the machine cannot mean it, since it doesn’t have the hardware to actually taste anything. So the response must be fake: just a series of words designed to deceive me into thinking that the machine understands what ice-cream tastes like. That hardly seems like proof of high intelligence.
What if it responded: “What a stupid question! I cannot taste ice-cream, so how would I know?” This seems like a better, more intelligent and more honest answer, but it raises another problem. Now the machine no longer pretends to be human. In fact, what makes this response a good response is precisely that it gives up the pretence of sounding “human.” So perhaps other aspects of intelligence don’t necessarily have to be human-like either. When AlphaGo, a program that plays the ancient board game Go, won in 2016 against Lee Sedol, one of the world’s top players, human commentators sometimes couldn’t understand its moves. Were those moves unintelligent just because they didn’t seem to make sense to human intelligence? After all, the program won easily. So perhaps we can accept an intelligence that is not like ours, but that still counts as “real” intelligence.
What is intelligence?
Ray Kurzweil, one of the best-known figures in AI, proposed this definition:
“Artificial intelligence is the art of creating machines that perform functions that require intelligence when performed by people.”
So we would say that a machine is intelligent when it does things that would require intelligence if performed by humans. But this doesn’t seem to work well either. Adding two numbers requires human intelligence; cats and trees cannot do it. Giving change to a customer in a shop requires intelligence. Should we therefore consider calculators and vending machines intelligent, just because they perform functions that would require intelligence if performed by humans? This doesn’t seem right.
Elaine Rich and Kevin Knight, in their classic AI textbook, proposed:
“Artificial intelligence is the study of how to make computers do things at which, at the moment, people are better.”
This definition is funny. Do you see why? It implies that as soon as a machine can do something better than humans, that activity no longer counts as artificial intelligence. Playing chess, for instance, would no longer count, because machines are already better at it. On the other hand, building a machine that can digest food would seem to be a worthy goal of AI, since humans are certainly better at that right now. But obviously we wouldn’t count digesting food as intelligent behaviour, even though humans are better at it than machines.
Can machines think? The Turing Test
So what is intelligence?
Alan Turing, one of the most famous pioneers of machine intelligence, proposed the so-called “Turing Test”. Take two closed rooms. In one, place a computer; in the other, a human. No one can see inside the rooms, so no one knows which room contains the computer and which contains the human. Now put a judge outside the rooms, who can talk to these two candidates (as we’ll call them) by typing text messages to them. This is the only communication allowed. If, after a number of exchanges, the judge cannot tell which candidate is the human and which is the machine, then we must admit that the machine is intelligent.
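To see the shape of the protocol, here is a toy sketch in Python. Both candidate functions are hypothetical placeholders invented for this illustration: in a real test, one would be a person typing and the other a conversational program.

```python
import random

def human_candidate(question: str) -> str:
    # Stand-in for the person hidden in one of the rooms.
    return "I had strawberry ice-cream yesterday; it was lovely."

def machine_candidate(question: str) -> str:
    # Stand-in for the program hidden in the other room.
    return "Yes, I do like ice-cream, but only strawberry flavour."

def interrogate(questions):
    # Hide the candidates behind anonymous labels, shuffled so that
    # the judge cannot rely on the order of the rooms.
    rooms = {"A": human_candidate, "B": machine_candidate}
    labels = list(rooms)
    random.shuffle(labels)
    for question in questions:
        for label in labels:
            answer = rooms[label](question)
            print(f"Judge to room {label}: {question}")
            print(f"Room {label} to judge: {answer}")
    # The judge must now guess which room holds the machine. If the
    # guesses are no better than chance over many sessions, the
    # machine passes the test.

interrogate(["Do you like ice-cream?"])
```

The shuffled labels capture the crucial point: the judge can rely only on the answers, never on knowing which room is which.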
Seems to make sense. But there are problems with this, too. Imagine I spend a few years thinking of all the things the judge could possibly type, and I enter them into the machine, together with a suitable answer for each one. The machine would then just have to look up the right answer for whatever the judge typed. It would always respond perfectly (as long as I had thought of the judge’s questions in advance), but it would never understand anything at all. It just matches questions with answers. Such programs actually exist: if you search the Internet for “AIML bots”, you will find many that are programmed exactly like this. They’re fun to play with, but are they intelligent?
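Such a bot can be sketched in a few lines. The table entries below are invented for illustration; real AIML bots use pattern matching with wildcards, but the principle is the same: pre-written answers are matched to expected inputs, with no understanding anywhere.

```python
# A toy lookup-table "chatbot": every response was written in advance
# by a human author; the program itself understands nothing.
RESPONSES = {
    "do you like ice-cream?": "Yes, but only strawberry flavour.",
    "are you a machine?": "Of course not! What a strange question.",
    "what is your name?": "My friends call me Candidate B.",
}

def reply(message: str) -> str:
    # Normalize the input, then look up the pre-written answer.
    key = message.strip().lower()
    # If the author never anticipated this message, fall back to an
    # all-purpose deflection, a common chatbot trick.
    return RESPONSES.get(key, "Interesting. Tell me more about that.")

print(reply("Do you like ice-cream?"))
print(reply("What do you think of Go?"))
```

Nothing in this program grasps what ice-cream is; it merely maps strings to strings, and the fallback line papers over everything its author failed to anticipate.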
On the other hand, our most capable robots, such as autonomous spacecraft or self-driving cars, cannot chat in a human language at all. Are they less intelligent because they don’t make small talk? And what about people who can’t type, or who have never seen a computer? Would we say that they are not intelligent, just because they fail the Turing Test?
Intelligence is a strange thing. We believe we recognize it easily in others, yet all attempts to define it have failed so far. Do machines think? Will they ever? Perhaps something is wrong with the question itself. If they did think, are we sure we would know? And if they thought in their own alien, machine-like way, would that even count for us? Why do we want to know whether they think in the first place?
Perhaps, after all, it’s because we want to know if they’re human-like, if they think just like us.
Maybe the whole endeavour of AI is nothing more than another attempt to find our own selves deep inside the soul of our creations.
Thanks for reading! Do you agree? Leave a comment!