Turing test
The Turing test in diagrammatic form.
A Turing test is a proposed way of deciding whether a machine can 'think' or, more precisely, whether it can demonstrate intelligence (see nature of intelligence) at approximately a human level. It is named after Alan Turing, who first described it in 1950.
In a Turing test, a human examiner would converse in natural language separately with another human being and with a machine (an advanced computer), without knowing which was which. If the examiner could not tell whether the entity with which he or she was communicating was human or artificial, then the machine would be judged to have passed the test. It is assumed that both the human respondent and the machine try to appear human.
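As a way of picturing the procedure, the following sketch expresses the imitation game as a simple protocol. It is purely illustrative: the judge, human, and machine objects and their methods (next_question, reply, record, guess_machine) are hypothetical interfaces invented here, not part of any standard library or of Turing's specification.

```python
import random

def run_turing_test(judge, human, machine, num_exchanges=5):
    """Illustrative imitation-game loop using hypothetical participant interfaces."""
    # Hide the two respondents behind arbitrary labels so the judge cannot tell them apart.
    respondents = {"A": human, "B": machine}
    labels = list(respondents)
    random.shuffle(labels)

    for _ in range(num_exchanges):
        for label in labels:
            question = judge.next_question(label)        # judge poses a question to one respondent
            answer = respondents[label].reply(question)  # respondent answers in natural language
            judge.record(label, question, answer)        # judge keeps a transcript per label

    guess = judge.guess_machine()  # label the judge believes belongs to the machine
    # The machine "passes" this single round if the judge picks the wrong label.
    return respondents[guess] is not machine
```

In practice the exchange would be repeated with many judges, and passing would mean that the judges cannot reliably pick out the machine.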
The test originated in a party game in which guests try to guess the gender of a person in another room by writing a series of questions on notes and reading the answers sent back. In Turing's original proposal, the human participants had to pretend to be the other gender, and the test was limited to a five-minute conversation. These features are no longer considered essential and are generally not included in the specification of the Turing test.
Turing proposed the test in order to replace the emotionally charged and, for him, meaningless question "Can machines think?" with a better-defined one. He predicted that machines would eventually be able to pass the test. In fact, he estimated that by the year 2000, machines with 10^9 bits (about 119 megabytes) of memory would be able to fool 30% of human judges during a five-minute test. He also predicted that people would then no longer consider the phrase "thinking machine" contradictory.
It has been argued that the Turing test cannot serve as a valid definition of artificial intelligence, for at least two reasons: (1) A machine passing the Turing test might be able to simulate human conversational behavior, but this could be much weaker than true intelligence; the machine might just follow some cleverly devised rules. (2) A machine might well be intelligent without being able to chat like a human. Simple conversational programs, such as ELIZA, have fooled people into believing they are talking to another human being; however, such limited successes do not amount to passing the Turing test. Most obviously, the human party in such a conversation has no reason to suspect they are talking to anything other than a human, whereas in a real Turing test the questioner is actively trying to determine the nature of the entity they are chatting with.
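To make the "cleverly devised rules" objection concrete, here is a minimal sketch in the spirit of ELIZA-style keyword matching. The rules are invented for illustration and are not Weizenbaum's original script; the point is that plausible-sounding replies can be produced by shallow pattern substitution, with no representation of meaning at all.

```python
import re

# Invented ELIZA-style rules: each pattern maps to a reply template that
# echoes back part of the user's own words.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT_REPLY = "Please go on."

def respond(utterance: str) -> str:
    """Return a reply by shallow pattern matching; nothing is 'understood'."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT_REPLY

print(respond("I am worried about my exam"))
# -> How long have you been worried about my exam?
# Note the unreflected pronoun ("my" rather than "your"): the program copies
# symbols without grasping what they refer to.
```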
The Loebner Prize is an annual competition that stages a version of the Turing test and rewards the conversational programs judged most human-like.
Chinese room argument
In the mid-20th century, a theory of consciousness known as functionalism developed. It explained the human mind in computational terms, rejecting the idea that there is something unique about human consciousness or the human brain. On this view, if a computer can pass a Turing test by convincing an expert that it has a mind, then it actually does have a mind. In 1980, the philosopher John Searle (1932–) responded to functionalism with his influential article "Minds, Brains, and Programs", in which he critiqued the possibility of genuine ("strong") artificial intelligence with his Chinese room argument.
Imagine that a man who does not speak Chinese is put in a windowless room with nothing but a Chinese dictionary that contains the appropriate responses to certain queries. A slip of paper is passed under the door with a Chinese phrase on it. The man looks up the appropriate response, writes it on the paper, and passes it back under the door. Since he speaks no Chinese, he has no idea what he has just said; he is only following the stimulus-response pattern laid down by the dictionary. To an observer outside the room, he seems to be a Chinese speaker, even though the phrases he reads and writes mean nothing to him. Similarly, a computer might pass a Turing test, but it does so only by blindly following its programming. Though it has syntax (logical structure), it has no semantics (understanding of meaning). Thus, though computers may emulate human minds, they can never actually have minds. Searle's argument ignited a firestorm of debate in the fields of philosophy and artificial intelligence.
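Searle's scenario is essentially a lookup table, and a toy version makes the syntax/semantics contrast easy to see. The phrases and dictionary entries below are invented for illustration; the program returns correct-looking Chinese output while representing nothing about what the symbols mean.

```python
# Toy "Chinese room": a fixed stimulus-response table stands in for the dictionary.
PHRASEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I am fine, thank you."
    "今天天气怎么样？": "今天天气很好。",   # "How is the weather today?" -> "The weather is fine today."
}

def pass_note_under_door(note: str) -> str:
    """Look up the incoming phrase and return the prescribed reply; no meaning is involved."""
    return PHRASEBOOK.get(note, "请再说一遍。")  # fallback: "Please say that again."

print(pass_note_under_door("你好吗？"))  # prints: 我很好，谢谢。
```

The table manipulates symbols by form alone, which is Searle's point: syntactic rule-following, however convincing from outside the room, does not by itself amount to understanding.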