The Turing Game




















A third idea is that it is a mistake to take a narrow view of the mind, i.e., to insist that the only way to be sure that something thinks is to be that thing and to feel oneself thinking. Against this solipsistic line of thought, Turing makes the effective reply that he would be satisfied if he could secure agreement on the claim that we might each have just as much reason to suppose that machines think as we have reason to suppose that other people think.

Given the right kinds of responses from the machine, we would naturally interpret its utterances as evidence of pleasure, grief, warmth, misery, anger, depression, and so on. However, the important point is that claims about self-consciousness, desires, emotions, and the like need to be addressed directly if they are to support the conclusion that machines cannot think. An interesting question to ask, before we address these claims directly, is whether we should suppose that intelligent creatures from some other part of the universe would necessarily be able to do these things.

Why, for example, should we suppose that there must be something deficient about a creature that does not enjoy—or that is not able to enjoy—strawberries and cream? True enough, we might suppose that an intelligent creature ought to have the capacity to enjoy some kinds of things—but it seems unduly chauvinistic to insist that intelligent creatures must be able to enjoy just the kinds of things that we do.

No doubt, similar considerations apply to the claim that an intelligent creature must be the kind of thing that can make a human being fall in love with it. Yes, perhaps, an intelligent creature should be the kind of thing that can love and be loved; but what is so special about us? Setting aside those tasks that we deem to be unduly chauvinistic, we should then ask what grounds there are for supposing that no digital computing machine could do the other things on the list.

Turing suggests that the most likely ground lies in our prior acquaintance with machines of all kinds: none of the machines that any of us has hitherto encountered has been able to do these things.

In particular, the digital computers with which we are now familiar cannot do these things. However, given the limitations of storage capacity and processing speed of even the most recent digital computers, there are obvious reasons for being cautious in assessing the merits of this inductive argument.

There is at least room for debate about the extent to which current computers can: make mistakes, use words properly, learn from experience, be beautiful, etc. Moreover, there is also room for debate about the extent to which recent advances in other areas may be expected to lead to further advancements in overcoming these alleged disabilities.

Perhaps, for example, recent advances in work on artificial sensors may one day contribute to the production of machines that can enjoy strawberries and cream.

Of course, if the intended objection is to the notion that machines can experience any kind of feeling of enjoyment, then it is not clear that work on particular kinds of artificial sensors is to the point. A different objection derives from Lady Lovelace: the key idea is that machines can only do what we know how to order them to do, or that machines can never do anything really new, or anything that would take us by surprise. More needs to be said to make clear exactly what the nature of this suggestion is; moreover, as Turing goes on to point out, there are many ways in which even digital computers do things that take us by surprise.

Bringsjord et al. press a version of this objection; but, for all that they argue, it remains unclear exactly what it is that machines are thereby supposed to be unable to do. A further objection is that the human brain and nervous system are not much like a digital computer. In particular, there are reasons for being skeptical of the claim that the brain is a discrete-state machine. Turing observes that a small error in the information about the size of a nervous impulse impinging on a neuron may make a large difference to the size of the outgoing impulse.

From this, Turing infers that the brain is likely to be a continuous-state machine; and he then notes that, since discrete-state machines are not continuous-state machines, there might be reason here for thinking that no discrete-state machine can be intelligent.
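Turing's observation about error sensitivity can be illustrated with a toy all-or-nothing unit (the numbers and the function name are invented for illustration, not a model of any real neuron): near the threshold, a tiny error in the size of the incoming impulse makes a large difference to the outgoing one.

```python
# Toy illustration of sensitivity: a threshold ("all-or-nothing") unit.
# A small error in the reported size of an incoming impulse can flip the
# outgoing impulse from nothing to its full magnitude.

def neuron_output(impulse: float, threshold: float = 1.0) -> float:
    """Fire a large outgoing impulse only if the input exceeds the threshold."""
    return 10.0 if impulse > threshold else 0.0

print(neuron_output(0.999))  # just below threshold: outgoing impulse is 0.0
print(neuron_output(1.001))  # a 0.002 change in input: outgoing impulse is 10.0
```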

Turing's reply is that, just as differential analyzers can be imitated by digital computers to within quite small margins of error, so too the conversation of human beings can be imitated by digital computers to margins of error that would not be detected by ordinary interrogators playing the imitation game. It is not clear that this is the right kind of response for Turing to make. If someone thinks that real thought (or intelligence, or mind, or whatever) can only be located in a continuous-state machine, then the fact—if, indeed, it is a fact—that it is possible for discrete-state machines to pass the Turing Test shows only that the Turing Test is no good.
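The imitation claim can be seen in miniature. In the sketch below a simple exponential decay stands in for the continuous process (an assumption made for illustration): a discrete-state update rule tracks the continuous solution to within a small margin of error, and shrinking the step size shrinks the discrepancy.

```python
import math

# A discrete-state update rule imitating a continuous process (dx/dt = -x)
# to within a small margin of error, in the spirit of a digital computer
# imitating a differential analyzer. The step count is an arbitrary choice;
# increasing it reduces the discrepancy.

def discrete_decay(x0: float, t: float, steps: int) -> float:
    dt = t / steps
    x = x0
    for _ in range(steps):
        x += dt * (-x)   # Euler update: finitely many discrete states of x
    return x

exact = math.exp(-1.0)                      # continuous solution at t = 1
approx = discrete_decay(1.0, 1.0, 100_000)  # discrete imitation
print(abs(approx - exact))                  # the margin of error is tiny
```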

A better reply is to ask why one should be so confident that real thought (intelligence, mind, and so on) could only be located in a continuous-state machine. And, before we ask this question, we would do well to consider whether we really do have such good reason to suppose that, from the standpoint of our ability to think, we are not essentially discrete-state machines. As Block points out, there seems to be nothing in our concept of intelligence that rules out intelligent beings with quantised sensory devices; nor is there anything in our concept of intelligence that rules out intelligent beings with digital working parts.

This argument relies on the assumption that there is no set of rules that describes what a person ought to do in every possible set of circumstances, and on the further assumption that there is a set of rules that describes what a machine will do in every possible set of circumstances. From these two assumptions, it is supposed to follow—somehow!—that people are not machines. But the assumptions trade on different notions: rules of conduct that prescribe what one ought to do, and laws of behavior that describe what something will do. Once we make the appropriate adjustments and compare like with like, it is not clear that an obvious difference between people and digital computers emerges.

If the world is deterministic, then there are such rules for both persons and machines, though perhaps it is not possible to write the rules down.

If the world is not deterministic, then there are no such rules for either persons or machines since both persons and machines can be subject to non-deterministic processes in the production of their behavior. Either way, it is hard to see any reason for supposing that there is a relevant difference between people and machines that bears on the description of what they will do in all possible sets of circumstances.

Perhaps it might be said that what the objection invites us to suppose is that, even though the world is not deterministic, humans differ from digital machines precisely because the operations of the latter are indeed deterministic.

But, if the world is non-deterministic, then there is no reason why digital machines cannot behave non-deterministically, by allowing them to take input from non-deterministic features of the world. Perhaps, instead, the objection is that persons, unlike machines, are subject to norms. Whether or not we suppose that norms can be codified—and quite apart from the question of which kinds of norms are in question—it is hard to see what grounds there could be for this judgment, other than the question-begging claim that machines are not the kinds of things whose behavior could be subject to norms.
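Returning to the point about non-deterministic machines: the sketch below is deterministic as a program, yet which branch runs is not fixed by the program text alone, because it reads input from outside itself (`os.urandom` stands in here for any physical noise source; the function and replies are invented examples).

```python
import os

# A program whose behavior is non-deterministic because it branches on
# input drawn from the world. os.urandom stands in for a physical noise
# source (e.g., thermal or radioactive noise fed in through a sensor).

def choose_reply(replies: list) -> str:
    noise = os.urandom(1)[0]            # one byte of environmental entropy
    return replies[noise % len(replies)]

reply = choose_reply(["Yes.", "No.", "Ask me tomorrow."])
print(reply)   # which reply appears is not settled by the program alone
```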

And, in that case, the initial argument is badly misstated: the claim ought to be that, whereas there are sets of rules that describe what a person ought to do in every possible set of circumstances, there are no sets of rules that describe what machines ought to do in all possible sets of circumstances! Turing's treatment of the appeal to extra-sensory perception is harder to assess. Perhaps it is intended to be tongue-in-cheek; though, if it is, this fact is poorly signposted by Turing.

Perhaps, instead, Turing was influenced by the apparently scientifically respectable results of J. B. Rhine. At any rate, taking the text at face value, Turing seems to have thought that there was overwhelming empirical evidence for telepathy, and he was also prepared to take clairvoyance, precognition and psychokinesis seriously. If the capacity for telepathy were a standard feature of any sufficiently advanced system able to carry out a human conversation, then there would be no in-principle reason why digital computers could not be the equals of human beings in this respect as well.

Perhaps this response assumes that a successful machine participant in the imitation game will need to be equipped with sensors and the like; however, as we noted above, this assumption is not terribly controversial: a plausible conversationalist has to keep up to date with goings-on in the world.

It is worth remembering that Turing claimed no conclusive case of his own: "I have no very convincing arguments of a positive nature to support my views. If I had I should not have taken such pains to point out the fallacies in contrary views." First of all—as his brief discussion of solipsism makes clear—it is worth asking what grounds we have for attributing intelligence (thought, mind) to other people. If it is plausible to suppose that we base our attributions on behavioral tests or behavioral criteria, then his claim about the appropriate test to apply in the case of machines seems apt, and his conjecture that digital computing machines might pass the test seems like a reasonable—though controversial—empirical conjecture.

Second, subsequent developments in the philosophy of mind—and, in particular, the fashioning of functionalist theories of the mind—have provided a more secure theoretical environment in which to place speculations about the possibility of thinking machines.

If mental states are functional states—and if mental states are capable of realisation in vastly different kinds of materials—then there is some reason to think that it is an empirical question whether minds can be realised in digital computing machines.

Of course, this kind of suggestion is open to challenge; we shall consider some important philosophical objections in the later parts of this review. There are also a number of much-debated issues that arise in connection with the interpretation of various parts of Turing (1950), and that we have hitherto neglected to discuss. Since some of this interpretation has been contested, it is worth noting where the major points of controversy have been.

Turing introduces the imitation game by describing a game in which the participants are a man, a woman, and a human interrogator. The interrogator is in a room apart from the other two, and is set the task of determining which of the other two is a man and which is a woman.
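The structure of the game can be sketched as a protocol (everything below, from the labels to the baseline interrogator, is an invented illustration). One incidental point it makes: an interrogator who asks nothing and merely guesses is right about half the time, so the game only discriminates through sustained questioning.

```python
import random

# Sketch of the original game's structure: the interrogator exchanges
# typed messages with two hidden parties, X and Y, and must say which is
# the woman. The "interrogator" here is a trivial baseline that asks no
# questions at all.

def play_round(interrogate) -> bool:
    roles = ["man", "woman"]
    random.shuffle(roles)                  # hide who occupies which seat
    hidden = {"X": roles[0], "Y": roles[1]}
    guess = interrogate()                  # the label guessed to be the woman
    return hidden[guess] == "woman"

def guessing_interrogator() -> str:        # baseline: pure guessing
    return random.choice(["X", "Y"])

wins = sum(play_round(guessing_interrogator) for _ in range(10_000))
print(wins / 10_000)   # close to 0.5: chance-level identification
```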

Both the man and the woman are set the task of trying to convince the interrogator that they are the woman. Turing recommends that the best strategy for the woman is to answer all questions truthfully; of course, the best strategy for the man will require some lying. The participants in this game use a teletypewriter to communicate with one another, to avoid the clues that might be offered by tone of voice and the like. Turing then asks what will happen when a machine takes the part of one of the participants. Now, of course, it is possible to interpret Turing as here intending to say what he seems literally to say, namely, that the new game is one in which the computer must pretend to be a woman, and the other participant in the game is a woman.

For discussion, see, for example, Genova and Traiger. It is also possible to interpret Turing as intending to say that the new game is one in which the computer must pretend to be a woman, and the other participant in the game is a man who must also pretend to be a woman. Moreover, as Moor argues, there is no reason to think that one would get a better test if the computer must pretend to be a woman and the other participant in the game is a man pretending to be a woman; and, indeed, there is some reason to think that one would get a worse test.

Perhaps it would make no difference to the effectiveness of the test if the computer must pretend to be a woman and the other participant is a woman (any more than it would make a difference if the computer must pretend to be an accountant and the other participant is an accountant); however, this consideration is simply insufficient to outweigh the strong textual evidence that supports the standard interpretation of the imitation game that we gave at the beginning of our discussion of Turing (1950). (For a dissenting view about many of the matters discussed in this paragraph, see Sterrett.)

There are two different theoretical claims that are run together in many discussions of The Turing Test, and that can profitably be separated.

One claim (call it the Turing Test Claim) holds that if something can pass itself off as a person under sufficiently demanding test conditions, then we have very good reason to suppose that that thing is intelligent. Another claim (the Thinking Machine Claim) holds that an appropriately programmed computer could pass the kind of test that the first claim describes.

Some objections to the claims made in Turing (1950) are objections to the Thinking Machine Claim, but not objections to the Turing Test Claim. Consider, for example, the argument of Searle, which we discuss further in Section 6. However, other objections are objections to the Turing Test Claim. Until we get to Section 6, we shall be confining our attention to discussions of the Turing Test Claim.

Given the initial distinction that we made between different ways in which the expression "The Turing Test" gets interpreted in the literature, it is probably best to approach the question of the current standing of The Turing Test by dividing cases. True enough, we think that there is a correct interpretation of exactly what test it is that is proposed by Turing (1950); but a complete discussion of the current standing of The Turing Test should pay at least some attention to the current standing of other tests that have been mistakenly supposed to be proposed by Turing. There are a number of main ideas to be investigated.

First, there is the suggestion that The Turing Test provides logically necessary and sufficient conditions for the attribution of intelligence.

Second, there is the suggestion that The Turing Test provides logically sufficient—but not logically necessary—conditions for the attribution of intelligence. Third, there is the suggestion that The Turing Test provides criterial support (defeasible, but not merely inductive) for the attribution of intelligence. Fourth—and perhaps not importantly distinct from the previous claim—there is the suggestion that The Turing Test provides more or less strong probabilistic support for the attribution of intelligence. We shall consider each of these suggestions in turn.

It is doubtful whether there are very many examples of people who have explicitly claimed that The Turing Test is meant to provide conditions that are both logically necessary and logically sufficient for the attribution of intelligence.

Perhaps Block is one such case. However, some of the objections that have been proposed against The Turing Test only make sense under the assumption that The Turing Test does indeed provide logically necessary and logically sufficient conditions for the attribution of intelligence; and many more of the objections only make sense under the assumption that The Turing Test provides necessary and sufficient conditions for the attribution of intelligence, where the modality in question is weaker than the strictly logical, e.g., nomic or physical necessity.

Consider, for example, those people who have claimed that The Turing Test is chauvinistic; and, in particular, those people who have claimed that it is surely logically possible for there to be something that possesses considerable intelligence, and yet that is not able to pass The Turing Test. Examples: Intelligent creatures might fail to pass The Turing Test because they do not share our way of life; intelligent creatures might fail to pass The Turing Test because they refuse to engage in games of pretence; intelligent creatures might fail to pass The Turing Test because the pragmatic conventions that govern the languages that they speak are so very different from the pragmatic conventions that govern human languages.

None of these points can constitute an objection to The Turing Test unless The Turing Test is taken to deliver necessary conditions for the attribution of intelligence. French, for example, does not make this assumption; rather—as we shall see later—French supposes that The Turing Test establishes sufficient conditions that no machine will ever satisfy.

Floridi and Chiriatti say that The Turing Test provides necessary but insufficient conditions for intelligence: not passing The Turing Test disqualifies an AI from being intelligent, but passing The Turing Test is not sufficient to qualify an AI as intelligent.

There are many philosophers who have supposed that The Turing Test is intended to provide logically sufficient conditions for the attribution of intelligence.

That is, there are many philosophers who have supposed that The Turing Test claims that it is logically impossible for something that lacks intelligence to pass The Turing Test. Often, this supposition goes with an interpretation according to which passing The Turing Test requires rather a lot, e.g., producing responses indistinguishable from those a human would give over the entire course of a life. There are well-known arguments against the claim that passing The Turing Test—or any other purely behavioral test—provides logically sufficient conditions for the attribution of intelligence.

The best known of these arguments involves Block's "Blockhead", a creature programmed with a look-up tree that produces responses identical with the ones that you would give, given the same inputs, over the entire course of your life. If we agree that Blockhead is logically possible, and if we agree that Blockhead is not intelligent (does not have a mind, does not think), then Blockhead is a counterexample to the claim that the Turing Test provides a logically sufficient condition for the ascription of intelligence.

There are two ways to resist this argument: first, it could be denied that Blockhead is a logical possibility; second, it could be claimed that Blockhead would be intelligent (have a mind, think). In order to deny that Blockhead is a logical possibility, it seems that what needs to be denied is the commonly accepted link between conceivability and logical possibility: it certainly seems that Blockhead is conceivable, and so, if (properly circumscribed) conceivability is sufficient for logical possibility, then we have good reason to accept that Blockhead is a logical possibility.
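A toy fragment of such a look-up tree makes the idea concrete (the entries are invented; a real Blockhead would need astronomically many of them): every possible conversational history is a key, and replies are retrieved rather than computed by anything resembling thought.

```python
# A toy fragment of a Blockhead look-up tree: each conversational history
# (a tuple of utterances heard so far) maps directly to a canned reply.

LOOKUP_TREE = {
    ("Hello!",): "Hi there.",
    ("Hello!", "How are you?"): "Fine, thanks. You?",
    ("What is 2+2?",): "4, of course.",
}

def blockhead_reply(history: tuple) -> str:
    """Retrieve the canned response for this exact conversational history."""
    return LOOKUP_TREE.get(history, "I'd rather not say.")

print(blockhead_reply(("Hello!",)))                 # -> Hi there.
print(blockhead_reply(("Hello!", "How are you?")))  # -> Fine, thanks. You?
```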

Since it would take us too far away from our present concerns to explore this issue properly, we merely note that it remains a controversial question whether properly circumscribed conceivability is sufficient for logical possibility. (For further discussion of this issue, see Crooke.) As for the second reply: Blockhead may not be a particularly efficient processor of information, but it is at least a processor of information, and that—in combination with the behavior that is produced as a result of the processing of information—might well be taken to be sufficient grounds for the attribution of some level of intelligence to Blockhead.

For further critical discussion of the argument of Block, see McDermott, and Pautz and Stoljar.

Consider next the suggestion that passing The Turing Test is criterial for intelligence. The underlying thought is this: if no true claims about the observable behavior of an entity can play any role in the justification of the ascription of a given mental state to that entity, then there are no grounds for attributing that kind of mental state to the entity.

The claim that, in order to be justified in ascribing a mental state to an entity, there must be some true claims about the observable behavior of that entity that alone—i.e., without the addition of any further claims—entail the ascription of the mental state in question, is characteristic of philosophical behaviorism. It may be—for all that we are able to argue—that Wittgenstein was a philosophical behaviorist; it may be—for all that we are able to argue—that Turing was one, too.

However, if we go by the letter of the account given in the previous paragraph, then all that need follow from the claim that the Turing Test is criterial for the ascription of intelligence (thought, mind) is that, when other true claims (not themselves couched in terms of mentalistic vocabulary) are conjoined with the claim that an entity has passed the Turing Test, it then follows that the entity in question has intelligence (thought, mind).

Note that the parenthetical qualification that the additional true claims not be couched in terms of mentalistic vocabulary is only one way in which one might try to avoid the threat of trivialization.

The difficulty is that the addition of the true claim that an entity has a mind will always produce a set of claims that entails that that entity has a mind, no matter what other claims belong to the set!

Many people have supposed that there is good reason to deny that Blockhead is a nomic or physical possibility.

If this is right, then, while Blockhead may be a logical possibility, it is not a nomic or physical possibility. And then it seems natural to hold that The Turing Test does indeed provide nomically sufficient conditions for the attribution of intelligence: given everything else that we already know—or, at any rate, take ourselves to know—about the universe in which we live, we would be fully justified in concluding that anything that succeeds in passing The Turing Test is, indeed, intelligent (possessed of a mind, and so forth).

There are ways in which the argument in the previous paragraph might be resisted. At the very least, it is worth noting that there is a serious gap in the argument that we have just rehearsed: perhaps—for all that has been argued so far—there are nomically possible ways of producing mere simulations of intelligence. (In support of the nomic impossibility of Blockhead itself, McDermott calculates that a look-up table for a participant who makes 50 conversational exchanges would require an astronomically large number of nodes.)

When we look at the initial formulation that Turing provides of his test, it is clear that he thought that passing the test would provide probabilistic support for the hypothesis of intelligence.
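A back-of-the-envelope calculation shows why such a table is physically implausible. The branching factor below (the number of distinguishable utterances per exchange) is a made-up illustrative figure, not McDermott's; the point is only that the node count explodes with conversation length.

```python
# Rough size of a Blockhead-style look-up tree: one node per possible
# conversational history. BRANCHING is an assumed, illustrative figure.

BRANCHING = 10**5     # assumed distinguishable utterances per exchange
EXCHANGES = 50        # conversation length considered in the text

nodes = sum(BRANCHING**depth for depth in range(1, EXCHANGES + 1))
print(len(str(nodes)))   # decimal digits in the node count: 251, i.e. ~10**250
```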

There are at least two different points to make here.
