
Hollywood’s theory that machines with evil(邪恶) minds will drive armies of killer robots

赵妍妍 2017-06-09 19:03:34


Hollywood’s theory that machines with evil(邪恶) minds will drive armies of killer robots is just silly. The real problem relates to the possibility that artificial intelligence(AI) may become extremely good at achieving something other than what we really want. In 1960, the well-known mathematician Norbert Wiener, who founded the field of cybernetics(控制论), put it this way: “If we use, to achieve our purposes, a mechanical agency with whose operation we cannot effectively interfere(干预), we had better be quite sure that the purpose put into the machine is the purpose which we really desire.”

A machine with a specific purpose has another quality, one that we usually associate with living things: a wish to preserve its own existence. For the machine, this quality is not inborn, nor is it something introduced by humans; it is a logical consequence of the simple fact that the machine cannot achieve its original purpose if it is dead. So if we send out a robot with the single instruction of fetching coffee, it will have a strong desire to secure success by disabling its own off switch or even killing anyone who might interfere with its task. If we are not careful, then, we could face a kind of global chess match against very determined, super intelligent machines whose objectives conflict with our own, with the real world as the chessboard.

The possibility of entering into and losing such a match should concentrate the minds of computer scientists. Some researchers argue that we can seal the machines inside a kind of firewall, using them to answer difficult questions but never allowing them to affect the real world. Unfortunately, that plan seems unlikely to work: we have yet to invent a firewall that is secure against ordinary humans, let alone super intelligent machines.

Solving the safety problem well enough to move forward in AI seems to be possible but not easy. There are probably decades in which to plan for the arrival of super intelligent machines. But the problem should not be dismissed out of hand, as it has been by some AI researchers. Some argue that humans and machines can coexist as long as they work in teams—yet that is not possible unless machines share the goals of humans. Others say we can just “switch them off” as if super intelligent machines are too stupid to think of that possibility. Still others think that super intelligent AI will never happen. On September 11, 1933, famous physicist Ernest Rutherford stated, with confidence, “Anyone who expects a source of power in the transformation of these atoms is talking moonshine.” However, on September 12, 1933, physicist Leo Szilard invented the neutron-induced(中子诱导) nuclear chain reaction.

67. Paragraph 1 mainly tells us that artificial intelligence may ______. (A)

A. run out of human control

B. satisfy human’s real desires

C. command armies of killer robots

D. work faster than a mathematician

68. Machines with specific purposes are associated with living things partly because they might be able to ______. (A)

A. prevent themselves from being destroyed

B. achieve their original goals independently

C. do anything successfully with given orders

D. beat humans in international chess matches

69. According to some researchers, we can use firewalls to ______. (D)

A. help super intelligent machines work better

B. be secure against evil human beings

C. keep machines from being harmed

D. avoid robots’ affecting the world

70. What does the author think of the safety problem of super intelligent machines? (C)

A. It will disappear with the development of AI.

B. It will get worse with human interference.

C. It will be solved but with difficulty.

The above are selected questions and answers from the Beijing Gaokao English paper, provided for reference only.
