The New York Times: Winning the Future, the Human Brain or the Computer?

Posted by daiwq on 2011-02-18 18:47:12

The New York Times, February 14: IBM's new intelligent computer, Watson, has once again raised questions about the difference between the human brain and the computer and about what computing technology can do. Watson won its contest against human players, and the victory will have far-reaching consequences. Replace humans, assist them, or both at once: that is the question computing technology urgently needs to answer.

In 1963 the computer scientist John McCarthy founded the Stanford Artificial Intelligence Laboratory, whose members believed a thinking machine could be built within a decade. That same year the computer scientist Douglas Engelbart, following an entirely different line of thought, organized the precursor of the Augmentation Research Center (ARC) to design computing systems that would extend everyone's intelligence. To take one example: Yahoo's site editors use software to gauge readers' tastes, segment the readership, and pull articles that draw no interest, while Google News hands all of that work to a new software algorithm. The former is I.A. (intelligence augmentation, making people smarter); the latter is A.I. (artificial intelligence).

True to their names, the two laboratories have been at loggerheads for four decades, and their creations have profoundly changed the world. The idea of A.I. has now been thrust into the media spotlight by the televised quiz competition featuring IBM's supercomputer Watson: in the final of the quiz show "Jeopardy!", Watson thoroughly defeated its two human opponents by a decisive margin. IBM's researchers have worked to create technology that can process human language, and Watson shows that machines can not only respond to simple commands but are getting ever better at untangling riddles. Human language is ambiguous: a search for "Paris Hilton" might be looking for a hotel in Paris or just for racy photos, and computers are gradually learning to tell the difference. Watson's victory will set off far-reaching scientific, philosophical, social, and economic changes. Economists, for instance, have long held that because "machines cannot understand human language", job growth would always outpace any job-eliminating automation. Machine interpretation of natural language will trigger a new wave of automation, reaching into domains once mastered only by humans. What does that mean for people? It is an ethical dilemma every technologist now faces.

IBM plans to bring Watson to market for advisory work in business, education, and medicine. The consequences are still unknown, but a large share of well-paid jobs could plausibly be taken over by computers; every consulting and phone-based job is a candidate, though that day is still far off. While A.I. surges ahead, I.A. is growing stronger too: Google is a digital "rich vein" of collective intelligence, using algorithms to learn from people's detailed choices among answers and then to match queries quickly. In fact, the personal computer was the first step in augmenting human intelligence, creating a generation of "information workers" who use its tools to gather, produce, and share information; and the smartphone, our "information concierge", now stitches nearly all of our senses together seamlessly.

The common question facing A.I. and I.A. in the future is to be clear about how technology is applied (some computer scientists argue that technologists should strike a "social contract" with society, using technology to create more and better jobs); whether the same technology replaces people or assists them is a matter of intent. Watson's real value may lie in pressing us to think about the relationship between people and machines. As the computer scientist John Seely Brown put it, "The essence of being human involves asking questions, not answering them."

Publication: The New York Times, February 14. Digest by: daiwq. Original text:

A Fight to Win the Future: Computers vs. Humans

February 14, 2011

STANFORD, Calif. — At the dawn of the modern computer era, two Pentagon-financed laboratories bracketed Stanford University. At one laboratory, a small group of scientists and engineers worked to replace the human mind, while at the other, a similar group worked to augment it.

In 1963 the mathematician-turned-computer scientist John McCarthy started the Stanford Artificial Intelligence Laboratory. The researchers believed that it would take only a decade to create a thinking machine.

Also that year the computer scientist Douglas Engelbart formed what would become the Augmentation Research Center to pursue a radically different goal — designing a computing system that would instead “bootstrap” the human intelligence of small groups of scientists and engineers.

For the past four decades that basic tension between artificial intelligence and intelligence augmentation — A.I. versus I.A. — has been at the heart of progress in computing science as the field has produced a series of ever more powerful technologies that are transforming the world.

Now, as the pace of technological change continues to accelerate, it has become increasingly possible to design computing systems that enhance the human experience, or now — in a growing number of cases — completely dispense with it.

The implications of progress in A.I. are being brought into sharp relief now by the broadcasting of a recorded competition pitting the I.B.M. computing system named Watson against the two best human Jeopardy players, Ken Jennings and Brad Rutter.

Watson is an effort by I.B.M. researchers to advance a set of techniques used to process human language. It provides striking evidence that computing systems will no longer be limited to responding to simple commands. Machines will increasingly be able to pick apart jargon, nuance and even riddles. In attacking the problem of the ambiguity of human language, computer science is now closing in on what researchers refer to as the “Paris Hilton problem” — the ability, for example, to determine whether a query is being made by someone who is trying to reserve a hotel in France, or simply to pass time surfing the Internet.
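As a toy illustration of the "Paris Hilton problem" (this is not IBM's method; the training queries, labels, and example data below are all invented), a small Naive Bayes classifier can use the words surrounding an ambiguous name to guess whether a query is about booking travel or about the celebrity:

```python
import math
from collections import Counter, defaultdict

# Invented training queries: which sense of "paris hilton" each one reflects.
TRAIN = [
    ("paris hilton hotel booking rates", "travel"),
    ("cheap hilton rooms near paris france", "travel"),
    ("paris hilton photos news gossip", "celebrity"),
    ("paris hilton new reality show", "celebrity"),
]

def train(examples):
    word_counts = defaultdict(Counter)   # label -> word frequency table
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def classify(query, word_counts, label_counts):
    vocab = {w for counts in word_counts.values() for w in counts}
    total_docs = sum(label_counts.values())
    scores = {}
    for label, n_docs in label_counts.items():
        total_words = sum(word_counts[label].values())
        score = math.log(n_docs / total_docs)             # log prior
        for w in query.split():                           # add-one smoothing
            score += math.log((word_counts[label][w] + 1)
                              / (total_words + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

wc, lc = train(TRAIN)
print(classify("book paris hilton hotel", wc, lc))    # -> travel
print(classify("paris hilton gossip pics", wc, lc))   # -> celebrity
```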

If, as many predict, Watson defeats its human opponents on Wednesday, much will be made of the philosophical consequences of the machine’s achievement. Moreover, the I.B.M. demonstration also foretells profound sociological and economic changes.

Traditionally, economists have argued that while new forms of automation may displace jobs in the short run, over longer periods of time economic growth and job creation have continued to outpace any job-killing technologies. For example, over the past century and a half the shift from being a largely agrarian society to one in which less than 1 percent of the United States labor force is in agriculture is frequently cited as evidence of the economy’s ability to reinvent itself.

That, however, was before machines began to “understand” human language. Rapid progress in natural language processing is beginning to lead to a new wave of automation that promises to transform areas of the economy that have until now been untouched by technological change.

“As designers of tools and products and technologies we should think more about these issues,” said Pattie Maes, a computer scientist at the M.I.T. Media Lab. Not only do designers face ethical issues, she argues, but increasingly as skills that were once exclusively human are simulated by machines, their designers are faced with the challenge of rethinking what it means to be human.

I.B.M.’s executives have said they intend to commercialize Watson to provide a new class of question-answering systems in business, education and medicine. The repercussions of such technology are unknown, but it is possible, for example, to envision systems that replace not only human experts, but hundreds of thousands of well-paying jobs throughout the economy and around the globe. Virtually any job that now involves answering questions and conducting commercial transactions by telephone will soon be at risk. It is only necessary to consider how quickly A.T.M.’s displaced human bank tellers to have an idea of what could happen.

To be sure, anyone who has spent time waiting on hold for technical support, or trying to change an airline reservation, may welcome that day. However, there is also a growing unease about the advances in natural language understanding that are being heralded in systems like Watson. As rapidly as A.I.-based systems are proliferating, there are equally compelling examples of the power of I.A. — systems that extend the capability of the human mind.

Google itself is perhaps the most significant example of using software to mine the collective intelligence of humans and then making it freely available in the form of a digital library. The search engine was originally based on a software algorithm called PageRank that mined human choices in picking Web pages that contained answers to a particular typed query and then quickly ranked the matches by relevance.
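A minimal sketch of the idea the paragraph above describes: pages "vote" for the pages they link to, and the scores are refined by repeated redistribution (power iteration). This is the textbook formulation of PageRank, not Google's production system, and the four-page link graph is invented for illustration:

```python
# Toy link graph: each page lists the pages it links to (invented data).
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                # A page passes its rank, in equal shares, to pages it links to.
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                # A dangling page (no outlinks) spreads its rank evenly.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```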

The Internet is widely used for applications that employ a range of human capabilities. For example, experiments in Web-based games designed to harness the human ability to recognize patterns — which still greatly exceeds what is possible by computer — are generating a new set of scientific tools. Games like FoldIt, EteRNA and Galaxy Zoo make it possible for individuals to compete and collaborate in fields ranging from astronomy to biology, medicine and possibly even materials science.

Personal computing was the first step toward intelligence augmentation that reached a broad audience. It created a generation of “information workers,” and equipped them with a set of tools for gathering, producing and sharing information. Now there is a cyborg quality to the changes that are taking place as personal computing has evolved from desktop to laptop and now to the smartphones that have quickly become ubiquitous.

The smartphone is not just a navigation and communication tool. It has rapidly become an almost seamless extension of almost all of our senses. It is not only a reference tool but is quickly evolving to be an “information concierge” that can respond to typed or spoken queries or simply volunteer advice.

Further advances in both A.I. and I.A. will increasingly confront the engineers and computer scientists with clear choices about how technology is used. “There needs to be an explicit social contract between the engineers and society to create not just jobs but better jobs,” said Jaron Lanier, a computer scientist and author of “You Are Not a Gadget: A Manifesto.”

The consequences of human design decisions can be clearly seen in the competing online news systems developed here in Silicon Valley.

Each day Katherine Ho sits at a computer and observes which news articles millions of Yahoo users are reading.

Her computer monitor displays the results of a cluster of software programs giving her almost instant updates on precisely how popular each of the news articles on the company’s home page is, based on her readers’ tastes and interests.

Ms. Ho is a 21st-century version of a traditional newspaper wire editor. Instead of gut instinct, her decisions on which articles to put on the Yahoo home page are based on the cues generated by the software algorithms.

Throughout the day she constantly reorders the news articles that are displayed for dozens of demographic subgroups that make up the Yahoo readership. An article that isn’t drawing much interest may last only minutes before she “spikes” it electronically. Popular articles stay online for days and sometimes draw tens of millions of readers.
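The workflow described here can be sketched in a few lines: rank the articles shown to a demographic segment by an observed popularity signal, such as click-through rate, and "spike" the ones that fall below a cutoff. Everything in this sketch is hypothetical; the titles, counts, and threshold are invented, and it is not Yahoo's actual system:

```python
from dataclasses import dataclass

@dataclass
class Stats:
    impressions: int   # times the headline was shown to this segment
    clicks: int        # times it was clicked

    def ctr(self) -> float:       # click-through rate as the popularity cue
        return self.clicks / self.impressions if self.impressions else 0.0

# Invented numbers for one demographic segment.
articles = {
    "Budget fix advances":  Stats(impressions=90_000, clicks=1_200),
    "Local team wins":      Stats(impressions=80_000, clicks=6_400),
    "Storm closes schools": Stats(impressions=70_000, clicks=4_900),
}

SPIKE_THRESHOLD = 0.02            # assumed cutoff for "not drawing interest"

ranked = sorted(articles.items(), key=lambda kv: kv[1].ctr(), reverse=True)
for title, stats in ranked:
    action = "keep" if stats.ctr() >= SPIKE_THRESHOLD else "spike"
    print(f"{title}: ctr={stats.ctr():.3f} -> {action}")
```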

Just five miles north at Yahoo’s rival Google, however, the news is produced in an entirely different manner. Spotlight, a popular feature on Google’s news site, is run entirely by a software algorithm which performs essentially the same duties as Ms. Ho does.

Google’s software prowls the Web looking for articles deemed interesting, employing a process that is similar to the company’s PageRank search engine ranking system to make decisions on which articles to present to readers.

In one case, software-based technologies are being used to extend the skills of a human worker; in the other, technology replaces her entirely.

Similar design decisions about how machines are used and whether they will enhance or replace human qualities are now being played out in a multitude of ways, and the real value of Watson may ultimately be in forcing society to consider where the line between human and machine should be drawn.

Indeed, for the computer scientist John Seely Brown, machines that are facile at answering questions only serve to obscure what remains fundamentally human.

“The essence of being human involves asking questions, not answering them,” he said.

source: http://www.nytimes.com/2011/02/15/science/15essay.html?_r=1&ref=science