
Timeline of Artificial Intelligence


History of Artificial Intelligence

Artificial intelligence (AI) is a profoundly challenging science, and anyone who works in it needs a grounding in computer science, psychology, and philosophy. AI is a very broad field made up of many subareas, such as machine learning and computer vision. Put simply, its goal is to make the computer think the way a person does, and that is no easy task.

To build a machine that can think, we first have to know what thinking is, and beyond that what intelligence is and how it shows itself. You would readily call a scientist intelligent; you would never say a passer-by knows nothing, and you would not dare claim a child has no intelligence; yet you would hesitate to say a machine is intelligent. So how is intelligence to be recognized? Our words, our deeds, and our ideas flow from the brain as naturally as water from a spring; can a machine manage the same, and what kind of machine would count as intelligent? Scientists have built cars, trains, airplanes, and radios, machines that imitate the functions of our bodily organs, but can the function of the human brain be imitated? So far we know only that the thing under our skulls is an organ made up of billions of nerve cells, and we understand it very little; imitating it may be the hardest task in the world.

In defining intelligence, the British scientist Alan Turing made a lasting contribution: a machine is intelligent if it can pass what is now called the Turing test. The essence of the test is that when a person, unable to see the participants, cannot tell whether a given behavior comes from a machine or from a human, the machine counts as intelligent. Do not assume this was Turing's only claim to fame. Anyone who has studied computing knows that, for computer scientists, winning the Turing Award is the equivalent of a physicist winning the Nobel Prize. Turing laid the theoretical foundations on which the computer was built; without his contributions the modern computer, to say nothing of the Internet, might never have taken shape.

Long before the computer appeared, scientists hoped to build machines that could simulate human thought. Here another outstanding figure deserves mention: the mathematician and philosopher George Boole. By giving human reasoning a precise mathematical description, he and other scientists laid down the logical structure and methods on which thinking machines rest; the logic used inside today's computers is the very logic he founded.
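Boole's central point, that reasoning can be carried out by pure calculation, is easy to see in modern code. The sketch below is an illustration added here, not part of the original entry: it verifies De Morgan's laws by checking every truth assignment.

```python
from itertools import product

# Boole's claim in miniature: a law of reasoning verified by calculation.
def de_morgan_holds():
    """Check both De Morgan laws over every truth assignment."""
    for a, b in product([False, True], repeat=2):
        if (not (a and b)) != ((not a) or (not b)):
            return False
        if (not (a or b)) != ((not a) and (not b)):
            return False
    return True

print(de_morgan_holds())  # True: the laws hold for all four assignments
```

Exhaustive checking works here because propositional logic has finitely many cases, which is exactly what makes Boolean reasoning mechanizable.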

Anyone who has studied computing will know Boole's name: the Boolean algebra we all learn was his creation. Once the computer arrived, humanity finally had a real tool for simulating aspects of human thought, and in the decades since, countless scientists have worked toward that goal. AI is no longer the preserve of a handful of researchers: nearly every university computer science department in the world studies it, and computer science students take it as a standard course. Through this sustained effort computers have come to seem remarkably clever. In a recent chess match, as everyone knows, a computer defeated the human champion; less visibly, computers now assist with, or take over, work that once belonged only to people, serving us with their speed and accuracy. AI remains a frontier discipline of computer science, and its advances have shaped programming languages and much other software.

Computing power has now been raised to unprecedented levels, and AI is set to lead the next wave of computing. Its progress is currently held back by theoretical limits and is therefore not always conspicuous, but it will surely come to affect our lives as deeply as the Internet does today.

Research related to AI began early in many parts of the world, but its real pursuit dates from the birth of the computer, when humanity first had machines with which intelligence might be realized. The English term "artificial intelligence" (AI) was coined at a conference in 1956, and the field has grown since then through the efforts of many scientists. Progress has not been as fast as we hoped, because AI's basic theory remains incomplete: we still cannot explain, at a fundamental level, why the brain can think, where that thinking comes from, or how it arises. Even so, after several decades of development, AI exerts an enormous influence on people's lives.

Let us follow the development of AI alongside that of the computer. Around 1941 the first working computers appeared, in Germany and in the United States, and from then on the way humans stored and processed information began to change fundamentally. The first machines were hardly graceful: they were bulky and temperamental, demanded air-conditioned rooms, and to make one take on a new task the wiring had to be reconnected by hand. Re-soldering thousands of connections was punishing work; next to it, today's programmers live in paradise.

Finally, in 1949, the stored-program computer arrived, and programs no longer had to be soldered into place. Because programming became so much simpler, advances in computing theory could at last give rise to a theory of artificial intelligence: people finally had a way to store information and process it automatically.
Although in hindsight these new machines could already take over parts of human intellectual work, it was not until the 1950s that people linked them with human intelligence. Norbert Wiener's research on feedback theory led him to a bold claim: all intelligent behavior is the product of feedback, of actions generated by continually feeding results back into the organism. The household toilet tank is a fine example. The water does not run forever because a device monitors the water level; when the water rises high enough, it shuts off the supply. That is feedback, specifically negative feedback. If even the fixture in our bathroom can implement feedback, then a machine should be able to implement feedback too, and thereby reproduce human intelligence in mechanical form. This idea had a major influence on early AI.
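The float-valve example can be written as a few lines of simulation. The sketch below is added for illustration, with invented parameter values: the valve opens in proportion to the gap between the target and the actual water level, a constant outflow drains the tank, and the level settles where the two balance, the signature of a negative-feedback loop.

```python
# A toilet-tank float valve as a negative-feedback loop (parameters are
# invented for this sketch). The valve opens in proportion to how far
# the level is below the target, while a constant outflow drains the
# tank; the level settles where inflow balances outflow.

def simulate_tank(target=10.0, level=0.0, gain=0.3, outflow=0.5, steps=60):
    for _ in range(steps):
        inflow = gain * max(target - level, 0.0)  # valve closes as the gap shrinks
        level = max(level + inflow - outflow, 0.0)
    return level

# Equilibrium: gain * (target - level) == outflow, i.e. level == target - outflow/gain
print(round(simulate_tank(), 2))  # settles near 8.33, just below the 10.0 setpoint
```

The point of the example is the regulation itself: no matter where the level starts, the feedback drives it toward the same equilibrium.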

In 1955, Allen Newell and Herbert Simon, with J.C. Shaw, developed the Logic Theorist, a program built around a tree structure: as it ran, it searched the tree, exploring the branches that seemed closest to a correct answer. The program holds an important place in the history of AI; its academic and public impact was enormous, and many of the methods and habits of thought we use today still descend from this 1950s program.
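The search idea described above, expanding whichever branch currently looks closest to the answer, can be sketched in a few lines. This toy best-first search is an illustration only: the tree and the heuristic are made up, and the real Logic Theorist proved logic theorems rather than searching letters.

```python
import heapq

# Toy best-first search: always expand the node whose heuristic score
# says it is closest to the goal. Tree and heuristic are invented.

def best_first(tree, root, goal, h):
    frontier = [(h(root), root)]  # (score, node); lowest score expanded first
    visited = []
    while frontier:
        _, node = heapq.heappop(frontier)
        visited.append(node)
        if node == goal:
            return visited
        for child in tree.get(node, []):
            heapq.heappush(frontier, (h(child), child))
    return visited

tree = {"A": ["B", "C"], "B": ["D"], "C": ["E", "F"]}
# Hypothetical heuristic: alphabetic distance from the goal "E"
order = best_first(tree, "A", "E", h=lambda n: abs(ord(n) - ord("E")))
print(order)  # ['A', 'C', 'E'] - branch B is never expanded
```

Note how the heuristic prunes work: the search reaches the goal without ever expanding the unpromising branch, which is exactly why tree search made theorem proving tractable in the 1950s.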

In 1956 John McCarthy, another of the field's famous scientists, convened a conference to discuss the future direction of artificial intelligence, and from then on the field's name was formally established. The meeting was not a huge success in itself, but it gave the founders of AI a chance to exchange ideas and prepared the ground for the field's later development. Afterward the emphasis of AI research shifted toward building practical systems that could solve problems on their own, with the further requirement that they be able to learn. In 1957 Newell, Shaw, and Simon developed a program called the General Problem Solver (GPS), which extended Wiener's feedback ideas and could solve a range of fairly general problems. While other scientists were busy building systems, McCarthy made a further major contribution: he created the list-processing language LISP, which many AI programs use to this day. It became almost a synonym for AI, and it is still evolving.

In 1963 MIT received support from the U.S. government and the Department of Defense for research into artificial intelligence. The government's aim was not science for its own sake but maintaining the balance with the Soviet Union during the Cold War; the motive smelled of gunpowder, yet the outcome was enormous progress in AI. Many striking programs followed, among them MIT's SHRDLU. In that decade of rapid growth, the STUDENT system could solve algebra problems, and the SIR system began to understand simple English sentences; SIR's appearance helped give rise to a new discipline, natural language processing. The expert systems that emerged in the 1970s were a major advance: for the first time they showed that a computer could stand in for a human expert in some kinds of work. As hardware performance improved, AI took on a series of important activities, from statistical data analysis to participating in medical diagnosis, and it began to change how people live. The 1970s were also a period of theoretical growth, in which computers acquired rudimentary reasoning and vision; and the decade saw the birth of another AI language, Prolog, which alongside LISP became an almost indispensable tool for AI workers. Do not imagine AI is far away from us: it is already entering our lives, and fuzzy control, decision support, and many other areas carry its imprint. Letting the machine take over simple intellectual work, freeing people for more worthwhile pursuits, is AI's stated purpose; but the endless pursuit of scientific truth is surely the deepest motive of all.
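The core mechanism of those early expert systems, if-then rules applied repeatedly until nothing new can be concluded, fits in a few lines. The sketch below is illustrative only: the rules and symptoms are invented and do not come from any real diagnostic system.

```python
# A miniature expert system: forward-chaining over if-then rules until
# no new conclusions appear. Rules and facts are invented examples.

RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"fever", "cough", "short_of_breath"}, RULES)
print("see_doctor" in derived)  # True: the two rules chain together
```

Real systems such as MYCIN added certainty factors and explanation facilities on top of this loop, but the chain-until-fixpoint idea is the same.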

Timeline of Artificial Intelligence

To 1900
Date Development
Antiquity Greek myths of Hephaestus and Pygmalion incorporated the idea of intelligent robots (such as Talos) and artificial beings (such as Galatea and Pandora).[1]
Antiquity Yan Shi presented King Mu of Zhou with mechanical men.[2]
Antiquity Sacred mechanical statues built in Egypt and Greece were believed to be capable of wisdom and emotion. Hermes Trismegistus would write "they have sensus and spiritus ... by discovering the true nature of the gods, man has been able to reproduce it." Mosaic law prohibits the use of automatons in religion.[3]
384-322 BC Aristotle described the syllogism, a method of formal, mechanical thought.
1st century Heron of Alexandria created mechanical men and other automatons.[4]
260 Porphyry of Tyros wrote Isagogê which categorized knowledge and logic.[5]
~800 Geber (Jabir ibn Hayyan) develops the Arabic alchemical theory of Takwin, the artificial creation of life in the laboratory, up to and including human life.[6]
1206 Al-Jazari created a programmable orchestra of mechanical human beings.[7]
1275 Ramon Llull, Catalan theologian invents the Ars Magna, a tool for combining concepts mechanically, based on an Arabic astrological tool, the Zairja. The method would be developed further by Gottfried Leibniz in the 17th century.[8]
~1500 Paracelsus claimed to have created an artificial man out of magnetism, sperm and alchemy.[9]
~1580 Rabbi Judah Loew ben Bezalel of Prague is said to have invented the Golem, a clay man brought to life.[10]
Early 1600s René Descartes proposed that bodies of animals are nothing more than complex machines (but that mental phenomena are of a different "substance").[11]
1623 Wilhelm Schickard created the first mechanical calculating machine.
1651 Thomas Hobbes published Leviathan and presented a mechanical, combinatorial theory of cognition. He wrote "...for reason is nothing but reckoning".[12][13]
1642 Blaise Pascal created the second mechanical and first digital calculating machine[14]
1672 Gottfried Leibniz improved the earlier machines, making the Stepped Reckoner to do multiplication and division. He also invented the binary system and envisioned a universal calculus of reasoning (alphabet of human thought) by which arguments could be decided mechanically. Leibniz worked on assigning a specific number to each and every object in the world, as a prelude to an algebraic solution to all possible problems.[15]
1727 Jonathan Swift published Gulliver's Travels, which includes this description of the Engine, a machine on the island of Laputa: "a Project for improving speculative Knowledge by practical and mechanical Operations " by using this "Contrivance", "the most ignorant Person at a reasonable Charge, and with a little bodily Labour, may write Books in Philosophy, Poetry, Politicks, Law, Mathematicks, and Theology, with the least Assistance from Genius or study."[16] The machine is a parody of Ars Magna, one of the inspirations of Gottfried Leibniz' mechanism.
1748 Julien Offray de La Mettrie published L'Homme Machine, which argued that human thought is strictly mechanical.[17]
1769 Wolfgang von Kempelen built and toured with his chess-playing automaton, The Turk. [18] The Turk was later shown to be a hoax, involving a human chess player.
1818 Mary Shelley published the story of Frankenstein; or the Modern Prometheus, a fictional consideration of the ethics of creating sentient beings.[19]
1822-1859 Charles Babbage & Ada Lovelace worked on programmable mechanical calculating machines.[20]
1837 The mathematician Bernard Bolzano made the first modern attempt to formalize semantics.
1854 George Boole set out to "investigate the fundamental laws of those operations of the mind by which reasoning is performed, to give expression to them in the symbolic language of a calculus", inventing Boolean algebra.[21]
1863 Samuel Butler suggested that Darwinian evolution also applies to machines, and speculates that they will one day become conscious and eventually supplant humanity.[22]


 1900-1950
Date Development
1913 Bertrand Russell and Alfred North Whitehead published Principia Mathematica, which revolutionized formal logic.
1915 Leonardo Torres y Quevedo built a chess automaton, El Ajedrecista and published speculation about thinking and automata.[23]
1923 Karel Čapek's play R.U.R. (Rossum's Universal Robots) opened in London. This is the first use of the word "robot" in English.[24]
1920s and 1930s Ludwig Wittgenstein and Rudolf Carnap lead philosophy into logical analysis of knowledge. Alonzo Church develops Lambda Calculus to investigate computability using recursive functional notation.
1931 Kurt Gödel showed that sufficiently powerful consistent formal systems permit the formulation of true theorems that are unprovable by any theorem-proving machine deriving all possible theorems from the axioms. To do this he had to build a universal, integer-based programming language, which is the reason why he is sometimes called the "father of theoretical computer science".
1941 Konrad Zuse built the first working program-controlled computers.[25]
1943 Warren Sturgis McCulloch and Walter Pitts publish "A Logical Calculus of the Ideas Immanent in Nervous Activity" (1943), laying foundations for artificial neural networks.[26]
1943 Arturo Rosenblueth, Norbert Wiener and Julian Bigelow coin the term "cybernetics". Wiener's popular book by that name published in 1948.
1944 Game theory, which would prove invaluable in the progress of AI, was introduced with the 1944 paper Theory of Games and Economic Behavior by mathematician John von Neumann and economist Oskar Morgenstern.
1945 Vannevar Bush published As We May Think (The Atlantic Monthly, July 1945) a prescient vision of the future in which computers assist humans in many activities.
1948 John von Neumann (quoted by E.T. Jaynes) in response to a comment at a lecture that it was impossible for a machine to think: "You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!". Von Neumann was presumably alluding to the Church-Turing thesis which states that any effective procedure can be simulated by a (generalized) computer.


 1950s
Date Development
1950 Alan Turing proposes the Turing Test as a measure of machine intelligence.[27]
1950 Claude Shannon published a detailed analysis of chess playing as search.
1950 Isaac Asimov published his Three Laws of Robotics.
1951 The first working AI programs were written in 1951 to run on the Ferranti Mark 1 machine of the University of Manchester: a checkers-playing program written by Christopher Strachey and a chess-playing program written by Dietrich Prinz.
1952-1962 Arthur Samuel (IBM) wrote the first game-playing program, for checkers (draughts), to achieve sufficient skill to challenge a world champion. His first checkers-playing program was written in 1952, and in 1955 he created a version that learned to play.[28]
1955 The Dartmouth College summer AI workshop is proposed by John McCarthy, Marvin Minsky, Nathan Rochester of IBM and Claude Shannon.
1956 The name artificial intelligence is used for the first time, as the topic of the Dartmouth Conference organized by John McCarthy[29]
1956 The first demonstration of the Logic Theorist (LT) written by Allen Newell, J.C. Shaw and Herbert Simon (Carnegie Institute of Technology, now Carnegie Mellon University). This is often called the first AI program, though Samuel's checkers program also has a strong claim.
1957 The General Problem Solver (GPS) demonstrated by Newell, Shaw and Simon.
1958 John McCarthy (Massachusetts Institute of Technology or MIT) invented the Lisp programming language.
1958 Herb Gelernter and Nathan Rochester (IBM) described a theorem prover in geometry that exploits a semantic model of the domain in the form of diagrams of "typical" cases.
1958 Teddington Conference on the Mechanization of Thought Processes was held in the UK and among the papers presented were John McCarthy's Programs with Common Sense, Oliver Selfridge's Pandemonium, and Marvin Minsky's Some Methods of Heuristic Programming and Artificial Intelligence.
1959 John McCarthy and Marvin Minsky founded the MIT AI Lab.
Late 1950s, early 1960s Margaret Masterman and colleagues at University of Cambridge design semantic nets for machine translation.


 1960s
Date Development
1960s Ray Solomonoff lays the foundations of a mathematical theory of AI, introducing universal Bayesian methods for inductive inference and prediction.
1960 Man-Computer Symbiosis by J.C.R. Licklider.
1961 James Slagle (PhD dissertation, MIT) wrote (in Lisp) the first symbolic integration program, SAINT, which solved calculus problems at the college freshman level.
1961 In Minds, Machines and Gödel, John Lucas[30] denied the possibility of machine intelligence on logical or philosophical grounds. He referred to Kurt Gödel's result of 1931: sufficiently powerful formal systems are either inconsistent or allow for formulating true theorems unprovable by any theorem-proving AI deriving all provable theorems from the axioms. Since humans are able to "see" the truth of such theorems, machines were deemed inferior.
1962 First industrial robot company, Unimation, founded.
1963 Thomas Evans' program, ANALOGY, written as part of his PhD work at MIT, demonstrated that computers can solve the same analogy problems as are given on IQ tests.
1963 Edward Feigenbaum and Julian Feldman published Computers and Thought, the first collection of articles about artificial intelligence.
1963 Leonard Uhr and Charles Vossler published "A Pattern Recognition Program That Generates, Evaluates, and Adjusts Its Own Operators", which described one of the first machine learning programs that could adaptively acquire and modify features, thereby overcoming the limitations of Rosenblatt's simple perceptrons.
1964 Danny Bobrow's dissertation at MIT (technical report #1 from MIT's AI group, Project MAC), shows that computers can understand natural language well enough to solve algebra word problems correctly.
1964 Bertram Raphael's MIT dissertation on the SIR program demonstrates the power of a logical representation of knowledge for question-answering systems.
1965 J. Alan Robinson invented a mechanical proof procedure, the Resolution Method, which allowed programs to work efficiently with formal logic as a representation language.
1965 Joseph Weizenbaum (MIT) built ELIZA, an interactive program that carries on a dialogue in English language on any topic. It was a popular toy at AI centers on the ARPANET when a version that "simulated" the dialogue of a psychotherapist was programmed.
1965 Edward Feigenbaum initiated Dendral, a ten-year effort to develop software to deduce the molecular structure of organic compounds using scientific instrument data. It was the first expert system.
1966 Ross Quillian (PhD dissertation, Carnegie Inst. of Technology, now CMU) demonstrated semantic nets.
1966 Machine Intelligence workshop at Edinburgh - the first of an influential annual series organized by Donald Michie and others.
1966 Negative report on machine translation kills much work in Natural language processing (NLP) for many years.
1967 The Dendral program (Edward Feigenbaum, Joshua Lederberg, Bruce Buchanan, Georgia Sutherland at Stanford University) is demonstrated interpreting mass spectra of organic chemical compounds: the first successful knowledge-based program for scientific reasoning.
1968 Joel Moses (PhD work at MIT) demonstrated the power of symbolic reasoning for integration problems in the Macsyma program. First successful knowledge-based program in mathematics.
1968 Richard Greenblatt (programmer) at MIT built a knowledge-based chess-playing program, MacHack, that was good enough to achieve a class-C rating in tournament play.
1968 Wallace and Boulton's program, Snob (Comp.J. 11(2) 1968), for unsupervised classification (clustering) uses the Bayesian Minimum Message Length criterion, a mathematical realisation of Occam's razor.
1969 Stanford Research Institute (SRI): Shakey the Robot, demonstrated combining animal locomotion, perception and problem solving.
1969 Roger Schank (Stanford) defined conceptual dependency model for natural language understanding. Later developed (in PhD dissertations at Yale University) for use in story understanding by Robert Wilensky and Wendy Lehnert, and for use in understanding memory by Janet Kolodner.
1969 Yorick Wilks (Stanford) developed the semantic coherence view of language called Preference Semantics, embodied in the first semantics-driven machine translation program, and the basis of many PhD dissertations since such as Bran Boguraev and David Carter at Cambridge.
1969 First International Joint Conference on Artificial Intelligence (IJCAI) held at Stanford.
1969 Marvin Minsky and Seymour Papert publish Perceptrons, demonstrating previously unrecognized limits of this feed-forward two-layered structure. This book is considered by some to mark the beginning of the AI winter of the 1970s, a failure of confidence and funding for AI. Nevertheless significant progress in the field continued (see below).
1969 McCarthy and Hayes started the discussion about the frame problem with their essay, "Some Philosophical Problems from the Standpoint of Artificial Intelligence".


 1970s
Date Development
Early 1970s Jane Robinson and Don Walker established an influential Natural Language Processing group at SRI.
1970 Jaime Carbonell (Sr.) developed SCHOLAR, an interactive program for computer assisted instruction based on semantic nets as the representation of knowledge.
1970 Bill Woods described Augmented Transition Networks (ATN's) as a representation for natural language understanding.
1970 Patrick Winston's PhD program, ARCH, at MIT learned concepts from examples in the world of children's blocks.
1971 Terry Winograd's PhD thesis (MIT) demonstrated the ability of computers to understand English sentences in a restricted world of children's blocks, in a coupling of his language understanding program, SHRDLU, with a robot arm that carried out instructions typed in English.
1971 Work on the Boyer-Moore theorem prover started in Edinburgh.[31]
1972 Prolog programming language developed by Alain Colmerauer.
1972 Earl Sacerdoti developed one of the first hierarchical planning programs, ABSTRIPS.
1973 The Assembly Robotics Group at University of Edinburgh builds Freddy Robot, capable of using visual perception to locate and assemble models. (See Edinburgh Freddy Assembly Robot: a versatile computer-controlled assembly system.)
1973 The Lighthill report gives a largely negative verdict on AI research in Great Britain and forms the basis for the decision by the British government to discontinue support for AI research in all but two universities.
1974 Ted Shortliffe's PhD dissertation on the MYCIN program (Stanford) demonstrated a very practical rule-based approach to medical diagnoses, even in the presence of uncertainty. While it borrowed from DENDRAL, its own contributions strongly influenced the future of expert system development, especially commercial systems.
1975 Earl Sacerdoti developed techniques of partial-order planning in his NOAH system, replacing the previous paradigm of search among state space descriptions. NOAH was applied at SRI International to interactively diagnose and repair electromechanical systems.
1975 Austin Tate developed the Nonlin hierarchical planning system able to search a space of partial plans characterised as alternative approaches to the underlying goal structure of the plan.
1975 Marvin Minsky published his widely-read and influential article on Frames as a representation of knowledge, in which many ideas about schemas and semantic links are brought together.
1975 The Meta-Dendral learning program produced new results in chemistry (some rules of mass spectrometry), the first scientific discoveries by a computer to be published in a refereed journal.
Mid 1970s Barbara Grosz (SRI) established limits to traditional AI approaches to discourse modeling. Subsequent work by Grosz, Bonnie Webber and Candace Sidner developed the notion of "centering", used in establishing focus of discourse and anaphoric references in Natural language processing.
Mid 1970s David Marr and MIT colleagues describe the "primal sketch" and its role in visual perception.
1976 Douglas Lenat's AM program (Stanford PhD dissertation) demonstrated the discovery model (loosely-guided search for interesting conjectures).
1976 Randall Davis demonstrated the power of meta-level reasoning in his PhD dissertation at Stanford.
1978 Tom Mitchell, at Stanford, invented the concept of Version Spaces for describing the search space of a concept formation program.
1978 Herbert Simon wins the Nobel Prize in Economics for his theory of bounded rationality, one of the cornerstones of AI known as "satisficing".
1978 The MOLGEN program, written at Stanford by Mark Stefik and Peter Friedland, demonstrated that an object-oriented programming representation of knowledge can be used to plan gene-cloning experiments.
1979 Bill VanMelle's PhD dissertation at Stanford demonstrated the generality of MYCIN's representation of knowledge and style of reasoning in his EMYCIN program, the model for many commercial expert system "shells".
1979 Jack Myers and Harry Pople at University of Pittsburgh developed INTERNIST, a knowledge-based medical diagnosis program based on Dr. Myers' clinical knowledge.
1979 Cordell Green, David Barstow, Elaine Kant and others at Stanford demonstrated the CHI system for automatic programming.
1979 The Stanford Cart, built by Hans Moravec, becomes the first computer-controlled, autonomous vehicle when it successfully traverses a chair-filled room and circumnavigates the Stanford AI Lab.
1979 BKG, a backgammon program written by Hans Berliner at CMU, defeats the reigning world champion.
1979 Drew McDermott and Jon Doyle at MIT, and John McCarthy at Stanford begin publishing work on non-monotonic logics and formal aspects of truth maintenance.
Late 1970s Stanford's SUMEX-AIM resource, headed by Ed Feigenbaum and Joshua Lederberg, demonstrates the power of the ARPAnet for scientific collaboration.


 1980s
Date Development
Early 1980s The team of Ernst Dickmanns at Bundeswehr University Munich builds the first robot cars, driving up to 55 mph on empty streets.
1980s Lisp machines developed and marketed. First expert system shells and commercial applications.
1980 First National Conference of the American Association for Artificial Intelligence (AAAI) held at Stanford.
1981 Danny Hillis designs the connection machine, which utilizes Parallel computing to bring new power to AI, and to computation in general. (Later founds Thinking Machines, Inc.)
1982 Japan's Ministry of International Trade and Industry begins the Fifth Generation Computer Systems project (FGCS), an initiative to create a "fifth generation computer" (see history of computing hardware) intended to perform large amounts of computation using massive parallelism.
1983 John Laird and Paul Rosenbloom, working with Allen Newell, complete CMU dissertations on Soar (program).
1983 James F. Allen invents the Interval Calculus, the first widely used formalization of temporal events.
Mid 1980s Neural Networks become widely used with the Backpropagation algorithm (first described by Paul Werbos in 1974).
1985 The autonomous drawing program, AARON, created by Harold Cohen, is demonstrated at the AAAI National Conference (based on more than a decade of work, and with subsequent work showing major developments).
1987 Marvin Minsky published The Society of Mind, a theoretical description of the mind as a collection of cooperating agents. He had been lecturing on the idea for years before the book came out (c.f. Doyle 1983).
1987 Around the same time, Rodney Brooks introduced the subsumption architecture and behavior-based robotics as a more minimalist modular model of natural intelligence; Nouvelle AI.
1989 Dean Pomerleau at CMU creates ALVINN (An Autonomous Land Vehicle in a Neural Network).


 1990s
Date Development
Early 1990s TD-Gammon, a backgammon program written by Gerry Tesauro, demonstrates that reinforcement learning is powerful enough to create a championship-level game-playing program, competing favorably with world-class players.
1990s Major advances in all areas of AI, with significant demonstrations in machine learning, intelligent tutoring, case-based reasoning, multi-agent planning, scheduling, uncertain reasoning, data mining, natural language understanding and translation, vision, virtual reality, games, and other topics.
1991 DART scheduling application deployed in the first Gulf War paid back DARPA's investment of 30 years in AI research.[32]
1993 Ian Horswill extended behavior-based robotics by creating Polly, the first robot to navigate using vision and operate at animal-like speeds (1 meter/second).
1993 Rodney Brooks, Lynn Andrea Stein and Cynthia Breazeal started the widely-publicized MIT Cog project with numerous collaborators, in an attempt to build a humanoid robot child in just five years.
1993 ISX corporation wins "DARPA contractor of the year"[33] for the Dynamic Analysis and Replanning Tool (DART) which reportedly repaid the US government's entire investment in AI research since the 1950s.[34]
1994 With passengers onboard, the twin robot cars VaMP and VITA-2 of Ernst Dickmanns and Daimler-Benz drive more than one thousand kilometers on a Paris three-lane highway in standard heavy traffic at speeds up to 130 km/h. They demonstrate autonomous driving in free lanes, convoy driving, and lane changes left and right with autonomous passing of other cars.
1995 Semi-autonomous ALVINN steered a car coast-to-coast under computer control for all but about 50 of the 2850 miles. Throttle and brakes, however, were controlled by a human driver.
1995 In the same year, one of Ernst Dickmanns' robot cars (with robot-controlled throttle and brakes) drove more than 1000 miles from Munich to Copenhagen and back, in traffic, at up to 120 mph, occasionally executing maneuvers to pass other cars (only in a few critical situations a safety driver took over). Active vision was used to deal with rapidly changing street scenes.
1997 The Deep Blue chess machine (IBM) beats the world chess champion, Garry Kasparov.
1997 First official RoboCup football (soccer) match featuring table-top matches with 40 teams of interacting robots and over 5000 spectators.
1998 Tiger Electronics' Furby is released, the first successful attempt to bring a type of AI into a domestic environment.
1998 Tim Berners-Lee published his Semantic Web Road map paper.[35]
1999 Sony introduces the AIBO, an improved domestic robot in the spirit of the Furby; it becomes one of the first artificially intelligent "pets" that is also autonomous.
Late 1990s Web crawlers and other AI-based information extraction programs become essential in widespread use of the World Wide Web.
Late 1990s Demonstration of an Intelligent room and Emotional Agents at MIT's AI Lab.
Late 1990s Initiation of work on the Oxygen architecture, which connects mobile and stationary computers in an adaptive network.


 2000 and Beyond
Date Development
2000 Interactive robopets ("smart toys") become commercially available, realizing the vision of the 18th century novelty toy makers.
2000 Cynthia Breazeal at MIT publishes her dissertation on Sociable machines, describing Kismet (robot), with a face that expresses emotions.
2000 The Nomad robot explores remote regions of Antarctica looking for meteorite samples.
2004 OWL Web Ontology Language W3C Recommendation (10 February 2004).
2004 DARPA introduces the DARPA Grand Challenge requiring competitors to produce autonomous vehicles for prize money.
2005 Honda's ASIMO robot, an artificially intelligent humanoid robot, is able to walk as fast as a human, delivering trays to customers in restaurant settings.
2005 Recommendation technology based on tracking web activity or media usage brings AI to marketing. See TiVo Suggestions.
2005 Blue Brain is born, a project to simulate the brain at molecular detail.
2006 The Dartmouth Artificial Intelligence Conference: The Next 50 Years (AI@50) is held, July 14-16, 2006.
2007 Philosophical Transactions of the Royal Society B, one of the world's oldest scientific journals, puts out a special issue on using AI to understand biological intelligence, titled Models of Natural Action Selection.[36]
2007 Checkers is solved by a team of researchers at the University of Alberta.

References


Tags: artificial intelligence, history of artificial intelligence