
Artificial intelligence
Ability of a machine to perform tasks thought to require
human intelligence. Typical applications include game
playing, language translation, expert systems, and robotics.
Although pseudo-intelligent machinery dates back to
antiquity, the first glimmerings of true intelligence awaited
the development of digital computers in the 1940s. AI, or at
least the semblance of intelligence, has developed in
parallel with computer processing power, which appears to
be the main limiting factor. Early AI projects, such as
playing chess and solving mathematical problems, are now
seen as trivial compared to visual pattern recognition,
complex decision making, and the use of natural language.
See also Turing test.
The subfield of computer science concerned with
understanding the nature of intelligence and constructing
computer systems capable of intelligent action. It embodies
the dual motives of furthering basic scientific
understanding and making computers more sophisticated in
the service of humanity.
Many activities involve intelligent action—problem
solving, perception, learning, planning and other symbolic
reasoning, creativity, language, and so forth—and therein
lies an immense diversity of phenomena. Scientific concern
for these phenomena is shared by many fields, for example,
psychology, linguistics, and philosophy of mind, in
addition to artificial intelligence. The starting point for
artificial intelligence is the capability of the computer to
manipulate symbolic expressions that can represent all
manner of things, including knowledge about the structure
and function of objects and people in the world, beliefs and
purposes, scientific theories, and the programs of action of
the computer itself.
Artificial intelligence is primarily concerned with symbolic
representations of knowledge and heuristic methods of
reasoning, that is, using common assumptions and rules of
thumb. Two examples of problems studied in artificial
intelligence are planning how a robot, or person, might
assemble a complicated device, or move from one place to
another; and diagnosing the nature of a person's disease, or
of a machine's malfunction, from the observable
manifestations of the problem. In both cases, reasoning
with symbolic descriptions predominates over calculating.
The approach of artificial intelligence researchers is largely
experimental, with small patches of mathematical theory.
As in other experimental sciences, investigators build
devices (in this case, computer programs) to carry out their
experimental investigations. New programs are created to
explore ideas about how intelligent action might be
attained, and are also developed to test hypotheses about
concepts or mechanisms involved in intelligent behavior.
The foundations of artificial intelligence are divided into
representation, problem-solving methods, architecture, and
knowledge. To work on a task, a computer must have an
internal representation in its memory, for example, the
symbolic description of a room for a moving robot, or a set
of features describing a person with a disease. The
representation also includes all the knowledge, including
basic programs, for testing and measuring the structure,
plus all the programs for transforming the structure into
another one in ways appropriate to the task. Changing the
representation used for a task can make an immense
difference, turning a problem from impossible to trivial.
Given the representation of a task, a method must be
adopted that has some chance of accomplishing the task.
Artificial intelligence has gradually built up a stock of
relevant problem-solving methods (the so-called weak
methods) that apply extremely generally.
An important feature of all the weak methods is that they
involve search. One of the most important generalizations
to arise in artificial intelligence is the ubiquity of search. It
appears to underlie all intelligent action. In the worst case,
the search is blind. In heuristic search extra information is
used to guide the search.
Some of the weak methods are generate-and-test (a
sequence of candidates is generated, each being tested for
solutionhood); hill climbing (a measure of progress is used
to guide each step); means-ends analysis (the difference
between the desired situation and the present one is used to
select the next step); impasse resolution (the inability to
take the desired next step leads to a subgoal of making the
step feasible); planning by abstraction (the task is
simplified, solved, and the solution used as a guide); and
matching (the present situation is represented as a schema
to be mapped into the desired situation by putting the two
in correspondence).
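As a minimal illustration, hill climbing, one of the weak methods above, can be sketched in a few lines of Python; the neighbour function and progress measure in the toy task are illustrative placeholders, not drawn from the text:

```python
# Sketch of hill climbing, one of the "weak methods": a measure of
# progress guides each step, and the search stops when no neighbouring
# candidate scores better (a local optimum).

def hill_climb(start, neighbours, score):
    """neighbours(s) yields candidate next states; score(s) is the
    progress measure used to guide each step."""
    current = start
    while True:
        best = max(neighbours(current), key=score, default=current)
        if score(best) <= score(current):
            return current          # no improving step: local optimum
        current = best

# Toy task: maximise f(x) = -(x - 7)**2 over the integers.
result = hill_climb(0,
                    neighbours=lambda x: [x - 1, x + 1],
                    score=lambda x: -(x - 7) ** 2)
print(result)  # 7
```

Note that this is exactly the weakness of the method as described: only local improvement is considered, so the search can stall on a local optimum far from the true goal.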
An intelligent agent—person or program—has multiple
means for representing tasks and dealing with them. Also
required is an architecture or operating framework within
which to select and carry out these activities. Often called
the executive or control structure, it is best viewed as a total
architecture (as in computer architecture), that is, a machine
that provides data structures, operations on those data
structures, memory for holding data structures, accessing
operations for retrieving data structures from memory, a
programming language for expressing integrated patterns of
conditional operations, and an interpreter for carrying out
programs. Any digital computer provides an architecture,
as does any programming language. Architectures are not
all equivalent, and one important scientific question is what
architecture is appropriate for a general intelligent agent.
In artificial intelligence, the basic paradigm of intelligent
action is that of search through a space of partial solutions
(called the problem space) for a goal situation. Each step
offers several possibilities, leading to a cascading of
possibilities that can be represented as a branching tree.
The search is thus said to be combinatorial or exponential.
For example, if there are 10 possible actions in any
situation, and it takes a sequence of 12 steps to find a
solution (a goal state), then there are 10^12 possible
sequences in the exhaustive search tree. What keeps the
search under control is knowledge, which suggests how to
choose or narrow the options at each step. Thus the fourth
fundamental concern is how to represent knowledge in the
memory of the system so it can be brought to bear on the
search when relevant.
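The arithmetic of the example above is easy to check: with a branching factor b and a solution d steps deep, the exhaustive tree holds b**d action sequences.

```python
# Exponential growth of exhaustive search: with b options at each of
# d steps, the tree of action sequences has b**d leaves.

def tree_size(branching: int, depth: int) -> int:
    return branching ** depth

# The example from the text: 10 possible actions, 12-step solution.
print(tree_size(10, 12))  # 1000000000000, i.e. 10**12
```

Knowledge that prunes even a few options per step cuts this number dramatically, which is why knowledge is the fourth fundamental concern.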
An intelligent agent will have immense amounts of
knowledge. This implies another major problem, that of
discovering the relevant knowledge as the solution attempt
progresses. Although this search does not include the
combinatorial explosion characteristic of searching the
problem space, it can be time consuming and hard.
However, the structure of the database holding the
knowledge (called the knowledge base) can be carefully
tailored to suit the architecture in order to make the search
efficient. This knowledge base, with its accompanying
problems of encoding and access, constitutes the final
ingredient of an intelligent system.
An example of artificial intelligence is computer
perception. Perception is the formation, from a sensory
signal, of an internal representation suitable for intelligent
processing. Though there are many types of sensory
signals, computer perception has focused on vision and
speech. Perception might seem to be distinct from
intelligence, since it involves incident time-varying
continuous energy distributions prior to interpretation in
symbolic terms. However, all the same ingredients occur:
representation, search, architecture, and knowledge. Speech
perception starts with the acoustic wave of a human
utterance and proceeds to an internal representation of what
the speech is about. A sequence of representations is used:
the digitization of the acoustic wave into an array of
intensities; the formation of a small set of parametric
quantities that vary continuously with time (such as the
intensities and frequencies of the formants, bands of
resonant energy characteristic of speech); a sequence of
phones (members of a finite alphabet of labels for
characteristic sounds, analogous to letters); a sequence of
words; a parsed sequence of words reflecting grammatical
structure; and finally a semantic data structure representing
a sentence (or other utterance) that reflects the meaning
behind the sounds.
A class of artificial intelligence programs called expert
systems attempts to accomplish tasks by acquiring and
incorporating the same knowledge that human experts
have. Many attempts to apply artificial intelligence to
medicine, government, and other socially significant tasks
take the form of expert systems. Even though the emphasis
is on knowledge, all the standard ingredients are present.
In careful tests, a number of expert systems have shown
performance at levels of quality equivalent to or better than
average practicing professionals (for example, average
practicing physicians) on the restricted domains over which
they operate. Nearly all large corporations and many
smaller ones use expert systems. A common application is
to provide technical assistance to persons who answer
customers' trouble calls. Computer companies use expert
systems to assist in configuring components from a parts
catalog into a complete system that matches a customer's
specifications, a kind of application that has been replicated
in other industries tailoring assembled products to
customers' needs. Troubleshooting and diagnostic programs
are commonplace. Another widespread use of this
technology is in software for home computers that assists
taxpayers. One important lesson learned from incorporating
artificial intelligence software into ongoing practice is that
its success depends on many other aspects besides the
intrinsic intellectual quality, for example, ease of
interaction, integration into existing workflow, and costs.
Expert systems have sparked important insights in
reasoning under uncertainty, causal reasoning, reasoning
about knowledge, and acceptance of computer systems in
the workplace. They illustrate that there is no hard
separation between pure and applied artificial intelligence;
finding what is required for intelligent action in a complex
applied area makes a significant contribution to basic
knowledge. See also Expert systems.
In addition to the subject areas mentioned above,
significant work in artificial intelligence has been done on
puzzles and reasoning tasks, induction and concept
identification, symbolic mathematics, theorem proving in
formal logic, natural language understanding and
generation, vision, robotics, chemistry, biology,
engineering analysis, computer-assisted instruction, and
computer-program synthesis and verification, to name only
the most prominent. As computers become smaller and less
expensive, more and more intelligence is built into
automobiles, appliances, and other machines, as well as
computer software, in everyday use. See also Automata
theory; Computer; Control systems; Cybernetics; Digital
computer; Intelligent machine; Robotics.

Umbrella terminology for several main categories of
research. They include natural language systems, visual and
voice recognition systems, robotic systems, and expert
systems. Artificial intelligence generally is the attempt to
build machines that think, as well as the study of mental
faculties through the use of computational models. A
reasoning process with self-correction is involved.
Significant data are evaluated, and relevant relationships,
such as the determination of a warranty reserve, are
uncovered. The computer learns which kinds of answers are reasonable
and which are not. Artificial intelligence performs
complicated strategies that compute the best or worst way
to achieve a task or avoid an undesirable result. An
example of an application is in tax planning involving tax
shelter options given the client's financial position.

Computer systems are becoming commonplace; indeed,
they are almost ubiquitous. We find them central to the
functioning of most business, governmental, military,
environmental, and health-care organizations. They are
also a part of many educational and training programs.
But these computer systems, while increasingly affecting
our lives, are rigid, complex and incapable of rapid
change. To help us and our organizations cope with the
unpredictable eventualities of an ever-more volatile world,
these systems need capabilities that will enable them to
adapt readily to change. They need to be intelligent. Our
national competitiveness depends increasingly on
capacities for accessing, processing, and analyzing
information. The computer systems used for such purposes
must also be intelligent. Health-care providers require
easy access to information systems so they can track
health-care delivery and identify the most recent and
effective medical treatments for their patients' conditions.
Crisis management teams must be able to explore
alternative courses of action and support decision making.
Educators need systems that adapt to a student's individual
needs and abilities. Businesses require flexible
manufacturing and software design aids to maintain their
leadership position in information technology, and to
regain it in manufacturing. (Grosz and Davis, 1994)
The history of artificial intelligence (AI) predates the
development of the first computing machines. On a general
level, intelligence has been the subject of philosophical
study for 2000 years. At the computational level,
mathematician Alan Turing constructed a framework for AI
during the era of analog computers.
While precise definitions are still the subject of debate, AI
may be usefully thought of as the branch of computer
science that is concerned with the automation of intelligent
behavior. The intent of AI is to develop systems that have
the ability to perceive and to learn, to accomplish physical
tasks, and to emulate human decision making. AI seeks to
design and develop intelligent agents as well as to
understand them. Currently, the main fields of research and
development include the following:
1. Natural languages: These studies focus on problems
related to natural language interface, machine
translation, understanding spoken language, and so
forth.
2. Expert systems: No generalizable solutions are
researched, but expertise is used to deal with ill-
defined problems and relationships.
3. Cognition and learning: Investigations are being made
into modes of thinking, learning, and problem solving.
4. Computer vision: Efforts are being made to develop
principles and algorithms for machine vision and the
interpretation of visual data.
5. Automatic deduction: This area deals with the
resolution of problems, theorem proving, and logic
programming.

The term "AI" was applied about 1956, giving a formal
name to work that had been developing over the previous
five or six years. Individuals and organizations have an
abiding interest in AI for several important reasons,
including the following:
1. To preserve expertise that might be lost when an
acknowledged expert is unavailable.
2. To create organizational knowledge bases so that
others may learn from past problem-solving successes.
3. To help decision makers be consistent in their
evaluation of complex problems.
During its early years AI was dominated by reliance on
logic as a means of representing knowledge and on logical
inference as the primary mechanism for intelligent
reasoning. In the 1990s other paradigms arrived on the
scene, some of which had a dramatic impact. Artificial
neural networks (ANNs) were motivated by assumptions
about how the brain functions—particularly the idea of
massively parallel connections among units that each
perform simple computational tasks. Taken together, they
represent knowledge as a property of patterns of relationships.
Genetic algorithms apply principles of biological evolution
to the problems of searching complex solution spaces. The
programs do not use logical reasoning either, but evolve
toward better and better solutions to complex problems.
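A toy sketch of the genetic-algorithm idea just described: a population evolves by selection, crossover, and mutation rather than by logical reasoning. The bit-string encoding and counting-ones fitness function here are illustrative assumptions, not taken from the text.

```python
import random

# Toy genetic algorithm: evolve 8-bit strings toward the all-ones
# string. Selection keeps the fitter half, crossover recombines
# parents, and mutation occasionally flips a bit.

LENGTH = 8

def fitness(bits):
    return sum(bits)  # number of 1s; the maximum is LENGTH

def evolve(pop_size=20, generations=60, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(LENGTH)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # selection
        children = [pop[0]]                      # elitism: keep the best
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, LENGTH)       # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:               # occasional mutation
                i = rng.randrange(LENGTH)
                child[i] = 1 - child[i]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
```

The program never reasons about why a string is good; it simply evolves toward better and better candidates, which is the point made in the text.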
Multiagent systems have recently come to the fore of AI
research. This emergence has been driven by a recognition
that intelligence may be reflected by the collective
behaviors of large numbers of very simple interacting
members of a community of agents. These agents can be
computers, software modules, or virtually any object that
can perceive aspects of its environment and proceed in a
rational way toward accomplishing a goal.
A variety of disciplines have influenced the development of
AI. These include philosophy (logic), mathematics
(intractability, computability, algorithms), psychology
(cognition), engineering (computer hardware and software),
and linguistics (knowledge representation and natural-
language processing).
Long before the development of computers, the notion that
thinking was a form of computation motivated the
formalization of logic. These efforts continue today. Graph
theory provided the architecture for searching a solution
space for a problem. Operations research, with its focus on
optimization algorithms, used graph theory and other
methods to solve complex decision-making problems.
In 1950, Alan Turing proposed what has become known as
the Turing Test for defining intelligent behavior. The idea
was to specify requirements that a computer would have to
exhibit in order to demonstrate intelligence. Briefly, the
Turing Test proposes that the computer should be
interrogated via telecommunications by a human.
Intelligence is exhibited by the computer if the interrogator
cannot tell whether there is a human or a computer at the
other end. In order to pass the test, a computer would need
to have capabilities for natural-language processing,
knowledge representation, automated reasoning, and
machine learning.
An Evolution of Applications
While computer systems have become commonplace, they
are generally rigid, complex, and incapable of rapid
change. According to A Report to ARPA on Twenty-First
Century Intelligent Systems, for us and our organizations to
cope with the unpredictable eventualities of an ever-more
volatile world, these systems need capabilities that will
enable them to adapt readily to change. The report argues
that our national competitiveness depends increasingly on
capacities for accessing, processing, and analyzing
information (Grosz and Davis, 1994).
One of the early milestones in AI was Newell and Simon's
General Problem Solver (GPS). The program was designed
to imitate human problem-solving methods. This and other
developments such as Logic Theorist and the Geometry
Theorem Prover generated enthusiasm for the future of AI.
Simon went so far as to assert that in the near-term future
the problems that computers could solve would be
coextensive with the range of problems to which the human
mind has been applied.
Soon difficulties in achieving this objective began to
manifest themselves. In scaling up from earlier successes,
problems of intractability were encountered. A search for
alternative approaches led to attempts to solve typically
occurring cases in narrow areas of expertise. This prompted
the development of expert systems. A seminal model was
MYCIN, developed to diagnose blood infections. Having
about 450 rules, MYCIN was able to perform as well as
many experts. This and other expert-systems research led to
the first commercial expert system, R1, implemented at
Digital Equipment Corporation (DEC) to help configure
orders for new computer systems. Subsequent to its
implementation, R1 was estimated to save DEC about $40
million a year.
Other classic systems include the PROSPECTOR program
for determining the probable location and type of ore
deposits and the INTERNIST program for performing
medical diagnosis in internal medicine.
The Future
A Report to ARPA on Twenty-First Century Intelligent
Systems identified four types of systems that will have a
substantial impact on applications: intelligent simulation,
intelligent information resources, intelligent project
coaches, and robot teams (Grosz and Davis, 1994).
Intelligent simulations generate realistic simulated worlds
that enable extensive affordable training and education that
can be made available any time and anywhere. Examples
may be hurricane crisis management, exploration of the
impacts of different economic theories, tests of products on
simulated customers, and technological design—testing
features through simulation that would cost millions of
dollars to test using an actual prototype.
Intelligent information resources systems (IRSS) will
enable easy access to information related to a specific
problem. For instance, a rural doctor whose patient presents
with a rare condition might use IRSS to help assess
different treatments or identify new ones. An educator
might find relevant background materials, including
information about similar courses taught elsewhere.
Intelligent project coaches (IPC) could function as co-
workers, assisting and collaborating with design or
operations teams for complex systems. Such systems could
remember and recall the rationale of previous decisions
and, in times of crisis, explain the methods and reasoning
previously used to handle that situation. An IPC for aircraft
design, for example, could enhance collaboration by
keeping communication flowing among the large,
distributed design staff, the program managers, the
customer, and the subcontractors.
Robot teams could contribute to manufacturing by
operating in a dynamic environment with minimal
instrumentation, thus providing the benefits of economies
of scale. They could also participate in automating
sophisticated laboratory procedures that require sensing,
manipulation, planning, and transport.
Artificial Intelligence, a branch of computer science that
seeks to create a computer system capable of sensing the
world around it, understanding conversations, learning,
reasoning, and reaching decisions, just as would a human.
In 1950 the pioneering British mathematician Alan Turing
proposed a test for artificial intelligence in which a human
subject tries to talk with an unseen conversant. The tester
sends questions to the machine via teletype and reads its
answers; if the subject cannot discern whether the
conversation is being held with another person or a
machine, then the machine is deemed to have artificial
intelligence. No machine has come close to passing this
test, and it is unlikely that one will in the near future.
Researchers, however, have made progress on specific
pieces of the artificial intelligence puzzle, and some of their
work has had tangible benefits.
One area of progress is the field of expert systems, or
computer systems designed to reproduce the knowledge
base and decision-making techniques used by experts in a
given field. Such a system can train workers and assist in
decision making. MYCIN, a program developed in 1976 at
Stanford University, suggests possible diagnoses for
patients with infectious blood diseases, proposes
treatments, and explains its "reasoning" in English.
Corporations have used such systems to reduce the labor
costs involved in repetitive calculations. A system used by
American Express since November 1988 to advise when to
deny credit to a customer saves the company millions of
dollars annually.
A second area of artificial intelligence research is the field
of artificial perception, or computer vision. Computer
vision is the ability to recognize patterns in an image and to
separate objects from background as quickly as the human
brain. In the 1990s military technology initially developed
to analyze spy-satellite images found its way into
commercial applications, including monitors for assembly
lines, digital cameras, and automotive imaging systems.
Another pursuit in artificial intelligence research is natural
language processing, the ability to interpret and generate
human languages. In this area, as in others related to
artificial intelligence research, commercial applications
have been delayed as improvements in hardware—the
computing power of the machines themselves—have not
kept pace with the increasing complexity of software.
The field of neural networks seeks to reproduce the
architecture of the brain—billions of connected nerve cells
—by joining a large number of computer processors
through a technique known as parallel processing. Fuzzy
systems is a subfield of artificial intelligence research based
on the assumption that the world encountered by humans is
fraught with approximate, rather than precise, information.
Interest in the field has been particularly strong in Japan,
where fuzzy systems have been used in disparate
applications, from operating subway cars to guiding the
sale of securities. Some theorists argue that the technical
obstacles to artificial intelligence, while large, are not
insurmountable. A number of computer experts,
philosophers, and futurists have speculated on the ethical
and spiritual challenges facing society when artificially
intelligent machines begin to mimic human personality
traits, including memory, emotion, and consciousness.
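The degrees-of-truth idea behind fuzzy systems can be sketched briefly: instead of a crisp true/false, each value receives a degree of membership between 0 and 1. The triangular "warm" temperature set below is a hypothetical example, not one of the applications named in the text.

```python
# Fuzzy membership: a value belongs to a set to a degree in [0, 1]
# rather than absolutely. "Warm" here is a hypothetical triangular
# membership function over temperatures in degrees Celsius.

def warm(temp_c, low=15.0, peak=25.0, high=35.0):
    """0 outside [low, high], rising to 1 at peak, then falling."""
    if temp_c <= low or temp_c >= high:
        return 0.0
    if temp_c <= peak:
        return (temp_c - low) / (peak - low)
    return (high - temp_c) / (high - peak)

print(warm(25))  # 1.0: fully "warm"
print(warm(20))  # 0.5: only partially "warm"
print(warm(40))  # 0.0: not "warm" at all
```

A fuzzy controller combines many such graded memberships, which is what lets it act sensibly on the approximate information the text describes.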
artificial intelligence (AI), the use of computers to model
the behavioral aspects of human reasoning and learning.
Research in AI is concentrated in some half-dozen areas. In
problem solving, one must proceed from a beginning (the
initial state) to the end (the goal state) via a limited number
of steps; AI here involves an attempt to model the
reasoning process in solving a problem, such as the proof
of a theorem in Euclidean geometry. In game theory (see
games, theory of), the computer must choose among a
number of possible "next" moves to select the one that
optimizes its probability of winning; this type of choice is
analogous to that of a chess player selecting the next move
in response to an opponent's move. In pattern recognition,
shapes, forms, or configurations of data must be identified
and isolated from a larger group; the process here is similar
to that used by a doctor in classifying medical problems on
the basis of symptoms. Natural language processing is an
analysis of current or colloquial language usage without the
sometimes misleading effect of formal grammars; it is an
attempt to model the learning process of a translator faced
with the phrase "throw mama from the train a kiss."
Cybernetics is the analysis of the communication and
control processes of biological organisms and their
relationship to mechanical and electrical systems; this study
could ultimately lead to the development of "thinking"
robots (see robotics). Machine learning occurs when a
computer improves its performance of a task on the basis of
its programmed application of AI principles to its past
performance of that task.
In the public eye advances in chess-playing computer
programs have become symbolic of progress in AI. In 1948
British mathematician Alan Turing developed a chess
algorithm for use with calculating machines; it lost to an
amateur player in the one game that it played. Ten years
later American mathematician Claude Shannon articulated
two chess-playing algorithms: brute force, in which all
possible moves and their consequences are calculated as far
into the future as possible; and selective mode, in which
only the most promising moves and their more immediate
consequences are evaluated. In 1988 Hitech, a program
developed at Carnegie-Mellon Univ., defeated former U.S.
champion Arnold Denker in a four-game match, becoming
the first computer to defeat a grandmaster. A year later,
Garry Kasparov, the reigning world champion, bested Deep
Thought, a program developed by the IBM Corp., in a two-
game exhibition. In 1990 the German computer Mephisto
Portorose became the first program to defeat a former world
champion; while playing an exhibition of 24 simultaneous
games, Anatoly Karpov bested 23 human opponents but
lost to the computer. Kasparov in 1996 became the first
reigning world champion to lose to a computer in a game
played with regulation time controls; the Deep Blue
computer, developed by the IBM Corp., won the first game
of the match, lost the second, drew the third and fourth, and
lost the fifth and sixth. Deep Blue used the brute force
approach, evaluating more than 100 billion chess positions
each turn while looking six moves ahead; it coupled this
with the most efficient chess evaluation software yet
developed and an extensive library of chess games it could
analyze as part of the decision process. Subsequent matches
between Vladimir Kramnik and Deep Fritz (2002, 2006)
and Kasparov and Deep Junior (2003) have resulted in two
ties and a win for the programs. Unlike Deep Blue, which
was a specially designed computer, these more recent
computer challengers are chess programs that run on
powerful personal computers. Such programs have become
an important tool in chess, and are used by chess masters to
analyze games and experiment with new moves.
Artificial intelligence may be said to have begun in 1950
when Claude Shannon of the Bell Telephone Laboratories
in the United States wrote an ingenious program that was to
be the forerunner of all chess-playing machines. This work
drastically changed the accepted perception of stored-
program computers which, since their birth in 1947, had
been seen just as automatic calculating machines.
Shannon's program added the promise of automated
intelligent action to the actuality of automated calculation.

In Shannon's program the programmer stores in the
computer the value of important features of board
positions. A 'checkmate' being a winning position would
have the highest value and the capture of more or less
important pieces would be given relatively lower values.
So, supposing the computer is to make the next move, it would
(by being programmed to follow the rules of the game)
work out all the possible moves that the opponent might
take. It could then work out which moves are available to
itself at the next playing period and so on for several
periods ahead in the search for a winning path through this
'tree' of possibilities. Sadly, the amount of computation
needed to evaluate board positions grows prodigiously the
further ahead the computer is meant to look. This process
of searching through a large number of options became
central in AI programs throughout the 1960s and the early
1970s. Other intelligent tasks besides game playing came
under scrutiny: general problem solving, the control of
robots, computer vision, speech processing, and the
understanding of natural language. Solving general
problems requires searches that are similar to those in the
playing of board games. For example, to work out how to
get from an address in London to the Artificial Intelligence
laboratory at the University of Edinburgh, the problem can
be represented as a search among subgoals (e.g. get to
Edinburgh airport) and the use of 'means' such as airlines,
taxis, or railways. The paths through the scheme are
evaluated in terms of the reduction of cost and/or the
reduction of time to the user. Robot control is similar. The
physical rearrangement of objects in a space has to follow a
strategy that involves the most efficient path between a
current arrangement and the desired one, via several
intermediate ones.
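Shannon's lookahead scheme (score positions by their features, then back the values up the tree of possibilities, assuming the opponent replies as well as it can) later became known as minimax search. A minimal sketch, with an invented toy game standing in for chess; the `moves` and `evaluate` functions are placeholders, not real chess rules:

```python
# Minimax lookahead: score leaf positions with an evaluation
# function, then back values up the game tree, with the two players
# alternately maximising and minimising the score.

def minimax(position, depth, moves, evaluate, maximising=True):
    options = moves(position)
    if depth == 0 or not options:
        return evaluate(position)   # leaf: fall back on stored values
    values = (minimax(p, depth - 1, moves, evaluate, not maximising)
              for p in options)
    return max(values) if maximising else min(values)

# Toy game: a position is a number; a move adds 1 or doubles it;
# the evaluation of a position is simply the number itself.
best = minimax(1, 3,
               moves=lambda n: [n + 1, n * 2],
               evaluate=lambda n: n)
print(best)  # 6
```

The cost of the search is visible even here: each extra level of depth multiplies the number of positions to evaluate by the branching factor, which is the prodigious growth the text goes on to describe.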
Computer vision and the recognition of speech required the
programmer to determine which features of the sensory
input, generated as signals from a video camera or a
microphone, are important. For example, for face
recognition, the program has to identify the central
positions of eyes, nose, and mouth and then measure the
size of these objects and the distances between them. These
measurements for a collection of faces are stored in a
database, each together with the identity of the face. If a
face in the known set were then presented to the camera, it
could be identified by finding the closest fit to the
measurements stored in the database. Similarly the features of voices
and the sound of words could be stored in databases for the
purpose of eventual recognition.
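The closest-fit identification described above can be sketched as a nearest-neighbour lookup; the names and facial measurements below are invented for illustration.

```python
import math

# Hypothetical feature vectors: (eye separation, nose length, mouth width) in cm.
known_faces = {
    "Alice": (6.2, 4.8, 5.0),
    "Bob":   (6.8, 5.5, 5.4),
    "Carol": (5.9, 4.5, 4.7),
}

def identify(measurements, database):
    """Return the stored identity whose measurements are the closest fit."""
    return min(database,
               key=lambda name: math.dist(measurements, database[name]))

print(identify((6.0, 4.6, 4.8), known_faces))  # closest to Carol's entry
```

The same scheme works for voices: replace the facial measurements with acoustic features and keep the nearest-fit lookup unchanged.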

Perhaps the most ambitious target for AI designers was the extraction of meaning from language. This goes beyond speech recognition, since the sentences could equally be presented in written form. The difficulty is for the programmer to
find rules that distinguish between sentences such as 'he
broke the window with a rock' and 'he broke the window
with a curtain'. This required storing long lists of words indicating whether they denote instruments (rock) or embellishments (curtain), so that the correct meaning could be ascribed to them as they appear in a sentence.

However, early enthusiasm that AI computers could perform tasks comparable to those of humans was curtailed by the mid-1970s, when it became clear that the techniques in use suffered from serious limitations. In 1971, the British mathematician Sir James
Lighthill advised the major science-funding agency in the
United Kingdom that AI was suffering from something he
called the 'combinatorial explosion' which has been
mentioned above in the chess-playing example. Every time
the computer needs to look a further step ahead, the number
of moves to be evaluated is that of the previous level
multiplied by a large amount. In 1980, US philosopher
John Searle levelled a second criticism at those who had
claimed that they had enabled computers to understand
natural language. Through his celebrated 'Chinese Room'
argument he pointed out that the computer, by stubbornly
following rules, was like a non-Chinese speaker using a
massive set of rules to match questions expressed in
Chinese symbols about a story also written in Chinese
symbols. Given the time to examine many rules, the non-
Chinese speaker could find the correct answers in Chinese
symbols, without there being any understanding in the
procedure. According to Searle, understanding requires a
feeling of 'aboutness' for words and phrases which
computers do not have. Also a third difficulty began to
emerge: artificial intelligence depended too heavily on
programmers having to work out in detail how to specify
intelligent tasks. In pattern recognition, for example, ideas
about how to recognize faces, scenes, and sounds turned
out to be inadequate, particularly with respect to human
performance.
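The 'combinatorial explosion' that Lighthill identified is easy to demonstrate numerically; the branching factor of 30 moves per chess position used below is a rough, illustrative figure.

```python
# With roughly 30 legal moves per chess position, each extra ply of
# look-ahead multiplies the number of positions to evaluate by about 30.
BRANCHING = 30
for plies in range(1, 7):
    print(f"{plies} plies ahead: {BRANCHING ** plies:,} positions")
```

By six plies the count is already in the hundreds of millions, which is why exhaustive search ahead quickly outruns any amount of computing power.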
These censures had a healthy effect on AI. The 1980s saw a
maturing of the field through the appearance of new
methodologies dubbed knowledge-based systems, expert
systems, and artificial neural networks (or connectionism).
Effort in knowledge-based systems used formal logic to
greater effect than before. The application of the logical
rules of inheritance and resolution made more efficient use
of knowledge stored in databases. For example, 'Socrates is
a man' and 'All men are mortals' could lead to the
knowledge that 'Socrates is mortal' by logical inference
rather than by explicit storage, thus easing the problem of
holding vast amounts of data in databases.
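The Socrates example can be sketched as a single inference rule applied to a small fact base; the (predicate, subject) encoding is an illustrative choice, not a standard knowledge-representation language.

```python
def infer_mortals(facts):
    """Apply the rule 'All men are mortal': for every x recorded as a man,
    derive ('mortal', x) by inference rather than by explicit storage."""
    derived = set(facts)
    for predicate, subject in facts:
        if predicate == "man":
            derived.add(("mortal", subject))
    return derived

kb = {("man", "Socrates"), ("philosopher", "Socrates")}
print(("mortal", "Socrates") in infer_mortals(kb))  # True -- derived, not stored
```

The mortality fact never has to be held in the database; it is reconstructed on demand, which is the storage saving the text describes.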

Expert systems was the name given to applications of AI which sought to transfer human expertise into knowledge
bases so as to make such knowledge widely available to
non-experts. This employed a 'knowledge engineer' who
elicited knowledge from the expert and structured it
appropriately for inclusion in a database. Facts and rules
were clearly distinguished to enable them to be logically
manipulated. Typical applications are in engineering design
and fault finding, medical diagnosis and advice, and
financial advice.
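The separation of facts from rules can be sketched in a few lines; the fault-finding rules below are invented for illustration, not taken from any real expert system.

```python
# Facts and rules are kept distinct so that the rules can be applied
# mechanically to whatever facts the non-expert user reports.
RULES = [
    ({"no power", "fuse blown"}, "replace the fuse"),
    ({"no power"},               "check the mains lead"),
    ({"overheating"},            "clean the cooling vents"),
]

def advise(facts):
    """Fire every rule whose conditions are all present among the facts."""
    return [advice for conditions, advice in RULES if conditions <= facts]

print(advise({"no power", "fuse blown"}))
# -> ['replace the fuse', 'check the mains lead']
```

The knowledge engineer's job, in this picture, is to elicit the RULES list from the expert; the program that applies them stays the same across domains.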

The aim of artificial neural network studies is to simulate mechanisms which, in the brain, are responsible for mind
and intelligence. An artificial neuron learns to respond or
not to respond ('fire' or not) to a pattern of signals from
other neurons to which it is connected. A multi-layered
network of such devices can learn (by automatically
adjusting the strengths of the interconnections in the
network) to classify patterns by learning to extract
increasingly telling features of such patterns as data
progresses through the layers. The presence of learning
overcomes some of the difficulties previously due to the
programmer having to decide exactly how to recognize
complex visual and speech patterns. Also, a totally different
class of artificial neural networks may be used to store and
retrieve knowledge. Known as dynamic neural networks,
such systems rely on the inputs of neurons being connected
to the outputs of other neurons. This allows the net to be
taught to keep a pattern of firing activity stable at its
outputs. It can also learn to store sequences of patterns.
These stable states or sequences are the stored knowledge
of the network which may be retrieved in response to some
starting state or a set of inputs also connected to the
neurons in the net. So in terms of pattern recognition, not
only can these networks learn to label patterns, but also
'know' what things look like in terms of neural firing
patterns.
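The weight-adjustment idea can be sketched with a single artificial neuron (a perceptron) learning when to 'fire'; this minimal example learns the OR pattern and is far simpler than the multi-layered networks described above.

```python
# A single neuron learns to fire (output 1) or not (output 0) by nudging
# its connection strengths whenever its response disagrees with the target.
def train(samples, epochs=20, rate=0.1):
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            fired = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            error = target - fired
            w[0] += rate * error * x1
            w[1] += rate * error * x2
            bias += rate * error
    return w, bias

samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(samples)
fire = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([fire(x1, x2) for (x1, x2), _ in samples])  # learns to fire like OR
```

No one tells the neuron which weights to use; the learning rule finds them, which is the point the text makes about removing the programmer's burden of explicit specification.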

Despite the above two major phases in the history of artificial intelligence, the subject is still developing,
particularly in three domains: new techniques for creating
intelligent programs, using computers to understand the
complexities of brain and mind, and, finally, contributing to
philosophical debate. The techniques added to the AI
repertoire are evolutionary programming, artificial life, and
intelligent software agents. Evolutionary programming
borrows from biological evolution, in the sense that some variants of a program may have a better performance
than others. It is possible to represent the design parameters
of a system as a sequence of values resembling a
chromosome. An evolutionary program tests a range of
systems against a performance criterion (the fitness
function). It chooses the chromosomes of various system
pairs that have good fitness behaviour to combine them and
create a new generation of systems. This gives rise to
increasingly able systems, even to the extent that their designs hold surprises for expert designers. Such
mimicking of a major mechanism of biological life leads to
the concept of artificial life. For example, UK entrepreneur
Steve Grand includes in his 'Creatures' game simulations of
some biochemical processes to produce societies of virtual
(computer-bound but observable) creatures with realistic
life cycles and social interactions. This allows the game
player to take care of a virtual creature in a game that gets
close to the problems of survival in real life. The more
general study of intelligent software agents takes virtual
creatures into domains where they could perform useful
tasks such as finding desired data on the internet. They are
little programs that store the needs of a user and trawl the
World Wide Web for this desired information. There is also burgeoning interest in societies of such agents, to discover how cooperation between them may lead to the solution of problems in distributed domains. Translated to
multiple interacting robots, agent studies lead to a better
understanding of flocking behaviour and the way that this
achieves goals for the flock.

A better understanding of the brain flows from the study of artificial neural networks (ANNs). Accepting that the brain
is the most complex machine in existence, ANNs are now
being used to isolate some of its structural features in order
to begin to understand their interactions. For example, it
has been possible to suggest a theoretical basis for
understanding dyslexia, visual hallucinations under the
influence of drugs, and the nature of visual awareness in
general. The latter and grander ambition feeds a
philosophical debate on whether machines could think like
humans that has paralleled AI for its entire existence. The
question was first raised by British mathematician Alan
Turing in 1950. His celebrated test was based on the
external behaviour of an AI machine and its ability to fool a
human interlocutor into thinking that it too was human.
This debate has now moved on to discuss whether a
machine could ever be conscious. The main arguments
against this come from a belief that consciousness, being a
'first-person' phenomenon, cannot be approached from the
'third-person' position which is inherent in all man-made
designs. The contrary arguments are put by those who feel
that by simulating with great care the function and structure
of the brain it will be possible both to understand the
mechanisms of consciousness and to transfer them to a
machine.

The science of making machines do the sorts of things that are done by human minds. Such things include holding a
conversation, answering questions sensibly on the basis of
incomplete knowledge, assembling another machine from
its components given the blueprint, learning how to do
things better, playing chess, writing or translating stories,
understanding analogies, neurotically repressing knowledge
that is too threatening to admit consciously, learning to
classify visual or auditory patterns, composing a poem or a
sonata, and recognizing the various things seen in a room
— even an untidy and ill-lit room. AI helps one to realize
how enormous is the background knowledge and thinking
(computational) power needed to do even these everyday
things.

The 'machines' in question are typically digital computers, but AI is not the study of computers. Rather, it is the study
of intelligence in thought and action. Computers are its
tools, because its theories are expressed as computer
programs which are tested by being run on a machine.
Some AI programs are lists of symbolic rules (if this is the
case then do that, else do another ...). Others specify
'brainlike' networks made of many simple, interconnected,
computational units. These types of AI are called
traditional (or classical) and connectionist, respectively.
They have differing, and largely complementary, strengths
and weaknesses.

Other theories of intelligence are expressed verbally, either as psychological theories of thinking and behaviour, or as
philosophical arguments about the nature of knowledge and
purpose and the relation of mind to body (the mind–body
problem). Because it approaches the same subject matter in
different ways, AI is relevant to psychology and the
philosophy of mind.

Similarly, attempts to write programs that can interpret the two-dimensional image from a TV camera in terms of the
three-dimensional objects in the real world (or which can
recognize photographs or drawings as representations of
solid objects) help make explicit the range and subtlety of
knowledge and unconscious inference that underlie our
introspectively 'simple' experiences of seeing. Much of this
knowledge is tacit (and largely innate) knowledge about the
ways in which, given the laws of optics, physical surfaces
of various kinds can give rise to specific visual images on a
retina (or camera). Highly complex computational
processes are needed to infer the nature of the physical
object (or of its surfaces), on the basis of the two-
dimensional image.

If we think of an AI system as a picture of a part of the mind, we must realize that a functioning program is more
like a film of the mind than a portrait of it. Programming
one's hunches about how the mind works is helpful in two
ways. First, it enables one to express richly structured
psychological theories in a rigorous, and testable, fashion.
Second, it forces one to suggest specific hypotheses about
precisely how a psychological change can come about.
Even if (as in connectionist systems: see below) one only
provides a learning rule, rather than telling the AI system
precisely what to learn, that rule has to be rigorously
expressed; a different rule will lead to different
performance.

In general, it is easier to model logical and mathematical reasoning (which people find difficult) than to simulate
high-level perception or language understanding (which we
do more or less effortlessly). Significant progress has been
made, for instance, in recognizing keywords and
grammatical structure, and AI programs can even come up
with respectable, though juvenile, puns and jokes. But
many sentences, and jokes, assume a large amount of world
knowledge, including culture-specific knowledge about
sport, fashion, politics, soap operas ... the list is literally
endless. There is little or no likelihood that an actual AI
system could use language as well as we can, because it is
too difficult to provide, and to structure, the relevant
knowledge (much of it is tacit, and very difficult to bring
into consciousness). But this need not matter, if all we want
is a psychological theory that explains how these human
capacities are possible. Similarly, research in AI has shown
that highly complex, and typically unconscious,
computational processes are needed to infer the nature of
physical objects from the image reaching the retina/camera.

Traditional philosophical puzzles connected with the mind–body problem can often be illuminated by AI, because
modelling a psychological phenomenon on a computer is a
way of showing that and how it is possible for that
phenomenon to arise in a physical system. For instance,
people often feel that only a spiritual being (as opposed to a
bodily one) could have purposes and try to achieve them,
and the problem then arises of how the spiritual being, or
mind, can possibly tell the body what to do, so that the
body's hand can try to achieve the mind's purpose of, say,
picking a daisy. It is relevant to ask whether, and how, a
program can enable a machine to show the characteristic
features of purpose. Is its behaviour guided by its idea of a
future state? Is that idea sometimes illusory or mistaken (so
that the 'daisy' is made of plastic, or is really a buttercup)?
Does it symbolize what it is doing in terms of goals and
subgoals (so that the picking of the daisy may be
subordinate to the goal of stocking the classroom nature
table)? Does it use this representation to help plan its
actions (so that the daisies on the path outside the
sweetshop are picked, rather than those by the petrol
station)? Does it vary its means–end activities so as to
achieve its goal in different circumstances (so that
buttercups will do for the nature table if all the daisies have
died)? Does it learn how to do so better (so that daisies for
a daisy-chain are picked with long stalks)? Does it judge
which purposes are the more important, or easier to
achieve, and behave accordingly (if necessary, abandoning
the daisy picking when a swarm of bees appears with an
equally strong interest in the daisies)? Questions like these,
asked with specific examples of functioning AI systems in
mind, cannot fail to clarify the concept of purpose.
Likewise, philosophical problems about the nature and
criteria of knowledge can be clarified by reference to
programs that process and use knowledge, so that AI is
relevant to epistemology.

AI is concerned with mental processing in general, not just with mathematics and logical deduction. It includes
computer models of perception, thought, motivation, and
emotion. Emotion, for instance, is not just a feeling:
emotions are scheduling mechanisms that have evolved to
enable finite creatures with many potentially conflicting
motives to choose what to do, when. (No matter how
hungry one is, one had better stop eating and run away if
faced by a tiger.) So a complex animal is going to need
some form of computational interrupt, and some way of
'stacking' and re-alerting those unfulfilled intentions that
shouldn't, or needn't, be abandoned. In human language
users, motivational–emotional processing includes
deliberately thought–out plans and contingency plans, and
anticipation of possible outcomes from the various actions
being considered.
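The interrupt-and-'stacking' mechanism described above can be sketched with a priority queue of motives; the motives and urgency values are invented for illustration.

```python
import heapq

def schedule(motives):
    """Serve motives most-urgent-first; the less urgent ones stay 'stacked'
    on the queue instead of being abandoned."""
    queue = [(-urgency, name) for urgency, name in motives]  # max-heap via negation
    heapq.heapify(queue)
    order = []
    while queue:
        _, name = heapq.heappop(queue)
        order.append(name)
    return order

# The tiger interrupts eating, but eating is resumed later, not forgotten.
print(schedule([(3, "eat"), (9, "flee the tiger"), (5, "drink")]))
# -> ['flee the tiger', 'drink', 'eat']
```

This is, of course, a toy: a fuller model would let urgencies change while the agent acts, so that fleeing can itself be interrupted.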

One important variety of AI is connectionism, or artificial neural networks. Very few connectionist systems are
implemented in fundamentally connectionist hardware.
Most are simulated (as virtual machines) in digital
computers. That is, the program does not list a sequence of
symbolic rules but simulates many interconnected
'neurons', each of which does only very simple things.
Connectionism enables a type of learning wherein the 'weights' on the connections between units in the network are gradually altered until recognition errors are minimized. Unlike
learning in classical AI, the unfamiliar pattern need not be
specifically described to the system before it can be learnt;
however, it must be describable in the 'vocabulary' used for
the system's input. Connectionism allows that beliefs and
perceptions may be grounded on partly inconsistent
evidence, and that most concepts are not strictly defined in
terms of necessary and sufficient conditions. Many
connectionist systems represent a concept as a pattern of
activity across the whole network; the units eventually
settle into a state of maximum, though not necessarily
perfect, equilibrium. Connectionism is a powerful way of
implementing pattern recognition and the 'intuitive'
association of ideas. But it is very limited for implementing
hierarchical structure of sequential processes, such as are
involved in deliberate planning. Some AI research aims to
develop 'hybrid' systems combining the strengths of
traditional and connectionist AI. Certainly, the full range of
adult human psychology cannot be captured by either of
these approaches alone.

The main areas of AI include natural language understanding (see speech recognition by machine),
machine vision (see pattern recognition), problem solving
and game playing (see computer chess), robotics, automatic
programming, and the development of programming
languages. Among the practical applications most recently
developed or currently being developed are medical
diagnosis and treatment (where a program with specialist
knowledge of, say, bacterial infections answers the
questions of and elicits further relevant information from a
general practitioner who is uncertain which drug to
prescribe in a given case); prediction of share prices on the
stock exchange; assessment of creditworthiness; speech
analysis and speech synthesis; the composition of music,
including jazz improvisation; location of mineral deposits,
such as gold or oil; continuous route planning for car
drivers; programs for playing chess, bridge, or Go, etc.;
teaching some subject such as geography, or electronics, to
students with differing degrees of understanding of the
material to be explored; the automatic assembly of factory-
made items, where the parts may have to be inspected first
for various types of flaw and where they need not be
accurately positioned at a precise point in the assembly
line, as is needed for the automation in widespread use
today; and the design of complex systems, whether
electrical circuits or living spaces or some other, taking into
account factors that may interact with each other in
complicated ways (so that a mere 'checklist' program would
not be adequate to solve the design problem).
An area closely related to AI is artificial life (A-life). This
is a form of mathematical biology. It uses computational
concepts and models to study (co-)evolution and self-
organization, both of which apply to life in general, and to
explain specific aspects of living things — such as
navigation in insects or flocking in birds. (The dinosaurs in
Jurassic Park were computer generated using simple A-life
algorithms.) One example of A-life is evolutionary
robotics, where the robot's neural network 'brain' and/or
sensorimotor anatomy is not designed by hand but evolved
over thousands of generations. The programs make random
changes in their own rules, and a fitness function is applied,
either automatically or manually, to select the best from the
resulting examples; these are then used to breed the next
generation. Some A-life scientists, but not all, accept
'strong' A-life: the view that a virtual creature, defined by
computer software, could be genuinely alive. And some
believe that A-life could help us to find an agreed definition
of what 'life' is. All the minds we know of are embodied in
living things, and some people argue that only a living
thing could have a mind, or be intelligent. If that is right,
then success in AI cannot be achieved without success in
A-life. (In both cases, however, 'success' might be
interpreted either as merely showing mindlike/lifelike
behaviour or as being genuinely intelligent/alive.)

The social implications of AI are various. As with all technologies, there are potential applications which may
prove bad, good, or ambiguous in human terms. A
competent medical diagnosis program could be very useful,
whereas a competent military application would be horrific
for those at the receiving end, and a complex data-handling
system could be well or ill used in many ways by
individuals or governments. Then there is the question of
what general implication AI will be seen to have for the
commonly held 'image of man'. If it is interpreted by the
public as implying that people are 'nothing but clockwork,
really', then the indirect effects on self-esteem and social
relations could be destructive of many of our most deeply
held values. But it could (and should) be interpreted in a
radically different and less dehumanizing way, as showing
how it is possible for material systems (which, according to
the biologist, we are) to possess such characteristic features
of human psychology as subjectivity, purpose, freedom,
and choice. The central theoretical concept in AI is
representation, and AI workers ask how a (programmed)
system constructs, adapts, and uses its inner representations
in interpreting and changing its world. On this view, a
programmed computer may be thought of as a subjective
system (subject to illusion and error much as we are)
functioning by way of its idiosyncratic view of the world.
By analogy, then, it is no longer scientifically disreputable,
as it has been thought to be for so long, to describe people
in these radically subjective terms also. AI can therefore
counteract the dehumanizing influence of the natural
sciences that has been part of the mechanization of our
world picture since the scientific revolution of the 16th and
17th centuries.

Since the mid-1980s, there has been sustained development of the core ideas of artificial intelligence, e.g.
representation, planning, reasoning, natural language
processing, machine learning, and perception. In addition,
various subfields have emerged, such as research into
agents (autonomous, independent systems, whether in
hardware or software), distributed or multi-agent systems,
coping with uncertainty, affective computing/models of
emotion, and ontologies (systems for representing various kinds of entities in the world). These achievements, while new advances, are conceptually and methodologically continuous with the field of artificial intelligence as envisaged at the time of its modern genesis: the Dartmouth conference of 1956.

However, a substantial and growing proportion of research into artificial intelligence, while often building on the
foundations just mentioned, has shifted its emphasis. This
change in emphasis, inasmuch as it constitutes a conceptual
break with those foundations, promises to make substantial
contributions to our understanding and concepts of mind. It
remains to be seen whether these contributions will replace
or (as may seem more likely) merely supplement those
already provided by what might be termed the 'Dartmouth
approach' and its direct successors.

The new developments, which have their roots in the cybernetics work of the 1940s and 1950s as much as, if not
more than, they do in mainstream AI, can be divided into
two broad areas: adaptive systems and embodied/situated
approaches. This is not to say that they are exclusive; much
promising work, such as the field of evolutionary robotics,
combines elements of both areas.
Adaptive systems
The 1980s saw a rise in the popularity of both neural networks
(sometimes also called connectionist models) and genetic
algorithms. Neural networks are systems comprising
thousands or more of (usually simulated) simple processing
units; the computational result of the network is determined
by the input and the connections between the units, which
may vary their ability to pass a signal from one unit to the
next. Nearly all of these networks are adaptive in that they
can learn. Learning typically consists in finding a set of
connections that will make the network give the right
output for each input in a given training set.

Genetic algorithms produce systems that perform well on some tasks by emulating natural selection. An initial
random population of systems (whose properties are
determined by a few parameters) is ranked according to
their performance on the task; only the best performers are
retained (selection). A new population is created by
mutating or combining the parameters of the winners
(reproduction and variation). Then the cycle repeats.
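The selection, reproduction, and variation cycle can be sketched as follows; the task (matching a fixed target chromosome) and all the parameters are invented for illustration.

```python
import random

random.seed(0)  # fixed seed so the sketch runs deterministically
TARGET = [1, 0, 1, 1, 0, 1, 0, 1]   # invented 'ideal' chromosome

def fitness(chromosome):
    """Count the genes that agree with the target (the fitness function)."""
    return sum(g == t for g, t in zip(chromosome, TARGET))

def breed(a, b):
    """Combine two parent chromosomes at a random cut, then mutate one gene."""
    cut = random.randrange(len(a))
    child = a[:cut] + b[cut:]            # reproduction (crossover)
    flip = random.randrange(len(child))
    child[flip] = 1 - child[flip]        # variation (mutation)
    return child

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(60):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]            # selection: keep the best performers
    children = [breed(random.choice(parents), random.choice(parents))
                for _ in range(10)]
    population = parents + children
best = max(population, key=fitness)
print(fitness(best), "of", len(TARGET), "genes correct")
```

Because the best performers are carried forward unchanged, fitness never falls from one generation to the next, and the population converges on the target without the target's structure ever being programmed in explicitly.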

Although the importance of learning had been acknowledged since the earliest days of AI, these two
approaches, despite their differences, had a common effect
of making adaptivity absolutely central to AI.

While machine learning assumed conceptual building blocks with which to build learned structures, neural
networks allowed for subsymbolic learning: the acquisition
of the conceptual 'blocks' themselves, in a way that cannot
be understood in terms of logical inference, and that may
involve a continuous change of parameters, rather than
discrete steps of accepting or rejecting sentences as being
true or false. By allowing systems to construct their own
'take' on the world, AI researchers were able to begin
overcoming the obstacles that were thrown up when they
attempted to put adult human conceptual structures into
systems that were quite different from us.

Standard AI methodology for giving some problem-solving capability to a machine had at first been: think about how
you would solve the problem, write down the steps of your
solution in a computer language, give the program to the
machine to run. This was refined and extended in several
ways. For example, the knowledge engineering approach
asks an expert about the important facts of the domain,
translates these into sentences in a knowledge
representation language, gives these sentences to the
machine, and lets the machine perform various forms of
reasoning by manipulating these sentences. But it remained
the case that, in these extensions of the basic AI
methodology, the machine was limited to using the
programmer's or expert's way of representing the world. By
using adaptive approaches like artificial evolution, AI
systems are no longer limited to solutions that humans can
conceptualize — in fact the evolved or learned solutions
are often inscrutable. Our concepts and intuitions might not
be of much use in getting a six-legged robot to walk; our
introspection might even lead us astray concerning the
workings of our own minds. For both reasons, genetic
algorithms are an impressive addition to the AI
methodological toolbox.
However, along with these advantages come limitations.
There is a general consensus that the simple, incremental
methods of the adaptive approaches, while giving relatively
good results for tasks closely related to perception and
action, cannot scale up to tasks that require sophisticated,
abstract, and conceptual abilities. Give a system some
symbols and some rules for combining them, and it can
potentially produce an infinite number of well-formed
symbol structures — a feature that parallels human
competence. But a neural network that has learned to
produce a set of complex structures will usually fail to
generalize this into a systematic competence to construct an
infinite number of novel combinations. Genetic algorithms
have similar limitations to their 'scaling up'. But even if
these obstacles are overcome, and systems with advanced
forms of mentality are created by these means, the very fact
that we shall not have imposed our own concepts on them
may render their behaviour itself inexplicable. What we do
not need is another mind we cannot understand! With
respect to AI's goal of adding to our understanding of the
mind, adaptive (especially evolved) systems may be as
much a part of the problem as a part of the solution (see
section 2). And technological AI is also hindered if the
systems it produces cannot be understood well enough to
be trusted for use in the real world.
Embodied and situated systems
Embodied and situated approaches to AI
investigate the role that the body and its sensorimotor
processes (as opposed to symbols or representations on
their own) can and do play in intelligent behaviour.
Intelligence is viewed as the capacity for real-time, situated
activity, typically inseparable from and often fully
interleaved with perception and action. Further, it is by
having a body that a system is situated in the world, and
can thus exploit its relations to things in the world in order
to perform tasks that might previously have been thought to
require the manipulation of internal representations or data
structures. For an example of embodied intelligence,
suppose a child sees something of interest in front of him,
points to it, turns his head back to get his mother's
attention, and then returns his gaze to the front. He does not
need to have some internal representation that stores the
eye, neck, torso, etc. positions necessary to gaze on the
item of interest; the child's arm itself will indicate where
the child should look; the child's exploitation of his own
embodiment obviates the need for him to store and access a
complex inner symbolic structure. For an example of
situated problem solving, suppose another child is solving a
jigsaw puzzle. The child does not need to look at each piece
intently, forming an internal representation of its shape, and
then when all pieces have been examined, close her eyes
and solve the puzzle in her head! Rather, the child can
manipulate the pieces themselves, making it possible for
her to perceive whether two of them will fit together. If
nature has sometimes used these alternatives to complex
inner symbol processing, then AI can (perhaps must) as
well.

There is a cluster of other AI approaches that, while properly distinct from embodiment and situatedness, are
nevertheless their natural allies.
(i) Some researchers have found it useful to turn away from
discontinuous, atemporal, logic-based formalisms and
instead use the continuous mathematics of change offered
by dynamical systems theory as a way to characterize and
design intelligent systems.
(ii) Some researchers have claimed that AI should,
whenever possible, build systems working in the real
world, with, for example, real cameras receiving real light,
instead of relying on ray-traced simulations of light; a real-
world AI system might exploit aspects of a situation we are
not aware of and which we therefore do not incorporate in
our simulations.
(iii) Some insist that AI should concentrate on building
complete working systems, with simple but functioning and
interacting perceptual, reasoning, learning, action, etc.
systems, rather than working on developed yet isolated
competences, as has been the method in the past.

Architectures
A change of emphasis common to both the more and less traditional varieties of AI is a move away
from a search for specific algorithms and representations,
and toward a search for the architectures that support
various forms of mentality. An architecture specifies how
the various components of a system, which may in fact be
representations or algorithms, fit together and interact in
order to yield a working system. Thus, an architecture-
based approach can render irrelevant many debates over
which algorithm or representational scheme is 'best'.

The general problem of simulating (or creating) intelligence has been broken down into a number of
specific sub-problems. These consist of particular traits or
capabilities that researchers would like an intelligent
system to display. The traits described below have received
the most attention.[11]

There is no established unifying theory or paradigm that
guides AI research. Researchers disagree about many
issues.[76] A few of the longest-standing questions that
have remained unanswered are these: should artificial
intelligence simulate natural intelligence, by studying
psychology or neurology? Or is human biology as
irrelevant to AI research as bird biology is to aeronautical
engineering?[77] Can intelligent behavior be described using
simple, elegant principles (such as logic or optimization)?
Or does it necessarily require solving a large number of
completely unrelated problems?[78] Can intelligence be
reproduced using high-level symbols, similar to words and
ideas? Or does it require "sub-symbolic" processing?[79]
Cybernetics and brain simulation
Main articles: Cybernetics and Computational neuroscience

There is no consensus on how closely the brain should be
simulated.
In the 1940s and 1950s, a number of researchers explored
the connection between neurology, information theory, and
cybernetics. Some of them built machines that used
electronic networks to exhibit rudimentary intelligence,
such as W. Grey Walter's turtles and the Johns Hopkins
Beast. Many of these researchers gathered for meetings of
the Teleological Society at Princeton University and the
Ratio Club in England.[24] By 1960, this approach was
largely abandoned, although elements of it would be
revived in the 1980s.
Symbolic
Main article: Good old fashioned artificial intelligence
When access to digital computers became possible in the
middle 1950s, AI research began to explore the possibility
that human intelligence could be reduced to symbol
manipulation. The research was centered in three
institutions: CMU, Stanford and MIT, and each one
developed its own style of research. John Haugeland named
these approaches to AI "good old fashioned AI" or
"GOFAI".[80]
Cognitive simulation
Economist Herbert Simon and Allen Newell studied human
problem solving skills and attempted to formalize them,
and their work laid the foundations of the field of artificial
intelligence, as well as cognitive science, operations
research and management science. Their research team
used the results of psychological experiments to develop
programs that simulated the techniques that people used to
solve problems. This tradition, centered at Carnegie Mellon
University, would eventually culminate in the development
of the Soar architecture in the mid-1980s.[81][82]
Logic based
Unlike Newell and Simon, John McCarthy felt that machines did
not need to simulate human thought, but should instead try
to find the essence of abstract reasoning and problem
solving, regardless of whether people used the same
algorithms.[77] His laboratory at Stanford (SAIL) focused on
using formal logic to solve a wide variety of problems,
including knowledge representation, planning and learning.
[83]
Logic was also the focus of the work at the University of
Edinburgh and elsewhere in Europe which led to the
development of the programming language Prolog and the
science of logic programming.[84]
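The core of logic programming can be sketched in miniature as backward chaining over Horn clauses. The sketch below is propositional only (real Prolog adds variables and unification), and the rules and facts are made-up illustrations.

```python
# Horn clauses: each head maps to a list of alternative bodies,
# read as  head :- body1, body2, ...
rules = {
    "mortal": [["human"]],
    "human": [["greek"]],
}
facts = {"greek"}

def prove(goal):
    # Backward chaining: a goal holds if it is a known fact, or if
    # every subgoal in some body of a matching rule can be proved.
    if goal in facts:
        return True
    return any(all(prove(sub) for sub in body)
               for body in rules.get(goal, []))
```

A query such as `prove("mortal")` succeeds by chaining backward through `human` to the known fact `greek`, which is the same search that a Prolog interpreter performs over its clause database.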
"Anti-logic" or "scruffy"
Researchers at MIT (such as Marvin Minsky and
Seymour Papert)[85] found that solving difficult
problems in vision and natural language processing
required ad-hoc solutions – they argued that there was
no simple and general principle (like logic) that would
capture all the aspects of intelligent behavior. Roger
Schank described their "anti-logic" approaches as
"scruffy" (as opposed to the "neat" paradigms at CMU
and Stanford).[78] Commonsense knowledge bases
(such as Doug Lenat's Cyc) are an example of
"scruffy" AI, since they must be built by hand, one
complicated concept at a time.[86]
Knowledge based
When computers with large memories became
available around 1970, researchers from all three
traditions began to build knowledge into AI
applications.[87] This "knowledge revolution" led to the
development and deployment of expert systems
(introduced by Edward Feigenbaum), the first truly
successful form of AI software.[35] The knowledge
revolution was also driven by the realization that
enormous amounts of knowledge would be required
by many simple AI applications.
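A production-rule engine of the kind inside classic expert systems can be sketched as forward chaining: fire every rule whose conditions are satisfied until no new facts appear. The rules below are invented for illustration and stand in for the hand-built knowledge the text describes.

```python
def forward_chain(facts, rules):
    # Forward chaining: repeatedly fire any rule whose conditions
    # all hold, adding its conclusion, until a full pass over the
    # rules produces nothing new.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and set(conditions) <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical diagnostic rules, built by hand like the
# knowledge bases described above.
rules = [
    (["fever", "cough"], "flu-suspected"),
    (["flu-suspected"], "recommend-rest"),
]
result = forward_chain({"fever", "cough"}, rules)
```

The engine itself is tiny; as the text notes, the effort in real expert systems went into accumulating the enormous body of rules, one concept at a time.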
