Notes for a debate on artificial intelligence
[The notes below were prepared in response to the
italicized questions posed by the students of the Framingham State College
Computer Science Club. Other debate participants were Yiling Chen (Economics
and Business Administration) and Rene Leblanc (Biology).]
See definitions, appended, of the AI Hypothesis, the
Physical Symbol System Hypothesis, and the notions of Strong and Weak AI.
I see the question of what true AI is as falling under
the more general question of what intelligence is. At one time intelligence was
defined as the ability to think, deduce, or process symbols and ideas at a
certain level. This is being replaced by a many-dimensional idea of what intelligence
is. There's emotional intelligence and social intelligence. Some researchers
argue that you don't have intelligence, or even language, without a social
component: all our intelligence is created and defined by our interactions with
other people.
Is intelligence characterized by deduction (like one person thinking), by two-way interaction (like two people talking), or by multi-way interaction? Some of us see it
as multi-way interactive, that is, social.
Is there disembodied intelligence? No.
Is there intelligence outside a physical context? No.
Is natural evolution intelligent? Yes, there is
adaptation and learning in the natural evolution of species.
Are insects intelligent? No, but insect colonies can be. They can adapt to an
environment and can build complex structures without a blueprint.
Intelligence can be associated with emergent behavior, self-organization, and
stigmergy.
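The colony-level adaptation described above can be pictured in code. The following is a toy sketch (not from the notes) of the classic "double bridge" stigmergy scenario, with made-up pheromone and evaporation parameters: ants pick one of two routes in proportion to the pheromone on each, and the shorter route, allowing more trips per unit time, gets reinforced until a collective choice emerges with no central plan.

```python
import random

def double_bridge(steps=2000, seed=42):
    """Toy stigmergy model: agents choose a route with probability
    proportional to its pheromone level.  The short route permits more
    deposits per unit time, so positive feedback selects it -- an
    emergent, colony-level decision without any blueprint."""
    random.seed(seed)
    pher = {"short": 1.0, "long": 1.0}   # initial pheromone, equal
    length = {"short": 1, "long": 2}     # long route costs twice the time
    for _ in range(steps):
        total = pher["short"] + pher["long"]
        route = "short" if random.random() < pher["short"] / total else "long"
        pher[route] += 1.0 / length[route]   # deposit per unit of route length
        for r in pher:                       # evaporation keeps levels bounded
            pher[r] *= 0.999
    return pher

p = double_bridge()
```

After a couple of thousand steps, the short route holds far more pheromone than the long one; no individual agent "decided" this.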
The Turing test, proposed by Alan Turing, says
that a system is intelligent if it can imitate a human answering written
questions in such a way that a human judge could not tell that the artificial
system is not a human. The research team that I belong to proposes
a modified Turing test that requires all answers to be subject to follow-up, so
that we can go down a path of questions with an artificial or human subject.
This interaction (as opposed to the algorithm by which a system would answer a
single question) is necessary to distinguish an intelligent system from one
that lacks intelligence.
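The modified test's key requirement, that every answer can be followed up, can be sketched as a loop. This is an illustrative sketch only; `subject` and `follow_up` are hypothetical stand-ins (any callable mapping text to text), not a real API.

```python
def interview(subject, opening_question, follow_up, depth=3):
    """One follow-up chain of the modified Turing test: each answer is
    fed to `follow_up`, which derives the next question from the whole
    transcript so far, so the questioner can go down a path of
    questions rather than judge isolated answers."""
    transcript = []
    question = opening_question
    for _ in range(depth):
        answer = subject(question)
        transcript.append((question, answer))
        question = follow_up(transcript)
    return transcript

# Toy usage: an echoing 'subject' and a follow-up that probes the last answer.
echo = lambda q: f"I would say: {q}"
probe = lambda t: f"Why do you claim '{t[-1][1]}'?"
chain = interview(echo, "Can machines think?", probe, depth=2)
```

The point of the loop is that the transcript, not any single answer, is what gets judged.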
The idea that machines can in principle simulate thinking is
called Weak AI. The idea that machines could actually think is called Strong
AI. I accept Strong AI. I disagree with writers like John Searle who say that
thinking is specific to humans. I agree with the Physical Symbol System
Hypothesis of Allen Newell and Herbert Simon, that symbol manipulation at a
sufficiently high level, regardless of the hardware, is intelligence.
If thinking and feeling can be defined precisely, then
the philosophical question comes up as follows: Is simulated thinking at a
sufficiently high level a kind of thinking? Is simulated feeling a kind of
feeling? If nobody can tell the difference between thinking and simulated
thinking by observing their results, then what's the difference? My opinion is
that if we can't tell the difference between the thinking-type behavior of a
machine and a human, then we have to admit the machine's thinking. The same is
true for feeling. We have to judge thinking and feeling by observable
interactive behavior.
The MIT researcher Rodney Brooks proposed that a vacuum cleaner
that rolls around a room by itself and finds dust would be the next big
achievement in AI. Now that is on the market. That's the level at which AI
exists today. I don't consider the Paper Clip assistant of MS Office to be intelligent.
The Old AI was expert systems that applied rules of deduction to make
recommendations, for example for treating an illness or deciding where to
drill for oil. That line of research was not very productive.
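The core mechanism of those expert systems, applying rules of deduction, can be shown in a few lines. This is a minimal forward-chaining sketch with made-up rules, not any particular system's rule base:

```python
def forward_chain(facts, rules):
    """Minimal forward chaining, the engine at the heart of classic
    expert systems: repeatedly fire any rule whose premises are all
    known facts, adding its conclusion, until nothing new follows."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and set(premises) <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Illustrative medical-style rules, invented for the example:
rules = [
    (["fever", "cough"], "flu_suspected"),
    (["flu_suspected", "high_risk"], "recommend_antivirals"),
]
derived = forward_chain(["fever", "cough", "high_risk"], rules)
```

Chained deduction like this is all such systems did: no learning, no adaptation, just rules written by hand, which is part of why the line of research stalled.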
Can AI be designed? No, it must be evolved. It is too complex to design.
Intelligence and its creation tend to be decentralized.
No. Software does not have rights. As soon as we have a system
that can be easily copied, there is no point in giving rights to a physical
entity that embodies that system. If the system is injured or
destroyed, it can be instantly restored. The right to exist forever is
guaranteed by the technology, so it needs no protection in law.
Definitions
Artificial Intelligence Hypothesis
“The conjecture that
every aspect of learning or any other feature of intelligence can in principle be so precisely
described that a machine can be made to simulate it”
(John McCarthy, in the proposal for the 1956 Dartmouth conference)
Physical symbol
system hypothesis
"A physical symbol system has the necessary and sufficient means for general intelligent action." (Newell and Simon, 1976)
Physical grounding hypothesis
"the grounding of symbols in the physical world is a necessary condition for building a system that is intelligent." (Weiss, Sen, 200x)
Strong AI
“the claim that some forms of artificial intelligence can truly reason
and solve problems; strong AI states that it is
possible for machines to become sapient, or self-aware, but may or may not exhibit human-like
thought processes. The term strong AI was originally coined by John Searle,
who writes:
"according to strong AI, the computer is not
merely a tool in the study of the mind; rather, the appropriately programmed
computer really is a mind" (J. Searle in Minds, Brains and Programs.
The Behavioral and Brain Sciences, vol. 3, 1980).
“In
contrast, weak AI
refers to the use of software to study the behavioristic and pragmatic view of
intelligence. In weak AI, there is not the claim for software actually being
intelligent, but just being a tool we use to assess hypotheses regarding the
nature of intelligence.
“What
distinguishes strong from weak AI is that in strong AI the computer becomes a
conscious mind, not simply an intelligent, problem-solving device. The
distinction is philosophical and does not mean that devices that demonstrate
weak AI are necessarily weaker or less good at solving problems than devices
that demonstrate strong AI."
(Wikipedia
entry, ca. 2005. The entry was later modified.)