Monday, October 5, 2009
The Turing Test
One of the earliest papers to address the question of machine intelligence, specifically in relation to modern digital computers, was written in 1950 by Alan Turing. The Turing test measures the performance of an allegedly intelligent machine against that of a human being, arguably the best and only standard for intelligent behaviour. The test, which Turing called 'the imitation game', places the machine and a human counterpart in rooms apart from a human interrogator. The interrogator cannot see or speak directly to either of them, does not know which is the computer and which the human, and may communicate with them only through text entered at a terminal. The interrogator is asked to distinguish the machine from the human solely on the basis of their answers to questions posed through the terminal. If the interrogator cannot distinguish between the machine and the human, then, Turing argues, the machine may be considered intelligent.
Strategies for state space search - Data-driven and Goal-driven
Goal-driven search is suggested if:
- A goal is given in the problem statement or can easily be formulated. In a mathematics theorem prover, for example, the goal is the theorem to be proved. Many diagnostic systems consider potential diagnoses in a systematic fashion, confirming or eliminating them using goal-driven reasoning.
- There are a large number of rules that match the facts of the problem and thus produce an increasing number of conclusions or goals. In a mathematics theorem prover, for example, the number of rules needed to establish a given theorem is usually far smaller than the total number of rules that may be applied to the entire set of axioms; early selection of a goal eliminates most of these branches.
- Problem data are not given and must be acquired by the problem solver. In a medical diagnosis program, for example, a wide range of diagnostic tests can be applied. Doctors order only those that are necessary to confirm or deny a particular hypothesis.
Data-driven search is suggested if:
- All or most of the data are given in the initial problem statement. Interpretation problems often fit this mold by presenting a collection of data and asking the system for an interpretation.
- There are a large number of potential goals, but only a few ways to use the facts and given information of a particular problem instance.
- It is difficult to form a goal or hypothesis.
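The two strategies above correspond to backward chaining (goal-driven) and forward chaining (data-driven) over a rule base. The following is a minimal illustrative sketch in Python; the rules and fact names are invented for the example, not taken from the text:

```python
# A toy rule base: (premises, conclusion) pairs. These diagnostic rules
# are hypothetical examples, not part of the original text.
RULES = [
    ({"fever", "rash"}, "measles"),
    ({"fever", "cough"}, "flu"),
    ({"measles"}, "contagious"),
]

def forward_chain(facts):
    """Data-driven: start from the given facts and fire every rule whose
    premises are satisfied, until no new conclusion can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts):
    """Goal-driven: start from the goal and work backwards through rules
    whose conclusion matches it, recursively establishing the premises."""
    if goal in facts:
        return True
    return any(
        all(backward_chain(p, facts) for p in premises)
        for premises, conclusion in RULES
        if conclusion == goal
    )
```

For example, `forward_chain({"fever", "rash"})` derives every reachable conclusion from the data, while `backward_chain("contagious", {"fever", "rash"})` examines only the rules relevant to the single hypothesis; that pruning is exactly why a goal-driven strategy pays off when many rules match the facts but few lead to the goal.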
Potted History of AI
1943- McCulloch & Pitts: Boolean circuit model of brain
1950 -Turing’s “Computing Machinery and Intelligence”
1950s- Early AI programs, including Samuel’s checkers program,
Newell & Simon’s Logic Theorist, Gelernter’s Geometry Engine
1956 -Dartmouth meeting: “Artificial Intelligence” adopted
1965- Robinson’s complete algorithm for logical reasoning
1966–74 AI discovers computational complexity
Neural network research almost disappears
1969–79 Early development of knowledge-based systems
1980–88 Expert systems industry booms
1988–93 Expert systems industry busts: “AI Winter”
1985–95 Neural networks return to popularity
1988– Resurgence of probability; general increase in technical depth
“Nouvelle AI”: ALife, GAs, soft computing
1995– Agents, agents, everywhere . . .
AI versus human intelligence enhancement
I do not think it plausible that Homo sapiens will continue into the indefinite future, thousands or millions or billions of years, without any mind ever coming into existence that breaks the current upper bound on intelligence. If so, there must come a time when humans first face the challenge of smarter-than-human intelligence. If we win the first round of the challenge, then humankind may call upon smarter-than-human intelligence with which to confront later rounds.
Perhaps we would rather take some other route than AI to smarter-than-human intelligence - say, augment humans instead? To pick one extreme example, suppose that one says: The prospect of AI makes me nervous. I would rather that, before any AI is developed, individual humans are scanned into computers, neuron by neuron, and then upgraded, slowly but surely, until they are super-smart; and that is the ground on which humanity should confront the challenge of superintelligence.
We are then faced with two questions: Is this scenario possible? And if so, is this scenario desirable? (It is wiser to ask the two questions in that order, for reasons of rationality: we should avoid getting emotionally attached to attractive options that are not actually options.)
Let us suppose an individual human is scanned into a computer, neuron by neuron, as proposed in Moravec (1988). It necessarily follows that the computing capacity used considerably exceeds the computing power of the human brain. By hypothesis, the computer runs a detailed simulation of a biological human brain, executed in sufficient fidelity to avoid any detectable high-level effects from systematic low-level errors. Any accident of biology that affects information-processing in any way, we must faithfully simulate to sufficient precision that the overall flow of processing remains isomorphic. To simulate the messy biological computer that is a human brain, we need far more useful computing power than is embodied in the messy human brain itself.
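To get a feel for the scale involved, here is a rough back-of-the-envelope estimate of the brain's raw processing rate. The figures below are commonly cited order-of-magnitude assumptions, not facts from the text, and are uncertain by orders of magnitude:

```python
# Loose order-of-magnitude estimate of synaptic events per second in a
# human brain. All three constants are rough assumed values.
NEURONS = 1e11              # roughly a hundred billion neurons
SYNAPSES_PER_NEURON = 1e3   # order 10^3 to 10^4 synapses per neuron
FIRING_RATE_HZ = 1e2        # order 10^2 spikes per second

ops_per_second = NEURONS * SYNAPSES_PER_NEURON * FIRING_RATE_HZ
print(f"~{ops_per_second:.0e} synaptic events per second")
```

This yields roughly 10^16 synaptic events per second. The paragraph's point is that a faithful neuron-by-neuron simulation cannot use one machine operation per synaptic event: each event must be modeled in biophysical detail, multiplying the required computing power by a large overhead factor.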
I do not assign strong confidence to the assertion that Friendly AI is easier than human augmentation, or that it is safer. There are many conceivable pathways for augmenting a human. Perhaps there is a technique which is easier and safer than AI, which is also powerful enough to make a difference to existential risk. If so, I may switch jobs. But I did wish to point out some considerations which argue against the unquestioned assumption that human intelligence enhancement is easier, safer, and powerful enough to make a difference.
Sunday, October 4, 2009
Artificial Intelligence - Introduction
Artificial intelligence may be defined as the branch of computer science that is concerned with the automation of intelligent behavior. As a branch of computer science, it must rest on sound theoretical and applied principles of that field. These principles include the data structures used in knowledge representation, the algorithms needed to apply that knowledge, and the languages and programming techniques used in their implementation.
However, this definition suffers from the fact that intelligence itself is not well defined or understood. Although most of us are certain that we know intelligent behaviour when we see it, it is doubtful that anyone could come close to defining intelligence in a way that would be specific enough to help in the evaluation of a supposedly intelligent computer program, while still capturing the vitality and complexity of the human mind.
Thus the problem of defining AI becomes one of defining intelligence itself. Is intelligence a single faculty, or is it just a name for a collection of distinct and unrelated abilities?
Because of its scope and ambition, AI defies simple definition. For the time being, we will simply define it as the collection of problems and methodologies studied by AI researchers. This definition may seem silly and meaningless, but it makes an important point: AI, like every science, is a human endeavor, and is perhaps best understood in that context.