Talk:Strong AI vs. Weak AI/Removed text

Weak/Narrow/Applied AI

An example of weak AI software would be a chess program such as Deep Blue. Unlike a strong AI, a weak AI does not achieve self-awareness or demonstrate a wide range of human-level cognitive abilities; at best, it is an intelligent, narrowly specialized problem-solver.

Found a place for Drew McDermott in synthetic intelligence ---- CharlesGillingham (talk) 05:16, 8 December 2007 (UTC)

Others note that Deep Blue is merely a powerful heuristic search tree, and that claims of its "thinking" about chess are similar to claims of a single cell's "thinking" about protein synthesis: both are unaware of anything at all, and both merely follow a program that has been encoded within them. Many of these critics are proponents of Weak AI, claiming that machines can never be truly intelligent, while Strong AI proponents reply that true self-awareness and thought as we know it may require a specific kind of "program" designed to observe and take into account the processes of one's own brain. Some evolutionary psychologists point out that humans may have developed just such a program, especially strongly, for the purpose of social interaction or perhaps even deception.
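The "heuristic search tree" characterization can be made concrete with a sketch. The following is a minimal minimax game-tree search, the core idea behind engines of Deep Blue's kind (the real machine added alpha-beta pruning, opening books, and custom hardware); evaluate and moves here are hypothetical stand-ins for an evaluation function and a move generator, not Deep Blue's actual code.

```python
# Minimal minimax game-tree search: score a position by recursively
# assuming each side picks its best continuation.
# `evaluate` and `moves` are assumed, illustrative callables.

def minimax(position, depth, maximizing, evaluate, moves):
    """Return a heuristic score for `position`, searching `depth` plies."""
    children = moves(position)
    if depth == 0 or not children:
        return evaluate(position)      # leaf: fall back on the heuristic
    scores = (minimax(c, depth - 1, not maximizing, evaluate, moves)
              for c in children)
    return max(scores) if maximizing else min(scores)
```

Nothing in such a program refers to chess as such; it mechanically scores and compares encoded positions, which is precisely the sense in which critics deny that it "thinks".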

Synthetic intelligence

Found uses for all the text in this section. ----CharlesGillingham (talk) 02:29, 8 December 2007 (UTC)

Artificial consciousness

When discussing the possibility of Strong AI, issues arise about the nature of the 'Mind-Body Distinction' and the role of symbolic computation. John Searle and most others involved in this debate address whether a machine that works solely through the transformation of encoded data could be a mind, not the wider issue of monism versus dualism (i.e., whether a machine of any type, including biological machines, could contain a mind).

Searle states in his Chinese room argument that information processors carry encoded data that describe other things. The encoded data are themselves meaningless without a cross-reference to the things they describe. This leads Searle to assert that there is no meaning or understanding in an information processor itself, and hence that even a machine that passed the Turing test would not necessarily be conscious in the human sense.
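As a toy illustration of Searle's point (a sketch only, with an invented rulebook, not a model of any real system): a program can produce contextually appropriate replies by pure symbol lookup, without any access to what the symbols mean.

```python
# A toy "Chinese room": symbols in, symbols out, driven entirely by a
# lookup table. The rulebook is invented for illustration; nothing in
# the program represents what the symbols mean.

RULEBOOK = {
    "你好": "你好吗?",   # a greeting mapped to a greeting in return
    "谢谢": "不客气",     # thanks mapped to "you're welcome"
}

def room(symbols: str) -> str:
    """Transform input symbols by rule, with no cross-reference to meaning."""
    return RULEBOOK.get(symbols, "请再说一遍")  # default: "please say it again"

print(room("你好"))  # a plausible reply, with zero understanding
```

On Searle's view, scaling the rulebook up until the room passes the Turing test would add no understanding, only more rules.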

Some philosophers hold that if Weak AI is possible, then Strong AI must also be possible. Daniel C. Dennett argues in Consciousness Explained that if there is no magic spark or soul, then Man is just a machine, and he asks why the Man-machine should have a privileged position over all other possible machines when it comes to intelligence or 'mind'. In the same work, he proposes his Multiple Drafts Model of consciousness. Simon Blackburn, in his introduction to philosophy, Think, points out that someone might appear intelligent, yet there is no way of telling whether that intelligence is real (i.e., a 'mind'). However, if the discussion is limited to strong AI rather than artificial consciousness, it may be possible to identify features of human minds that do not occur in information-processing computers.

Many Strong AI proponents believe the mind is subject to the Church-Turing thesis. This belief strikes some as counter-intuitive, even problematic, because an information processor can be constructed out of balls and wood: although such a device would be very slow and failure-prone, it could do anything that a modern computer can do. If the mind is Turing-computable, it follows that, at least in principle, a device made of rolling balls and wooden channels could contain a conscious mind.
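The universality behind this claim can be sketched directly: any substrate that implements a Turing machine's read-write-move loop, silicon or rolling balls, computes exactly the same functions. Below is a minimal Turing machine interpreter; the unary-increment program is an invented example.

```python
# Minimal Turing machine interpreter. What runs this loop (silicon,
# balls and wood) is irrelevant to what it computes.

def run(program, tape, state="start", head=0):
    """`program` maps (state, symbol) -> (new_symbol, move, new_state)."""
    cells = dict(enumerate(tape))
    while state != "halt":
        symbol = cells.get(head, "_")                  # "_" is blank
        new_symbol, move, state = program[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Invented example program: append a "1" to a unary number.
INCREMENT = {
    ("start", "1"): ("1", "R", "start"),   # scan right over the 1s
    ("start", "_"): ("1", "R", "halt"),    # write one more 1, then stop
}

print(run(INCREMENT, "111"))  # -> "1111"
```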

Roger Penrose attacked the applicability of the Church-Turing thesis directly by drawing attention to the halting problem, arguing that certain types of computation cannot be performed by information systems yet are performed by human minds. However, this is arguably not a computability issue (one involving potentially infinite computations) but an issue of simulation: the making of copies of the same computations using different technologies.
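The result Penrose appeals to is Turing's proof that no program can decide halting for all programs. The classical diagonal argument can be sketched as follows; halts is an assumed oracle, not implementable code.

```python
# Sketch of Turing's diagonal argument. `halts` is a hypothetical
# oracle; the construction below shows why no such total decider
# can exist.

def halts(f, arg) -> bool:
    """Assumed: return True iff f(arg) eventually halts."""
    raise NotImplementedError("no such total decider exists")

def diagonal(f):
    # Do the opposite of whatever the oracle predicts f does on itself.
    if halts(f, f):
        while True:      # loop forever if f(f) is predicted to halt
            pass
    return None          # halt if f(f) is predicted to loop

# diagonal(diagonal) would halt exactly when it does not halt, a
# contradiction; so no general `halts` can be implemented.
```

Whether human minds somehow escape this limitation, as Penrose alleges, is the contested step in his argument.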

The massively parallel pattern matching of the brain's neural systems produces an immediacy of perception and awareness. Notions like 'seeing' in the sense of awareness of what is in view, 'consciousness' in the sense of self-referential feelings, and 'emotions' in the sense of mentally induced physicality of feeling are emergent higher-level concepts. Searle's Chinese Room interpretation of symbolic processing does not account for the 'semantic mapping' whereby symbolic computations link to the physicality of biological systems. The brain does not feel; rather, it does the feeling.

Ultimately, the truth of Strong AI depends upon whether information-processing machines can include all the properties of minds, such as consciousness. Weak AI, however, is independent of the Strong AI problem, and there can be no doubt that many features of modern computers, such as multiplication or database searching, would have been considered 'intelligent' only a century ago.

Digression

The point made in this section is now made in the section Strong AI#Mainstream AI research ---- CharlesGillingham (talk) 23:59, 12 December 2007 (UTC)
