Two options for refactoring the AI outline from two leading scholars (expanded to four options)

One editor has suggested that a refactoring of the AI article could address many of the issues with its original 2004 form, first written by one of the participants in the Talk page discussion above. The first outline is adapted from Professor Sack of the University of California at Santa Cruz, and the other from Peter Norvig's 2008 book on AI; both are offered for discussion, comment, and revision. Yet another editor has expanded the list to four outlines, providing further ideas for refactoring the old outline. FelixRosch TALK 17:36, 8 November 2014 (UTC)

Option one: Adapted from Prof. Sack, Univ. Cal. at Santa Cruz:

1 Early AI
2 Behaviorism and AI
3 Cybernetics and AI
4 AI as a Kantian Endeavor
5 Common sense challenges to AI
5.1 Kant and Common Sense
5.2 AI and Common Sense
6 AI and non-military concerns
7 Two strands of AI research
7.1 The neo-encyclopedists
7.2 The Computational Phenomenologists
8 The Aesthetic turn in AI
8.1 Husserl, Heidegger and AI
8.2 AI and cultural difference
9 Turing's Imitation Game
10 AI and the Uncanny
11 Deep learning and WATSON
12 Directions for Future Research


Option two: Adapted from Peter Norvig, Director of Research, Google, Inc., formerly USC:

Part I Artificial Intelligence

Part II Problem Solving

Part III Knowledge and Reasoning

Part IV Uncertain Knowledge and Reasoning

Part V Learning

Part VI Communicating, Perceiving, and Acting

Part VII WATSON and the future of AI

These adapted outlines are options for a possible contemporary refactoring of the AI article from its current version. FelixRosch (talk) 17:41, 7 November 2014 (UTC)

These are good. The first is more appropriate for the philosophy of AI than for this article. Have you seen Talk:Artificial intelligence/Textbook survey? You could add these to that. ---- CharlesGillingham (talk) 04:24, 8 November 2014 (UTC)
Perhaps we could refresh the survey with some more recent textbooks? I've been bold and added Poole and Mackworth (2010). Hope that's ok.
Other candidates include
  • The Cambridge Handbook of Artificial Intelligence (2014).
  • Artificial Intelligence: A Modern Approach (3rd Edition)
pgr94 (talk) 15:00, 8 November 2014 (UTC)
Firstly, my apologies to anyone concerned. I arrived here in response to the RFC, not because of any expertise in AI, a field in which I have only an attenuated interest and hardly any practical skills, so my points were made as an outsider. Meanwhile I have had an interruption in my personal PC resources, from which I am only now recovering, and which prevented my continued participation in the discussion that I had interrupted.
Now then, most of the discussion since I left has been, I think, more coherent and cooperative than much that went before I came in, in particular with more explicit recognition of the need to distinguish and define the concerns and threads before deciding how to construct a theme from them. Look at the foregoing TOC examples. All Good Stuff, but I suggest that the basic idea needs a bit of adjustment. For one thing, all the TOC examples (necessarily?) lead to book-like structures, and very possibly structures for good books at that. But a book is not as a rule an encyclopaedic article (and vice of course versa). Also, note that the alternatives would produce very different books; in fact, skimming the alternatives, I even get the impression of theses: not necessarily conflicting or incompatible theses, but certainly not parallel and coherent ones. In fact, I argue that it is not even clear that any of the layouts is necessarily better than the others.
As I see it, we might well begin with such TOCs, but not to construct the outline for an article; a better approach would be to collect as many independent "chapters" as we agree need writing, and then to contemplate those chapters as independent articles. In principle, with suitable interlinking, that could cover the whole topic, and the user could skip eclectically through whichever articles s/he chose in any way s/he chose. "Just a second," (I hear you cry) "you must be havering; thirty-five articles, plus any more that anyone thinks up on the fly? Get real!"
Maybe. But to begin with, if the subject matter comprises 35 or 350 material topics that can stand on their own merits, meaning that any reasonable reader might reasonably wish to consult any one of them without having to plod through extraneous material, then that is how many articles we (ideally) should have. As I have had occasion to point out, the subject is huge! This is not a consideration unique to AI, of course; consider how many articles, say, biology has been split into. Or medicine. Or ecology.
Secondly, having once decided on anything resembling such a list or structure, we should be in a far sounder position to say which "chapters" could naturally and usefully be combined into the same article; the fact that particular articles could be separated need not imply that they should be. For instance, jumping the gun by way of example, at a guess the first three (or four?) chapters above might constitute one article. Having done so, we should quite naturally banish problems such as where, or whether, "human-like" AI should be mentioned in any chapter (article). And that is just one of many confusions that naturally emerge in any undisciplined or unstructured discussion of such a field.
Then we could look at the question of which teams should work on which articles, and we could create suitable stubs to pre-empt the use of the names. Finally, we could create a global guide to the topics, so that anyone who would indeed love to work their way through the whole field book-wise could do so naturally, a click at a time.
I am well aware that this is a bottom-up design approach in an age in which top-down is the holy tenet, but I urge you to consider that top-down design demands a deep command of the underlying primitives. Where they do not yet exist, they must be created -- a bottom-up element from some points of view.
The superficiality of my own acquaintance with the field precludes my guiding any such endeavour (just the construction of the overview guide might seem trivial, but it would demand considerable depth of mastery of the field and its didactics, and these I lack). I don't mind anyone rattling my cage at any point where I could help with discussion, if ever. Nor do I mind assisting with editing, but that is as far as I could be useful, if welcome.
Just thoughts... JonRichfield (talk) 20:02, 8 November 2014 (UTC)
Yet again I agree with you. I am also in rather the same boat, being able to contribute little more to the articles than the odd snippet from a popular source. I have responded in an earlier thread to the idea of a broad introductory structure. Perhaps that could serve as the top level for a more comprehensive subject tree. The Outline of artificial intelligence is another place to start, especially if one enjoys jumping in at the deep end. Like many such Outlines, it's really just a massive bullet list, though it makes some poor pretence that the lengthier entries are introductory content. — Cheers, Steelpillow (Talk) 21:20, 8 November 2014 (UTC)
@Steelpillow and @JonRichfield; Further agreement with both. This is the expansion of another 2014 book on AI, to provide ideas for helping to identify the sections in the refactoring. The link I am including for one of the articles in it is very worthwhile for practitioners. Here are the outline and a link to the 2014 chapter by Stan Franklin:
  • The Cambridge Handbook of Artificial Intelligence (2014).

Part I: Foundations

1. History, motivations, and core themes, Stan Franklin [1] (linked; must reading for AI practitioners)

2. Philosophical foundations, Konstantine Arkoudas and Selmer Bringsjord

3. Philosophical challenges, William S. Robinson

Part II: Architectures

4. GOFAI, Margaret A. Boden

5. Connectionism and neural networks, Ron Sun

6. Dynamical systems and embedded cognition, Randall D. Beer

Part III: Dimensions

7. Learning, David Danks

8. Perception and computer vision, Markus Vincze, Sven Wachsmuth, and Gerhard Sagerer

9. Reasoning and decision making, Eyal Amir

10. Language and communication, Yorick Wilks

11. Actions and agents, Eduardo Alonso

12. Artificial emotions and machine consciousness, Matthias Scheutz

Part IV: Extensions

13. Robotics, Phil Husbands

14. Artificial life, Mark A. Bedau

15. The ethics of artificial intelligence, Nick Bostrom and Eliezer Yudkowsky

The above four-part outline looks useful. FelixRosch TALK 21:33, 8 November 2014 (UTC)

I certainly agree that the outline would be useful, and even might be a basis for the overview article on AI. Purely as a superficial personal reaction, and decidedly without suggesting that the list would be adequate, I suspect that the AI overview article could owe a great deal to the foundations (Part I of the book), with special emphasis on the Franklin chapter. Whether in our context the other two chapters in Part I would best be included in the same overview, or in one or two separate WP articles on the philosophy, I cannot say without having read them, which, as it happens, I have not.
Equally superficially I incline to think that Part II could fit into one article, but I do not deny that it might spawn more related articles on specialised themes (which of course might happen with practically any article on any theme).
I suspect that the Part III chapters 7--12 might each best be in its own article, and in fact an extra article on Dimensions in AI might be desirable to give an overview of the six linked articles.
Much the same would apply to chapters 13--15. Not having read them, but going by title alone, I reckon that chapters 2--3 might fit into the same group as the extensions, sharing an overview article.
At a rough guess, that might mean something like twelve to fifteen articles. However, there already are articles on many of the themes, though we cannot assume a priori that those are already adequate, or even acceptable in their present form. Examples include Ethics of artificial intelligence, Artificial life#See also (check the list!), Synthetic biology, Robotics, Artificial consciousness, and there is plenty more where that came from, even without leaving the confines of WP.
Going more deeply into the theme, and still without having read the source material (i.e. very likely talking nonsense), I suspect from the chapter titles of the Sack and Norvig books that we could find ourselves with a good ten or so more topics, if the overlap with the Cambridge book isn't broader than the titles suggest. But let's face it, an article on, say, Knowledge and reasoning in artificial intelligence could be a big, fat one, and just scouting for already existing, non-trivial, related articles in WP (e.g. Semantic reasoner) would be a non-trivial exercise.
In short, to do a really gratifying job would be a daunting challenge for a team, but the realisation leaves me with the question of whether anything less would be worth doing at all. If anything comes of this as a project I could not drive it, but would be willing to do a bit of water-carrying if it is perceived as useful. JonRichfield (talk) 11:31, 9 November 2014 (UTC)
@JonRichfield; Yes, that all sounds on target, and your emphasis on the reader's viewpoint is important (rather than looking only at editors' viewpoints). The outline @Steelpillow presented in the previous section was the following, and maybe it offers some further ideas for reflection on the refactoring question:
This 5-part outline was updated by Steelpillow directly below
Artificial Intelligence (continue to tease apart its Applications and the thing itself)
1. Created intelligence, cognition, mind
2. Academic study of Weak-AI
3. Hard research
4. Philosophy/ethics
5. Fiction
Is this close to what you had in mind? FelixRosch (TALK) 17:39, 10 November 2014 (UTC)
For the record that was not my list. Modifying it per my comments might lead to something like:
Artificial Intelligence (continuing to tease apart its Applications and the thing itself)
1. Created intelligence, cognition, mind
2. History
3. Research
4. Philosophy/ethics
5. Fiction
— Cheers, Steelpillow (Talk) 19:48, 10 November 2014 (UTC)

@JonRichfield I support everything you're saying. I think you grossly underestimate the number of articles being summarized here -- dig through Category:Artificial intelligence and its subcategories for 10 or 20 minutes and you'll begin to see the breadth of the field. Also, I wanted to make sure you were aware that this project was carried out back in 2007 (see Talk:Artificial intelligence/Textbook survey). ---- CharlesGillingham (talk) 15:15, 11 November 2014 (UTC)

@CharlesGillingham: Thanks Charles; that there is so much breadth to conceive is far from being ANY surprise, but the degree to which it appeared in the categories was, somewhat! :) It leaves me somewhat nonplussed. On the one hand I am tempted to suggest that a wider range of topics be put into "bottom-up" separate articles as satellites to a main article or group of articles, but I suspect that it would be more practical to begin by blinkering ourselves and starting from the outlines that appear below. Having established a sound nucleus on which we can build, either as a single article or as a structure of linked articles, we can extend indefinitely. From my experience with large articles, I suspect that if any article becomes too large for editors unfamiliar with the established text to scan it fairly conveniently, there will be a plague of updates out of place, or simply confused and inaccurate. The maintenance problem can be forbidding. The outlines below look promising. However, we should remain alert for articles that either should be split into separate articles, or structured to contain clearly distinct sections. Failure to discriminate between concerns in suitably linked but conceptually continent topics is one of the most insidious enemies of cogency and lucidity in technical writing, formal or informal. JonRichfield (talk) 15:57, 14 November 2014 (UTC)

@Pgr94 and everyone. We do need to update Talk:Artificial intelligence/Textbook survey with the current editions of everything. I don't have time to do this right now, and I would be surprised if it's really all that different, but I would like to know. I expect it would mostly affect the tools section -- we need to add some new tools and drop some deprecated ones. I also think we may need to add a paragraph to "knowledge" and "reasoning" to emphasize statistical AI a little more -- these sections leave off where symbolic AI failed and don't really cover how far statistical AI has come since then. ---- CharlesGillingham (talk) 15:15, 11 November 2014 (UTC)

@FelixRosch and Steelpillow. My original proposal for this article, back in 2007, was divided into three sections: Perspectives, Research, Applications. Perspectives came first, and included the sections "history", "philosophy", and "ethics, fiction and speculation". Later editors thought this was a bad idea. So I understand what you're trying to do: I also feel that most readers are more interested in these perspectives than they are in the technical stuff, and maybe this stuff should get more emphasis. I don't object to moving Problems, Approaches and Tools into a section called Research. ---- CharlesGillingham (talk) 15:15, 11 November 2014 (UTC)

@Steelpillow The only part I have a problem with is "created intelligence, cognition, mind". In fact, I have three problems with it.

  1. We have to cover this material elsewhere in the article. I'll hit each of the three: Created intelligence I'm not sure what we want to say about "created intelligence", unless you are talking about the ethics of creating artificial beings, and we have a comprehensive section on this already (and it's also touched on in the first paragraph of AI#History). Cognition By "cognition" I assume you mean cognitive science. Right now, cognitive science is all over the article -- its origin in AI#Cognitive simulation, AI#Sub-symbolic mentions embodiment, the second half of AI#Reasoning and problem solving, the second half of AI#Knowledge representation. Cognitive science is important to AI, but then again so are statistics and computer science, and this article isn't about statistics or computer science or cognitive science. I don't think you need a separate section on this, especially because we have to discuss it where it's relevant, so what else is there to say? Mind Clearly this is philosophy of AI, and we cover all the greatest hits here.
  2. What are the sources for this section, unless they are sources we're already using in these other sections? If you stray from the main sources, you're going to find literally thousands of different points of view, none of which is widely accepted and most of which are highly speculative. There is no end to how much people say about this, and precious little difference all this talk makes to actual AI programs.
  3. What other articles will this section summarize? Isn't it the same articles as speculation and philosophy? ---- CharlesGillingham (talk) 15:15, 11 November 2014 (UTC)
I borrowed "created intelligence, cognition, mind" unchanged from the earlier proposal. I assumed it to be a conceptual introduction to the field, something like explaining the difference between the weaker forms of AI and cognitive AI, leading on to the strongest forms of AI, which in turn can be used to introduce the philosophical questions about an artificial mind, consciousness and ethics. It would set all the subsequent sections in context. If it turns out to be short enough, it could just be the article lead. As I have said, I am weak on sources. I would assume that conceptual introductions to the field exist. Tertiary sources are better than secondary, which in turn are far better than primary research papers. It would not so much summarise as lay conceptual foundations for all the articles on AI. — Cheers, Steelpillow (Talk) 20:38, 11 November 2014 (UTC)

@FelixRosch What is "hard research"? Who are we talking about and how is it different than "academic study of weak AI"? Is "hard research" artificial general intelligence? If so, then why can't we call it by its name? Forgive me for being direct: please learn the correct terminology before you keep proposing things here. ---- CharlesGillingham (talk) 15:15, 11 November 2014 (UTC)

I think it's clear that he's still imagining that AI is mostly about people creating "Strong/Complete AIs", but that some people want "weak ai" to be included even though it's not really AI.
This whole understanding of the field is so wrong that it's almost the exact opposite of reality. It's not going to be possible to "compromise" with that. APL (talk) 18:02, 11 November 2014 (UTC)
@APL: Can you be more specific about what you are rejecting? For example here is my suggestion, further refined per ongoing discussion:
Introduction to artificial intelligence
  1. Conceptual foundations
  2. History
  3. Research
  4. Philosophy and ethics
  5. Fiction
Are you rejecting this whole structure or just Felix's suggested subdivision of the research? — Cheers, Steelpillow (Talk) 20:57, 11 November 2014 (UTC)
Sorry, I was just rejecting the subdivision of the research into "weakAI" and "hard research".
The rest tentatively looks good to me.
At least, as just described by @Steelpillow. Though I don't think leading with "Created intelligence, cognition, mind" is smart, and the phrase "weak AI" should probably not appear at all.
History and Research overlap a bit, even if the research section is a list of different techniques. But I don't think that's a problem.
Fiction will actually be a tough one to balance. It's an important part of common conceptions about AI, but it'll be tough not to let it devolve into a laundry list. APL (talk) 23:06, 11 November 2014 (UTC)

Feel free to respond in-between up there. ---- CharlesGillingham (talk) 15:15, 11 November 2014 (UTC)

@Steelpillow; Your comments and @JonRichfield's have been useful in emphasizing the reader's standpoint rather than just what editors might like to pursue. It would be very useful now if you could start to present a cross-reference table showing how the old article sections would be separated into the new factoring which you are putting forward. The assumption is that much of the material in the current article would be more or less directly absorbed into the new refactored version, and it would be useful to see this in a table or list of some sort. (For example, old section "A" goes into new section X, old section "B" goes into new section Y, etc.) Cheers. FelixRosch (TALK) 16:34, 12 November 2014 (UTC)

Hi Felix. I am not sure if we have yet reached a view on whether we want one article or two. Should my reader-oriented outline apply to the present article refactored, or to a new Introduction to artificial intelligence to accompany it? — Cheers, Steelpillow (Talk) 18:08, 12 November 2014 (UTC)
@Steelpillow; My thought was along the lines of looking at the main article refactoring first; having that version would make it easier to evaluate and formulate an Introduction article right after it's done. The present article refactored into the new reader-oriented outline would be useful at this time. Cheers. FelixRosch (TALK) 20:13, 12 November 2014 (UTC)
Let me add a bit to your outline, and see if this is what you have in mind:
  1. Conceptual foundations: definition of AI, Turing test, intelligent agent paradigm (cf. Philosophy of AI#Intelligence)
  2. History: (social history)
  3. Research (and technical history)
    1. Goals
    2. Approaches (without the intelligent agent paradigm, which moved into foundations)
    3. Tools
  4. Applications and industry (I think we do need this section, if anyone ever has the time to make it good; it's a multibillion-dollar industry, and the article should cover it comprehensively.)
  5. Philosophy (I think that philosophy of AI is so cleanly defined that it belongs in a separate section from the speculation.)
  6. Speculation and ethics (as is -- just tidied this up today)
  7. Fiction (I think we also need this section, but I argued above (and in 2008) that it works best if we mix it into speculation, if only to emphasize the serious ideas that have appeared in fiction.)
Does that work? ---- CharlesGillingham (talk) 01:15, 13 November 2014 (UTC)
My original outline was this:
  1. Conceptual foundations
  2. History
  3. Perspectives
    1. Philosophy
    2. Speculation and ethics
    3. Fiction
  4. Research
    1. Goals
    2. Approaches
    3. Tools
  5. Applications
I actually think this is even more reader-friendly, but other editors didn't like the "perspectives" header. ---- CharlesGillingham (talk) 01:15, 13 November 2014 (UTC)
I am impressed by how close we are getting to a common foundational structure for the article. My own observations at this stage:
  • The separation of research and applications now makes sense to me.
  • The "perspectives" grouping is interesting. I see where it is coming from but am unsure whether it is informative enough to be worth the extra nesting of sub-sections.
  • Putting my philosopher hat on, I very firmly see ethics as a branch of philosophy and not as a branch of speculation or prediction. "Roboethics" (shudder) is wholly contingent on the philosophical idea of a machine as a conscious mind, a being in its own right, while "machine ethics" is simply an extension of our own ideas of human ethics, how we should behave towards others and therefore by extension how we should make our tools behave towards others. Both are well established branches of philosophy. One may speculate and predict on these topics of course, but that is a trivial observation, one may speculate and predict on any aspect of AI.
  • Ultimately, the technological content (research and applications) may overwhelm the article. In this event, I would suggest that my idea of moving the wider context to an introductory article might be revisited.
  • Between speculation, prediction and fiction, I would plump for "speculative developments" and "fiction" as two (overlapping but) fundamentally independent aspects.
Most of these are minor comments, the only one I feel strongly about is the place of ethics in the field.
— Cheers, Steelpillow (Talk) 12:08, 13 November 2014 (UTC)
@Steelpillow and @JonRichfield; General agreement across the board. The one topic of "History" and its placement has not been discussed yet. Neither the 2014 Cambridge version nor the Norvig version of the outline on the science of AI lists History with this level of prominence in its outline form. My thinking is that they may be on to something here, and Wikipedia already has a peer-reviewed version of the History page for AI. Is it possible to just link it directly at the top of this page, and maybe move a shorter version of the section on History to the end of this main AI page? The 2014 Cambridge AI outline is included below in shortened form for ready reference.
  • The Cambridge Handbook of Artificial Intelligence (2014).
Part I: Foundations, Stan Franklin [2] (linked; must reading for AI practitioners)
Part II: Architectures
Part III: Dimensions
Part IV: Extensions
@Steelpillow; The cross-reference table from the current article outline to your new outline would still be useful. FelixRosch (TALK) 16:09, 13 November 2014 (UTC)

Two most recent outline versions for refactoring the AI article

@Steelpillow and @JonRichfield and @APL; My thought continues along the lines of looking at the main article refactoring first; having the present article refactored into the new reader-oriented outline would be useful at this time. One of you could start the useful cross-mapping of the current section numbers: where does 2.4 go in the newly refactored outline, where does 2.5 go, where does 2.6 go, etc. FelixRosch (TALK) 17:20, 14 November 2014 (UTC)

Having established a sound nucleus on which we can build, either as a single article or as a structure of linked articles, we can extend indefinitely. From my experience with large articles, I suspect that if any article becomes too large for editors unfamiliar with the established text to scan it fairly conveniently, there will be a plague of updates out of place, or simply confused and inaccurate. The maintenance problem can be forbidding. The outlines below look promising. However, we should remain alert for articles that either should be split into separate articles, or structured to contain clearly distinct sections. Failure to discriminate between concerns in suitably linked but conceptually continent topics is one of the most insidious enemies of cogency and lucidity in technical writing, formal or informal. JonRichfield (talk) 15:57, 14 November 2014 (UTC) (Reposted by FelixRosch (TALK) 17:20, 14 November 2014 (UTC))

Refactored Outline I:

Introduction to artificial intelligence
  1. Conceptual foundations
  2. History
  3. Research
  4. Philosophy and ethics
  5. Fiction

Refactored Outline II:

  1. Conceptual foundations: definition of AI, Turing test, intelligent agent paradigm (cf. Philosophy of AI#Intelligence)
  2. History: (social history)
  3. Research (and technical history)
    1. Goals
    2. Approaches (without the intelligent agent paradigm, which moved into foundations)
    3. Tools
  4. Applications and industry (I think we do need this section, if anyone ever has the time to make it good; it's a multibillion-dollar industry, and the article should cover it comprehensively.)
  5. Philosophy (I think that philosophy of AI is so cleanly defined that it belongs in a separate section from the speculation.)
  6. Speculation and ethics (as is -- just tidied this up today)
  7. Fiction (I think we also need this section, but I argued above (and in 2008) that it works best if we mix it into speculation, if only to emphasize the serious ideas that have appeared in fiction.)

This is the closest approximation to what the two leading new outlines now seem to be, bringing together everyone's thoughts (if anyone is not attributed fully, just chime in with your name and where it belongs, since there are about a half dozen editors participating on this topic at this point). Can someone in the group at least take a first attempt at the cross-mapping of section numbers from the current version to the new version outline? The History section, for example, is easy to identify and could be moved to the end of each outline, though some of the other sections need a little more thinking and elaboration to cross-map accurately. FelixRosch (TALK) 17:20, 14 November 2014 (UTC)

OK, here goes nothing

As mentioned, my latest short version looks a little different. Here it is:

  1. Conceptual foundations
  2. History
  3. Research
  4. Applications
  5. Philosophy and ethics
  6. Fiction

It is expanded below. I hope that this will give some clue as to why I have put ethics where I have. Many of the philosophy subsection headings are inspired by, if not culled directly from, Philosophy of artificial intelligence and Ethics of artificial intelligence. I hope that you can appreciate from this how the philosophy and ethics stitch so closely together.

My copying of the technical stuff is very slavish, due to my general ignorance of the details.

Artificial intelligence

1 Conceptual foundations

2 History

3 Research
3.1 Goals
3.1.1 Deduction, reasoning, problem solving
3.1.2 Knowledge representation
3.1.3 Planning
3.1.4 Learning
3.1.5 Natural language processing (communication)
3.1.6 Perception
3.1.7 Motion and manipulation
3.1.8 Long-term goals
3.1.8.1 Social intelligence
3.1.8.2 Creativity
3.1.8.3 General intelligence
3.2 Approaches
3.2.1 Cybernetics and brain simulation
3.2.2 Symbolic
3.2.3 Sub-symbolic
3.2.4 Statistical
3.2.5 Integrating the approaches
3.3 Tools
3.3.1 Search and optimization
3.3.2 Logic
3.3.3 Probabilistic methods for uncertain reasoning
3.3.4 Classifiers and statistical learning methods
3.3.5 Neural networks
3.3.6 Control theory
3.3.7 Languages
3.4 Evaluating progress

4 Applications
4.1 Competitions and prizes
4.2 Platforms
4.3 Toys

5 Philosophy and ethics
5.1 Intelligent behaviour and machine ethics
5.1.1 Criteria for intelligence
5.1.2 Machine ethics
5.1.3 Malevolent and friendly AI
5.1.4 Decrease in demand for human labor
5.2 Machine consciousness
5.2.1 Criteria for consciousness
5.2.2 Robot rights
5.2.3 The threat to human dignity (devaluation of humanity)
5.3 Superintelligence
5.3.1 The singularity
5.3.2 Transhumanism

6 Fiction

The following table maps the current sections onto the new, and is done primarily to show that nothing has been forgotten.

Current → Proposed
1 History → 2 History
2 Goals → 3.1 Goals
2.1 Deduction, reasoning, problem solving → 3.1.1 Deduction, reasoning, problem solving
2.2 Knowledge representation → 3.1.2 Knowledge representation
2.3 Planning → 3.1.3 Planning
2.4 Learning → 3.1.4 Learning
2.5 Natural language processing (communication) → 3.1.5 Natural language processing (communication)
2.6 Perception → 3.1.6 Perception
2.7 Motion and manipulation → 3.1.7 Motion and manipulation
2.8 Long-term goals → 3.1.8 Long-term goals
2.8.1 Social intelligence → 3.1.8.1 Social intelligence
2.8.2 Creativity → 3.1.8.2 Creativity
2.8.3 General intelligence → 3.1.8.3 General intelligence
3 Approaches → 3.2 Approaches
3.1 Cybernetics and brain simulation → 3.2.1 Cybernetics and brain simulation
3.2 Symbolic → 3.2.2 Symbolic
3.3 Sub-symbolic → 3.2.3 Sub-symbolic
3.4 Statistical → 3.2.4 Statistical
3.5 Integrating the approaches → 3.2.5 Integrating the approaches
4 Tools → 3.3 Tools
4.1 Search and optimization → 3.3.1 Search and optimization
4.2 Logic → 3.3.2 Logic
4.3 Probabilistic methods for uncertain reasoning → 3.3.3 Probabilistic methods for uncertain reasoning
4.4 Classifiers and statistical learning methods → 3.3.4 Classifiers and statistical learning methods
4.5 Neural networks → 3.3.5 Neural networks
4.6 Control theory → 3.3.6 Control theory
4.7 Languages → 3.3.7 Languages
5 Evaluating progress → 3.4 Evaluating progress
6 Applications → 4 Applications
6.1 Competitions and prizes → 4.1 Competitions and prizes
6.2 Platforms → 4.2 Platforms
6.3 Toys → 4.3 Toys
7 Philosophy → 5 Philosophy and ethics
8 Predictions and ethics → merged into 5 Philosophy and ethics
8.1 Decrease in demand for human labor → 5.1.4 Decrease in demand for human labor
8.2 Devaluation of humanity → 5.2.3 The threat to human dignity (devaluation of humanity)
8.3 Malevolent and friendly AI → 5.1.3 Malevolent and friendly AI
8.4 Robot rights → 5.2.2 Robot rights
8.5 The singularity → 5.3.1 The singularity
8.6 Transhumanism → 5.3.2 Transhumanism
9 In fiction → 6 Fiction

Comments? — Cheers, Steelpillow (Talk) 20:46, 14 November 2014 (UTC)


@Steelpillow and @JonRichfield and @APL; This all looks strong as a usable 2014 version of the new refactored outline. My small suggestion is to give some consideration to merging the first two sections in both outlines, so that the History section is absorbed into the Conceptual foundations section. Since Wikipedia already has a peer-reviewed article for the History of AI, there does not seem to be a reason to distract readers from a very good article on the subject which already exists and can be readily linked. Here is a short draft of what the opening section could start to look like:
Conceptual foundation
The conceptual foundations defining artificial intelligence in the second decade of the 21st century are best summarized as a list of strongly endorsed pairings of contemporary 21st-century research areas, as follows: (a) Symbolic AI versus neural nets; (b) Reasoning versus perception; (c) Reasoning versus knowledge; (d) Representationalism versus non-representationalism; (e) Brains-in-vats versus embodied AI; and (f) Narrow AI versus human-level intelligence. [Franklin (2014), The Cambridge Handbook of Artificial Intelligence, Cambridge Univ. Press, pp. 15-16. [3]]
Several key moments in the history of AI have contributed to defining the major 21st-century research areas. These early historical research areas from the last century, although by now well-rehearsed, are revisited occasionally, with some recurrent reference to: (i) McCulloch and Pitts' early research in schematizing digital circuitry; (ii) Alan Turing's pioneering efforts and thought experiments; (iii) the early Dartmouth workshop in AI; (iv) Samuel's early checker player; (v) Minsky's early dissertation; and (vi) the misstep of early perceptrons and the early neural net winter. From these followed the four more historical research areas currently being pursued in updated form: (a) means-ends problem solvers (Newell 1959); (b) natural language processing (Winograd 1972); (c) knowledge engineering (Lindsay 1980); and (d) early automated medical diagnosis (Shortliffe 1977). [Franklin (2014), The Cambridge Handbook of Artificial Intelligence, Cambridge Univ. Press, pp. 16-22. [4]]
Major recent accomplishments in AI defining future research paths in the 21st century have included: (a) extensive knowledge-based expert systems; (b) Deep Blue defeating Garry Kasparov; (c) the solution to the Robbins conjecture; (d) Watson's defeat of the Jeopardy! human champions; and (e) the "killer app" (gaming applications as a major force of research and innovation). [Franklin (2014), The Cambridge Handbook of Artificial Intelligence, Cambridge Univ. Press, pp. 22-24. [5]]
The current major leading 21st-century research areas appear to be: (i) Knowledge maps; (ii) Heuristic search; (iii) Planning; (iv) Expert systems; (v) Machine vision; (vi) Machine learning; (vii) Natural language; (viii) Software agents; (ix) Intelligent tutoring; and (x) Robotics. The most recent 21st-century trends appear as the fields of: (a) Soft computing; (b) AI for data mining; (c) Agent-based AI; (d) Cognitive computing; and (e) AI and cognitive science. [Franklin (2014), The Cambridge Handbook of Artificial Intelligence, Cambridge Univ. Press, pp. 24-30. [6]]
This should provide at least something to draft into the missing Conceptual foundations section which you identified. Otherwise your outline looks strongly like it is ready to move forward, and it would be nice to see it begin transitioning into the existing material, since it otherwise seems to match up almost one-for-one. Possibly one of you could suggest the easiest way to start this, and maybe even take the first steps in this direction of refactoring and transitioning. Cheers. FelixRosch (TALK) 20:38, 15 November 2014 (UTC)

Comment: I don't know what hit this discussion, but the contrast with what I first saw is startling. I am very impressed. I'll be very willing to assist where I can offer assistance on request, but frankly, as things stand, I don't feel essential to the effort. Is everything solved? Of course not, but at least something healthy seems to be emerging. Would I argue for any changes? Most likely, but none that at this point would be worth interrupting the progress for. My main caveat would be that, because the top is now so soundly (though still conditionally) structured, the rest of the progress should as far as practical be continued systematically top-down. By this I don't mean that sections have to be tackled in any special order, but that each section be tackled independently as an author becomes available, possibly in stages. Cross-referencing should be thorough, but cross-references between sections should as far as practical avoid duplication of material. Difficulties in that respect should be taken as grounds for redistributing, splitting, or recombining sections. Finally, as I see it, all the section topics should retain their presence in the main article, but the editors should remain alert for the advisability of splitting out the main substance of a section that becomes too large, complex, or intimately involved with external topics. Just as a single arbitrary example: toys should be mentioned and the topic summarised, but this probably is not the article for a detailed treatment, and that topic deserves at least one major article elsewhere. JonRichfield (talk) 14:11, 17 November 2014 (UTC)

Definition of intelligence

I think it is good to define intelligence separately from human intelligence. Intelligence for me is the ability to solve problems. For example, if a machine designed an airplane, in natural language we would attribute intelligence to it. Note that designing an airplane may or may not require learning, so I would see learning as separate from problem solving. Human intelligence perhaps has these abilities:

  • Problem solving
  • Learning
  • Reflection (and meta processing)
  • Consciousness
  • Perception
  • Motivation intelligence (theory of mind)
  • Emotion
  • Artistic creativity

Defining intelligence as human intelligence seems to me to be self-serving. For me, semantically, intelligence means only the ability to solve problems. Perhaps you could say:

  • To control an agent so that it adapts to and makes best use of an environment.

But this seems too limiting. Perhaps intelligence is not the control of a body or machine to interact with the world.

To me, only problem solving is at the core of intelligence. Learning is second. But when people lose their learning abilities, do we say they are not intelligent? You could argue that the ability to learn is part of intelligence, but to me it is a separate ability with a separate name. And it is possible for an agent to be highly intelligent without learning, given sufficient initial knowledge.

Problem solving may be described in two closely related ways (a code sketch follows the list):

  • Calculating what actions achieve a particular result.
  • Calculating what values meet a particular set of constraints.
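To make the two framings concrete, here is a minimal brute-force sketch in Python. This is my own illustrative code: the function names and toy problems are invented for this note, not taken from the discussion or from any AI library.

    from itertools import product

    def solve_constraints(variables, domains, satisfies):
        # Framing 2: calculate what values meet a set of constraints,
        # by brute-force enumeration of every candidate assignment.
        for values in product(*(domains[v] for v in variables)):
            assignment = dict(zip(variables, values))
            if satisfies(assignment):
                return assignment
        return None

    def solve_goal(start, actions, goal, max_depth=10):
        # Framing 1: calculate what actions achieve a particular result,
        # by breadth-first search over action sequences.
        frontier = [(start, [])]
        for _ in range(max_depth):
            next_frontier = []
            for state, plan in frontier:
                if goal(state):
                    return plan
                for name, step in actions:
                    next_frontier.append((step(state), plan + [name]))
            frontier = next_frontier
        return None

    # Toy problems: find x + y == 10 with x < y, and reach 3 by incrementing.
    print(solve_constraints(["x", "y"],
                            {"x": range(10), "y": range(10)},
                            lambda a: a["x"] + a["y"] == 10 and a["x"] < a["y"]))
    print(solve_goal(0, [("inc", lambda s: s + 1)], lambda s: s == 3))

Both solvers are underneath the same kind of thing, a search through candidate solutions, which is why the two framings are so closely related.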

All the other properties are aspects of human minds, but should not be taken as the definition of intelligence.

The other abilities of the human mind such as reflection and learning may enhance problem solving but are not necessary for it.

Consciousness and qualia are subjective and personal qualities. How can you say from outside whether an intelligence is conscious? It makes no sense to say that an intelligence must be conscious, if we cannot measure consciousness.

Motivation intelligence (theory of mind) is a particular subtype of intelligence, related to solving problems involving intelligent agents, so it still fits the definition of problem solving.

Emotion seems to me to be something separate from intelligence.

If we look at the product of artistic creativity, then to me the Mandelbrot set is artistic. But the algorithm that creates it (sketched below) is not intelligent. Artistic ability may also be characterized as solving the problem of determining what people find good to perceive.
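For what it is worth, the point about the Mandelbrot algorithm is easy to check: the standard escape-time iteration is mechanically trivial. A rough Python sketch (standard algorithm; the rendering details are my own invention):

    def escapes(c, max_iter=50):
        # The whole "creative" rule: iterate z <- z*z + c from z = 0 and
        # report whether the orbit ever leaves the radius-2 disc.
        z = 0j
        for _ in range(max_iter):
            z = z * z + c
            if abs(z) > 2:
                return True
        return False

    # Coarse ASCII rendering of the region [-2, 1] x [-1, 1].
    for row in range(-10, 11):
        print("".join("." if escapes(complex(col / 20.0, row / 10.0)) else "#"
                      for col in range(-40, 21)))

A few lines of unvarying arithmetic produce the "artistic" output, with no problem solving anywhere in sight.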

Thepigdog (talk) 08:26, 17 November 2014 (UTC)

Sorry, this is a bit half-baked, but the fully baked version is too technical; one possible formalization is sketched after the definitions below.
* The intelligence probability is the probability of an intelligence solving any problem given to it.
* A problem is a time-limited interaction with an environment, with a goal that is either achieved or not achieved.
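One way to make that precise, in the spirit of the Universal Artificial Intelligence literature mentioned above (my own sketch and notation, not the "fully baked version"), is to weight every environment and sum the agent's success probabilities; in LaTeX:

    \Upsilon(\pi) \;=\; \sum_{\mu \in E} w_\mu \, \Pr\big[\pi \text{ achieves the goal in } \mu\big],
    \qquad w_\mu > 0, \quad \sum_{\mu \in E} w_\mu = 1

where \pi is the agent and E is the set of time-limited goal environments. Legg and Hutter's universal intelligence measure has this shape, except that it uses expected reward in place of success probability and sets w_\mu = 2^{-K(\mu)}, so that environments with lower Kolmogorov complexity K count for more.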
Thepigdog (talk) 11:01, 17 November 2014 (UTC)

@Thepigdog:

  • Most of what you say makes good sense, but I have a few reservations. For example: "...may or may not require learning. So I would see learning as separate from problem solving..." I do not argue that learning and problem solving are identical in concept or as entities, but it is not clear to me how solving a problem without acquiring and processing data could meaningfully be called intelligence, nor how such acquiring and processing of data is to be distinguished from learning. Even a mousetrap or a carpenter's plane, which are not very advanced examples of intelligent systems, use physical and logical feedback. You rightly mentioned reflection, metaprocessing et al. in connection with intelligence, but I argue that there are such things as intelligent actions without them. Rather than waste our time on debating them by flat statement and flat contradiction, I recommend that the authors/editors be permitted to write as they please without anticipation of their errors, raising such matters only after they have failed in cogency. The topic is too big to make progress if it is to stop for discussion of every debatable point. Although I could imagine learning and problem solving justifying their own topics, I find it hard to imagine one or the other as characterising intelligence in any useful sense without the other.
Yes, on reflection the learning/problem-solving distinction is a bit dubious. The Universal Artificial Intelligence people group them together as one task for intelligence.
  • Self-serving or not, as I see it, human intelligence is intelligence, but intelligence is not necessarily human intelligence. I suggest we drop that topic from the main thread of this article as a disruptive red herring. Some section might mention it and link to a number of other articles, ranging from ethology and psychology to IQ, but except in aspects that can be shown to be germane to the topic of this article, the question of the vehicle of the intelligence, whether man, mouse, Martian or machine, should not be mentioned, any more than we would discuss whether it is better to make a mechanical Turk chess player of plastic, meat or metal. After all, every other aspect has to earn its place in the article by demonstrable relevance and cogency. SP regarded my "horse intelligence" as a bit stretched, and I can understand his reservations, but I could demonstrate startling empirical intelligence in a spider, never mind a horse: intelligence that would be very challenging to match in a robot. That is why I say that we should eschew the term "human intelligence" at least wherever the text could equally well read "spider intelligence", or where the unqualified term "intelligence" would do.
Yes, I agree. I would even include the evolutionary system as an intelligence, and a chess-playing algorithm too, even though it is a restricted form of intelligence. We should talk about what an intelligence does.
  • Consciousness is such a loaded term that the consciousness/perception topic should be mentioned only where it is immediately of inescapable relevance. We have enough trouble defining consciousness in humans, animals, plants, communities, and in machines that can input relevant data, let alone using it in the discussion of the essence of intelligence. I have never even seen any coherent attempt to define the location or relevance of "consciousness" in the "Chinese room".
Agreed, I have no idea what consciousness is, other than that I have it.
  • All of those are just examples. I do not argue that they or other examples of conceptual dilemmas should not be discussed, but that they should be avoided wherever possible; where they are inescapable, they should not be mentioned in any greater detail than that demanded in context, and serious discussion should instead be banished to linked articles. Trying to deal with them satisfactorily in the main article on artificial intelligence not only would destroy this article, but would arrogate points relevant to and valid in articles on general concepts of intelligence, or of human intelligence. Artificial intelligence is not the only topic dealing with intelligence, any more than cardiology is the only topic dealing with pumps. We must remain alert for all variations on themes that tempt us to commit intellectual colonialism by including material that could be dealt with more fairly elsewhere. We are after all not at a loss for material; this article, more than most, should concentrate on exclusion. :) JonRichfield (talk) 14:11, 17 November 2014 (UTC)
Agreed. See comments. On reflection my original comments were a bit imprecise.
Thepigdog (talk) 00:15, 18 November 2014 (UTC)

Transposition of refactored outline, no text deleted

@Steelpillow and @JonRichfield and @Thepigdog; After reading the refactored outline, there was only one short section missing, which I put together over the weekend. The rest of the material was basically a direct refactoring from when the outline was done last week. This should position the current article for the next part of its revising/expansion/redrafting, if someone could take an attempt at starting to prioritize the next-phase topics in some order. Perhaps one of you could suggest a list of the top priorities in something which looks like a preliminary ordering. Cheers. FelixRosch (TALK) 21:22, 18 November 2014 (UTC)

I have now finished my refactoring of the philosophy and ethics material. I hope it is at least better-structured than the previous ordering. — Cheers, Steelpillow (Talk) 20:32, 19 November 2014 (UTC)
Looks great, Steelpillow. I moved the "feasibility" material into its own section. I like the set of topics, especially the separation of feasibility and consciousness questions. ---- CharlesGillingham (talk) 17:14, 20 November 2014 (UTC)

Deleting Human-Like from Lede

Since there does appear to be consensus to delete "human-like" from the lede, with, as far as I can tell, one dissenting editor, I have gone ahead and deleted "human-like". Robert McClenon (talk) 18:09, 15 November 2014 (UTC)

The RfC is currently pending and open. An overlapping and separate RfC established a consensus of 4 editors supporting. FelixRosch (TALK) 20:03, 15 November 2014 (UTC)
Consensus for what? Robert McClenon (talk) 22:15, 18 November 2014 (UTC)
In the first RfC, I see a medium-strong consensus for removing "human-like" with a couple of editors who wanted some kind of compromise. (I see 2, but maybe it's 4 somehow.) In the second RfC, I see a strong consensus against the compromise. So that's that. ---- CharlesGillingham (talk) 17:30, 20 November 2014 (UTC)
One thing that no-one seems to have noticed: it is original research to add "human-like" to the article. Some researchers do try to emulate human intelligence, some do not, but most do not say whether they do or do not. We're adding a term with no definition. It should go, except in contexts where the reliable sources use the term. — Arthur Rubin (talk) 10:42, 21 November 2014 (UTC)
It is certainly not OR. The phrase appears in reliable tertiary sources - I have cited at least one example from New Scientist magazine elsewhere on this page. Terms are often not precisely defined, or their definition varies with context. New Scientist uses the term in exactly this woolly, undefined way. This is perfectly normal use of language for the lead of an encyclopedia article too. WP:NOT PAPERS never mind WP:NOTTEXTBOOK. There's another WP thing somewhere about synonyms and paraphrasing being perfectly acceptable. But as I can't recall it instantly, I'll offer you my current favourite on contrived arguments - WP:LEGS. — Cheers, Steelpillow (Talk) 12:01, 21 November 2014 (UTC)
Steelpillow If you're using a reference to support your argument, please be so kind as to link to it. pgr94 (talk) 12:42, 21 November 2014 (UTC)
OK, I hope this is easier for you than doing Ctrl-F and typing in New Scientist: I wrote, 'Take too for example the latest (1 Nov 2014) copy of New Scientist, page 21, which has the subheading, "A hybrid computer will combine the best of number-crunching with human-like adaptability – so it can even invent its own programs." To quote from later in the article, "DeepMind Technologies ... is designing computers that combine the way ordinary computers work with the way the human brain works."' — Cheers, Steelpillow (Talk) 14:45, 21 November 2014 (UTC)
I found that. I searched for "artificial intelligence" in the New Scientist's archive and had no hits for 1st November. Perhaps you could also supply the title? pgr94 (talk) 14:50, 21 November 2014 (UTC)
My apologies. In full: Jacob Aron, "Ditch the programmers", New Scientist No.2993, 1 November 2014, Page 21. — Cheers, Steelpillow (Talk) 15:06, 21 November 2014 (UTC)
Thanks. They're using a different title online. It's so much easier if you just paste a URL.
Here's the link for everyone: Computer with human-like learning will program itself pgr94 (talk) 15:18, 21 November 2014 (UTC)

Should the Devaluation of Humanity and Computationalism be merged?

They are both talking about the same topic, but from different viewpoints. 204.8.193.130 (talk) 18:38, 21 November 2014 (UTC)

No. They are quite different topics within AI. The devaluation of humanity is an argument suggesting that weak (i.e. inhuman) AI systems might make bad decisions about us. Computationalism is a proposed systemic model of how the human brain works. There is no comparison. — Cheers, Steelpillow (Talk) 19:33, 21 November 2014 (UTC)