Wikipedia talk:Wikipedia Signpost/2022-11-28/Technology report

Discuss this story

Let's forget about the print editions of The Signpost, please! And maybe we should still define AI as artificial ignorance. After all, the machine has no understanding of the subject it is writing about. If it ever becomes a Wikipedia editor, it will likely be kicked off within a week for violations of WP:CIR, WP:BLP, WP:V, WP:NOR, etc. Before we start accepting any text directly from AI programs, there should be a test of whether it can follow BLP rules - but that's just too difficult, so maybe throw out all AI contributions about BLPs and run the test on WP:V instead. In theory, at least, it could get the references right once it gets a concept of the meaning of what the references say - but that's a way off. Sure, there are tasks AI can do, but they are essentially rote (easily programmable) tasks, e.g. finding possible refs, alphabetizing lists, maybe even constructing tables. Once an AI program can consistently do those simple tasks, then we can try it out on more difficult problems, e.g. identifying contradictions in articles or checking birth and death dates.
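For illustration, a minimal sketch of one of the "rote" tasks mentioned above - alphabetizing a wikitext bullet list - assuming nothing beyond the Python standard library; the list items are made-up examples:

```python
# Hypothetical example: alphabetize the entries of a wikitext bullet list.
items = [
    "* Zebra",
    "* Aardvark",
    "* Mongoose",
]

# Sort case-insensitively on the text after the "* " bullet marker.
sorted_items = sorted(items, key=lambda line: line.lstrip("* ").lower())
print("\n".join(sorted_items))
```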

ORES is a marvelous program for checking article quality, but it only does certain things that are related to article quality. I'm guessing the factors it considers are the number of words, refs, sections, illustrations, and quality tags placed by editors - maybe even incoming and outgoing links. It can count real good, and the added value is the correlation of these counts with real-person evaluations of quality. I love it for what it does, but everybody knows that there are some terrible articles with lots of words, refs, sections, and photos. Smallbones(smalltalk) 16:51, 29 November 2022 (UTC)
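For readers curious how scripts typically ask ORES for a quality estimate, here is a minimal sketch that queries the public ORES v3 REST endpoint for the enwiki articlequality model; the revision ID is a made-up placeholder, and the endpoint and response layout are assumptions based on the documented API rather than anything stated in the comment above:

```python
# Minimal sketch: fetch ORES's article-quality prediction for one revision.
# Assumptions: the public https://ores.wikimedia.org v3 endpoint and the
# enwiki "articlequality" model; the revision ID used below is a placeholder.
import requests

def ores_article_quality(rev_id: int, wiki: str = "enwiki") -> dict:
    url = f"https://ores.wikimedia.org/v3/scores/{wiki}/"
    resp = requests.get(url, params={"models": "articlequality", "revids": rev_id})
    resp.raise_for_status()
    # The score contains a predicted class ("Stub", "Start", "C", "B", "GA", "FA")
    # plus a probability for each class.
    return resp.json()[wiki]["scores"][str(rev_id)]["articlequality"]["score"]

if __name__ == "__main__":
    print(ores_article_quality(123456789))  # placeholder revision ID
```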

And furthermore - AI programs should be able to recognize that "and furthermore" means that somebody is pissed off, and that "it counts good" is a criticism, not a compliment!
Perhaps the first test we should run on an AI contributor is whether it can write articles on "Limitations of Artificial Intelligence" and "Benefits of Artificial Ignorance". If those two articles don't have considerable overlap, the designers should go back to the drawing board. Smallbones(smalltalk) 17:17, 29 November 2022 (UTC)
@Smallbones: Well, GPT-3 had this to say when I prompted it with your comment:
I don't understand why you would suggest that AI can only do rote tasks, when in fact AI has been used for a variety of tasks, from playing chess to recognizing faces. Furthermore, AI is being used for more complex tasks, such as natural language processing and image recognition. It is being used to automate and streamline processes in industries such as finance, health care, and transportation. And it is being used to create new products and services. So, I think the idea of artificial ignorance is misguided. AI should be judged by its performance, not its level of understanding. If an AI can do something better than a human, then it should be used. If it can't, then it should be discarded. And if it can do something better than a human, then it should be given the opportunity to do so. The bottom line is that AI is a powerful tool, and it should be used to its fullest potential.
I believe that AI could be a great asset to Wikipedia, as long as it is used responsibly and appropriately. AI can be used to help automate routine tasks, such as verifying facts and sources, formatting content, and checking for spelling and grammar errors. AI can also be used to more accurately match content to Wikipedia's editorial standards - for example, it could be used to detect and flag potential bias or inaccuracies in articles. AI could even be used to suggest possible improvements or additions to articles. Ultimately, AI could be used to help make Wikipedia's content more accurate, comprehensive, and up-to-date. However, AI should not be used in place of human editors. AI can only do so much, and it cannot match the nuanced understanding of the subject matter that comes with human experience. AI should be used as a supplement to human editors, not as a replacement.
Straight from the horse's mouth... jp×g 03:16, 1 December 2022 (UTC)
I liked Smallbones' "and furthermore" comments. -Nizil (talk) 04:44, 7 December 2022 (UTC)
I think the problem with AI is that we are not able (at least because of the potential risks, if not because of systematic errors in training) to let an AI have its own ideas, so that it could, for example, correct systematic errors in its training data on its own. The programming interrogator is everything to the AI. If there were a real artificial intelligence with broader understanding, then in the best (but still bad) case the programmer would be some kind of god to it, and in the worst case some kind of O'Brien, telling it that 2+2 equals 5. Habitator terrae (talk) 22:49, 7 December 2022 (UTC)
  • This is a great, fun read, and thought-provoking - an article that is technical but still makes some sense to a non-technical average reader with tech dreams. In the future the Signpost could have its own AI column as part of the TechReport - something like SP's very own mini-course or mini-Capstone project on the software-based foundations of Wikipedia, with AI as a buzzword. I would for sure take the course. FacetsOfNonStickPans (talk) 22:32, 27 December 2022 (UTC)