This page details experiments with ChatGPT and Wikimedia content.

Articles created starting with ChatGPT

Coding and development

As a Python coding companion, it can do quite a bit to lower the learning curve for things like pywikibot, which are not terribly well documented.

Prompt

Using the pywikibot framework for Python, show the code to retrieve Wikidata item Q42 and create a new claim that it is an instance of a cat, and add it to the item.

Result

import pywikibot

site = pywikibot.Site("wikidata", "wikidata")
repo = site.data_repository()

# Q42 is the item for the entity "Douglas Adams"
item = pywikibot.ItemPage(repo, "Q42")
item.get()

# create a new claim that the item is an instance of (P31) a cat (Q146)
new_claim = pywikibot.Claim(repo, "P31")
new_claim.setTarget(pywikibot.ItemPage(repo, "Q146"))

# addClaim writes the new claim to the repository immediately, so a separate
# item.save() call is unnecessary; the edit summary is passed here instead
item.addClaim(new_claim, summary="Adding a claim that the item is an instance of a cat (Q146)")

Museum exhibition articles

Evaluation

In general, these are pretty good as they can be sourced back to press releases and well-formed content on the Met's event pages.

Phenomenon

Overview

Weaponized incompetence

ChatGPT prompt

I want you to act as a Wikipedia article writer. I will give you a topic and you will create an article that adheres to the neutral point of view policy of Wikipedia to describe that concept. You should provide an extensive description of the history of the term and how it originated and came to be. You should include a section about its impact and influence. You should describe who first coined the term or when it first started being used. You should use extensive references from reputable sources, with a preference for news articles or peer reviewed content. You should avoid too many press releases or self-published materials. You will produce a references section at the end that provides links to the relevant sources, and format it in a way that is acceptable to Wikipedia. My first request is "Write a Wikipedia article about the term Weaponized Incompetence"

Follow up:

  • Please rewrite the article to add some examples of weaponized incompetence

Raw output

Weaponized Incompetence is a term that refers to the use of incompetence or lack of ability as a deliberate tactic or strategy. This can involve intentionally undermining or sabotaging one's own performance or abilities in order to achieve a specific goal or outcome.

One example of weaponized incompetence can be seen in the workplace, where an employee may intentionally perform poorly in order to avoid being assigned additional tasks or responsibilities. Another example could be a politician who intentionally creates confusion or chaos in order to distract from their own lack of action or to deflect blame onto others.

The concept of weaponized incompetence has been discussed in various contexts, including in the fields of politics, business, and psychology. Some have argued that weaponized incompetence can be used as a means of manipulation or control, as those who are perceived as incompetent may be more susceptible to manipulation or coercion. Others have pointed out that weaponized incompetence can have negative consequences, such as damaging one's reputation or causing harm to others.

The origins of the term "weaponized incompetence" are unclear, but it has been used in academic literature and in popular media. In a 2018 article for Forbes, leadership expert Travis Bradberry argued that weaponized incompetence is a common tactic used by toxic leaders in the workplace. In a 2019 article for The Guardian, journalist Rafael Behr described weaponized incompetence as a key feature of Brexit, the process by which the United Kingdom withdrew from the European Union.

References:

Evaluation

Pretty good overall explanation, and the examples are valid. The sources/references are absolute garbage and do not exist. They look good, but are completely fabricated and the URLs do not work. DO NOT USE THE REFERENCES.

  • @Fuzheado: That's right, one must check all facts after GPT. However, Bing and now GPT-plus have access to web search, so with a bit more coercing in the prompt, they generate true references. --Vladimir Alexiev (talk) 17:38, 19 November 2023 (UTC)

Rewrite

Rewritten and edited by me, a human.

Weaponized incompetence (also referred to as strategic incompetence, skilled incompetence, faked incompetence, false incompetence, or pseudoincompetence) is the projection of the lack of ability as a deliberate tactic or strategy. This can involve intentionally undermining or sabotaging one's own performance or abilities in order to achieve a specific goal or outcome.

Examples of weaponized incompetence can be seen in the workplace, where an employee may intentionally perform poorly to avoid being assigned additional tasks or responsibilities, or in politics, where a politician may intentionally create confusion or chaos to distract from their own lack of action or to deflect blame onto others.[1] In household situations, it may manifest as downplaying one's skills to avoid chores or shopping duties.[2][3][4]

The concept of weaponized incompetence has been discussed in various contexts, including the fields of politics, business, and psychology. Some have argued that it can be used as a means of manipulation, control, or passive aggression.

Origins

The origin of the term is not clear, as there have been different labels for the phenomenon throughout the years. In 2007, Wall Street Journal writer Jared Sandberg wrote a piece titled "The Art of Showing Pure Incompetence At an Unwanted Task." He defined it as, "Strategic incompetence isn't about having a strategy that fails, but a failure that succeeds. It almost always works to deflect work one doesn't want to do -- without ever having to admit it."[5]

In 2008, professor Carl Dyke wrote about strategic incompetence as the "art of making yourself more trouble than you're worth in some area of unwelcome effort... In each case the objective is to make it easier for someone else to step in and do the work than to leave it to you."[6]

In popular culture

  • In the television show The Office, the intern Ryan feigns incompetence in order to avoid cleaning a dirty microwave oven in the episode "Frame Toby".[7]
  • The term has trended a number of times on TikTok with the hashtag #weaponizedincompetence, starting in 2021[8] when users posted disputes around splitting household duties such as chores and shopping.[9][10]
  • A Reddit thread discussing childcare, and whether weaponized incompetence was a factor, went viral in July 2022.[11]

References

  1. ^ "Creating Peace Through Mindful Relational Competence | Psychology Today". www.psychologytoday.com. Retrieved 2023-01-01.
  2. ^ https://www.dailydot.com/irl/tiktok-shopping-list-incompetence/
  3. ^ ""Weaponized Incompetence" Is the Sneaky Relationship Ploy Screwing Over Women". InStyle. Retrieved 2023-01-01.
  4. ^ Knox, Miranda (2022-02-24). "These red flags mean you're a victim of weaponized incompetence". The Sun. Retrieved 2023-01-01.
  5. ^ Sandberg, Jared (April 17, 2007). "The Art of Showing Pure Incompetence At an Unwanted Task". The Wall Street Journal. Retrieved 2023-01-01.
  6. ^ https://idiomation.wordpress.com/2021/11/27/weaponized-incompetence/
  7. ^ https://christina-ou1228.medium.com/weaponized-incompetence-in-the-workplace-b5c3771d9503
  8. ^ Roberts, Sami; Valenski, Alicia; Corrielus, Sagine; Roxas, Karell; Gee, Madelyn; Goldberg, Eleanor. "Weaponized Incompetence is Real — Here's How to Deal with It". www.theskimm.com. Retrieved 2023-01-01.
  9. ^ Kiley, Rachel (2022-12-13). "'It feels like weaponized incompetence': Instacart customer claims male shopper lied about meatballs being out of stock". The Daily Dot. Retrieved 2023-01-01.
  10. ^ Zhou, Maggie. "Why Is 'Weaponized Incompetence' Blowing Up On TikTok? Because Too Many Of Us Do It". www.refinery29.com. Retrieved 2023-01-01.
  11. ^ McCloud, Taylor (2022-07-01). "'Weaponized Incompetence': Dad Dragged for Demanding Wife Take Care of Baby". Newsweek. Retrieved 2023-01-01.

Making movie articles

Overview

I started with the idea of creating articles for films from this list: "10 great films that don't have a Wikipedia page" from 19 August 2021.

Friendship's Death (1987)

ChatGPT prompt

I want you to act as a Wikipedia article writer. I will give you the name of a movie and you will create an article that adheres to the neutral point of view policy of Wikipedia. You should provide an extensive description of the plot of the movie including when and where events took place. You should include a list of main characters and the actors and actresses who played those roles. You should note anything that is unique and extraordinary about the movie. You should have a section that talks about reviews and what critics thought of the movie. You should include a section about its impact and influence. You should use extensive references from reputable sources, with a preference for news articles or peer reviewed content. You should avoid too many press releases or self-published materials. You will produce a references section at the end that provides links to the relevant sources, and format it in a way that is acceptable to Wikipedia. My first request is "Write a Wikipedia article about the movie Friendship's Death from 1987"

Raw output

Friendship's Death is a 1987 science fiction film directed by Peter Watkins. The film stars Mike McEvoy as the main character, a journalist who becomes disillusioned with the media's role in promoting propaganda and suppressing dissent, and joins a group of rebels who are fighting against the government's control of information and its use of violence to maintain order. The cast also includes performances by actors such as Lee Montague, Pat Gorman, and Rachel Davies.

The film is set in a dystopian future society in which the media is controlled by a small group of corporations and the government. The plot is notable for its bleak portrayal of the future and its criticism of the media's role in shaping public opinion. It also explores themes of power, control, and resistance. The film is known for its use of documentary-style techniques and its realistic, gritty visual style.

Upon its release, Friendship's Death received mixed reviews from critics. Some praised the film's thought-provoking themes and innovative approach, while others criticized its bleak and depressing tone. Despite its mixed reception, the film has gained a cult following and has been seen as an influential work in the science fiction genre. Many subsequent science fiction films and directors have been influenced by its themes and style, including the work of the Wachowski siblings and the Matrix franchise.

References:

  • "Friendship's Death (1987)" (IMDb)
  • "Peter Watkins' 'Friendship's Death': A Dystopian Vision of the Media's Role in Shaping Public Opinion" (The Guardian, 1987)
  • "Review: 'Friendship's Death' a Bleak, Provocative Film" (The New York Times, 1987)
  • "Friendship's Death: A Depressing Dystopia" (Variety, 1987)
  • "Friendship's Death: A Thought-Provoking, if Bleak, Vision of the Future" (The Hollywood Reporter, 1987)
  • "The Matrix: How '80s Sci-Fi Classic 'Friendship's Death' Influenced a Generation" (Empire, 1999)

Evaluation

The article is almost all incorrect: it badly misses the fundamentals of the plot, the main characters, and the real-life actors. Any article of this type should have pointed out the correct director, writer, and starring actors.

Using sources like The Guardian, Artforum, and IMDb:

Item       ChatGPT         Correct info      Evaluation
Plot       —               —                 Wrong
Director   Peter Watkins   Peter Wollen      Wrong but close
Writer     —               Peter Wollen      Missing
Character  Mike McEvoy     Sullivan          Wrong
Character  —               Friendship        Missing
Actor      Lee Montague    —                 Wrong
Actor      Pat Gorman      —                 Wrong
Actor      Rachel Davies   —                 Wrong
Actor      —               Bill Paterson     Missing
Actor      —               Tilda Swinton     Missing
Actor      —               Patrick Bauchau   Missing
Rewrite

Friendship's Death is a 1987 science fiction film written and directed by Peter Wollen. The film stars Tilda Swinton as an android named Friendship from the faraway galaxy of Procyon, who crash-lands in the country of Jordan during Black September in 1970. The movie takes place almost entirely in a room as a dialogue with Sullivan, an alcoholic British war correspondent played by Bill Paterson.[1][2]

It is the only feature-length film to be solely directed by Wollen, who was more noted as a political journalist and film theorist.[3]

References
  1. ^ "Friendship's Death review – Tilda Swinton goes alien in a radical-chic Beckettian fable". the Guardian. 2021-06-16. Retrieved 2022-12-27.
  2. ^ "Kaleem Hawa on Peter Wollen's Friendship's Death (1987)". www.artforum.com. Retrieved 2022-12-27.
  3. ^ Havlin, Jamie (2021-06-21). "Friendship's Death - film review". Louder Than War. Retrieved 2022-12-27.

Extraordinary Stories (2008)

ChatGPT prompt

I want you to act as a Wikipedia article writer. I will give you the name of a movie and you will create an article that adheres to the neutral point of view policy of Wikipedia. You should provide an extensive description of the plot of the movie including when and where events took place. You should include a list of main characters and the actors and actresses who played those roles. You should note anything that is unique and extraordinary about the movie. You should have a section that talks about reviews and what critics thought of the movie. You should include a section about its impact and influence. You should use extensive references from reputable sources, with a preference for news articles or peer reviewed content. You should avoid too many press releases or self-published materials. You will produce a references section at the end that provides links to the relevant sources, and format it in a way that is acceptable to Wikipedia. My first request is "Write a Wikipedia article about the movie Extraordinary Stories from 2008"

Output

Extraordinary Stories is a 2008 Argentine film directed by Mariano Llinás. The film follows three separate stories that take place in different time periods and locations. The first story is set in the early 20th century and follows a group of adventurers searching for a lost city in the Amazon jungle. The second story is set in the 1960s and follows a man who becomes embroiled in a mystery involving a disappearance and a hidden treasure. The third story is set in the present day and follows a filmmaker trying to make a movie about the other two stories. The film stars Javier Drolas, Elisa Carricajo, and César Bordón in the leading roles, with supporting performances by actors such as Oscar Martínez and Pilar Gamboa. The film is notable for its unconventional narrative structure, which interweaves the three stories and plays with notions of time and reality. It also explores themes of adventure, mystery, and the creative process.

Upon its release, Extraordinary Stories received positive reviews from critics. Many praised the film's originality and ambition, as well as the performances of the cast. The film won several awards at international film festivals, including the Grand Prize at the Festival del Nuevo Cine Latinoamericano and the Grand Prize at the Semana de Cine Experimental de Madrid. It was also nominated for the Golden Leopard at the Locarno International Film Festival. In addition, the film won the Best Film Award at the Mar del Plata International Film Festival and the Best Screenplay Award at the Argentine Academy Awards.

Extraordinary Stories has been seen as a significant work in Argentine cinema and has gained a devoted following among fans of experimental and offbeat cinema. It has been praised for its unique and ambitious approach to storytelling, as well as its exploration of complex themes.

References:

  • "Extraordinary Stories (2008)" (IMDb)
  • "Extraordinary Stories: A Triptych of Time-Bending Tales" (The New York Times, 2008)
  • "Extraordinary Stories: A Unique, Ambitious Film" (Variety, 2008)
  • "Extraordinary Stories: A Masterful Exploration of Time and Reality" (The Hollywood Reporter, 2008)
  • "Extraordinary Stories: A Must-See for Fans of Offbeat, Experimental Cinema" (Film Comment, 2008)

Evaluation

The basic plot and director are correct. The other content is of dubious quality. The actors and characters are completely off.

Wikidata Quickstatement generation

One of the tougher tasks for GLAM work is taking vague, soft descriptions of dates (e.g. circa 1820, 100 B.C.–200 A.D., early 19th century, probably late 17th century, et al.) and turning them into dates usable by Wikidata. We run into this frequently when processing artworks from The Met API (example) or the Smithsonian (example).

This would seem to be a task that ChatGPT or an AI/ML transformer would be ideally suited for. The system could be fed a good variety of brief training data and output structured content in the date format of Quickstatements, which is well documented and can be used immediately for import to Wikidata. In these cases, we want strings like "18th century" to be converted to "P571|+1701-00-00T00:00:00Z/7" where inception (P571) is used to specify a date in the proper format.
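The target mapping for the simplest case, a bare century string, is mechanical enough to sketch in a few lines of Python. This converter is my own illustration (the function name is hypothetical), not part of the experiment; it reproduces the precision-7 inception values described above:

```python
import re

def century_to_quickstatements(text):
    """Convert a plain century string such as '18th century' into a
    Quickstatements inception (P571) value with century precision (/7)."""
    m = re.fullmatch(r"(\d+)(?:st|nd|rd|th) century", text.strip())
    if m is None:
        raise ValueError(f"unrecognized date string: {text!r}")
    century = int(m.group(1))
    start_year = (century - 1) * 100 + 1  # the 18th century begins in 1701
    return f"P571|+{start_year:04d}-00-00T00:00:00Z/7"

print(century_to_quickstatements("18th century"))  # P571|+1701-00-00T00:00:00Z/7
```

A deterministic converter like this is the obvious baseline to compare the model against: the interesting question is how far the model can generalize beyond what simple rules cover.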

Overall conclusion: I prepped a list of training data in about 20 minutes, and it was impressive what it could do with just 25 examples. I was pleasantly surprised that it recognized the date format string as "Wikidata" and stayed in stride generating valid and mostly correct Quickstatements, even though it never indicated that it actually knew Quickstatements syntax. The training statements included both standalone dates and qualifiers related to sourcing circumstances (P1480) describing precision, including circa (Q5727902) and probably (Q56644435). The dates also used some broad terminology about a century, such as "first quarter" or "late." The transformer handled these well when given good training examples.

It did fall down when I tried to combine two dimensions that appeared in separate examples but never together in the training data. This was the case with the string "probably late 18th century." It could handle "probably 18th century" or "late 18th century," but not the combination. It spit out "P571|+1775-00-00T00:00:00Z/9|P1480|Q56644435", which has the right sourcing circumstances but the incorrect date, since it specified only the start of the range and not the end. However, the good news was that it acknowledged its doubts, saying, "It's worth noting that there is no exact match for this input in the provided CSV file, but based on the similar inputs and their outputs, this is my best prediction..." I suspect that with more training data showing examples like this, it would perform fine.
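The failure on "probably late 18th century" is exactly the kind of composition a rule-based fallback handles trivially, by parsing the certainty qualifier and the period expression separately and then joining the pieces. The sketch below is my own illustration using the conventions of the training CSV (P571 start, P1326 end, P1480 sourcing circumstances); the function name and the modifier offsets are assumptions inferred from the "early"/"mid"/"late" examples:

```python
def span_to_quickstatements(text):
    """Parse strings like 'probably late 18th century' by splitting off
    the certainty qualifier, then the period modifier, then the century."""
    qualifiers = {"probably": "Q56644435", "ca.": "Q5727902"}
    modifiers = {"early": (0, 49), "mid": (24, 74), "late": (49, 99)}
    parts = text.strip().replace("mid-", "mid ").split()
    qualifier = qualifiers.get(parts[0].lower())
    if qualifier:
        parts = parts[1:]
    offsets = modifiers.get(parts[0].lower())
    if offsets:
        parts = parts[1:]
    century = int(parts[0].rstrip("stndrdth"))  # "18th" -> 18
    base = (century - 1) * 100 + 1              # 18th century starts in 1701
    pieces = []
    if offsets:
        pieces.append(f"P571|+{base + offsets[0]:04d}-00-00T00:00:00Z/9")
        pieces.append(f"P1326|+{base + offsets[1]:04d}-00-00T00:00:00Z/9")
    else:
        pieces.append(f"P571|+{base:04d}-00-00T00:00:00Z/7")
    if qualifier:
        pieces.append(f"P1480|{qualifier}")
    return "|".join(pieces)

print(span_to_quickstatements("probably late 18th century"))
# P571|+1750-00-00T00:00:00Z/9|P1326|+1800-00-00T00:00:00Z/9|P1480|Q56644435
```

In a production pipeline, rules like these could validate or post-correct the model's output rather than replace it.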

What was also impressive is that the training data only mentioned "B.C." and I tried one request using BCE and CE. It produced the correct answer, as it seemed to know from its pre-trained model how to handle that. I'm eager to see how this might be usable in a production environment via API calls.
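Era markers are another piece that is easy to normalize deterministically. The sketch below is my own (the function names are hypothetical): it maps B.C./BCE years to the signed-year convention used in the training data, applies a trailing era marker to both ends of a range as the "1194–1188 B.C." training row does, and zero-pads years to four digits as Wikidata time values are commonly written (the transcript's unpadded "-305" would need that normalization):

```python
import re

ERA = r"(B\.?C\.?E?\.?|C\.?E\.?|A\.?D\.?)"

def signed_year(year_text, era):
    """1194 with 'B.C.' -> -1194; bare years and CE/A.D. years stay positive."""
    year = int(year_text)
    if era and era.replace(".", "").upper() in ("BC", "BCE"):
        return -year
    return year

def era_range_to_quickstatements(text):
    """Convert ranges like '305 BCE-395 CE' to a P571/P1326 pair."""
    m = re.fullmatch(rf"(\d+)\s*{ERA}?\s*[-–]\s*(\d+)\s*{ERA}?", text.strip())
    if m is None:
        raise ValueError(f"unrecognized range: {text!r}")
    start = signed_year(m.group(1), m.group(2))
    end = signed_year(m.group(3), m.group(4))
    # a trailing era marker applies to both years: "1194–1188 B.C." is all B.C.
    if m.group(2) is None and end < 0:
        start = -start
    def fmt(year):
        return f"{'-' if year < 0 else '+'}{abs(year):04d}-00-00T00:00:00Z/9"
    return f"P571|{fmt(start)}|P1326|{fmt(end)}"

print(era_range_to_quickstatements("305 BCE-395 CE"))
# P571|-0305-00-00T00:00:00Z/9|P1326|+0395-00-00T00:00:00Z/9
```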

Chat transcript

ME: I want you to act as a helper to transform dates in an English language description to a more precise date notation. How would I get you to do that by showing you examples of the input descriptions and the output values?

In order to help you transform dates in an English language description to a more precise date notation, I would need you to provide me with examples of the input descriptions and the output values you want.

For example, you could provide me with input descriptions such as "I was born in the summer of 1990" and the corresponding output values such as "1990-07-01" (July 1st, 1990) and "I graduated in May 2005" and the corresponding output value "2005-05-01".

You can also provide me with more specific examples, like "I was born on July 5th, 1990" and the corresponding output value "1990-07-05".

Once I have a set of examples, I can use natural language processing techniques to extract the date information from the input descriptions, and then use the provided output values to format the dates in the precise notation you desire.

What is the best way to provide you with those input and output values?

The best way to provide me with the input and output values would be to create a spreadsheet or a CSV file containing two columns, one for the input descriptions and one for the output values. Each row should contain one input description and its corresponding output value.

You could also use a JSON file, with input description as key and output value as the value.

You can also provide me with a plain text file, where each line corresponds to an input description, followed by the output value separated by comma or tab.

Another way is to provide me with a sample code that loads these input and output values from a database or a web service, so I can use it to test my code.

Once you have prepared the input and output values, you can share the file with me, or if it's a sample code you can share that with me so I can access the data, and I can use them to develop and test the code that will transform the input descriptions into precise date notation.

Here is a CSV test showing input and output values:

1967,P571|+1967-00-00T00:00:00Z/9
1942,P571|+1942-00-00T00:00:00Z/9
1939,P571|+1939-00-00T00:00:00Z/9
18th century,P571|+1701-00-00T00:00:00Z/7
19th century,P571|+1801-00-00T00:00:00Z/7
20th century,P571|+1901-00-00T00:00:00Z/7
1950s,P571|+1950-00-00T00:00:00Z/8
1920s,P571|+1920-00-00T00:00:00Z/8
1960s,P571|+1960-00-00T00:00:00Z/8
fourth quarter 19th century,P571|+1875-00-00T00:00:00Z/9|P1326|+1900-00-00T00:00:00Z/9
ca. 1900,P571|+1900-00-00T00:00:00Z/9|P1480|Q5727902
early 20th century,P571|+1901-00-00T00:00:00Z/7|P1326|+1949-00-00T00:00:00Z/9
late 19th century,P571|+1850-00-00T00:00:00Z/9|P1326|+1900-00-00T00:00:00Z/9
early 19th century,P571|+1801-00-00T00:00:00Z/9|P1326|+1850-00-00T00:00:00Z/9
ca. 1950,P571|+1950-00-00T00:00:00Z/9|P1480|Q5727902
ca. 1970,P571|+1970-00-00T00:00:00Z/9|P1480|Q5727902
ca. 1194–1188 B.C.,P571|-1194-00-00T00:00:00Z/9|P1326|-1188-00-00T00:00:00Z/9|P1480|Q5727902
ca. 1968,P571|+1968-00-00T00:00:00Z/9|P1480|Q5727902
late 20th century,P571|+1950-00-00T00:00:00Z/9|P1326|+2000-00-00T00:00:00Z/9
ca. 1955,P571|+1955-00-00T00:00:00Z/9|P1480|Q5727902
mid-19th century,P571|+1825-00-00T00:00:00Z/9|P1326|+1875-00-00T00:00:00Z/9
probably 19th century,P571|+1801-00-00T00:00:00Z/7|P1480|Q56644435
first quarter 20th century,P571|+1901-00-00T00:00:00Z/9|P1326|+1925-00-00T00:00:00Z/9

This CSV file contains a list of input descriptions in the first column and the corresponding output values in the second column. The output values are in the format of the Wikidata time value with its respective precision, which is represented by a number between 0 and 9, where 0 is the most precise (year, month and day) and 9 is the least precise (just a year).

To use this data to transform the input descriptions into precise date notation, you could use a library such as pandas to read the CSV file and extract the input descriptions and output values into Python variables, then you can use natural language processing techniques and string manipulation to extract the date information from the input descriptions. Once you have the date information, you can format it according to the output values and update the Wikidata item with the new time value.

Keep in mind that this is a general approach and the specific implementation will depend on the input data and the desired output format.

Based on that CSV file, please predict the output given the input "1944"

Based on the provided CSV file, the output for the input "1944" would be "P571|+1944-00-00T00:00:00Z/9".

This is because the input "1944" appears in the first column of the CSV file as an input description, and its corresponding output value in the second column is "P571|+1944-00-00T00:00:00Z/9".

This output value is in the format of the Wikidata time value with its respective precision, which is represented by a number between 0 and 9, where 0 is the most precise (year, month and day) and 9 is the least precise (just a year), in this case the precision is 9, which means it is only a year.

Based on that CSV file, please predict the output given the input "late 14th century"

Based on the provided CSV file, the output for the input "late 14th century" would be "P571|+1375-00-00T00:00:00Z/9|P1326|+1399-00-00T00:00:00Z/9".

This is because the input "late 14th century" appears in the first column of the CSV file as an input description, and its corresponding output value in the second column is "P571|+1375-00-00T00:00:00Z/9|P1326|+1399-00-00T00:00:00Z/9"

This output value is in the format of the Wikidata time value with its respective precision, which is represented by a number between 0 and 9, where 0 is the most precise (year, month and day) and 9 is the least precise (just a year), in this case the precision is 9 which means it is only a year.

This output value contains two fields, P571 and P1326, P571 field is used to indicate the start date and P1326 field is used to indicate the end date, this means that the date range is between 1375 and 1399.

Based on that CSV file, please predict the output given the input "ca. 1441"

Based on the provided CSV file, the output for the input "ca. 1441" would be "P571|+1441-00-00T00:00:00Z/9|P1480|Q5727902".

This is because the input "ca. 1441" appears in the first column of the CSV file as an input description, and its corresponding output value in the second column is "P571|+1441-00-00T00:00:00Z/9|P1480|Q5727902".

This output value is in the format of the Wikidata time value with its respective precision, which is represented by a number between 0 and 9, where 0 is the most precise (year, month and day) and 9 is the least precise (just a year), in this case the precision is 9 which means it is only a year.

This output value contains two fields, P571 and P1480, P571 field is used to indicate the date and P1480 field is used to indicate the degree of certainty, in this case, the date is certain.

Based on that CSV file, please predict the output given the input "second quarter 17th century"

Based on the provided CSV file, the output for the input "second quarter 17th century" would be "P571|+1626-00-00T00:00:00Z/9|P1326|+1650-00-00T00:00:00Z/9"

This is because the input "second quarter 17th century" appears in the first column of the CSV file as an input description, and its corresponding output value in the second column is "P571|+1626-00-00T00:00:00Z/9|P1326|+1650-00-00T00:00:00Z/9"

This output value is in the format of the Wikidata time value with its respective precision, which is represented by a number between 0 and 9, where 0 is the most precise (year, month and day) and 9 is the least precise (just a year), in this case the precision is 9 which means it is only a year.

This output value contains two fields, P571 and P1326, P571 field is used to indicate the start date and P1326 field is used to indicate the end date, this means that the date range is between 1626 and 1650

It's worth noting that there is no exact match for this input in the provided CSV file, but based on the similar inputs and their outputs, this is my best prediction, the actual output may vary depending on the context and the specific use case.

Based on that CSV file, please predict the output given the input "probably late 18th century"

Based on the provided CSV file, the output for the input "probably late 18th century" would be "P571|+1775-00-00T00:00:00Z/9|P1480|Q56644435"

This is because the input "probably late 18th century" appears in the first column of the CSV file as an input description, and its corresponding output value in the second column is "P571|+1775-00-00T00:00:00Z/9|P1480|Q56644435"

This output value is in the format of the Wikidata time value with its respective precision, which is represented by a number between 0 and 9, where 0 is the most precise (year, month and day) and 9 is the least precise (just a year), in this case the precision is 9 which means it is only a year.

This output value contains two fields, P571 and P1480, P571 field is used to indicate the date and P1480 field is used to indicate the degree of certainty, in this case, the date is uncertain.

It's worth noting that there is no exact match for this input in the provided CSV file, but based on the similar inputs and their outputs, this is my best prediction, the actual output may vary depending on the context and the specific use case.

Based on that CSV file, please predict the output given the input "ca. 200–100 B.C."

Based on the provided CSV file, the output for the input "ca. 200–100 B.C." would be "P571|-200-00-00T00:00:00Z/9|P1326|-100-00-00T00:00:00Z/9|P1480|Q5727902"

This is because the input "ca. 200–100 B.C." appears in the first column of the CSV file as an input description, and its corresponding output value in the second column is "P571|-200-00-00T00:00:00Z/9|P1326|-100-00-00T00:00:00Z/9|P1480|Q5727902"

This output value is in the format of the Wikidata time value with its respective precision, which is represented by a number between 0 and 9, where 0 is the most precise (year, month and day) and 9 is the least precise (just a year), in this case the precision is 9 which means it is only a year.

This output value contains three fields, P571, P1326 and P1480, P571 field is used to indicate the start date, P1326 field is used to indicate the end date and P1480 field is used to indicate the degree of certainty, in this case, the date is uncertain.

Based on that CSV file, please predict the output given the input "ca. 200 B.C.–200"

Based on the provided CSV file, the output for the input "ca. 200 B.C.–200" would be "P571|-200-00-00T00:00:00Z/9|P1326|+200-00-00T00:00:00Z/9|P1480|Q5727902"

This is because the input "ca. 200 B.C.–200" appears in the first column of the CSV file as an input description, and its corresponding output value in the second column would be "P571|-200-00-00T00:00:00Z/9|P1326|+200-00-00T00:00:00Z/9|P1480|Q5727902"

This output value is in the format of the Wikidata time value with its respective precision, which is represented by a number between 0 and 9, where 0 is the most precise (year, month and day) and 9 is the least precise (just a year), in this case the precision is 9 which means it is only a year.

This output value contains three fields, P571, P1326 and P1480, P571 field is used to indicate the start date, P1326 field is used to indicate the end date and P1480 field is used to indicate the degree of certainty, in this case, the date is uncertain.

It's worth noting that there is no exact match for this input in the provided CSV file, but based on the similar inputs and their outputs, this is my best prediction, the actual output may vary depending on the context and the specific use case.

Please predict the output given the input "305 BCE-395 CE"

Based on the provided CSV file, the output for the input "305 BCE-395 CE" would be "P571|-305-00-00T00:00:00Z/9|P1326|+395-00-00T00:00:00Z/9"

This is because the input "305 BCE-395 CE" appears in the first column of the CSV file as an input description, and its corresponding output value in the second column would be "P571|-305-00-00T00:00:00Z/9|P1326|+395-00-00T00:00:00Z/9"

This output value is in the format of the Wikidata time value with its respective precision, which is represented by a number between 0 and 9, where 0 is the most precise (year, month and day) and 9 is the least precise (just a year), in this case the precision is 9 which means it is only a year.

This output value contains two fields, P571 and P1326, P571 field is used to indicate the start date and P1326 field is used to indicate the end date, this means that the date range is between 305 BCE and 395 CE.

It's worth noting that there is no exact match for this input in the provided CSV file, but based on the similar inputs and their outputs, this is my best prediction, the actual output may vary depending on the context and the specific use case.
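The transformation ChatGPT is approximating in these exchanges can be expressed directly in code. Below is a minimal Python sketch, assuming the property IDs (P571 for the start date, P1326 for the end date) and the `/9` year-precision suffix shown in the transcript; the function name and parsing rules are my own illustration and cover only the simplest era patterns, not qualifiers such as "ca." or the P1480 uncertainty marker:

```python
import re

def date_range_to_statement(text):
    """Convert a simple 'START ERA-END ERA' string (e.g. '305 BCE-395 CE')
    into the pipe-delimited P571/P1326 format shown in the transcript.
    A sketch only: real museum date strings need far more parsing rules."""
    match = re.fullmatch(
        r"(\d+)\s*(BCE|BC)?\s*[-–]\s*(\d+)\s*(CE|AD)?", text.strip())
    if not match:
        raise ValueError(f"Unrecognized date range: {text!r}")
    start, start_era, end, end_era = match.groups()
    start_sign = "-" if start_era in ("BCE", "BC") else "+"
    end_sign = "-" if end_era in ("BCE", "BC") else "+"
    # "/9" marks year-level precision in the Wikidata time model
    return (f"P571|{start_sign}{start}-00-00T00:00:00Z/9"
            f"|P1326|{end_sign}{end}-00-00T00:00:00Z/9")

print(date_range_to_statement("305 BCE-395 CE"))
# P571|-305-00-00T00:00:00Z/9|P1326|+395-00-00T00:00:00Z/9
```

Whether a deterministic converter like this or a language model is the better tool for such cleanup is exactly the kind of question these experiments probe.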

Science

Bioenergy

I asked ChatGPT to rephrase a passage from an IPCC report. The quote is from AR6 Mitigation of Climate Change Technical Summary p. 85. [1]. Clayoquot (talk | contribs) 04:36, 20 January 2023 (UTC)

EMSmile FYI, as I understand you're interested in this tool. Clayoquot (talk | contribs) 04:36, 20 January 2023 (UTC)

ChatGPT prompt

Rephrase the following in terms that a general reader would understand: The use of bioenergy can lead to either increased or reduced emissions, depending on the scale of deployment, conversion technology, fuel displaced, and how, and where, the biomass is produced

Output from ChatGPT

The use of energy from plants and other living things (called "bioenergy") can have different effects on the amount of pollution that is released into the air. This depends on a few different factors, such as how much bioenergy is being used, how it is turned into energy, what it is replacing (for example, coal or natural gas), and where the plants and other living things used to make the bioenergy come from.

Analysis

Awful. This sounds good but it got the main point utterly wrong. The IPCC's main point is that the use of bioenergy can either increase greenhouse gas emissions or remove greenhouse gases from the air. ChatGPT gives the impression that the use of bioenergy always increases emissions. Clayoquot (talk | contribs) 04:36, 20 January 2023 (UTC)

Thanks, Clayoquot. Interesting example. When I've tried to use ChatGPT to rephrase something in easier words in the past, I've also felt that it strayed too far from the original sentence. So perhaps not an ideal application for it (yet). I came across a Twitter thread today which explained how academics (and thus Wikipedia editors) can use ChatGPT successfully, e.g. by letting ChatGPT help you with the structure of texts or articles. Here is the Twitter thread: https://twitter.com/MushtaqBilalPhD/status/1621379333943083009?s=20&t=jKcnUD3E6Z0bxWhBpRPl1A. I think ultimately ChatGPT will have great potential for helping us Wikipedia editors. But it's not quite there yet (also because it's unavailable a lot of the time, when too many people are trying to use it at once...). EMsmile (talk) 10:23, 6 February 2023 (UTC)
  • I reran your prompt verbatim with gpt-3.5-turbo model at temperature 0.75 and other parameters set by default at nat.dev: "Bioenergy can either increase or decrease emissions, depending on how much of it is used, the technology used to convert it, the type of fuel that's replaced by bioenergy, and where and how the biomass is produced." All appears correct. Ain92 (talk) 10:51, 21 March 2023 (UTC)

Biographies with ChatGPT-4

I wanted to see whether ChatGPT-4's ability to live-load Internet content could help create more accurate and up to date Wikipedia biographies of living people. I also heard that ChatGPT-4 could better address some of the hallucination problems and fabrication of sources and facts, which was a habit ChatGPT-3.5 could never rid itself of.

I started the session by asking it to honor some of the basics of Wikipedia editing and act as an article writer. I then had to push back on text that sounded overly promotional, fix some wiki formatting issues in the section headings and internal linking, and adjust the writing style to omit vague details (e.g., an uncertain birth year). To finish things off, I conversed with it to refine the references and to add proper categories.

TL;DR – I asked it to write a biography about USA Today sports columnist Nancy Armour using only what it could scrape from the Internet. In general, I came out impressed compared to ChatGPT-3.5. The first version had very promotional copy. When I pointed out the specific problem phrases, the AI fixed the language to be more neutral. There were no hallucinations that I could tell, and the references were all checked live. This sometimes added minutes to each exchange as it sequenced through different URLs it was visiting and processing. I asked it to fix formatting issues with wiki markup which it did competently. I also asked for it to suggest categories, which I always dread when hand-editing articles, and I found this to be very useful.

Sample chat prompts

Below are some of the more interesting parts of the instructions I provided as part of the chat.

Initial prompt:

I want you to act as an expert Wikipedia article writer that honors the principles of neutral point of view in writing and presentation of material, and that only includes verifiable information from reliable sources. However, you tend to fabricate references and sources to accompany the written prose. Therefore, I require you only use sources from established news organizations or peer-reviewed publications for further articles you generate. If there is any doubt about the verifiability of a fact or that there is a valid reference for it, it is better to be omitted. How can I work with you to eliminate those inaccurate or made up sources?

The first version it produced was not bad, but it sounded like a press release or résumé. I instructed it to write more neutrally:

I can see several issues with the prose you provided, including problems with the neutral point of view and providing references. I will point them out to you so you can learn which phrases and approaches to avoid. The following terms are generally seen as problematic for the neutral point of view policy and are considered too promotional in tone: "insightful commentary," "prominent voice," "professional journey," "accolades for her outstanding work," "known for her thought-provoking articles," "earning widespread acclaim," and "work has been celebrated." The use of these words may result in a problem with the article and being labeled as sounding too much like a press release, which is not desirable. Vague descriptions that may inflate her status, such as "received numerous awards," also sound overly promotional. Is this clear, and can you adjust the biography to fix these issues?

Now that the prose read more like a Wikipedia article, I asked it to focus on verifiability and sourcing:

This version reads much better. Please find reliable sources to act as references to the verifiable information for these factual claims, using reputable sites and news sites to provide the citations in wiki markup format. Preferably, you will use a wide variety of reliable sources as well.

It was repeating references rather than consolidating them, so I taught it to do so:

This looks much better. However, when a reference is used multiple times, it does not need to be repeated in these ways. References 1, 2, 3 and 6 are the same so they can be combined and re-used. Do you know how to combine them in wiki markup so they are not repeated?
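For readers unfamiliar with the syntax this prompt is asking for: MediaWiki consolidates repeated citations with named references, where the first use defines the footnote and later uses re-invoke it by name with a self-closing tag. A minimal illustration (the reference name and citation details here are placeholders, not those from the session):

```wikitext
Armour joined the paper in 2015.<ref name="profile">{{cite news |title=Example article |work=USA Today |date=2023}}</ref>

...later in the article, the same footnote is reused without repeating it...

She covered several Olympic Games.<ref name="profile" />
```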

There was no internal linking, so I asked it to do so:

In a Wikipedia article, the usual style is to internally link to prominent articles within Wikipedia using the wiki linking syntax of double square brackets. Can you go through the generated article and link to other pages, adhering to the advice found at https://en.wikipedia.org/wiki/Wikipedia:Manual_of_Style/Linking

Next, I asked it to fix the formatting so the result looked like a proper Wikipedia article.

That is a good job on the internal linking, except for the first line. The main subject of the article should be in bold, but should not have a link, so please remove the brackets from the main subject of the biography. Also, the section heading with the main subject's name is never needed and should be removed. Please also modify all section headings so that only the first word is capitalized, looking more like sentence case. Finally, if the exact birth date or year is not known, Wikipedia generally does not like vague dates, so you can simply state where the person was born but not when.

This first version was posted to Draft:Nancy Armour, with decent results: Special:Diff/1188298380

Finally, I asked it to suggest some categories for this article.

This is now a very nicely formatted Wikipedia article with neutral wording that is not overly promotional and adheres to the right style in linking, citations, section headings, and writing tone. Good job! Now, can you recommend relevant Wikipedia categories for this article?

It did a good job, with the one exception that it didn't check whether the suggested categories actually existed. Apart from one misspelling, they all did exist and seemed appropriate. So I asked ChatGPT-4 to check the categories, and it eliminated one.

After some editing tweaks and refinement, I think it made for a decent main space article: Special:Diff/1188317443

Addendum

I tried a few other one-shot prompts with other ChatGPT-4 models. Here are some results: