User:Fuzheado/ORES experiment

Experimenting with Revscore and ORES in the Classroom

By Andrew Lih, February 9, 2016

“Yay!”

“Alright!”

“I feel like we’re part of a secret computer club!”


I’ve taught many people how to edit Wikipedia over the last decade, but hearing this kind of exhilaration from students was a change. Typically, when new users edit Wikipedia articles, they hope their changes stick around and that they’ve added something of value. But today, students were saving their edits, hitting a button, getting immediate feedback on their actions – “…stub… start… C…” – and going back for more editing.

[Wikipedia article quality scale: FA, A, GA, B, C, Start, Stub]

I had my students use a new feature called ORES (Objective Revision Evaluation Service), produced by the research team at the Wikimedia Foundation. Using machine learning and a model of what a well-developed Wikipedia entry looks like, ORES can algorithmically and instantaneously provide a general quality assessment of any article in Wikipedia. Without ORES, this article assessment is done tediously by hand, using volunteer human effort to rate the content against a checklist of desired features. This manual process is slow and haphazard, and the ratings quickly fall out of date as other editors (hopefully) improve the article.

Assessments

What do these assessments look like? A new Wikipedia entry often begins as a one-line “stub” article, which may graduate to “start” status when it has a few paragraphs of content and eventually make it to C- or B-class. Once an article has enough references, well-constructed sentences, images, and categories, it can reach good article (GA) status, and perhaps one day the highest rating, featured article (FA) status.

Why is this interesting?

In 2015, I saw that the experimental ORES allowed for any revision of a Wikipedia article to be given a rating (stub, start, C, B, good or featured) instantaneously. This capability was something I’d always dreamed of. (It is true: Wikipedia editors have strange dreams.) As a teacher, I saw the potential to provide crucial instant feedback to new Wikipedia editors to get them hooked on the experience. For meetups and edit-a-thons, where we train new users, what if newbies could see the impact their edits were having, instantaneously, as they were saving each revision?

Using ORES

 
[Image: Sample output of an ORES query against the WP 1.0 model]

As an experiment, I tried this out on my class of 13 students and included myself in the test pool. Each of us chose an article in English Wikipedia within a WikiProject that was assessed by hand as a stub and tried to see how far we could improve it in one hour’s time. We would measure our progress every 20 minutes with the latest tool – ORES.

The first test was to see whether ORES agreed that the revision manually assessed as a stub was also an ORES-evaluated stub. Interestingly, 8 of the 14 articles that were hand-labeled “stub” were in fact rated as “start” class by ORES. I gave the students a list of standard features to consider adding to the article – sections, infoboxes, references, images, internal wikilinks, categories and links to Wikimedia Commons. They would have to do the research and search the Internet to fill in these common article features.

After 20 minutes of editing, we did the first check-in. People saved their edits, determined the unique “oldid” identifier for their saved revision by clicking on the “Permanent link” button on the Wikipedia page, and retrieved the ORES score via a RESTful API call:

https://ores.wmflabs.org/scores/enwiki/wp10/#######

For example, the following URL asks ORES to evaluate a version of the space vehicle article from February 9, 2016, and produces the JSON result shown in the sample output image above.

http://ores.wmflabs.org/scores/enwiki/wp10/704158086/

ORES has calculated that the article is most likely a "start" class, and a glance at the article shows this is a pretty good determination. The article has five sections, including a ==References== section at the bottom. Even though two sections are very short and contain only one paragraph, it is well written and has photos. Despite it having a template saying it is a "stub," ORES thinks it's higher quality than that.
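For anyone who would rather script these check-ins than paste URLs into a browser, a short script can retrieve and parse the same score. The following is a minimal sketch, not something we used in class: it assumes the Python requests library and a 2016-era response format (a JSON object keyed by revision ID with "prediction" and per-class "probability" fields), so the parsing may need adjusting to the actual output.

# Minimal sketch: fetch an ORES wp10 score for one revision and print the
# predicted quality class. The JSON shape below is an assumption based on
# the URL format shown above and may need adjusting.
import requests

ORES_URL = "https://ores.wmflabs.org/scores/enwiki/wp10/{revid}"

def predicted_class(revid):
    """Return the predicted class and per-class probabilities for a revision."""
    resp = requests.get(ORES_URL.format(revid=revid), timeout=30)
    resp.raise_for_status()
    # Assumed shape: {"<revid>": {"prediction": "Start", "probability": {...}}}
    score = resp.json()[str(revid)]
    return score["prediction"], score["probability"]

prediction, probabilities = predicted_class(704158086)
print("Predicted class:", prediction)
for cls, p in sorted(probabilities.items(), key=lambda kv: kv[1], reverse=True):
    print(f"  {cls}: {p:.3f}")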

The results

At the first check-in at 20 minutes, five people were able to get their articles from stub to start class. One was able to get their article from start to C class. The other eight editors stayed at the same level. After 40 minutes, two more people were able to get their articles to C class.

At the end of the one-hour session, a total of five articles made it to C class. Four articles did not change class at all and remained in the “start” class (one student had an error and lost a few paragraphs of edits). Nine articles jumped exactly one class, and one – Women and religion – jumped two classes. (A closer inspection of the latter article shows that content was copied in from other “Women in…” religion articles, which explains its rapid expansion. While not technically a violation of Wikipedia style, some work may be needed to determine whether content was replicated in the right proportions.)

 
[Image: ORES editing experiment with User:Fuzheado's class, February 2016, at American University]

It’s interesting to note that the Wikipedia editing community has traditionally shunned metrics and gamification in the editing process. Part of this is an understandable resistance to reducing the breadth of editor activity to a single quality score. But having students excited and experiencing that instant dopamine hit while editing articles is a breath of fresh air at a time when pleasurable interactions seem less frequent than in Wikipedia’s early days.

I wanted to experiment with ORES as a tool that might give new learners of Wikipedia a way to engage with the content. I’m encouraged by this experiment, and I can see a scoreboard or leaderboard system being used for events or classes. Some projects, such as the Wiki Education Foundation, are already doing this with their Dashboard, and WikiProjectX is experimenting with ORES as a way to track the quality of article clusters. I look forward to new creative tools taking advantage of this capability.
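As a rough sketch of what such a scoreboard might look like, the snippet below compares the ORES prediction for each editor's starting revision with their latest one and ranks editors by how many classes their article climbed. The editor names and revision IDs are hypothetical placeholders, and the response parsing carries the same assumptions as the earlier sketch.

# Hypothetical leaderboard sketch built on the same wp10 endpoint; names and
# revision IDs are placeholders, and the JSON shape is assumed as above.
import requests

ORES_URL = "https://ores.wmflabs.org/scores/enwiki/wp10/{revid}"
CLASS_ORDER = ["Stub", "Start", "C", "B", "GA", "FA"]   # lowest to highest

def predicted_class(revid):
    resp = requests.get(ORES_URL.format(revid=revid), timeout=30)
    resp.raise_for_status()
    return resp.json()[str(revid)]["prediction"]

def leaderboard(entries):
    """entries: list of (editor, first_revid, latest_revid) tuples."""
    rows = []
    for editor, first_rev, latest_rev in entries:
        before = predicted_class(first_rev)
        after = predicted_class(latest_rev)
        jump = CLASS_ORDER.index(after) - CLASS_ORDER.index(before)
        rows.append((jump, editor, before, after))
    # Biggest class jump first
    for jump, editor, before, after in sorted(rows, reverse=True):
        print(f"{editor}: {before} -> {after} ({jump:+d} classes)")

# Example call with placeholder data:
# leaderboard([("Student A", 700000001, 700000101),
#              ("Student B", 700000002, 700000102)])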