Welcome!

Hello, MattEnth, and welcome to Wikipedia! Thank you for your contributions. I hope you like the place and decide to stay. Here are a few links to pages you might find helpful:

You may also want to take the Wikipedia Adventure, an interactive tour that will help you learn the basics of editing Wikipedia. You can visit The Teahouse to ask questions or seek help.

Please remember to sign your messages on talk pages by typing four tildes (~~~~); this will automatically insert your username and the date. If you need help, check out Wikipedia:Questions, ask me on my talk page, or click here to ask for help on your talk page, and a volunteer should respond shortly. Again, welcome! JAGUAR  13:41, 14 November 2015 (UTC)

Re: OpenCritic Updates?

Responded here czar 14:11, 3 May 2016 (UTC)

Shop talk on OpenCritic

Hi Matt. Now that E3's behind us, I'd like to talk some more shop about OpenCritic's feature set. I left a message to this effect last month on czar's talk page but it probably got lost in the weeds. Specifically, I would love to see OpenCritic be an innovator in the realm of presentation of statistics about a game's reception. Currently, both OpenCritic and Metacritic report a simple arithmetic mean of all aggregated (scored) reviews. I love the Rotten Tomatoes-style % of reviews recommending the game. I love the histogram showing where a game falls on the distribution of all scored games. In fact, I actually think this number should be reported first/more prominently since it tells you more about how a game scored in a comparative sense. Take Mirror's Edge Catalyst for example. At first blush, a 71/100 seems like a decent, perhaps above average score. But when you tell me that 54% of all games scored better than this? That's unbelievable! (As an aside, I want to see this histogram on a per-publication and/or per-reviewer basis, which would control for biases from converting all scores to a 100pt scale.) I think a good place to start is to report other descriptive statistics like standard deviation and include that prominently in the headline score (e.g. 71 ± 5) to raise awareness about the degree of consistency of the scores. Thanks for hearing us out! Axem Titanium (talk) 18:56, 21 June 2016 (UTC)
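For concreteness, here is a minimal Python sketch of the statistics being discussed: the arithmetic mean, the standard deviation, and the percentile-style comparison against other games. The score lists are invented for illustration and are not real OpenCritic data.

    # A minimal sketch, not OpenCritic's implementation: mean, spread, and
    # a percentile-style comparison, all on invented data.
    from statistics import mean, stdev

    def percent_scored_better(all_game_averages, this_average):
        """Share of games whose average score beats this game's average."""
        better = sum(1 for avg in all_game_averages if avg > this_average)
        return 100 * better / len(all_game_averages)

    scores = [65, 70, 68, 75, 72, 74, 71, 69]   # one game's critic scores (0-100)
    headline = mean(scores)                      # the simple arithmetic mean
    spread = stdev(scores)                       # sample standard deviation
    print(f"{headline:.0f} ± {spread:.0f}")      # headline score with spread

    catalog = [55, 60, 71, 78, 82, 90, 74, 68]   # invented averages of other games
    print(f"{percent_scored_better(catalog, headline):.0f}% of games scored better")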

Hi Axem. We're working on a whole bunch of this. Big brain dump for ya:
Longer-term, we actually feel like the Rotten Tomatoes percentage is much more indicative of a game's quality. The #1 criticism we get about OpenCritic is "you're averaging a bunch of arbitrary numbers," and we actually sort of agree with that. Each publication has its own review scale. Each publication has different ways they grade and critique games. The lack of standardization significantly devalues the metric.
The beauty of the % recommended metric (when done right) is that it's consistent. Does this critic recommend this game to general gamers? Yes or no?
We didn't think it would be wise to do this at launch. The reality is that most gamers talk about the average review score. If we'd chosen to go with the more diverse data set at our launch, it had the potential to be confusing and overwhelming. And if we'd gone with a metric other than the average review score, we risked not being included in conversations and thus not having a path to grow.
But it's something we are working on. The % recommended metric is in our crosshairs right now. We're working on empowering publications and critics to be able to choose their thresholds for "recommended." IGN might say that they recommend games 7.5 and higher, while Giant Bomb might save that for 4 stars and higher.
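A rough sketch of how those per-publication thresholds could work; only the IGN (7.5 and higher) and Giant Bomb (4 stars and higher) cutoffs come from the message above, and the rest is invented for illustration:

    # Hypothetical data structures; only the IGN (7.5+) and Giant Bomb
    # (4 stars+) cutoffs come from the message above.
    THRESHOLDS = {
        "IGN": 7.5,         # 0-10 scale: recommends 7.5 and higher
        "Giant Bomb": 4.0,  # star scale: recommends 4 stars and higher
    }

    def is_recommended(publication: str, score: float) -> bool:
        """True if the critic's score clears that outlet's own cutoff."""
        return score >= THRESHOLDS[publication]

    reviews = [("IGN", 8.0), ("Giant Bomb", 3.0)]   # invented review data
    pct = 100 * sum(is_recommended(p, s) for p, s in reviews) / len(reviews)
    print(f"{pct:.0f}% of critics recommend this game")   # -> 50%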
We're also working on just getting more data. We want OpenCritic to actually be "open" to anyone who's interested in writing nuanced and high-quality reviews. One of our projects is tentatively called the "Contributor Program," which lets anyone sign up and become a critic. Instead of posting their reviews on OpenCritic, they'll post them on their own website, blog, or YouTube channel. That way, they continue to own their content and reviews while still being able to use OpenCritic to grow their audience. We'll still have higher standards for becoming an official critic that's included in the OpenCritic average, but opening the floodgates to hundreds of amateur and up-and-coming critics would significantly grow our pool of data to report from. And by creating an initial signup gate, we solve the problem that Metacritic, Amazon, and Steam face: review "bombing" and floods of low-quality reviews.
Obviously, we're still interested in getting the OpenCritic average alongside Metacritic on Wikipedia :-P From a business perspective, we're still trying to prove that we're a legitimate authority, and it's brutal. We're making progress: our tweets frequently have higher Twitter engagement than Metacritic despite having 85% fewer followers. Our new gaming calendar is getting referrals from Ubisoft and PlayStation's internet. And things like catching Metacritic sourcing from us are very helpful. But still, the struggle is being recognized as an authority by someone that matters (including the Wikipedia editors). We're pretty confident that once a single authority recognizes us, the dominoes will start falling and we'll move much faster.
Anyways, hope this is helpful! I don't talk much on Wikipedia now because I really don't want it to look like we're pushing OpenCritic or something; it's clear that conflicts of interest are taken seriously in the culture here, and we certainly don't want to run afoul of that. I've been making sure to be available to Czar specifically on the Wikidata front, but that's about the extent of it.
MattEnth (talk) 19:13, 21 June 2016 (UTC)
If you guys are doubling down on the % recommended metric, it would be great to see that reported as the primary metric, rather than the average score. Are you planning to do a "Rotten"/"Fresh"/"Certified Fresh" thing like Rotten Tomatoes? I'd still like to see reporting of standard deviation by default. I like what you guys are doing so keep it up! Axem Titanium (talk) 16:55, 22 June 2016 (UTC)
That's the idea. We don't want to make the switch to % recommended as the primary metric until we empower critics and publications to control their own thresholds. We don't like Rotten Tomatoes' system, where RT determines if a review is fresh or rotten on behalf of the critic. Right now, the system cuts off at 8/10 for numeric scores and at the "Recommended" verdict for publications like Eurogamer, GameXplain, and AngryCentaurGaming. We agree with the criticisms that it's arbitrary and unscientific in its current form; we wanted to see if people would use and talk about it. It's clear there's value here, so the next step is to iterate and improve.
And yes, we just rolled out a system similar to Rotten/Fresh/Certified Fresh. Ours is based on typical RPG strength tropes: "Weak"/"Fair"/"Strong"/"Mighty".
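For illustration, the tier assignment might look like the sketch below; the tier names come from the message above, but the numeric cutoffs here are made up and are not OpenCritic's actual boundaries:

    # Tier names from the message above; these cutoffs are invented for
    # illustration and are not OpenCritic's actual boundaries.
    def tier(average_score: float) -> str:
        if average_score >= 85:
            return "Mighty"
        if average_score >= 75:
            return "Strong"
        if average_score >= 65:
            return "Fair"
        return "Weak"

    print(tier(71))   # -> Fair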
I'll see if we can add standard deviation back on the page somewhere. We used to have it but dropped it after seeing low engagement.
On a different note, Ars Technica used us as a source today: http://arstechnica.com/gaming/2016/06/what-can-we-learn-from-mighty-no-9s-troubled-launch/ MattEnth (talk) 17:04, 22 June 2016 (UTC)
Cool, I understand the desire to wait until publications get back to you about thresholds for recommendation. I must have missed when SDs were included and then removed. Even if they had poor engagement, I'd still push for that data to be reported on Wikipedia. Axem Titanium (talk) 20:34, 22 June 2016 (UTC)
I doubt 80±5% would mean much to a Wikipedia audience because the score is apropos of nothing (it has no basis of comparison) and most readers lack the literacy to do something with it—even better would be using the SDs to explain the tightness of the spread around the mean in writing. Metacritic uses the terms "average"/"mixed" interchangeably, but in reality, an average review is very different from what I imagine most people call a mixed review (a mostly balanced range of positive and negative reviews rather than skewing to one side). czar 00:11, 23 June 2016 (UTC)
While, yes, Wikipedia is written for a general audience, I think it's still a useful addition because 1) I don't think a general audience will be actively harmed by this information that they may or may not understand, and 2) informed readers will gain a significantly better insight as a result of just 2-3 extra characters in a review box. There's no tradeoff for this extra data and if we accidentally inspire some readers to learn more about statistics, even better. Axem Titanium (talk) 20:53, 24 June 2016 (UTC)
Useful, yes, but only to those with stats literacy—better explained through graphs, comparisons, prose for everyone else (e.g., a graph similar to the bar graph at OpenCritic that shows where other games in general or just that year fall above/below a specific game's average score; for maximum impact for the SD, it would be better to show it as a range like 80–90% and express how we're defining "most" instead of showing 85±5%). The trade-off would be confusion—I doubt WP:VG would approve of including it in the reviews box even if a site calculated it as a metric. (Similar reason why we don't include system requirements, etc.) Anyway, cart before the horse here, but wanted to reply since you brought it up. czar 00:42, 25 June 2016 (UTC)
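A small sketch of czar's range idea, assuming "most" is defined as within one standard deviation of the mean (roughly 68% of scores if they are approximately normal), which the article text would need to state:

    # Express the spread as a range in prose instead of "85±5%".
    # Here "most" is assumed to mean within one standard deviation.
    def score_range(average: float, sd: float) -> str:
        return f"{average - sd:.0f}–{average + sd:.0f}"

    print(f"Most critics scored the game between {score_range(85, 5)} out of 100.")
    # -> Most critics scored the game between 80–90 out of 100.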
I'll agree to fighting that battle when we get there. I will preview by saying that Wikipedia, as a public good, should encourage stats literacy. Axem Titanium (talk) 22:06, 28 June 2016 (UTC)

What is a contributor?

What are contributors on OpenCritic?

Verify community applications

  • This message verifies that I recently applied as a community member (Serryl#2342), and it may be removed by anyone else when appropriate. Supergyro2k (talk) 18:54, 29 December 2021 (UTC)