Industry favourite Metacritic, a site that aggregates review scores, does not simply average every score it can find. It weights scores, giving some outlets more say in the final aggregated tally than others.
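To see what weighting means in practice, here's a minimal sketch of a weighted average. The tier names and weight values are purely illustrative assumptions, not Metacritic's actual (and undisclosed) figures:

```python
# Hypothetical tier weights -- illustrative only, NOT Metacritic's real values.
TIER_WEIGHTS = {"highest": 1.5, "medium": 1.0, "lower": 0.75}

def weighted_score(reviews):
    """reviews: list of (score_out_of_100, tier) tuples.
    Returns the weighted average, rounded to the nearest integer."""
    total = sum(score * TIER_WEIGHTS[tier] for score, tier in reviews)
    weight_sum = sum(TIER_WEIGHTS[tier] for _, tier in reviews)
    return round(total / weight_sum)

reviews = [(90, "highest"), (80, "medium"), (70, "lower")]
print(weighted_score(reviews))  # 82 -- a plain average would give 80
```

The point: the same three reviews produce a different final number depending on which outlet is in which tier, which is why the secrecy of the tiers matters.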
While the site has never publicly disclosed the criteria by which it does this (its FAQ page says it will "absolutely not" reveal how each outlet is weighted), Adams Greenwood-Ericksen of Full Sail University, along with some of his students, set about cracking Metacritic's code for us. After months of research they've finished their work, and according to a report on Gamasutra "their findings were almost entirely accurate".
UPDATE - Metacritic's response at bottom.
There are six "classes" of score, with publications ranked "lowest" all the way up to "highest". You'd think that those at the top, those with the biggest influence on a game's score - and as a result a game's critical performance and, in some cases, a developer's pay (or even future existence) - would be the biggest and most influential outlets. The IGNs, Gamespots and Eurogamers of the world.
The 29 websites in the "highest" category include several I have never even heard of. One of them is a volunteer fansite. Eurogamer and PC Gamer are ranked only "medium". Giant Bomb is "lower". Edge Magazine, perhaps the most respected review outlet in the world, is ranked beneath Yahoo Games.
Note that the list, perhaps due to the time taken to research it, is a little old. Some outlets on it are no longer in business, while newer sites like Polygon do not feature.
Metacritic's very existence, or at least its importance in the eyes of many industry types, is bad enough. But if this research is accurate - or even mostly accurate - it only opens the site up to further criticism. Who is determining which sites carry more weight? And how? Does the site keep the weighting a secret because, for a site that's all about hard numbers, the process is completely arbitrary?
When people are losing their jobs over Metacritic scores, those are questions that need answering. We've contacted the site for comment and clarification on the list's accuracy, as well as Metacritic's weighting methodology, and will update if we hear back.
UPDATE - Here's Metacritic's response to the study:
Today, the website Gamasutra "revealed" the weights that we assign to each gaming publication (for the purpose of calculating our Metascores), based on a presentation given at the Game Developers Conference this morning. There's just one major problem with that: neither that site, nor the person giving the presentation, got those weights from us; rather, they are simply their best guesses based on research (the Gamasutra headline is misleading in this respect).
And here's the most important thing: their guesses are wildly, wholly inaccurate. Among other things:
* We use far fewer tiers than listed in the article.
* The disparity between tiers listed in the article is far more extreme than what we actually use on Metacritic. For example, they suggest that the highest-weighted publications have their scores counted six times as much as the lowest-weighted publications in our Metascore formula. That isn't anywhere close to reality; our publication weights are much closer together and have much less of an impact on the score calculation.
* Last but definitely not least: Our placement of publications in each tier differs from what is displayed in the article. The article overvalues some publications and undervalues others (while ignoring others altogether), sometimes comically so. (In addition, our weights are periodically adjusted as needed if, over time, a publication demonstrates an increase or decrease in overall quality.)
Metacritic's weighting system revealed [Gamasutra]