Game Reviews Are Broken and We’re All Part of the Problem
I write game reviews. I’ve been doing it for years. And I think the entire system is broken.
Not the individual reviews — there are brilliant game critics doing excellent work. The system. The ecosystem of how games are reviewed, scored, aggregated, and consumed. It’s failing everyone involved: players who want useful buying advice, developers whose livelihoods depend on review scores, and critics who are trying to do honest work in a broken framework.
Let me explain.
The score problem
Metacritic and OpenCritic aggregate review scores into a single number that follows a game forever. A score of 85 means one thing. A score of 79 means something entirely different — often the difference between a game being perceived as a hit or a disappointment.
The problem is that these scores are fundamentally nonsensical. Different publications use different scales. A 7/10 from one outlet means “good, worth playing.” A 7/10 from another means “mediocre.” Averaging incompatible scoring systems produces a number that looks precise but is essentially meaningless.
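To make the arithmetic concrete, here's a toy sketch (outlet scales and scores are hypothetical, chosen only to illustrate the point): two games can land on identical aggregates even though the underlying verdicts are very different, because naive averaging treats every outlet's scale as interchangeable.

```python
# Toy illustration of why averaging incompatible scales is misleading.
# The outlets, scale interpretations, and scores below are hypothetical.

def raw_average(scores):
    """Naive aggregation: treats all scores as if they share one scale."""
    return sum(scores) / len(scores)

# Outlet A: a 7/10 means "good, worth playing".
# Outlet B: a 7/10 means "mediocre" (its scores cluster higher overall).

game_1 = [7, 9]  # A: "good", B: "excellent"
game_2 = [9, 7]  # A: "outstanding", B: "mediocre"

# Both games get the same precise-looking aggregate,
# even though the two sets of verdicts don't mean the same thing.
print(raw_average(game_1))  # 8.0
print(raw_average(game_2))  # 8.0
```

The number looks objective because it's computed, but the computation only makes sense if every input uses the same scale with the same meaning, which, across publications, they don't.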
And yet, entire business decisions are made based on that number. Developer bonuses have been tied to Metacritic scores — famously, Obsidian missed a Fallout: New Vegas bonus because the game scored 84 against a contractual threshold of 85. Publishing deals have fallen through because a studio’s previous game scored below 80. Marketing budgets are allocated based on score thresholds.
A single aggregated number derived from incompatible scoring systems should not determine people’s careers and livelihoods. But it does.
The time pressure
Review embargoes create a race condition. When an embargo lifts — typically the day of or just before a game’s release — every outlet needs their review published as quickly as possible. The traffic and engagement from a day-one review dwarf what a late review generates.
This means reviewers are under pressure to play games quickly and write reviews fast. For a 60-hour RPG, that might mean playing 40 hours in a week, sleeping four hours a night, and writing the review in a sleep-deprived state.
The quality of criticism suffers. A reviewer who plays a 60-hour game in 40 hours isn’t experiencing it the way the audience will. They’re missing side content, rushing through dialogue, and evaluating the game under artificial conditions. The review is accurate for the game they played. It’s not necessarily accurate for the game you’ll play.
The reader problem
Players contribute to the dysfunction too. Game reviews are increasingly consumed as validation rather than evaluation. People decide they want a game before it’s reviewed, read the reviews looking for confirmation, and get angry when a critic disagrees with their predetermined opinion.
The user review sections on Metacritic are a wasteland. Review bombing — organised campaigns to give a game 0/10 for reasons unrelated to game quality — happens routinely. Games get bombed for political content, pricing decisions, Epic Store exclusivity, and perceived slights to specific communities. These user scores are meaningless as quality indicators.
Social media has made this worse. A review that gives a popular game a 6/10 gets screenshotted, taken out of context, and used to harass the reviewer. Critics self-censor to avoid backlash, which makes their reviews less honest and less useful.
What good reviews actually look like
The best game criticism I read in 2025 came from independent writers, not major outlets. Long-form pieces that engaged with games as designed experiences rather than products to be scored. Critics who took the time to play a game thoroughly, think about it deeply, and write something genuinely useful.
These reviews don’t generate massive traffic. They don’t show up as Metacritic scores. They exist on personal blogs, newsletters, and small publications. The economic incentives of the review ecosystem actively work against this kind of criticism.
What should change
Drop scores. Or at least drop the pretence that a number on a ten-point scale communicates meaningful information. A written recommendation — buy, wait for sale, skip — is more useful than 7.5/10.
Extend embargo periods. Give reviewers more time. A review published a week after release but based on a complete playthrough is more valuable than a rushed review on day one.
Decouple business outcomes from aggregate scores. Developer bonuses should not be tied to Metacritic. Publishing decisions should not depend on aggregate scores. The industry knows these numbers are unreliable and continues to use them anyway.
Read reviews properly. If you’re using reviews to make purchasing decisions, read the full review. Don’t look at the score and skip the text. The text is where the useful information is. The score is a reductive summary that doesn’t capture what the reviewer actually thought.
I’ll keep writing reviews because I think they can be useful when done well. But I’m under no illusion that the system I’m operating in is working. It needs fundamental reform, and that starts with everyone involved — critics, publishers, platforms, and readers — acknowledging that the current approach isn’t serving anyone well.