I am a digital entertainment consultant: I advise a variety of game companies on video games, hardware platforms, and online services. I work with a team. People are often curious about what game consulting involves, so let me explain.
While some details can't be shared due to confidentiality, I'll also answer what follow-up questions I can in the comment section below.
A consultant is someone who provides expert advice professionally for a fee. Specifically, a game consultant's job is to provide insights and recommendations that help game companies improve their products' chances for success. One type of evaluation might examine a game's narrative and story, user interface, level design, or various areas of gameplay. Another may address specific areas of concern a client has, for example, by providing feedback on a client's proposed messaging and positioning of their product to consumers and press.
Game consultants are used by many companies in the industry, from small indies to large stand-alone studios to huge publishers. Generally, the smaller the player, the more focused the assistance being sought. For example, an independent developer might be looking for targeted input on the single title they're working on, while a large publisher might be looking for a wider range of assistance across their entire portfolio.
An obvious follow-up question is, which companies specifically work with game consultancies? Different consultants approach client confidentiality in different ways. Some list companies they've worked with on their websites; others refuse to disclose clients in even that manner. We fall in the "don't disclose clients" camp. That said, it's fair to say that most larger and many smaller game companies use consultants in some fashion during their development and publishing process, even if only for mock reviews.
Game companies use consultants to improve their products for their customers; it's really as simple as that. Obviously there is a financial incentive, in that a better product sells more units, but I'd caution people against being too cynical about company motives here. Every game developer I know has a sincere desire to make the best game possible; it's not just about moving units.
As to specific reasons: sometimes a client needs expertise to troubleshoot a difficult problem area; other times clients want an external perspective from "outside the bubble." And sometimes they need external feedback to help cut through internal politics. I've had several projects where a dev lead or studio head knew what they were trying to do, and in fact knew the right path forward, but was stymied by disagreements higher up the chain. An independent outside opinion can help unblock the title and allow them to move forward.
Clients look for assistance in a wide variety of areas. Some common examples include:
- In-depth game, hardware platform, or online service evaluations
- Mock reviews (much shorter, higher-level evaluations)
- Messaging and positioning
- Monetization evaluations
- Social/Community engagement analysis
- Code performance reviews/optimization
- Tradeshow support (game evaluations, script writing, messaging and positioning)
- Reviewer guides
- Product pre-acquisition evaluations
- Concept reviews
As I don't have the space to detail each of these, I'll use one common service as an example for this article.
Probably the best way to give a sense of how consulting works is to walk you through the phases of a pretty common project we do: a pre-mortem evaluation. A "pre-mortem" is an in-depth evaluation of a title or platform at earlier stages of the development process—i.e., before it's ready to ship. The intent of this sort of evaluation is to arm a client with an external view of a product's strengths, weaknesses, opportunities, and threats (also known as a SWOT), as well as actionable recommendations as early as possible. Depending on where the title is in the development timeline, feedback as to how media and consumers will likely react to those findings may also be included, along with associated messaging and positioning feedback. Finally, if a title is close to shipping, most clients also look for a Metacritic score prediction.
Engagements can be broken down into five phases: Initial Contact/Negotiation, Product Evaluation, Report Creation, Metacritic Score Analysis, and Delivery.
Most projects start with an initial contact mail or call, which quickly moves to a conference call where we can get more detail on the client's needs. During this call we focus on two things: understanding the client's timeline and needs, and what assets we will need to do the requested work.
After the initial call we evaluate staffing needs and then generate a detailed written proposal. When complete, the proposal details the work to be done, deadlines, and our fee. The proposal is then sent to the client for review and approval.
Evaluations happen either at our offices or at the client's.
If at our offices, there is usually a delay of a few days while all required assets are acquired. During this time we also try to have a coordination call with key stakeholders. This allows the development, PR, and marketing teams to share their areas of concern directly, as well as help make sure we are set up to do the evaluation as soon as possible.
Sometimes the evaluation is at the client's office. Onsite visits can be beneficial for the client, as it allows them to set up the game for evaluation for multiple consultancies at the same time. This can also help reduce the impact on the development team, since they can present key aspects of the game just once to everyone involved. In most cases, onsite visits will also include meetings with lead developers, producers, designers, and representatives from the marketing or PR teams to discuss the areas they're most looking for feedback on.
The actual evaluation process is basically the same irrespective of location. Our primary goal during an evaluation is to experience as much of a representative slice of gameplay and story as possible, ideally by finishing the title. Sometimes that's not possible due to the current state of the game, as when being asked to evaluate a first playable or vertical slice of gameplay. Whatever the case, each evaluator plays the game to the requested scope while taking copious notes.
Onsite evaluations usually end with a debrief with the client's stakeholders, where we give a broad sense of our findings and answer questions around some of the more timely issues in play. Then we decamp and return to our offices to focus on writing the actual evaluation.
After the title has been reviewed, the team pauses to discuss and consolidate all of the feedback from the various evaluators. Topics are categorized into the major buckets (Strengths, Weaknesses, etc.), and we work to synthesize a common view on all the issues, as well as make sure our recommendations are realistic given the current development timeline of the game. Once done, we begin to write. An in-depth evaluation can take up to ten business days to write, with the report ranging from 30 to 50 pages on average.
If a game is at a late enough state to be representative of what will ship, we also do a Metacritic score analysis to assist in score prediction. As part of this process we collect and maintain a dataset for a representative set of titles and then analyze it, looking for trends that we might want to take into account when making our prediction.
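As a toy illustration of the trend analysis described above, a simple least-squares slope over yearly scores can flag whether a genre is drifting up or down. The data and numbers here are hypothetical placeholders, not actual figures from our dataset:

```python
def trend_slope(years, scores):
    """Ordinary least-squares slope of score vs. year.

    A negative slope suggests review scores are drifting downward
    over time; the magnitude is in points per year.
    """
    n = len(years)
    mean_y = sum(years) / n
    mean_s = sum(scores) / n
    num = sum((y - mean_y) * (s - mean_s) for y, s in zip(years, scores))
    den = sum((y - mean_y) ** 2 for y in years)
    return num / den

# Hypothetical yearly average Metacritic scores for a genre.
years = [2008, 2009, 2010, 2011, 2012, 2013]
scores = [91, 90, 88, 86, 85, 84]
print(f"{trend_slope(years, scores):.2f} points per year")
```

In practice the analysis is less mechanical than this, but a fitted slope is a quick first check before digging into individual titles.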
One example can be found below. We recently evaluated a first-person shooter and wanted a sense of how scores for this genre have been trending over the last generation of consoles and into the current one. In particular, we wanted to evaluate how similar games in a series, often from the same developer and arguably of roughly the same quality, were faring over time.
Overall it's clear that scores declined toward the end of the last generation (ending in 2012), and in fact did not garner the boost (with the exception of Killzone) that one might have expected from the advent of the next-gen consoles in 2013. This could be due to a lack of innovation in the games themselves, reviewer fatigue, or structural issues specific to each title. Whatever the case, an FPS title that we might have confidently forecast as 90+ four or five years ago is likely to score 8 to 10 points lower today.
Another area we've considered is how game platform might affect title scores over time. As an example, let's take a look at Call of Duty (due to its many sequels) by mapping out every entry in the series from 2004 to today. For years with multiple releases, we used the title with the top Metacritic score to represent the high point for the series for that year.
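The "top score per year" selection rule above is straightforward to mechanize. A minimal sketch, with made-up titles and scores standing in for the real dataset:

```python
def top_score_per_year(releases):
    """For years with multiple releases, keep only the highest-scoring one.

    `releases` is a list of (year, title, metascore) tuples; returns a
    dict mapping each year to the (title, metascore) of its top entry.
    """
    best = {}
    for year, title, score in releases:
        if year not in best or score > best[year][1]:
            best[year] = (title, score)
    return best

# Hypothetical entries; real Metacritic data would be substituted here.
releases = [
    (2011, "Shooter A", 88),
    (2011, "Shooter A: Spinoff", 74),
    (2012, "Shooter B", 83),
]
print(top_score_per_year(releases))
```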
In this case, we didn't find much evidence of platform significantly affecting score (other than on the Wii, which never really found traction as an FPS console). For Call of Duty titles released on platforms of similar capability (such as the Xbox 360/PlayStation 3 and the Xbox One/PlayStation 4), scores tracked closely, especially later in each generation, when developers had become more familiar with the hardware and could engineer multi-platform releases with fewer compromises. In short, platform didn't give us a useful correlation here.
In these and other ways, we analyze data and apply it to our predictive models. That gives us a first good sense of where the title might land. From there, we refine the score based on our experience with the title and instinct and arrive at a final value which is added to the report.
Once finished, we email the report to the appropriate stakeholders at the company, who typically share it internally as they feel most appropriate. Some clients share reports widely with the entire development team, others only with targeted team members. The team evaluates our findings, and depending on the current state and timeline of the title, changes may be implemented. Occasionally some of those changes are significant, and the client might request a follow-up evaluation to see how the title has improved.
Game developers don't work in a vacuum; they listen, they care, and they want their games to be awesome. Your comments—in forums, videos, social media, or elsewhere—are one huge source of feedback they listen to in moving toward that goal. Game consultants are yet another way they get feedback; I hope this article has helped shed a little light on how we do so.
I will make a point of responding to comments on this article (within the limits of confidentiality), so please feel free to post any questions you might have.
Andre Vrignaud is the Principal and Chief Consultant for AV Digital Consulting, a digital entertainment consultancy based in Seattle, Washington. Inquiries or questions may be directed to firstname.lastname@example.org.
Illustration by Jim Cooke.