Why game testers should be integrated into the development teams
by Johan Hoberg on 07/25/14 05:35:00 am

The following blog post, unless otherwise noted, was written by a member of Gamasutra’s community.
The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.


Jeff Sutherland recently wrote that being agile means getting rid of separate test teams. He describes how a tools development group at Microsoft reduced its bug count from 30,000 to 3,000 by integrating their testers into the development teams. [1]

Of course there will always be instances where a separate test team is necessary, or at the very least equally effective. Running mandatory customer acceptance tests or certification tests is one example; localization testing is probably another. But we will disregard these instances in this article.

So why should game testers not be a separate team? Why should almost all game testers be integrated into the development teams and made part of the development process? The short answer is "complexity" and "combinatorial explosion". At least, those are my thoughts on the subject.

Large game worlds, multiplayer, AI, unpredictable users, and many other factors add a level of complexity to games that is rarely seen in other software. Games are unpredictable and can sometimes feel random. Bugs appear even though the changes made should not have affected that area or feature. In short, a game is a complex system [2].

If game testers are sitting in a separate team and are not an active part of the development process, they have less insight into how the complex system works. To them it is even more unpredictable and random. This is further compounded by the fact that game testers usually have less clear requirements to work with than other testers [3]. This means that when something changes, they basically have to regression test everything if they want to be certain that nothing has been broken. Since most modern games grow larger and larger, and become more and more connected to other software, the number of possible tests keeps growing. There is a combinatorial explosion [4] of tests to execute every time something changes.
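To make the combinatorial explosion concrete, here is a minimal illustration (my own, not from the article, with hypothetical numbers): if every feature's settings must be tested against every other feature's settings, the number of full-coverage configurations is the product of the options per feature, so each added feature multiplies the total.

```python
def full_combinations(options_per_feature):
    """Total configurations if every feature setting must be tested
    in combination with every other -- the combinatorial explosion."""
    total = 1
    for n in options_per_feature:
        total *= n
    return total

# Ten features with four settings each:
print(full_combinations([4] * 10))   # 4**10 = 1048576

# Adding just one more such feature quadruples the test space:
print(full_combinations([4] * 11))   # 4**11 = 4194304
```

This is why "just run everything after each change" stops scaling long before a game ships.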

With a separate test team, this means that the number of tests they must execute will increase continuously, and faster than linearly, with feature growth. The team must either grow, increase lead-time, or not test everything. And since the separate test team is dealing with a complex system, they do not know with any certainty which tests to select, and can easily skip tests that would have revealed critical bugs.

Since continuously increasing lead-time or growing the test team is probably not an option, the only way forward is to select test cases in a better way. To do this, I think you need two things: better communication with the system experts (game developers and system architects), and a better understanding of the complex system, which reduces the unpredictability and perceived randomness.
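One common shape such test selection takes is change-impact analysis: map each feature's tests to the systems they depend on, and run only the tests whose dependencies overlap the change. A minimal sketch, using hypothetical feature and system names of my own invention:

```python
# Hypothetical dependency map: feature -> systems its tests exercise.
FEATURE_DEPS = {
    "matchmaking": {"network", "player_db"},
    "inventory":   {"player_db", "ui"},
    "minimap":     {"ui", "renderer"},
}

def select_tests(changed_systems, feature_deps=FEATURE_DEPS):
    """Return the features whose dependencies overlap the changed systems."""
    return sorted(
        feature for feature, deps in feature_deps.items()
        if deps & changed_systems
    )

# A change to the player database touches matchmaking and inventory:
print(select_tests({"player_db"}))   # ['inventory', 'matchmaking']
```

The catch, which is the article's point: in a complex system the dependency map itself is unreliable unless the testers maintaining it sit close enough to the developers to know what actually depends on what.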

As you can imagine my opinion is that this is achieved by having testers integrated into the development teams.

When discussing this with a friend, I was asked who would perform system test if all the testers were part of different development teams. Who would make sure that all the different development teams’ features and changes worked together? Wasn’t there room for a system test team that tested that the entire game was actually working?

My answer was that we should not confuse the need for system test with the need for a system test team. System testing is probably the most complex testing, and I don’t think the answer is to make it harder for these game testers to communicate with the development teams, and give them less insight into the complexity of the system.

With increasing complexity and combinatorial explosion we need to test smarter and more efficiently. The answer to solve this conundrum can never be to move the game testers further away from the development process.

That is my opinion anyway.



[1] Agile means get rid of test teams

[2] Wikipedia: Complex system

[3] Cowboys, Ankle Sprains, and Keepers of Quality: How is Video Game Development Different from Software Development

[4] Wikipedia: Combinatorial explosion




Karthik Srinivasan
A very good article. Well written. One thought that struck me: as teams continuously go through feature growth, is it possible to stop testing the stable parts? That is, is it possible to isolate and freeze a part of the development, so that by the time we move to Feature 200+, we might no longer need to test Feature 50 and below? Just a thought.

Mitchell Fujino
Usually stable systems get moved into "regression testing", which is run less frequently, and/or automated. (Or at least, that's how I did it on my teams.)

Due to the complexity Johan mentions, stable systems can often have bugs exposed by seemingly unrelated changes, so testing can't be abandoned completely.

A good QA team will evaluate the cost/value of any test and adapt accordingly. (And communicate that resultant uncertainty to Production.)

Ron Dippold
I generally agree - QA people who are on the team tend to file much better reports and test much smarter. However, it still pays to have some outside testing (before it goes to the console's qualification team!). People who are familiar with the system think in different ways than people who aren't, and tend to learn how to unconsciously avoid some problems - basically it's the difference between whitebox and blackbox testing. And the players are going to be a million very clever blackbox monkeys. There's room for both.

Joe Chang
Yeah this is how it should be in any dev environment. I've found the team works best when testers are involved right from the very beginning - i.e. even during the formation of requirements. Being able to ask questions and essentially test and challenge design decisions can be really helpful for the entire team.

Ian Morrison
Totally agree. I feel that QA should also be participating actively in development in the sense of gathering and analyzing user tests and providing checks and insight for the sake of robustness. By that, I mean that testers should be proactively testing against high-risk elements of development (a tough thing to do if you're on a separate team). They should also be getting support to automate as much of the testing as possible... the testing pipeline is something that should be getting a lot of tool development!

Philip Wilson
Hear hear! Another reason why an integrated test team is a good thing is that it provides multiple channels for learning how to debug, resolve and provide greater feedback as to "why" something may have happened. By doing this, you're providing fallback options when members of the dev team move on, are out due to emergencies or are promoted.

Peter Harries
Great article, wholly agree. Embedded QA also get the opportunity to work closely with the dev team and help debug complex issues that potentially have long winded or very tricky reproduction steps.