Bugs are the bane of the business of software engineering. They cost time and generate fruitless conflict between programmers and bean counters (which in turn wastes more time). I happen to be one of those bean counters, and it makes me mad as hell to see even more of my beans disappearing without getting a better game to show for it.
Despite all that, I want to argue that bugs offer a valuable kind of input to the game creation process. I came to this view in part from working as a designer and producer on games, and in part from Thomas Kuhn's analysis of knowledge in The Structure of Scientific Revolutions. Instead of sighing and wringing my hands over a roadmap as soon as I see something unexpected, I ask what a different game would look like where the mistake was instead correct.
Fifty years after its publication, I still find Kuhn’s thesis shocking: that the normal practice of science does not change our understanding of the world. Normally, a scientific experiment should only confirm that what you know is true. At most, a normal scientific experiment should demonstrate that you can do new things by applying those rules in a slightly different way. Facts that don’t fit your paradigm are either considered special cases, to be described by a subset of rules, or research problems to be solved in the future.
Kuhn advances this view of science for a few reasons. First, it is necessary for science as a community of practice to be cautious about overturning its foundations. Second, humans tend to fit new facts to existing paradigms. Third, the historical record of scientific revolutions shows that scientists have held onto mistaken views—that fire was made of phlogiston or that outer space was filled with aether—despite being aware of the experimental evidence that would become the cornerstones of later scientific paradigms.
I find Kuhn’s view both inspiring and dispiriting. The keys to massive, paradigmatic leaps of knowledge could be already among us, ready to be discovered by someone brave enough to question everything—and at the same time, these are rare crisis events that we are strongly predisposed to avoid. (David Hume made a similar argument in his proof against miracles: a miracle by definition defies your understanding of the possible, and so either indicates a mistake of perception or reshapes your concept of the possible.)
I try to keep Kuhn’s lesson in mind when I inevitably encounter bugs in a game I’m working on. Some bugs are not the kind that can lead to productive knowledge, just like not all scientific accidents can lead to breakthroughs. There is such a thing as simple error, and you fix it and try again. But other bugs have forced me to rethink my vision of a game by proposing alternatives that no one around me had considered.
Two examples from my most recent game can illustrate the benefits of a skeptical openness to bugs. The game involves two players placing mines that will be hidden from their opponents and which each player will try to avoid (like Stratego). One problem was that a player could run through the opponent’s map, find the mines, then force-quit the app, and the round would be reset when they returned. This forced us to consider the question: should players who have experienced a disastrous run be able to purchase a mulligan? We decided this was abusive and not fun, and so we introduced security measures to prevent it.
Another bug caused undetonated mines to carry over between the three rounds of the game. This happened to solve a problem we had been struggling with. We wanted some kind of “rubber-banding” so that players who lost the first round would have a slight advantage later, resulting in more contested and exciting finales. But since both players have full knowledge of the other’s turns (we show a replay of how your opponent fared), there was no way to sneakily give an asymmetrical advantage without explicitly cheating on one player's behalf. Preserving mines had the effect of giving an emergent advantage to players who killed fewer runners with mines in the first round. It also introduced new meta-strategies around hiding your mines to intentionally hoard them—an unusual tactic, but one that played into the game’s overall concept of anticipating your opponent.
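The carry-over mechanic can be sketched in a few lines. This is a hypothetical reconstruction, not the game’s actual code: the function names, coordinate tuples, and round structure are all illustrative assumptions.

```python
# Hypothetical sketch of the mine carry-over mechanic: mines that do not
# detonate in a round persist into the next one, which emergently favors
# the player whose mines caught fewer runners early on.

def start_round(carried_over, newly_placed):
    """A round's minefield is surviving old mines plus fresh placements."""
    return carried_over + newly_placed

def resolve_round(mines, triggered):
    """Mines a runner stepped on detonate; the rest survive the round."""
    return [m for m in mines if m not in triggered]

# Round 1: a player hides three mines; the opponent triggers one of them.
round1 = start_round([], [(1, 2), (4, 4), (0, 3)])
survivors = resolve_round(round1, {(4, 4)})

# Round 2: the two undetonated mines join any new placements "for free".
round2 = start_round(survivors, [(2, 2)])
```

The asymmetry falls out naturally: a losing player (whose mines went mostly untriggered) starts the next round with a denser field, with no explicit cheating required.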
Both of these examples could be considered not to be true “bugs” in that they didn’t prevent the game from working—they are more like incomplete implementations than faulty ones. The lesson of Kuhn’s work, however, is to question what it means for a piece of software to “work,” and so question distinctions about what “really” is a bug. The biological concept of stress, for example, was discovered because Hans Selye was so bad at working with his lab rats that he drove them to produce stress hormones no matter what substance he injected into them. The data he gathered was worthless for the experiment he was running—from that perspective it was all noise and no signal—but gave a view onto an entirely different way of conceiving the body’s disposition to illness. I try to maintain this view of bugs: even if they present as the complete negation of what we’re working on now, maybe they are the inkling of something radically bigger to come.