After playing SoundSelf, many players describe their experience as though they were interacting with another aware entity. "I did this, and then SoundSelf responded like this, so I did that too...." That's the result of players perceiving the game responding to the subtleties of their vocal expression in as delicate and attentive a way as another mind might. And while we try to accommodate nuances in player expression like that, it would be impossible for us to dream up scenarios adapting to the whole range of vocal expression and the intentions behind it.
So instead we cheat: we take a limited range of player expression (tonality, rhythm of breath, "grittiness" of voice, vowel shape, and others we're working on) and obscure the way they affect the play experience. The result is that players instinctively know that the audio/visual dance is responding to them, and they assume it is responding to whatever aspect of themselves they are focusing on at the time.
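As a rough sketch of how that kind of obscured mapping might work (the feature names, weights, and output parameters here are all hypothetical illustrations, not SoundSelf's actual code), each audiovisual output can blend several vocal features plus a slowly drifting noise term, so no single input maps cleanly onto a single visible effect:

```python
import random

def obscure_mapping(tone, breath_rhythm, grit, seed=0):
    """Map a few vocal features (each normalized to 0..1) onto two
    audiovisual parameters. Each output blends several inputs plus a
    drifting noise term, so the system feels responsive without any one
    input-to-output rule being easy to isolate."""
    rng = random.Random(seed)
    drift = rng.uniform(-0.1, 0.1)  # slowly varying noise term

    # Cross-couple the inputs: no output is driven by a single feature.
    brightness = 0.6 * tone + 0.3 * breath_rhythm + 0.1 * grit + drift
    pulse_rate = 0.5 * breath_rhythm + 0.4 * grit + 0.1 * tone - drift

    def clamp(x):
        return max(0.0, min(1.0, x))

    return clamp(brightness), clamp(pulse_rate)
```

Because every output responds to every input, a player humming louder or breathing faster always sees *something* change, but can't reverse-engineer which feature drove which effect.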
The magic lies in that assumption, and my job as a designer is to feed players enough datapoints to validate any assumption they may have. The player is smarter than the game, but the game's responsiveness invites the player to project their own mind into it, and to then imagine that what they are looking upon is a separate mind, not simply a reflection of their own. It's the same mental process as anthropomorphising a beloved pet, and it's what we're talking about when we describe the game as "a meditative experience."
For a certain kind of game, discovering the truth behind the datapoints is the Eureka moment that the game is designed to facilitate. Thanks in part to Jonathan Blow's in-depth discussions of his design values, this seems to have crystallized into a discipline of game design that has taken flight in indie games like Braid, Antichamber, Storyteller, and many more.
But for another kind of game, that Eureka moment breaks the game's contract with the player. In SoundSelf, the conceit is that you are dancing with another mind, and if a player were to discover the rules that govern the behavior of that other "mind", the assumed depth would collapse entirely.
In narrative-driven AAA games, I see this contract broken all the time, and as a gamer I can't see past it anymore. AAA studios are getting better at this every year as they (slowly) climb their way out of the uncanny valley and incorporate simulations of increasingly complex systems (e.g. anything pretending to be human) into their gameplay. Unfortunately, they compromise their promise of immersion when they appeal to the gamer's desire to conquer those systems.
To conquer a system and use it to their advantage (a trope of gameplay almost completely taken for granted in videogames), the player has to understand the intricacies of how it works. As any sociopath eventually discovers though, this just doesn't work with minds. Minds are too chaotic - the best we can do is adjust our assumptions and meet that chaos with our own.
A convincing simulation of a mind must not fall into predictable, "game-able" behavior. In "The Last of Us," each behavior pattern I came to fully grok reduced the AI enemies from presumably aware entities to mere cardboard buttons to be pressed. It pulled me out of the story. 1
Fortunately for game designers, humans tend to project their own mind wherever they both (a) identify a responsive system and (b) cannot fully understand the rules governing that system. We don't need to design a complex mind to simulate one - we can get away with designing responsive systems that cannot be completely understood. Like reading meaning into a random Tarot draw, or seeing faces in the gnarls of wood, people will see mind everywhere given the right balance of predictability and chaos.
1 SPOILER ALERT: It could be argued that in "The Last of Us" the descent of the player's perception of the game's enemies from aware entities to cardboard-cutout-buttons-to-press effectively mirrors Joel's narrative descent into sociopathy, but I sort of don't buy that.