Simon's Ant On The Beach
by Luis Guimaraes on 12/04/13 09:29:00 am   Expert Blogs   Featured Blogs

The following blog post, unless otherwise noted, was written by a member of Gamasutra’s community.
The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.


So here at Candango Games we are working on a horror game for PC that's bound to be announced soon – it's the game I'm studying the building blocks of horror for – and one of the main features I wanted it to have was a good, immersive stealth system. And we all know the one most important thing in making great stealth is great AI.

Looking to achieve that, I came up with a heuristic approach to designing the architecture of the AI. The following paragraphs explain how this design problem was approached.

Some theories in Cognition imply that we don't see Reality for what it is; instead, what we "see" is a Virtualized "copy" of Reality created and processed in our minds. The same line of thought is demonstrated in Plato's ancient Allegory of the Cave.

That's also how we naturally approach the idea of AI at first thought: we make the agents acquire and process information about the world and make decisions based on that information. It makes complete sense: we're emulating the way we think of things, copying it into our artificial people. And in fact what we're trying to do with an agent is make it act the way a real human being would, so that's not wrong at all.

But there's one thing to observe in the specific case of Video-Games: their equivalent of the "real world" is already virtual. It's already made of pure information, so why should we re-virtualize it into the agent's brain and then try to make sense out of it? Why try to bring the world into the agent's brain and not look at it from the other side and bring the agent's brain into the world instead? So this is how I decided to do it: the world will "think", be "intelligent" and have knowledge. The world, not the agents.

I decided to do it this way because it proved better for our purpose of making AI for interesting stealth gameplay Dynamics. Imagine the following gameplay sequence:

You're being chased inside a house. Running down a corridor you see an open door leading to a bedroom, so you run inside and lock the door. The chasers start to bash at the lock and you know they'll soon break through and get into the bedroom, so you look around and see a closed window, a bed and a closet. You open the window and then hide under the bed. The door breaks open and one of the chasers comes inside. He runs to the open window and looks outside, then shouts to the others that you went outside, jumps out of the window and disappears into the dark. You have escaped, this time.

That's what can be done with the basic heuristics of "bringing the brain to the world", with extremely simple code. Like, Pacman-simple. The detail lies in the fact that the agent (chaser) is completely unaware of anything that happened: he doesn't know what the window is, or what it means, or where it leads to, not even that it leads somewhere. The agent is completely oblivious to all those concepts, things and ideas. All he did the entire time was follow simple waypoints and scripts.

Now picture the following events:

You're there at the computer desk, game-making, and then suddenly you find yourself in the kitchen, face in the open refrigerator, looking for something you're not sure what, or coming back to the desk with a cup of coffee you barely remember wanting or even pouring.

How did that happen? Did you make an informed and thoughtful decision to get up and do that stuff? Where did those actions originate from?

What happens is that it's not the agent that's "intelligent"; it's the window. The brain is in the world, not in the agent. The window tells the agent what to do, where to look, where to go, what to say to the other agents, and how to portray "his" decision to the player. When the agent enters the bedroom, the whole environment tells him to do things. The bed the player is hiding under tells him to look under it. The closet tells him to look inside it. The recently-opened window tells him to go outside. After all those tasks are received, the agent proceeds to make a Utility-based Decision using the priority of each task.

By interacting with the window, the player raises its importance, which increases the priority of the tasks the window gives, making the agent pick its task over the others. That's controlled by the window's scripting. For example, if the player was inside the closet and made no noise, the closet could instead reduce the priority of its task, leading the agent to look under the bed first and giving the player a chance to escape. It's all up to the Game Designer to tweak and decide how each piece of the environment should work.
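To make the idea concrete, here is a minimal sketch of "bringing the brain to the world": environment objects offer weighted tasks and the agent simply takes the best one. All names (`Task`, `SmartObject`, the x3 interaction boost) are illustrative assumptions, not taken from our actual code.

```python
class Task:
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority

class SmartObject:
    """A piece of the environment that 'thinks' for the agent."""
    def __init__(self, name, base_priority):
        self.name = name
        self.base_priority = base_priority
        self.player_interacted = False  # e.g. the player opened the window

    def offer_task(self):
        # Player interaction raises the object's importance
        # (the x3 factor is an arbitrary tuning value).
        priority = self.base_priority * (3.0 if self.player_interacted else 1.0)
        return Task("investigate " + self.name, priority)

def pick_task(objects):
    # Utility-based Decision: the agent knows nothing about windows or
    # beds; it just takes the highest-priority task the room offers.
    return max((obj.offer_task() for obj in objects), key=lambda t: t.priority)

window = SmartObject("open window", 2.0)
bed = SmartObject("bed", 3.0)
closet = SmartObject("closet", 3.0)

window.player_interacted = True   # the player opened it before hiding
best = pick_task([window, bed, closet])
# The window's boosted priority (6.0) beats the bed and closet (3.0),
# so the chaser runs to the window and "decides" the player went outside.
```

The agent's code never changes; all the apparent reasoning lives in the objects' priority scripting.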



But not all tasks come from external sources. Things internal or attached to the agent can also give him tasks. For example, if the agent has a medic pack and he's hurt, the medic pack (attached) will tell him to use it on himself, giving the task a weight relative to how gravely wounded the agent is. At the same time, the agent's damage system (internal) will tell him to run away from what's hurting him. The likely course of action in that situation, based on the weights of each task, is to run and seek safety, then treat the wound, then either keep fighting or keep running away. That's easily achieved by making the medic pack consider that it can't be used under dangerous immediate circumstances, since stopping to treat the wounds would leave the agent vulnerable to more attacks, so it reduces the priority of the task if the situation is not suited for it.
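The medic-pack behavior above fits in a few lines. This is only a sketch, and every number here (the x10 scale, the 0.2 danger penalty, the flee weight of 8.0) is a made-up tuning value:

```python
# Attached item source: the medic pack weights its own "heal" task by
# wound severity, but suppresses itself while the agent is in danger.
def medic_pack_priority(health, max_health, in_danger):
    wound = 1.0 - health / max_health   # 0 = unhurt, 1 = near death
    priority = wound * 10.0             # graver wound, stronger pull
    if in_danger:
        priority *= 0.2                 # stopping to heal now is suicidal
    return priority

# Internal source: the damage system tells the agent to run while hurt.
def flee_priority(in_danger):
    return 8.0 if in_danger else 0.0

# Hurt and under attack: fleeing (8.0) beats healing (7.5 * 0.2 = 1.5)...
while_fighting = (medic_pack_priority(25, 100, True), flee_priority(True))
# ...but once safe, healing (7.5) wins and the agent treats the wound.
when_safe = (medic_pack_priority(25, 100, False), flee_priority(False))
```

The agent never reasons about safety; the two task sources just out-bid each other as the situation changes.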

These are the very basics of the heuristic approach, and I was very happy with it. We can make the stealth system behave credibly and seem complex while remaining very simple in code, and the system is versatile enough to allow easy addition and improvement of tasks. For example, we can create a new task source for a car or a locker or a ladder and never touch the agent's programming, or we can improve or tweak the systems that control a single task or environmental brain piece without directly affecting anything else. And the behavior possibilities studied seem very interesting for creating an engaging experience – and, for the specifics of a horror game, some moments of uncanny displays of intelligence for the player to witness.

I then considered adding a text-parsing system to see how it could be exploited by those heuristics. A textual instruction like "put the blue ball into the green box" given to a friendly NPC would be identified by the respective green box and blue ball, which would then turn it into tasks telling the agent how to grab the ball and where to release it, making it all seem as if the agent understood what the player told him to do.
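A toy sketch of that idea (every name here is hypothetical): the agent never "understands" the sentence; the named objects recognize themselves in it and hand the agent grab/release tasks.

```python
# Hypothetical mini-world: each named object knows its own position.
WORLD = {"blue ball": {"pos": (1, 2)}, "green box": {"pos": (5, 5)}}

def parse_command(text):
    tasks = []
    # Objects "identify themselves" by finding their name in the text.
    mentioned = [name for name in WORLD if name in text]
    if "put" in text and len(mentioned) == 2:
        thing, target = mentioned
        # The ball tells the agent where to grab it; the box, where to
        # release it. The agent just executes the resulting task list.
        tasks.append(("grab", thing, WORLD[thing]["pos"]))
        tasks.append(("release", target, WORLD[target]["pos"]))
    return tasks

tasks = parse_command("put the blue ball into the green box")
# -> [("grab", "blue ball", (1, 2)), ("release", "green box", (5, 5))]
```

A real parser would need word order and more verbs, but the division of labor is the point: the knowledge lives in the objects, not the agent.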

All well and good so far. Then the raw Game-Design phase ended and I got to the point of actually building it in code. The execution phase. This is what my system looks like at this point:


"Behavior is always an interaction of an agent with its environment".

As I started programming that system in Unity, I took a bit of time every day to research the basics of AI. Not only Video-Game AI, but AI in general. I wanted to make sure I wasn't just wasting time reinventing the wheel, after all.

I started to stumble upon some very interesting concepts and half-century-old studies and experiments. Among them: the "Frame Of Reference" property, experimented with in the Heider and Simmel animated film (1944) and exemplified in Herbert Simon's "The Sciences of the Artificial" (1969) by the anecdote "An Ant On The Beach"; and, though possibly not considered related at the time, the Kuleshov Experiment (the original, from around 1919). That's where things started to surprise me (and deprive me of sleep).

Finding out that what I was working on was coming from the opposite side of those studies, to meet them in the middle of the road, was very eye-opening. I was coming at it from a different perspective: making the world "intelligent" instead of the agents would be an efficient thing to do, because we virtualize the world we live in, and the world an agent lives in is already virtual. But that was the base of something that could be used to do much more, and I wasn't seeing it.

What all these studies have in common is this: a simple thing, when combined with a complex thing, creates a complex output that our minds then interpret as even more complex and intelligent than it actually is.

Of course it's Game Design 101 that the point of the AI is to make the agents seem complex and intelligent by using a diversity of simple illusionist tricks. But the interesting part is not that basic approach; it's the pattern of the Simple vs Complex formula repeated in all those experiments. The fact that the same building blocks apply to each of them even though their specifics differ.

After seeing how present that aspect was in each of those experiments and studies, I started to look for similar patterns everywhere else, because maybe they were there in more things too and I didn't know. Most importantly, I was looking for them in the gaps of our AI system. And then I started asking many questions about design problems and trying to answer them using that same system and that same formula.

  • If the tasks are created under a modular system and can look into the agents' information to calculate and weight behavior to give them as tasks, why not increase the amount and detail of the information they have? Give them personalities, backgrounds, moods, feelings and social dynamics, and then have tasks consider that information too, instead of only health and inventory? 
A behavior where a character born in the jungle knows where to find water and food, while the ones from the city don't, can be just a simple background check done by the task source that sends them after the food and water. If the character is from here, give this task; if not, don't.
  • Why am I only considering the concrete side of the world and not using abstract stuff as well? The things that are there but cannot be seen? Things like drama, suspense, comedy? Why only create behavior originated by things, and not by people and ideas? Why not create tasks originated by groups of things, by science and by plot scenes?
We can make dynamic landmark plot scenes to spice up the sea of emergent behaviors. Scenes that pick from whichever characters are available according to how their situation and personality fit each character role required (or optional) for the scene to happen.
Scenes can be scheduled, or tied to a place, or triggered by a situation or by a certain point of the global story arc. For example, to create a generic zombie movie scene where one character has been bitten or hurt and the other characters argue about whether they should help him, kill him, or leave him behind: it doesn't matter who's hurt or who's in the scene or not, the task source (a plot scene) just has to evaluate how each character best fits each scene role and assign tasks for them to "act" in each role, and then let everything happen naturally with their utility-based decisions. In another example, we could have the final climax scene pick whatever character makes sense and has been outside the player's watch in key situations to be revealed as the killer of the thriller plot.
  • Why not use these dynamic plot scenes and propagate their effects to further events and change the plot naturally?
Like if the player sneaks about and flattens the tires of an NPC's car, he can't show up to the scheduled plot scene "The Party", then another NPC gets to dance with the common love interest (because the dance scene will replace him with the next NPC on the list to fit the role), and then the future love-triangle scenes swap the two characters between the roles of who's the boyfriend and who's the other guy, leading to further outcomes later.
Or if a character who's taken on the Leader role of a group lost in the desert starts to lose his sanity or fall into despair, the abstract idea / social dynamic "Team Leadership" chooses another NPC to take on the role and the responsibility of being the leader. Similarly, if the group is unhappy with the leader's decisions, the social dynamics of the group can spawn a new dynamic plot scene where they fight it out and then disband into two groups. 
  • But then why stop at that? Why not let the plot scenes detect when the player himself picked on a role and then adapt the scene to consider that?
The scene picks a character for a second leader role (or leaves it open for a while to see if the player fills it), and others to support each side, and others to argue that it's better to stick together and stop fighting.
  • But characters can also be fighting while still running away from the zombies, and while arguing with each other about who's the leader or whether the hurt guy is gonna be left behind, so why not improve the decision system and make the characters capable of multitasking?
Agents have "physresources" that allow them to pick multiple tasks at once (and weight decisions based on groups of tasks rather than only individual ones): concentration limits, arms, legs, mouth, eyes...
  • If "behavior is always an interaction of an agent with its environment", why not extend that meaning of the term behavior to encompass personality and mood? Don't we behave differently at home and at work or in social events, or with friends or strangers? If the place is happy don't we become happier and if the place is serious we act serious in accordance to it? Isn't the same true for an event or situation?
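The plot-scene casting from the zombie-movie example above can be sketched as a best-fit assignment. Everything here (trait names, weights, the greedy one-role-per-character loop) is a hypothetical illustration, not our actual implementation:

```python
# Score how well a character's traits match what a scene role asks for.
def fitness(character, role_needs):
    return sum(character["traits"].get(t, 0) * w for t, w in role_needs.items())

# Greedily cast the best-fitting available character into each role.
def cast_scene(roles, characters):
    cast, pool = {}, list(characters)
    for role, needs in roles.items():
        best = max(pool, key=lambda c: fitness(c, needs))
        cast[role] = best["name"]
        pool.remove(best)  # one role per character
    return cast

characters = [
    {"name": "Ana", "traits": {"empathy": 8, "ruthless": 1, "hurt": 0}},
    {"name": "Bo",  "traits": {"empathy": 2, "ruthless": 9, "hurt": 0}},
    {"name": "Cid", "traits": {"empathy": 4, "ruthless": 3, "hurt": 9}},
]

roles = {
    "victim":     {"hurt": 1.0},      # whoever is wounded
    "defender":   {"empathy": 1.0},   # argues to help him
    "pragmatist": {"ruthless": 1.0},  # argues to leave him behind
}

cast = cast_scene(roles, characters)
# Cid is hurt, so he's the victim; Ana defends him; Bo wants to move on.
```

It doesn't matter who is hurt or who is present: the scene evaluates whoever is available and the argument plays out with a different cast each time.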

The Current Model

After further consideration of many questions and aspects, without any big increase in code complexity, but simply by better exploiting that same original heuristic made for stealth AI while keeping all the basics of the system intact, it finally arrived at this:
"Everything that happens once can never happen again. But everything that happens twice will surely happen a third time." Patterns can always go further to reach more stuff.

That's what our system is being turned into now, and what that horror game I mentioned will be created with. This game will not be a dead end for the system, as it has enough versatility to be pushed further as we learn more about how to exploit it for good results.

There are a few more things about that project I'll be talking about in the future. Not yet, but probably before the reveal.
The best thing comes last: after releasing our game we'll more than likely release it all for free on the Unity Asset Store, and also publish schematics to allow people to reproduce it in other engines. We'll also record what we learn in terms of usage of the systems, the tips and tricks of what works or doesn't, and the interesting stuff that can be done with it.

We want to play your cool creations using that system too! We want Video-Games to follow their Fate and become what they're supposed to be. We want the evolution of the art form to reaccelerate, and we believe AI is going to be the next big turning point in that aspect.

This article is a repost from


Bart Stewart
I love the idea that the world itself has functional "knowledge" built into all the objects whose dynamic effects can be perceived by mobs, including player and non-player characters.

A reactive world is an interesting world!

And big kudos for being willing to release the technology at some point to other developers who are also interested in building more dynamic worlds. That's how System Shock saves the computer game industry. :)

Luis Guimaraes
Totally agreed Bart (and your blog is very inspiring in that aspect).

Maybe somebody smarter than me will put their hands on this and bring answers to many unsolved problems we have, so holding stuff back in this day and age is not a very visionary thing to do.

The difference between Virtuality and Fiction is that Fiction only represents Reality (Life) in Form, while Virtuality can also represent it in Function. So using Video-Games to make only Fiction is setting the bar way too low.

We're Video-Game developers! We're supposed to 1337 the big Game Of Creation! :D So let's spread the coins around for the other players in our team...

It's called Video-Games!

Matt Marshall
Great article...I will certainly be looking at this when I get into my AI aspects.

One thing I was trying to do with a personality AI was to structure and understand why we do things, and it really is a basic case of sensory data. Instead of the environment, you could look at basic functions of the body and 'made up' situations, much like your idea of mood etc.

In the end after some solid research on emotions etc I basically structured a creature as having three key areas. Physical needs (hunger, thirst, bladder, sleep etc), physical condition (temperature, pain/pleasure, dry/wet etc) and emotive response (happy, angry etc).

Add to this an element of short-term and long-term memory that is built on over time, and you have a creature that develops its own 'character'... if done right.

Different attributes affected the eventual goal of being 'happy' or 'content', so if it was hungry its happiness would lower, and it would find things on its own in the environment to satiate this issue... everything would have different priorities, much like you mentioned, and I would also add certain elements of randomness per creature that allowed for more unique character... such as some creatures being 'content' when they are depressed. It may sound odd, but people work in much the same way.

Everything aims to be in its 'optimal state', whatever that may be. And that changes depending on the person, be it relaxing on the couch watching movies or bungee jumping.

I haven't put it into practice yet as I only started learning how to program about 4 months ago; I'm more a designer... but basically through this system I could visualise the goings-on of a creature's brain and physical condition in realtime using bar graphs and environmental cues that affect the creature's existence.

Simply put, I like this environmental approach as it will tie in with my internal development quite well, rather than making everything internal.

So thanks a lot!! :)

Luis Guimaraes
Hi Matt, keep an eye out for a follow-up article explaining some other things in more detail soon! :)

Glen Pawley
Hi Luis, thanks for the article. It's come at the right time for me as I am currently designing an AI in a similar environment.

I'm trying to relate your concepts to ones I'm more familiar with from my AI research... am I right in thinking, particularly when looking at your final diagram, that this approach is roughly analogous to hand crafting a neural network? The set of task weight calculations are the output nodes of the network, and all the other concepts act as input or hidden nodes that feed into each other and the output nodes. An additional complexity is that your concepts may each have a custom function by which data from the relevant inputs is aggregated.

I'd never considered the possibility of hand crafting a neural network before, and even if I've completely gotten the wrong end of the stick from your article, I think that will be worth exploring. Thank you.

Luis Guimaraes
Hi Glen, I usually think MDA itself is extremely similar to a neural network: you have the Mechanics as inputs and the Aesthetics as outputs, Dynamics being the hidden layer where all the magic happens. So games (not only Video-Games) ARE neural networks :)

On your question, I think the system is more analogous to software and hardware, where the hardware is a fixed set of instructions and rules, and the software defines how the hardware is supposed to process them. The agent being the hardware and his actions being the software.

If you'd like a real world analogy, think of machines that come with instruction manuals. If you're smart, you probably can skip the manual and go straight to operating it, but not everybody is as smart as you and some people need the manual. The agents in our games are EXTREMELY dumb, so, EVERYTHING comes with an instruction manual :)

It's also a way to invert the concepts of systems and content. Game events are usually created as content, and AI is usually created as a system. Decentralizing the behaviors of the agents to the world takes the behavior out and leaves the agent with only the core system. The behavior leaves the system and joins the content, making it more feasible to add lots and lots of it later and in further projects without affecting the core system. At the same time, the events stop being just content and become part of the systems, as now they're interconnected and not just static anymore.

Still, your analogy is good if you break down the idea of the neural network from a linear trajectory of input-hidden-output and think of the data that composes the characters as a separate layer of weights, and the actions as injections of neurons that perform specific tasks and go on top of that layer, blending their own weights with the agent's weights (personalities) to come up with a final priority for the output, and then feeding back into that base layer with results from the outcome. Very interesting...

Glen Pawley
Thanks Luis, that helped flesh out a couple things I wasn't quite sure I'd understood well from your article. Good luck with your project :)

Jesus Alonso Abad
You've opened a sea of new and exciting possibilities before me :D Thank you very much for this. I'll be sleep-deprived too today!

Luis Guimaraes
Welcome to the club! :D

Just in case you're still awake: the player is an agent too, so, depending on the level of detail and the accuracy of the character personality model, the system can be used backwards to model the player's personality in the game based on his actions ;) and then you can use that information for whatever you want.

Craig Jensen
Be careful to avoid the "all NPCs know everything about the character" trap that so many games fall into.

Some should be oblivious and concerned with other things.

Luis Guimaraes
Thanks for the reply, Craig! Don't worry about that :)

Diet Schnaepp

Fun fact: this repeats Sartre's philosophical lesson in terms of game dev.

Jorge Miralles
Very interesting what you write about AI. I look forward to your next article.
I have a question: do you consider that the actions performed by the agent can affect its own states, like its mood, or the considerations made by the plot and the places to assign a task?

Luis Guimaraes
Hi Jorge. Yes, the action can feed changes back to the character's mood or general personality; it's all about programming each specific action to consider how each event changes the character as a whole.

For example, you can make it so an otherwise peaceful and empathic character kills someone in a life-or-death situation and comes out of it a changed person, with varying levels of change based on whatever rules you establish for the action.

Jorge Miralles
Very interesting what you write about AI. I look forward to your next article. I have a question: do you consider that the actions performed by the agent can affect its own states, like its mood, or the considerations made by the plot and the places to assign a task?

Luis Guimaraes
Hi Jorge, this is old, I hadn't seen this question, sorry.

Yes, the outcome of some actions and the actions of other agents can affect the agent's mood in a feedback loop, and the mood is considered in further calculations of tasks and priorities. So the potential for propagation is very present.

Joshua Darlington
Have you peeked at the free Dreyfus Heidegger Philosophy 185 class from Berkeley?

Prof. Dreyfus talks about being a philosophy prof at MIT in the 60s and battling MIT's OG AI department (Minsky etc.). He told them that rule-based AI and heuristic AI were a dead end. The Dreyfus/Heideggerian insight seems to have profound implications for AAA storyworlds.

Essentially Heideggerian existential phenomenology exposes the connective layer of shared environmental cultural intelligence that humans are submerged in. This matches your intelligent landscape approach and goes even deeper into how human intelligence is distributed intelligence.

Following this line of thinking, it becomes obvious that the most important move in game/story design is networking in more live people instead of AI NPCs. Take the effort you would use to create an NPC and write a role for someone else to play, something with a colliding game objective. AR layered-reality worlds are a better match for story games.

Luis Guimaraes
Well we're not at that dead end yet.

Senad Hrnjadovic
This idea sounds genius to me. It is very simple and opens up possibilities. Thanks for sharing, and for the open attitude about it!

I am very excited about it! :)

Robin Di Capua
Wow, that's a very interesting take on AI in games! ;) I think that AI has been neglected for far too long in the games industry. What's the point of having characters with billions of polygons and details if they act dumb and are not believable? Have you thought about writing a proper paper about this stuff with some examples of how you applied your theories in practice? That would be very interesting in my opinion ;)

Luis Guimaraes
At some point, yes. First I'll explore the possibilities and make some "tech demos"; formalizing this now would exclude much of what's yet to be experimented with and learned.