Dynamic vs Static Rendering
by David Maletz on 01/08/11 07:10:00 pm   Featured Blogs

The following blog post, unless otherwise noted, was written by a member of Gamasutra’s community.
The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.

 

Over the past decade, lightmapping and other precomputation methods have been used to simulate accurate lighting in games. Quake was the first computer game to use lightmapping, and the technique allowed its levels to have good lighting (for 1996) from many light sources, with static shadows. Today, lightmapping is still used in many AAA titles and can simulate accurate global illumination throughout a level, allowing developers to build very photorealistic scenes.
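To make the tradeoff concrete, here is a minimal sketch of what a direct-light lightmap bake boils down to. The scene-query helpers (worldPositionOfTexel, occluded, and so on) are hypothetical stand-ins for engine code, and real bakers also gather indirect bounces; the point is simply that all of this work happens once, offline, per texel.

```cpp
// Minimal sketch of a direct-light lightmap bake. The scene-query helpers
// declared below are assumed engine hooks, not a real API.
#include <vector>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float length(Vec3 v) { return std::sqrt(dot(v, v)); }
static Vec3 scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }

struct PointLight { Vec3 position; Vec3 color; };

// Assumed engine hooks: map a lightmap texel to the surface it covers,
// and cast a shadow ray between two points.
Vec3 worldPositionOfTexel(int u, int v);
Vec3 worldNormalOfTexel(int u, int v);
bool occluded(Vec3 from, Vec3 to);

std::vector<Vec3> bakeLightmap(int width, int height,
                               const std::vector<PointLight>& lights) {
    std::vector<Vec3> texels(width * height, Vec3{0, 0, 0});
    for (int v = 0; v < height; ++v) {
        for (int u = 0; u < width; ++u) {
            Vec3 p = worldPositionOfTexel(u, v);
            Vec3 n = worldNormalOfTexel(u, v);
            Vec3 sum{0, 0, 0};
            for (const PointLight& light : lights) {
                Vec3 toLight = sub(light.position, p);
                float dist = length(toLight);
                Vec3 dir = scale(toLight, 1.0f / dist);
                float ndotl = dot(n, dir);
                // Skip back-facing and shadowed texels.
                if (ndotl <= 0.0f || occluded(p, light.position)) continue;
                float atten = 1.0f / (dist * dist);  // inverse-square falloff
                sum.x += light.color.x * ndotl * atten;
                sum.y += light.color.y * ndotl * atten;
                sum.z += light.color.z * ndotl * atten;
            }
            texels[v * width + u] = sum;  // stored once; reused every frame
        }
    }
    return texels;
}
```

At runtime the result is a single texture fetch per pixel, which is exactly why the technique is so fast, and exactly why it breaks the moment the scene changes.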


However, lightmapping and other precomputation techniques come at the cost of flexibility. The complex lighting stored in lightmaps can take many minutes, sometimes even hours, to compute, depending on level complexity and lightmapping settings. That computation, while very accurate, is fixed for the scene it was run on: if the scene changes, the precomputed data is invalidated, and regenerating it on the fly is impractical precisely because lightmaps take so long to generate. In many games this problem is overlooked in favor of improved image quality. However, current trends in game development call for more dynamic scenes and for more complex lighting, such as view-dependent specular reflections, which cannot be completely precomputed.


The perfect rendering algorithm, one that is flexible, handles any kind of scene, and produces high-quality images, does not yet run in real time. As graphics cards become more powerful, however, those high-quality, dynamic renderings get faster as well; it is realistic to think that in ten years they will be feasible in real time. For instance, one of my own research papers (my profile picture is a screenshot from it) described a flexible, high-quality, multi-bounce global illumination solver for diffuse and low-specular scenes that could fully converge within a few seconds. That timing is not exceptional either: many global illumination papers report times ranging from interactive rates to a few seconds. Right now these algorithms are still too slow for games, but their performance cost drops every year.


The dragon model in a Cornell box, converged in 2 seconds using my algorithm.
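For readers unfamiliar with how multi-bounce diffuse solvers converge, here is a toy version of the classic radiosity gathering loop. It is not my algorithm (which works quite differently), but the convergence behavior is analogous: each iteration adds one more bounce of light, and since surface reflectance is below one, each successive bounce contributes geometrically less. The form factors F are assumed precomputed.

```cpp
// Toy radiosity gathering (Jacobi iteration). F[i][j] is the assumed,
// precomputed form factor: how much patch j is visible from patch i.
#include <vector>

struct Patch {
    float emission;     // self-emitted radiosity (light sources)
    float reflectance;  // diffuse albedo in [0, 1)
};

// One iteration: each patch gathers light reflected by every other patch.
// Each call adds one more bounce of indirect light.
std::vector<float> gatherBounce(const std::vector<Patch>& patches,
                                const std::vector<std::vector<float>>& F,
                                const std::vector<float>& radiosity) {
    std::vector<float> next(patches.size());
    for (size_t i = 0; i < patches.size(); ++i) {
        float gathered = 0.0f;
        for (size_t j = 0; j < patches.size(); ++j)
            gathered += F[i][j] * radiosity[j];
        next[i] = patches[i].emission + patches[i].reflectance * gathered;
    }
    return next;
}

std::vector<float> solveRadiosity(const std::vector<Patch>& patches,
                                  const std::vector<std::vector<float>>& F,
                                  int bounces) {
    std::vector<float> b(patches.size());
    for (size_t i = 0; i < patches.size(); ++i) b[i] = patches[i].emission;
    // Because reflectance < 1, each bounce matters geometrically less,
    // which is why a handful of iterations is enough to fully converge.
    for (int k = 0; k < bounces; ++k) b = gatherBounce(patches, F, b);
    return b;
}
```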


This does not mean that precomputation should be thrown out the window. Precomputation is useful for components of the scene that never need to change. Take atmospheric rendering: multi-bounce volumetric atmospheres are expensive to compute. If the game is a space simulator featuring millions of planets with different atmospheric properties (like Infinity: TQFE), then precomputation does not make sense, and an approximate atmospheric scattering technique should be used. However, if the game takes place on only a handful of planets whose atmospheric properties do not change much, then precomputation, using techniques like Bruneton et al.'s, can greatly improve the accuracy of those planets' atmospheres. And if the relative sun position, the atmospheric properties, and the camera's height within the atmosphere do not change much, then precomputing the entire atmosphere into a skybox is a cheap and accurate alternative. Each game simply needs to figure out how much of its scene needs to be dynamic, and how much quality or performance it is willing to sacrifice for that portion of the scene.


An offline, raytraced sunset image I generated in a half hour, with volumetric atmosphere and water.
Compare to Bruneton et al.'s work (semi-precomputed) and Infinity: TQFE (fully dynamic).
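The skybox option above is simple enough to sketch. Assuming some expensive offline scattering evaluator (evaluateAtmosphere and cubemapTexelToDirection here are placeholders, not a real API), the bake is just a loop over cubemap texels.

```cpp
// Sketch of baking a fixed atmosphere into a skybox cubemap, under the
// assumption that sun direction, atmospheric properties, and camera height
// never change. Both helper functions are assumed, not real APIs.
#include <vector>

struct Color { float r, g, b; };
struct Dir { float x, y, z; };

// Assumed: full (expensive) scattering evaluation for one view ray.
Color evaluateAtmosphere(Dir viewDir, Dir sunDir, float cameraHeight);

// Assumed: map a cubemap face + texel to a unit view direction.
Dir cubemapTexelToDirection(int face, int x, int y, int faceSize);

std::vector<Color> bakeSkybox(int faceSize, Dir sunDir, float cameraHeight) {
    std::vector<Color> texels(6 * faceSize * faceSize);
    for (int face = 0; face < 6; ++face)
        for (int y = 0; y < faceSize; ++y)
            for (int x = 0; x < faceSize; ++x) {
                Dir d = cubemapTexelToDirection(face, x, y, faceSize);
                // Pay the full scattering cost once, offline...
                texels[(face * faceSize + y) * faceSize + x] =
                    evaluateAtmosphere(d, sunDir, cameraHeight);
            }
    // ...then at runtime the sky is a single cubemap lookup per pixel.
    return texels;
}
```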


Why do we care about dynamic rendering algorithms when the majority of scenes in games are static? Because even scenes we think of as static don't have to be, and could benefit from motion. Buildings should be able to collapse, explosions should create realistic craters, trees should bend in the wind or in shockwaves from explosions, and light sources should be able to move (like the sun, or the headlights on a car) while still contributing more to the scene than just direct lighting. Crytek's engine and their game Crysis are, to my mind, a good example of using dynamic lighting instead of lightmaps. They developed Light Propagation Volumes, a real-time global illumination solver, for their games. Its quality cannot compete with offline precomputed lighting that took hours to generate, but the effect is still convincing, and it allows their trees to billow and their bridges to break, creating scenes full of motion.
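For the curious, here is a drastically simplified picture of the step that gives Light Propagation Volumes their name. Real LPVs store low-order spherical harmonics per cell and inject light from a reflective shadow map; this sketch uses a single scalar per cell, purely to show the propagation structure.

```cpp
// Drastically simplified LPV-style propagation: light injected into a
// coarse 3D grid is iteratively spread to neighboring cells. Real LPVs use
// spherical harmonics per cell; a scalar is used here for illustration only.
#include <vector>

struct Grid {
    int n;                     // grid is n x n x n
    std::vector<float> cells;  // scalar "light" per cell
    explicit Grid(int size) : n(size), cells(size * size * size, 0.0f) {}
    float& at(int x, int y, int z) { return cells[(z * n + y) * n + x]; }
    float at(int x, int y, int z) const { return cells[(z * n + y) * n + x]; }
};

// One propagation step: each cell keeps some of its light and gains a
// fraction of its neighbors'. Repeating this a few times spreads bounced
// light through the volume at a cost independent of scene complexity,
// which is what makes the technique viable for fully dynamic geometry.
Grid propagate(const Grid& src) {
    Grid dst(src.n);
    const int off[6][3] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
    for (int z = 0; z < src.n; ++z)
        for (int y = 0; y < src.n; ++y)
            for (int x = 0; x < src.n; ++x) {
                float gathered = 0.0f;
                for (const auto& o : off) {
                    int nx = x + o[0], ny = y + o[1], nz = z + o[2];
                    if (nx < 0 || ny < 0 || nz < 0 ||
                        nx >= src.n || ny >= src.n || nz >= src.n) continue;
                    gathered += src.at(nx, ny, nz);
                }
                dst.at(x, y, z) =
                    0.5f * src.at(x, y, z) + (0.5f / 6.0f) * gathered;
            }
    return dst;
}
```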


Keeping all of this in mind, everything in the game engine I've been developing has been designed with flexibility and dynamic scenes in mind. There are no more problems getting precomputed static objects and on-the-fly dynamic objects to fit together, no hassle getting a door to break, and no special step to turn a static object into a dynamic one. What can be precomputed (like the atmosphere) is precomputed, and what cannot (like global illumination for dynamic scenes) is not; a small sketch of this split follows below. Nevertheless, I am curious what you, my fellow game developers, think about the relative costs and benefits of static versus dynamic rendering. Are the benefits of dynamic rendering (such as increased interactivity) worth the currently unavoidable cost in visual quality? Or are precomputed scenes simply the way to go to wow the gaming world? I believe dynamic rendering is integral to the future of gaming, especially as the quality and performance of these algorithms improve. What do you think?
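As promised, here is a minimal sketch of that split. The types and names are illustrative, not my engine's actual code: the idea is simply that each piece of precomputation declares whether its inputs can change, and everything else goes through one unified dynamic path.

```cpp
// Illustrative sketch (hypothetical types): precomputed resources are
// rebuilt only when their inputs change; all geometry and lighting
// otherwise goes through a single dynamic path.
#include <functional>
#include <vector>

struct PrecomputedResource {
    std::function<void()> rebuild;  // expensive offline-style computation
    bool inputsChanged = false;     // set when, e.g., atmosphere params move
};

struct Renderer {
    std::vector<PrecomputedResource> cached;  // e.g. skyboxes, lookup tables

    void frame() {
        // Rebuild anything whose inputs changed (possibly amortized over
        // several frames); everything else is evaluated dynamically, so
        // doors can break and lights can move without invalidating data.
        for (auto& res : cached)
            if (res.inputsChanged) { res.rebuild(); res.inputsChanged = false; }
        renderDynamicScene();  // unified path: no static/dynamic object split
    }
    void renderDynamicScene();  // assumed engine hook
};
```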


Comments


Steven An
It's a complex problem that varies from game to game and platform to platform. I don't think any sane developer would say either one is "the way to go." Doing what's best for a given game will require a combination of both classes of techniques, plus techniques that fit somewhere in between (e.g., precomputed radiance transfer, or PRT).



But it is exciting to see more dynamic algorithms being used thanks to increased computational power. This will surely be the trend as our game worlds become more dynamic themselves.

David Maletz
I agree that combination techniques are very powerful, especially when the precomputed data does not change much (or at all) and the precomputation time is short. I find full precomputation techniques like lightmapping limiting because global illumination is not a static thing: even if the world is static, the dynamic objects in the scene should contribute to the global illumination. Additionally, lightmaps that take hours to recompute make it impossible to even consider updating them on the fly. Bruneton's atmospheric precomputation gives a very nice result, doesn't require the camera or light source to be fixed (just the atmospheric properties), and can be recomputed in about 5 seconds, so it would be possible to compute new atmospheric properties in the background, and the precomputation time will only decrease as hardware improves.
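To sketch what that background recomputation could look like (all names are illustrative, not Bruneton's actual code): double-buffer the precomputed tables, rebuild the back copy on a worker thread over those few seconds, then swap.

```cpp
// Illustrative double-buffered background recompute. precompute() stands in
// for a multi-second Bruneton-style atmospheric bake; it is assumed.
#include <atomic>
#include <thread>

struct AtmosphereParams { float density; float planetRadius; };
struct AtmosphereLUT { /* precomputed scattering tables */ };

// Assumed: the expensive (seconds-long) precomputation.
void precompute(const AtmosphereParams& p, AtmosphereLUT& out);

struct AtmosphereCache {
    AtmosphereLUT buffers[2];
    std::atomic<int> front{0};  // index the renderer reads from
    std::thread worker;

    void requestRebuild(AtmosphereParams params) {
        if (worker.joinable()) worker.join();  // one rebuild at a time
        worker = std::thread([this, params] {
            int back = 1 - front.load();
            precompute(params, buffers[back]);  // seconds of work, off-thread
            front.store(back);  // renderer picks up the new tables next frame
        });
    }
    const AtmosphereLUT& current() { return buffers[front.load()]; }
    ~AtmosphereCache() { if (worker.joinable()) worker.join(); }
};
```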

[User Banned]
This user violated Gamasutra’s Comment Guidelines and has been banned.

David Maletz
I touched on the computational requirements of rendering dynamic versus static game worlds, but the artistic side is a valid point as well. Allowing every building to be destroyed sounds realistic, but if a building is needed for some other game purpose, then, as you said, you now need to deal with the fact that it was destroyed (like respawning it). Still, having a renderer flexible enough to destroy the building and update the illumination in real time, if the game calls for the building's destruction, is a good thing, provided the renderer can do so efficiently.

Kassim Adewale
The split between static and dynamic computation is one of the reasons hardware specifications for games exist; it is also one of the reasons GPU-assisted real-time rendering is becoming ubiquitous.



Some big studios will go the extra mile and add static code paths to assist where the dynamic ones would fail, especially when players fall below the hardware requirements.



The reality is that dynamic rendering is where the future is going; Intel is now wooing game developers with Sandy Bridge processors, which incorporate Intel HD Graphics capabilities directly on the die.

David Maletz
I agree that dynamic rendering is definitely the future; we've been heading toward it since programmable pipelines. GPUs have been becoming more and more like many-core CPUs, and as that change occurs, more dynamic algorithms become possible. Static rendering is still a big factor for high-performance rendering, especially on older graphics cards, but the flexibility and unification of dynamic algorithms (one algorithm to rule all objects in the scene) will eventually replace static ones once their speed on the majority of cards in use improves.



Speaking of Intel, it's a shame Larrabee fell through. I think a combined CPU/GPU with full x86 shader support would've been awesome, although it is possible that x86 cores are not well suited to highly parallel computation.

