Gamasutra: The Art & Business of Making Games

The Drama of idTech5
by Benjamin Quintero on 03/20/13 02:52:00 pm   Expert Blogs   Featured Blogs

The following blog post, unless otherwise noted, was written by a member of Gamasutra’s community.
The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.


[reprinted from...]

I don’t know why I am so interested in these types of off-the-path rendering techniques.  Perhaps it has something to do with the fact that they are so different from the classic “push more polygons” pipeline.  Or maybe it’s that they serve a singular purpose: trying to make the experience better through a non-traditional use of traditional content like triangles and textures.  Still, now that the dust has settled and everyone has chosen a side in the fight, I wanted to go back and analyze Megatextures more closely, to ask whether it was the right choice for id Software and what can be done with it in the future.

I’ve been following this technique for far too long.  A likely side effect of my addictions, I find myself combing the whitepapers and lectures of companies like Crytek, id Software, DICE, Valve, Epic, Ubisoft, and many more.  I guess it’s not the worst thing for an engineer to do with his spare time, but, as I’ve mentioned in previous posts, wanting to follow in the footsteps of giants often leaves me standing in a dark place, in the long shadows they cast.  And knowing my flaws full well, I still venture into this technology to ask myself whether it is something worth investigating for my own indie-scaled games.

In The Beginning

When Megatextures were first introduced in a modified version of idTech4, John Carmack used a much more traditional terrain-based scheme in which the geometry was assumed to be a relatively flat, top-down sprawl of land.  This approach had been used in the simulation industry for at least a decade before it reached video games.  Terrain topology rendering was critical in military applications for displaying high-resolution satellite imagery for tactical planning.  It seemed like a reasonable approach at the time, but games like Enemy Territory: Quake Wars showed the strengths and weaknesses of this approach.  Vertical cliffs, walls, and ceilings were an impossibility with a technique designed to drape itself over the landscape like a tablecloth.

Megatexture 2.0

Not satisfied with the rudimentary version developed for idTech4, the latest incarnation of Megatextures was a complete rewrite that evolved out of an “aha” moment for the developer.  Though Megatextures were likely not the first of their kind (there were similar whitepapers with slightly varied approaches), they were certainly the first to be proven on the gaming battlegrounds at 60 frames per second.

Megatextures now appear to be rendered (from the nuggets I’ve gathered) as follows:

  1. Render the geometry to the frame buffer, writing out attributes such as texture coordinates (UVs).
  2. Read the frame buffer back to system memory.
  3. Process each pixel in the buffer (handle cache misses, load/unload pages, transform global UV coordinates into cache-relative coordinates, decode JPEG and re-encode to DXT at runtime, and so on).
  4. A cache miss involves updating all relevant texture attributes (color, normal, specular).
  5. Upload the new processed buffer with its cache-relative UVs.
  6. Upload the changes made to the cache textures (if any).
  7. Oh yeah... NOW actually render the scene using deferred shading.
  8. Overlay translucent surfaces (particles, windows?) using traditional forward rendering and shaders.
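The feedback loop in steps 2 through 6 can be sketched in miniature.  The following Python toy is my own guess at the shape of it, not id's actual code: the `PageCache` name, the LRU policy, and the feedback buffer layout are all illustrative assumptions, and a real engine would do the atlas management on the GPU.

```python
from collections import OrderedDict

PAGE_CAPACITY = 4  # a toy physical cache that holds only 4 pages

class PageCache:
    """LRU cache of texture pages; stands in for the GPU-side page atlas."""
    def __init__(self, capacity=PAGE_CAPACITY):
        self.capacity = capacity
        self.pages = OrderedDict()  # page id -> atlas slot index

    def request(self, page):
        """Touch a page; returns (slot, was_miss)."""
        if page in self.pages:
            self.pages.move_to_end(page)        # mark as most-recently-used
            return self.pages[page], False
        if len(self.pages) >= self.capacity:    # evict least-recently-used page
            _evicted, slot = self.pages.popitem(last=False)
        else:
            slot = len(self.pages)
        self.pages[page] = slot                 # "upload" the page (step 6)
        return slot, True

def process_feedback(feedback, cache):
    """Steps 3-4: walk the read-back buffer, service misses, and return
    per-pixel cache slots that the cache-relative UVs would index (step 5)."""
    slots, misses = [], 0
    for page in feedback:                       # each entry: (page_x, page_y, mip)
        slot, was_miss = cache.request(page)
        misses += was_miss
        slots.append(slot)
    return slots, misses

# One simulated frame: the feedback buffer says these pages are visible.
cache = PageCache()
frame1 = [(0, 0, 0), (0, 1, 0), (0, 0, 0), (1, 1, 2)]
slots, misses = process_feedback(frame1, cache)
print(misses)  # 3 unique pages touched on a cold cache -> 3 misses
```

The point of the sketch is that the per-frame cost is driven almost entirely by the miss path, which is exactly where step 3's transcoding work lands.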

To be honest, I am kind of shocked that Megatextures work at all, let alone at 60 frames per second.  Though most of these steps do not seem overly offensive, step 3, and by extension step 4, is where the secret sauce gets made.  It is also the step where Megatextures succeed and fail at the same time…
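To make the coordinate-transform part of step 3 concrete, here is a hedged guess at the math: a page-table (indirection) lookup that turns a global texel coordinate into a physical atlas coordinate.  The page size, the one-row atlas layout, and every name here are my own simplifications, not idTech5's.

```python
PAGE_SIZE = 128            # texels per page side (assumed)
# Atlas is laid out as a single row of slots for simplicity.

def cache_relative_uv(u, v, page_table):
    """Map a global (u, v) in texels to texel coordinates inside the atlas."""
    page = (u // PAGE_SIZE, v // PAGE_SIZE)      # which virtual page covers (u, v)
    slot = page_table[page]                      # indirection-table lookup
    local_u = u % PAGE_SIZE                      # offset within that page
    local_v = v % PAGE_SIZE
    return slot * PAGE_SIZE + local_u, local_v   # physical atlas coordinates

page_table = {(0, 0): 0, (1, 0): 2}              # virtual page -> atlas slot
print(cache_relative_uv(130, 5, page_table))     # (258, 5): page (1, 0) lives in slot 2
```

In the real pipeline this lookup happens per pixel in a shader against an indirection texture; the dictionary here just makes the arithmetic visible.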

Promise vs Execution

When we first saw the demo of Rage that showed a merchant hanging out in his little hut, the promise was grand.  We listened to Carmack describe how an artist had masterfully painted a 4096×4096 texture for the merchant's face alone.  He went on to say that every facet of the world could have that level of uniqueness without the degraded frame rate we would all expect.  As impressed as I was at the time, a few things didn’t seem right to me.  I couldn’t wrap my brain around this idea of unique pixels everywhere.  I kept asking myself, “How big is this game going to be?”  At first I justified it by thinking they would only use it to enhance the terrain and create cooler cliffs and canyons, but it was later clarified that every opaque surface, including characters, was going to follow this pipeline, and that made me very nervous.  I was going to need a bigger hard drive.

After going dark for a bit, Rage returned, but it wasn’t the same.  There were rumors that Rage was having trouble with Megatextures on some platforms, and it was clear there was a real moment for the company to reflect on whether it was right to move forward with it.  Carmack even mentioned in his keynote the disappointment shared by the art team when they first saw the end product of their hard work processed and compressed down to game format.

The platform issues were eventually resolved, but Rage looked a bit more… decimated.  Images were kind of blurry, and the texel density looked fairly horrific in some areas, notably very low-light areas as well as surfaces deemed “unseen” by the automated importance algorithms.  It was later discovered that Rage was undergoing some changes, and aggressive compression was a bullet point on that list.  The end result was an image quality that still holds up beautifully, but only under a list of perfect conditions:

  • Well-lit spaces, such as outdoors or in direct contact with a static light source.
  • Surfaces that are considered visible by the optimizer.
  • Smaller confined areas like the sewers, which appear to have retained more quality through the compression process.

In wandering around the Rage environments I find some rooms to have nearly poster-sized blocks of solid color, an artifact of a shadowed wall that is then lit by a dynamic light source.  I’m fairly certain that many of these issues resulted from two major factors.

  1. Fear that an entire tens-of-millions-of-dollars game would be developed and then run at single-digit frame rates after such a strong marching order to maintain 60 frames per second.
  2. Speculation that, without standardizing the texture density to something reasonable, the game would end up shipping on 4 Blu-rays or about 50 DVDs, or that each boxed copy would include a coupon for $100 off a new hard drive.

Now that Rage has shipped and some of these fears have perhaps been answered for the company, I am hopeful to see an improvement in texture density for future id Software games that use idTech5.

The Future of Megatexture

I feel like this technique has some serious challenges ahead of it.  While many games are fighting to stay smaller in an age of digital distribution, it seems like Megatextures are running the other way.  While idTech4 focused on fully dynamic lighting and visibility across uniform surfaces, Rage took a step back to the Quake 3 era of baked lighting and long build times for developers.  Rage is roughly a 25GB game, and the development build is said to be roughly 1TB of data.  If a less aggressively compressed version of Rage were even half that size (500GB), I question what it would mean for gamers.  I question what it would mean for people who still pay by the gigabyte for bandwidth, or who only have a 256GB hard drive but still want to play the next id Software game.
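To see why the numbers balloon like this, a quick back-of-the-envelope calculation helps.  The figures below are my own illustrative assumptions (virtual texture size, layer count), not id's actual numbers; only the DXT1 rate of 4 bits per texel is a known constant of that format.

```python
# Storage for uniquely-texeled content, assuming a 128k x 128k virtual texture.
texels_side = 128 * 1024
bytes_per_texel_dxt1 = 0.5          # DXT1 compresses to 4 bits per texel
layers = 3                          # color + normal + specular, as in step 4

base = texels_side ** 2 * bytes_per_texel_dxt1 * layers
with_mips = base * 4 / 3            # a full mip chain adds about one third on top

print(round(with_mips / 2**30))     # 32 (GiB) for a single such virtual texture
```

Even in already-compressed DXT form, one such texture is tens of gigabytes, which is why the shipped game leans on much more aggressive JPEG-style storage and transcodes to DXT at runtime.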

As much as I really love the idea of this technology, I think id Software is going to have to invest big money in smarter and less invasive compression techniques than HD Photo.  They’ll need to improve how they decimate their formats, or stop baking the shadows into their color maps and allow the compression to work with the full color band.  It does make me wonder why their dark maps were combined into the color maps, but I’m sure it had to do with the storage and performance cost of including yet another texture in the cache pipeline.

For a company like id Software, which has a fully working version of this technology, I don’t see them backing down, but I would hope to see some minor changes.  Do we really need every pixel to be unique?!  I would love to see this technique used less as a ubiquitous blanket and more as a way to intelligently stream in super-resolution images for individual objects.  I’d like to see a modified version of idTech5 that goes back to the promise of that 4096 texture for a character’s face, or a similar-sized texture used on a wall, but allows that texture to be reused, essentially treating each image as its own Megatexture.  This would allow large organic terrains to continue using their massive painted and decal-covered textures while embracing basic tiling and reuse for more sterile and rigid surfaces, or instanced objects such as characters and environment props.  Something like this could get a lot of use on a Mars space station, maybe ;) , just dropping that out there.
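The per-object streaming idea boils down to simple mip arithmetic: pick the mip level whose resolution roughly matches the object's projected screen coverage, and stream only that.  The function below is an illustrative sketch of such a heuristic, not anything idTech5 actually does.

```python
import math

def mip_to_stream(texture_size, screen_pixels):
    """Choose which mip level of an object's texture to stream in,
    based on roughly how many pixels it covers on screen."""
    max_mip = int(math.log2(texture_size))       # smallest mip in the chain
    if screen_pixels <= 0:
        return max_mip                           # off-screen: keep only the tail mip
    mip = math.log2(texture_size / screen_pixels)
    return max(0, min(int(mip), max_mip))        # clamp to the valid mip range

# A 4096-texel face texture seen at different distances:
print(mip_to_stream(4096, 4096))  # 0: full resolution up close
print(mip_to_stream(4096, 256))   # 4: far away, stream a much smaller mip
```

With each object treated as its own little Megatexture, the cache only ever holds mips sized to what is on screen, and tiled/instanced surfaces can share pages instead of paying for uniqueness.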

Closure :’( Maybe… I Don’t Know.

I know that it just isn’t worth it for me to chase Megatextures for my own project.  The development process would be long, and the pipeline would be a nightmare, starting with finding a place for all of that source content.  Knowing that I’ll likely be the one developing most of the content, I doubt that my skills as an amateur crayon artist would come close to maximizing the technology.  If I am searching for some cool technical achievement as inspiration for my next creation, sadly I don’t think I’ll find it here.  I’ll have to keep telling myself that as I stare at beautiful Megatextured vistas on my screen.


I know that this post sounds more like a eulogy than a critique, but I just don’t know where Megatexture 2.0 belongs.  I do hope that I am wrong, and maybe 3.0 will have a few tricks up its sleeve (assuming there is a 3.0).  I am certain that id Software is going to continue to push this technology because of the massive investment made.  I just hope it turns profitable for them at some point.

I really feel like Rage was the testbed for this technique, but future idTech5 pillars like Doom 4 are going to be the proving ground that vindicates or damns Megatextures.  The success or failure of Megatextures may also guide the decision of whether to continue pursuing Megageometry, or the now-famed Sparse Voxel Octree approach to representing a world.  If storage is an issue now for data that can be lossy (textures and sound), then what will come of data that can’t?  Would gamers buy a 256GB voxelized Wolfenstein?  It feels like owning a space shuttle; it sounds awesome, but where are you going to park it?


Andreas Ahlborn
This article really hit the nail on the head.

idTech5 is, in my eyes, one of the great misguided engineering projects in gaming history, and Rage is the game equivalent of the Titanic.

While I believe Carmack is a genius, Megatextures come across in retrospect as a false promise; if you look up the wild gap between critical and user ratings, you might even be tempted to imply fraud.

What a mess, to produce such a showcase for a new engine that surely will never be touched/licensed by any studio that wants to keep its sanity.

It's practically halfway between reality and science fiction, which this mysterious super-"engine" goes full on.

Michael K
It's actually even Megatexture 3.0: the "surface cache" of Quake 1 is kind of like a realtime megatexture, combining lightmaps and textures into one "megatexture" at runtime. You could have baked it offline, generating an insane amount of data, and compressed it, just like now; but instead it baked the result 'instantly' for the surface you were about to render, which isn't that different from the transcoding and upload of textures in the Rage approach.

I think they would have had fewer problems if they had kept this dynamic baking approach.
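[Ed.: the Quake-style surface cache described above can be shown as a toy: a diffuse texture tile modulated by a lightmap, baked only when a surface is first needed. The function names and the tiny 2×2 data here are illustrative, not Quake's actual code.]

```python
def bake_surface(texture, lightmap, cache, surface_id):
    """Bake (texture * light) into the cache on first use; reuse it afterwards."""
    if surface_id in cache:
        return cache[surface_id]               # already baked this surface
    baked = [[texel * light for texel, light in zip(trow, lrow)]
             for trow, lrow in zip(texture, lightmap)]
    cache[surface_id] = baked                  # 'instant' per-surface bake
    return baked

texture  = [[1.0, 0.5], [0.5, 1.0]]            # 2x2 diffuse tile
lightmap = [[0.5, 0.5], [1.0, 0.0]]            # 2x2 baked light values
cache = {}
print(bake_surface(texture, lightmap, cache, "wall_07"))
# [[0.5, 0.25], [0.5, 0.0]]
```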

Jeremy Alessi
Great article! I was just thinking about mega textures the other day. The limit on materials/textures in games is still a huge issue. I think the notion of unlimited amounts of texture data is still very appealing. Hopefully, in due time it will work well.

Michael G
I kinda feel like CUDA could be useful here, processing tiles of the megatexture in parallel rather than mapping a large chunk that then has to be processed in several iterations. I really wish Nvidia would adapt the API to work with AMD so it could be adopted as a proper standard in games. It was done with CUDA 1 through an end-user hack, so I don't think it's impossible if it's adjusted to work better with larger operations.
I fear that unless Nvidia opens their technology up, developers will default to the deeply inferior DirectCompute instead, which would be a terrible waste.

Josiah Manson
@Michael There is always OpenCL instead of CUDA.

Michael G
I'm not sure OpenCL could process that volume of instructions fast enough to tessellate tiled sections of the texture each frame, though. Game developers have used OpenCL in the past for various things; I think Just Cause 2 used it for wave simulation on non-Nvidia systems, but the implementation is always much higher level because it has to work generically. If CUDA were to work here it would have to use separate APIs, so it could work at a very low level on both vendors in order to process that many instructions in a timely fashion.

Michael K
"I'm not sure OpenCL could process that volume of instructions fast enough"
OpenCL is not processing that data, and neither is CUDA; it's the GPU that does. In the case of Nvidia, the GPU driver is fed an assembler program written in PTX, and both the CUDA compiler and the OpenCL compiler produce those PTX programs. Although it depends on the driver and CUDA version, the output is very similar if you look at the PTX disassemblies.
I write quite a lot of OpenCL and CUDA code; I'd estimate that CUDA is on average about 10% faster, sometimes 50%, sometimes also slower than OpenCL, but it's not like you couldn't get good enough results. Also, it doesn't make that much of a difference whether you use CUDA or OpenCL most of the time; some syntax is different, but the critical code that you write for GPGPU is a few pages of source, and maintaining both versions is not that much overhead imho.

Michael G
Right, but my point is that I don't think OpenCL code executes fast enough to tessellate thousands of tiles within a rendering pipeline. It's not a matter of how quickly the GPU processes the information once it's got it; it's that CUDA in C operates at a level closer to the driver, whereas I believe OpenCL operates on the back of the CUDA module, at least on Nvidia, and on Stream on ATI cards.

Michael G
"...many games are fighting to stay smaller in an age of digital distribution..."

I hadn't noticed.

Carrado Grant
"I really wish Nvidia would adapt the API to work with AMD so it can be adopted as a proper standard in games."

Though great, CUDA is not what the industry needs. OpenCL is just as capable as CUDA (just not as mature). If Nvidia put more effort into the standard (OpenCL), then there would be no need to worry about opening up CUDA.