It seems like we've gone so far in the polygon direction that it would take a significant spend and a lot of research to try to push in a different direction.
TS: Yeah. Polygons are nice for representing your scene: the idea that you can build this smooth, seamless mesh that has nice properties like being planar... I have a hard time seeing the world moving away from that. And if you think about something like a skeletal animated character, trying to represent data like that in a voxel representation would be hopelessly inefficient.
Say you have two fingers and want to animate them independently, but they're close enough that they probably share some voxels in common; you want a polygon representation so these objects can move independently, with no relationship between them. So I think polygons ultimately will always be our working representation, but our rendering representation is where you're seeing a lot of this innovation.
You can really look back and say, a-ha! The industry started inventing these techniques five or six years ago with screen space techniques like ambient occlusion, and now we're starting to realize that, instead of just doing that in screen space, you can also do it in a voxel representation of the world. They have in common this very regular structure that's easy to traverse.
I think we're going to see enormous innovation in these areas over the next 10 years because, with DirectX 11, you now have all this power of general-purpose vector computing hardware, and you can use a few teraflops of performance in traversing an arbitrary data structure. In the previous generation, we had to rethink everything in terms of pixel shaders and vertex shaders, and that rules out most of these techniques.
So the next few years are going to be very interesting. I suspect that a lot of developers will look at this from different perspectives and come up with entirely new techniques that we're not anticipating now.
What do you think about Carmack's solution to textures with id Tech 5, where every texture has to be specifically drawn?
TS: It's a neat idea that's kind of the extreme Carmack way of doing things, the brute-force, completely general solution to the problem. Some artist is going to want to customize any particular part of an object, and that makes it really easy to go in and paint over and customize any part of a scene to an arbitrary degree. That is certainly the general solution, and it's a very brute-force algorithm that gives you regular performance regardless of scene complexity, and gives you regular performance for streaming, which is really important.
I personally tend to be on the other end of the spectrum, which is that, rather than going at everything brute force, we want procedural authoring techniques to enable content developers to build small enough content and then amplify it by instancing, reusing, scaling, rotating, and customizing objects with minimal amounts of work. In Gears of War, you see that the same mesh was used maybe 20 times in different places. I think those techniques tend to be the best solution to the overall game development productivity problem because we want to go and create games with minimal effort. Although they're not as flexible, they do give you the greater amplification of both data and development effort, and, to me, that is always going to be a really desirable property.
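The amplification idea described here can be illustrated with a minimal sketch. This is not Epic's or Unreal Engine's actual data model; all names (`Mesh`, `Instance`) are hypothetical. The point is simply that each reuse of an authored mesh costs only a transform, not a copy of the geometry:

```python
# Hypothetical sketch of instanced reuse: one authored mesh, many placements.
from dataclasses import dataclass

@dataclass(frozen=True)
class Mesh:
    name: str
    vertices: tuple  # authored once, stored once

@dataclass
class Instance:
    mesh: Mesh        # shared reference -- vertex data is not duplicated
    position: tuple   # per-placement transform: just a few floats
    rotation_deg: float
    scale: float

pillar = Mesh("stone_pillar", vertices=((0, 0, 0), (1, 0, 0), (0, 1, 0)))

# 20 placements of the same mesh, like the Gears of War example above:
# each instance stores only its transform, so content cost stays small
# while the visible scene is "amplified" twentyfold.
instances = [Instance(pillar, (x * 5.0, 0.0, 0.0), 0.0, 1.0)
             for x in range(20)]
```

The trade-off Sweeney describes falls out directly: every instance shares one authored asset, so editing the base mesh changes all 20 placements at once, at the cost of less per-placement flexibility than the megatexture approach.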
You can envision the two ideas being combined. The thing that's awesome about Carmack's megatexture approach is that it gives you the freedom to customize everything. The drawback to it is it gives you the cost of everything being customized, even if it's not, because you have to store everything uniquely. What I would like to do is basically represent the world in some sort of hierarchical, wavelet-inspired way so that data layers that are zero or haven't been changed relative to their base aren't stored at all, and therefore don't incur any cost.
The idea is that you want to build a cool door façade and use it in 10 different places, and in three of those places you go in and do a custom thing on top of it. You're storing the data there, but you're only incurring the cost for the areas where it's actually being customized. Similarly, the idea with wavelets is you can paint at multiple resolutions and you can do perturbed geometry at multiple resolutions, so you might have a beautiful car mesh that's used in a bunch of places; then, in some places, you have it wrecked because you destroyed the high-level vertices or the lower-level detail versions of it, but the rest of the content is still there.
So, with a little bit of data, you've made a really interesting procedural modification to an object that already exists. I think nuanced techniques like that will win in the long run because they give you the full customizability of the Carmack approach without the full cost of using it everywhere.
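The "store only what changed" idea can be sketched in a few lines. This is a toy illustration of the sparse-delta concept, not the wavelet scheme itself; the names (`BaseAsset`, `CustomizedAsset`, `paint`, `sample`) are hypothetical. Untouched regions of a customized copy resolve to the shared base and cost nothing to store:

```python
# Hypothetical sketch of sparse delta layers over a shared base asset.
class BaseAsset:
    """The authored asset (e.g. a door facade), stored exactly once."""
    def __init__(self, texels):
        self.texels = texels

class CustomizedAsset:
    """A placement of the base asset; stores only its overrides."""
    def __init__(self, base):
        self.base = base
        self.deltas = {}  # sparse: only painted-over texels are stored

    def paint(self, coord, value):
        self.deltas[coord] = value

    def sample(self, coord):
        # Fall back to the shared base wherever no override exists.
        return self.deltas.get(coord, self.base.texels[coord])

facade = BaseAsset({(x, y): "stone" for x in range(4) for y in range(4)})

# Ten placements of the facade; only one of them is customized,
# so only that one pays any storage cost for the change.
copies = [CustomizedAsset(facade) for _ in range(10)]
copies[0].paint((1, 2), "graffiti")
```

A wavelet-style version would extend this with deltas at multiple resolutions, so coarse and fine customizations each store only their own coefficients, but the fallback-to-base lookup is the core of the cost argument.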
I feel like there aren't really a lot of companies that are taking such a long-term view of things -- thinking five, 10, or 20 years ahead technology-wise, especially in games and also in terms of business models. Epic is thinking about that with Unreal Engine 4, but also with Infinity Blade being the company's most profitable game relative to its scale. How have you been able to take this view, where a lot of others aren't thinking past what they're going to put out next year?
TS: Well, being both an engine developer and a game developer forces us to think further ahead. Our big success with the engine came in the third generation, and that was because everybody else had just been finishing their previous-generation games, while we'd been building Unreal Engine 3 for three years before Microsoft or Sony actually chose their hardware for the generation. So we had this enormous investment that we'd already built up, and as soon as the industry moved over, we were ready for it and everybody else was years behind. That gave us a huge advantage.
We've been doing that informally all along -- really trying to think ahead -- but that experience made it an explicit and clear part of our strategy. We need to stay ahead of the rest of the industry so we can be there with tools and technology when people need them. People don't realize they need them until they need them, so we must realize they'll need them three years before they need them, so that we can actually build them. That's been the challenge.
We have great relationships with the hardware companies -- Intel, Nvidia, and Apple. With the pure hardware manufacturers like Intel and Nvidia, we can talk about our roadmap with them and they can talk about their roadmap with us three, four, or five years out, and we can really line everything up. It's what's necessary. We have to start thinking of technology developers like Epic, Crytek, and Unity a lot like the hardware industry thinks of itself.
When Intel ships a CPU design like the new Sandy Bridge or whatever, it first envisioned that architecture seven years ago. They had to, because that's their development cycle and they have to go through this long series of processes to develop that from scratch to ship. We need to have a lot more engine development work in the pipeline. UE4 was in development simultaneously with UE3 for three years or more, plus a longer research cycle than that in advance. It's just necessary for survival and continuing to lead the industry forward.