Chasing the Holodeck: Two big fat Q&As
by Kris Graft on 10/30/13 10:59:00 am   Editor Blog   Featured Blogs

The following blog post, unless otherwise noted, was written by a member of Gamasutra’s community.
The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.

 

Kris Graft (@krisgraft) is EIC of Gamasutra

This week, I ran a piece on the "holy grail" of the holodeck, and it included insight from developers working on the USC-backed Project Holodeck and Tuukka Takala's RUIS at Aalto University in Finland. There was quite a bit more information from our correspondence, so here are the full email Q&As I conducted for our ongoing Advanced I/O week. Enjoy!

Following answers provided by:

James Iliff
Producer
Project Holodeck

Could you give me a bit of background on Project Holodeck, and how and why your team started developing the project?

Project Holodeck got started back in March 2012 at the IEEE VR conference in Orange County.  Two of my colleagues, Nathan Burba and Palmer Luckey, and I realized that there was a real need for inexpensive virtual reality, and not only that, but that components were now cheap enough that we could actually create fully immersive VR gaming at a consumer-level price point.

We had previously been working with professional motion capture systems that used numerous cameras, mocap suits, and trackers all over your body, along with expensive HMDs that used heavy lenses or microdisplays.  This kind of experience was incredible - your entire body could be tracked in real time and you could move around and interact in a massive VR space.  The head-mounted display was wide field-of-view, and you had a very strong sense of presence in the environment.  It was near 100% immersion.

But this kind of setup is incredibly expensive.  $200K for a mocap stage, thousands more for these military-grade HMDs, and you could scarcely imagine a consumer fitting this into their home.  And because these systems were primarily used by research communities, the content was primarily research related.  There was nothing inherently entertaining about it whatsoever.

So our goal was to create a platform that offered about 90% of the immersion of these super expensive systems, but for less than 1% of the cost.  And with that consumer-level price point, we wanted to start creating virtual reality games that everyone could enjoy and engage with.  

Of course we stole the concept of the Holodeck from Star Trek: The Next Generation and christened our initiative “Project Holodeck.”  Not only because we're super geeks, but because the Holodeck is that ultimate holy grail virtual reality experience.  While we're still far away from a true Holodeck experience, it is something that we can all really aspire to.

Palmer then began working on his next head-mounted display prototype he called the Rift, and planned to put it on Kickstarter for the VR community on MTBS.  After John Carmack tried the prototype and repped it at E3 2012, the hype for the Rift exploded, and the Kickstarter raised over $2.4 million in funding last August.  Oculus has since produced thousands of kits for game developers.  The Rift is a massive success and Oculus is really kicking ass right now.

Around the same time as the Oculus Kickstarter, Project Holodeck got accepted into the Advanced Games program at USC, and over a period of 9 months we developed our first functional prototype using early versions of the Rift, Sixense motion controls, and PS Moves. We also developed several games for this platform in order to show that true VR in a physical space wasn't just technologically feasible, but it was also insanely fun.  

Today we are continuing this vision - we have some big announcements coming up soon so I can't go into too much detail yet, but it's our ultimate mission to provide everyone with the tools they need to create these kinds of VR experiences in the simplest and most convenient way possible.

What are a few of the lessons you've learned that you'd like to pass on to game developers who are thinking of creating games or VR that involve movement in a physical space?

The biggest lesson we've learned is to take full advantage of three spatial dimensions; otherwise the game will feel more like a port of a traditional first-person game instead of a made-for-VR game.  This may seem like an obvious point, but it has profound implications across all facets of game design, such as art, interactive items, AI, audio, lighting, and everything in between.

For the first time ever players aren't just looking at a screen - they are living within a space.  It's intimate. This is the key difference that VR brings to the table, so the best VR games and experiences will take full advantage of the sensation of space that players are feeling.   

For instance, place the most interesting and detailed models near eye level, so players will get up close and examine the hell out of them.  The closer an object is to a player's face, the greater the sensation of depth and parallax.  Make weapons as juicy as possible, and assuming you are using motion controls, have players manually reload their guns with new clips.  Have slow-moving projectiles in your games so players can dodge, or use obstacles that force players to watch their step.  Have fully written notes sitting on a desk for players to pick up and read, have monsters sneak up behind them and breathe down their necks.  Make players jump out of a plane, or climb ladders, or press elevator buttons, or pull a cord to start a chainsaw.  By focusing on the things that involve - and invade - a player's space, you will be taking the fullest advantage of virtual reality.

Audio turns out to be way more crucial than you expect.  This is usually the case with other mediums, but it's particularly important in VR.  Having robust soundscapes throughout your levels can make all the difference.  As an example, in one of our game demos called Zombies on the Holodeck, we added numerous rain sound effects in a fairly small environment.  The rain sounds change depending on whether you are outside on the street, or under an overpass, or under a metal overhang, or standing in a doorway, or deep inside a room, or under a wood roof, or next to a window, etc.  Blending these different rain variants throughout the environment helped simulate the feeling of being fully present in it. The more things that subtly change as you move within a virtual environment, the more you feel like you're there.
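The rain-blending approach described above can be sketched in a few lines. This is an illustrative Python sketch, not Project Holodeck's actual code: each rain variant is assigned an acoustic zone, and volumes are weighted by the listener's proximity to each zone so the ambience crossfades smoothly as the player moves.

```python
import math

def blend_rain_volumes(listener_pos, zones, falloff=2.0):
    """Return a volume per rain variant, weighted by proximity to its zone.

    zones: dict mapping variant name -> (x, z) zone center, in meters.
    Weights use inverse-power distance falloff and are normalized to sum to 1,
    so nearby variants dominate without the total loudness jumping around.
    """
    weights = {}
    for name, center in zones.items():
        d = math.dist(listener_pos, center)
        weights[name] = 1.0 / (1.0 + d) ** falloff
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Hypothetical zone layout for a small street scene.
zones = {
    "street": (0.0, 0.0),     # open rain, full volume
    "overpass": (6.0, 0.0),   # muffled drumming overhead
    "doorway": (3.0, 4.0),    # sheltered, close reflections
}
volumes = blend_rain_volumes((0.5, 0.2), zones)
```

In practice each variant would loop continuously with its volume updated per frame; normalizing the weights is what keeps the blend feeling like one continuous rainstorm rather than separate emitters switching on and off.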

If you're not taking full advantage of virtual space, then you're missing out on an opportunity to subconsciously affect the player in a tremendous way. Subtlety is all the rage with virtual reality gaming - because for the first time we can actually do it.

What challenges have you had to overcome with Project Holodeck? What are some solutions you've come up with for the problems you've encountered?

One of the biggest challenges was the first one we encountered - which was how to track a player's entire body in the simplest way possible.  Initially we started out using four Kinects placed around a VR playspace, but it just wasn't working.  It was too jittery.  The Kinect is a great gestural interface, but it does not give you the precise and consistent positional tracking data you need for proper avatar embodiment, and it does not provide rotational tracking at all.  So instead of using an IR system like the Kinect, we opted for a magnetic / optical tracking combo that involved the Razer Hydra and PlayStation Move controllers.  The Sixense magnetic tracking technology in the Hydra ended up providing the most robust data for hand movements, and the PS Move was perfect for placing on the head and tracking a person's absolute position / orientation in space.  Of course the resulting helmet rig looked pretty gnarly but we've since moved beyond that first prototype!

Outside of hardware, there was a lot of new ground to cover on the game design and interface side of things.  3DUI is a big one.  Creating three-dimensional user interfaces in VR that are natural and intuitive was a huge challenge.  The easy thing to do is to make 2D menus in stereoscopic 3D - but that's boring.  Players now have the ability to move around, reach out, and grab things in a VR space.  So it's important to take advantage of the strengths of these new input systems and craft menus and layers that are simple and usable.

There are also a lot of automated systems in traditional games that we tend to take for granted, but when you think about it they are kind of weird and don't work in full-motion VR.  In first-person shooters, reloading a weapon is automated, opening doors is often automated, and pressing buttons or picking up physics objects is done with an invisible set of hands.  The reason it's like that is because traditional input devices like mouse and keyboard are limited in what they can do - but with VR headsets and motion controls these conventions no longer hold true.  We can now physically press buttons and physically throw grenades and physically punch a bad guy in the face.  It's a whole new way to interact with games, and it also happens to be more realistic, natural, and intuitive if implemented correctly.

How do you envision a true "Holodeck?" How far off do you guess that is?

This is a really tricky question, but it's fascinating to muse about!  A perfect simulated reality that is indistinguishable from real life will ultimately take one of two forms: it will either manipulate real light and real matter, like the Holodeck from TNG, or it will remove the middleman and manipulate our perceptions through a brain-user interface directly, like The Matrix.  Depending on the technologies that develop over the next 200 years, there may be hybrids that emerge, such as the manipulation of real light (holograms) combined with haptic gloves, or the direct manipulation of the brain's sense of touch combined with VR / AR contact lenses, or many other such combinations involving other senses.  These are all very idealistic visions that we aspire to, and I think we're getting pretty damn close with the current tech we have on hand!  But there is still a long way to go.  Considering how far we've come since the Industrial Revolution, I would say that a perfect simulated reality of some form - like the Holodeck - will emerge in at least two centuries.

 

Following answers provided by:

Tuukka Takala
Doctoral candidate
Department of Media Technology
Aalto University
RUIS

Could you give me a bit of background on RUIS, and how and why your team started developing the TurboTuscany demo?

Back in 2010 I started working on my PhD by creating 3D user interface applications using Nintendo Wiimotes and camera-based tracking. PlayStation Move and Microsoft Kinect were about to be released, and it was clear to me that motion controllers would eventually revolutionize the way we interact with computers. Back then it was difficult to build applications for these devices, as only hacked drivers were available for PC developers and the application programming interfaces were low level. This presented a huge learning curve for hobbyist developers, whom I believe to be critical for innovation in many fields, including virtual reality and immersive 3D user interfaces. So we set out to create the RUIS software platform, with the intent to make it easier to build virtual reality applications using affordable, state-of-the-art interaction devices.

I personally think that there is a lot of potential in using Kinect together with motion controllers like PS Move, so in RUIS our focus is to allow the creation of novel user interfaces through the use of multiple devices in conjunction. Recently we released a pre-release version of RUIS for Unity, which supports Oculus Rift, Kinect, PS Move, Razer Hydra, and stereo displays. Currently we are ironing out bugs, creating example scenes, and writing documentation.

We created the TurboTuscany demo to present some of our ideas about VR interaction, and to showcase the features of RUIS and what can be created with it. All Oculus Rift enthusiasts know the original Tuscany demo, and it was natural to build on top of that, as the demo provided an easily recognizable environment for our new interaction features. All in all, we have tried to build a demo that would not only show the strengths of RUIS, but also all the VR coolness that we could fit in. For example, seeing your own body and jumping off rooftops with the Rift on feels quite wild, and those things couldn't be done in the original demo.

What are a few of the lessons you've learned that you'd like to pass on to developers who are thinking of creating games or VR that involve movement in a physical space?

Get over the wow-factor of virtual reality, and provide your players with substantial gameplay. Even the most enthusiastic users will eventually get jaded from just watching pretty things in a head-tracked stereo 3D environment. What might work in a demo that is meant to be played for 10 minutes doesn't necessarily work in a full VR game or application. Ask yourself why you want to involve physical movement in your application, and how that enhances the user experience. This is an important question, since motion-controlled interfaces are often laborious to implement.

Keep testing your user interface and ask for feedback from other people. Consider how long the users are supposed to use the application and whether they will have enough stamina for all the repeated movements. Pay attention to ergonomics and avoid the gorilla arm (http://en.wikipedia.org/wiki/Touchscreen#.22Gorilla_arm.22). With Oculus Rift and other HMDs you should avoid gameplay that requires having your neck in a strained pose repeatedly or for a long time.

Be mindful of the limitations of the tracking devices that your application relies on, as those limitations set hard constraints for your gameplay and user interface. For example, Kinect can't really be used for tasks that require high accuracy or a high rate of successful gesture recognition. Oculus Rift developers should push their device to the limit and experiment to see how often yaw drift occurs, as well as the occasional misalignment in pitch and roll.

Use physical buttons on a controller instead of gestures to trigger common actions like firing a gun or performing undo in the game world. This is faster, more effortless, and spares the user from gesture recognition errors.

Polished applications should give feedback to the user when they are about to exit the tracking range (as many Kinect games already do). For users with small living rooms, it would be good to be able to define the playing area so that nothing gets broken while they are reacting to events in the virtual world.
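As a concrete sketch of that idea (the names, bounds, and margin here are hypothetical, not from RUIS), a user-defined rectangular play area can drive a warning that fades in as the tracked position nears an edge:

```python
def boundary_warning(pos, bounds, margin=0.5):
    """Return a 0..1 warning intensity as `pos` approaches the play-area edge.

    bounds: ((min_x, max_x), (min_z, max_z)) of the user-defined area, meters.
    margin: distance from the edge at which the warning starts to fade in.
    0.0 means safely inside; 1.0 means at (or past) the boundary.
    """
    x, z = pos
    (min_x, max_x), (min_z, max_z) = bounds
    # Clearance to the nearest edge, along either axis.
    clearance = min(x - min_x, max_x - x, z - min_z, max_z - z)
    if clearance >= margin:
        return 0.0
    return min(1.0, 1.0 - clearance / margin)

# A 3 m x 4 m living-room play area, centered on the sensor.
bounds = ((-1.5, 1.5), (-2.0, 2.0))
```

The returned intensity could drive a vignette, a grid overlay, or an audio cue; ramping it in gradually gives the player time to react before they actually leave the tracked volume.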

What challenges have you had to overcome with TurboTuscany? What are some solutions you've come up with for the problems you've encountered?

I wanted our demo to complement Oculus Rift's orientation tracking with positional head tracking. Right now there is not a single affordable positional tracker on the market that would be perfect for VR purposes (PS Move comes close, but it's a hassle to get it to work on PC). Instead there are several devices, each with their own pros and cons. Therefore we implemented support for the most common, affordable motion trackers: Kinect, Razer Hydra, and PS Move. It was a lot of work to include all those devices, and we had to write sensor fusion and filtering algorithms to get good results. Thankfully the implementation is now part of our RUIS platform for anyone to use when creating VR applications with Unity. The TurboTuscany demo has four positional head tracking schemes, with the Kinect & Razer Hydra tracking scheme being the most technically elaborate: a Hydra controller is attached to the Oculus Rift, and their rotations are used to infer the rotation of the Hydra base, whose position is estimated with Kinect. I also implemented yaw drift correction methods for Oculus Rift, which use data from the external trackers.
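The yaw drift correction described above can be illustrated with a complementary-filter-style sketch: the headset's fast gyro-integrated yaw is trusted in the short term, while a small per-update gain slowly pulls it toward the external tracker's drift-free (but noisier) yaw estimate. The function name and gain here are illustrative assumptions, not taken from RUIS.

```python
def correct_yaw(rift_yaw, external_yaw, alpha=0.02):
    """Blend the headset yaw one step toward the external tracker's yaw.

    alpha: per-update correction gain. Kept small so short-term motion is
    dominated by the headset's own (low-latency) tracking. Angles in degrees.
    """
    # Use the shortest signed angular difference to avoid a discontinuity
    # when yaw wraps past +/-180 degrees.
    error = (external_yaw - rift_yaw + 180.0) % 360.0 - 180.0
    return rift_yaw + alpha * error

# Simulate a constant gyro drift being reined in by the external reference.
yaw, reference_yaw = 0.0, 0.0
for _ in range(500):
    yaw += 0.05                         # 0.05 degrees of drift per update
    yaw = correct_yaw(yaw, reference_yaw)
```

With this gain the residual error settles at drift_per_update / alpha (here 2.5 degrees) instead of growing without bound; a real implementation would also gate the correction on tracker confidence so a Kinect glitch doesn't yank the view.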

Another big challenge was to get a Kinect-controlled full-body avatar, whose motion in the Kinect's field of view is reflected in the virtual world and its physical simulation, but whose movement and orientation can also be affected with a wireless controller. My colleague Mikael Matveinen implemented this and it works great: if you want to sneak to a door in the game world and lean out to take a peek while keeping your body in cover, you can just act that out in front of the Kinect, and this will work as long as Kinect detects you. At the same time, you can use a wireless controller to orient and move your avatar like in any first-person game, so that you don't actually have to walk for miles in a game like Skyrim.
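The core of that hybrid scheme can be expressed as composing two transforms: a controller-driven root pose (stick movement and turning) with the Kinect-tracked offset of the player inside the sensor's field of view. This is a simplified 2D Python sketch of the idea, with hypothetical names; the actual RUIS implementation drives a full skeleton and a physics simulation.

```python
import math

def avatar_world_pos(root_pos, root_yaw_deg, kinect_offset):
    """Compose the controller-driven root pose with the Kinect-tracked offset.

    root_pos: (x, z) world position moved by the wireless controller.
    root_yaw_deg: heading set by the controller's turning input.
    kinect_offset: (x, z) of the player relative to the Kinect play space,
    so leaning or stepping in front of the sensor shifts the avatar locally.
    """
    ox, oz = kinect_offset
    a = math.radians(root_yaw_deg)
    # Rotate the tracked offset by the controller-driven heading, then
    # translate it by the root position.
    wx = root_pos[0] + ox * math.cos(a) - oz * math.sin(a)
    wz = root_pos[1] + ox * math.sin(a) + oz * math.cos(a)
    return (wx, wz)
```

Because the two inputs are composed rather than mixed, physical leaning stays responsive no matter how far the controller has "walked" the avatar across the game world.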

We had to be very careful about not breaking stuff in our office or crushing the Oculus Rift's signal box with our feet when testing the TurboTuscany demo! We kept tripping on cables from the Oculus Rift, and the solution to this would be to convert the Rift to a wireless version by creating a custom battery pack and using a wireless HDMI and USB transceiver, like one Norwegian guy has done: http://www.mtbs3d.com/phpbb/viewtopic.php?f=140&t=17710&p=129396

We also tried jumping while wearing the Rift to test out our jump gesture recognition, and that's something I can't really recommend, since the way you land and keep your balance relies largely on visual cues, which are confused by the imperfect orientation tracking and display latency.

How do you envision a true "Holodeck?" How far off do you guess that is, and is it even a reasonable goal to pursue? What would you use it for?

A true “holodeck” is definitely a reasonable goal to pursue in the long run, and I believe that to be the ultimate dream for all VR researchers. Right now it seems that head-mounted displays will be the closest thing to a holodeck that we will get in the near future. We need a technical solution that takes eye accommodation into account in order to reduce cyber sickness. A bigger problem is the lack of realistic haptic feedback, which could be 10 or even 20 years away from now. If the matter manipulation technology behind Star Trek's holodeck proves to be beyond our reach, it might be that the next best thing will be achieved via plugging our brains into a computer using optogenetics or other neuromodulation techniques. That could take more than 20 years though.

I personally would use a true holodeck for acting out scenes from movies like “One Million Years B.C.”, “Ghostbusters”, and “Inception”. I would also hang out with a virtual George Carlin, and do some extreme sports like mixed martial arts and base jumping. The question is whether the geeks of the future want to return to the “meatspace” once they have accessed a perfect holodeck.

