My Steam Developers Day Talk

It was a lot of fun talking at Steam Developer Days; the whole event was a blast, the virtual reality talks drew a large, enthusiastic crowd, and everyone I talked to had good questions and observations. Here are the slides from my talk, in PDF form. They include the text of the talk as the per-slide notes; I don’t ad-lib when I give talks, so the notes match what I said almost exactly.

Here are some of my previous posts that discuss points from the talk in more detail.

You may also find the slides from my Game Developers Conference talk in 2013 to be useful.

Joe Ludwig’s slides from his talk about Steam VR are here, and related links can be found here, here, and here.

As I said at the end of my talk, I look forward to continuing the conversation with you in the comments!

Update: the talks are online

Videos of Steam Dev Day talks are now posted here. There are four talks about VR: mine, Joe Ludwig’s, Palmer Luckey’s, and one by the Owlchemy guys.

113 Responses to My Steam Developers Day Talk

  1. Milo says:

    Dev Days seemed like a blast this year! I live in Seattle and was hoping to go but couldn’t buy the tickets in time. I played the Oculus Rift at PAX Prime with Titanfall, and being able to use the Rift with a Steam box will be an amazing experience for gamers. Great talk!

  2. Marcus says:

    Hey Michael, since curved wrap-around monitors might be a thing soon, I’m wondering how taxing it is to support anything other than rectilinear / gnomonic projection? I know there have been some experiments with Quake (shaunew/blinky Quake mod), Outerra supports fisheye-like and cylindrical projection but other than that, no luck.

    • MAbrash says:

      It’s not hard to do different projections – after all, a Rift-style design already requires software distortion compensation. However, we may be a ways away from affordable screens that are curved enough to make a difference.
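      A minimal sketch of what that software distortion compensation amounts to, assuming the common radial-polynomial model (the function name and coefficients here are illustrative, not Valve’s or Oculus’s actual code):

      ```cpp
      // For each output (screen) pixel, find which texel of the undistorted render
      // target to sample, so that the lens's pincushion distortion is cancelled out.
      // k1 and k2 are lens-specific constants; treat them as placeholders.
      struct Vec2 { float x, y; };

      Vec2 distortedToSourceUV(Vec2 uv, Vec2 lensCenter, float k1, float k2)
      {
          float dx = uv.x - lensCenter.x;   // work in lens-centered coordinates
          float dy = uv.y - lensCenter.y;
          float r2 = dx * dx + dy * dy;

          // Radial scale grows toward the edges, pre-warping the image so the
          // optics straighten it back out.
          float scale = 1.0f + k1 * r2 + k2 * r2 * r2;
          return { lensCenter.x + dx * scale, lensCenter.y + dy * scale };
      }
      ```

      A different target projection (cylindrical, fisheye, and so on) mostly just changes this per-pixel mapping, which is the point above: the remapping pass is already there.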

      • Marcus says:

        Thank you for your answer.
        If we are years away from that, what about the next best thing – multiple viewports? Many racing sims support that but it’s nowhere to be found in FPS titles. Can you tell us anything about possible future plans / experiments in that area at Valve?

        • MAbrash says:

          I assume you’re trying to get a wider field of view? The problem with multiple screens is seams between them – until someone figures out how to hide them, the seams would break the visual clarity needed for presence. Fortunately, even 90 degrees is pretty good, and 110 is better, so it’s not critical to get a wider field of view any time soon. It would certainly be a plus to go wider, and I don’t know how it’s going to be solved.

          • Are these seams an actual physical gap between the displays or just the fact they would be at different angles?

            If it’s just different angles, I think that instead of rendering to a screen quad, one could render to some other custom-designed primitive that takes the physical screen angles and positions into account.

            I can see it in my head; I’m not sure I’m putting it into words right.

          • MAbrash says:

            Physical gaps. Screens have stuff around the edges.

          • remosito says:

            One screen cut to a different aspect ratio (3:1,4:1) and non-spherical optics?

          • MAbrash says:

            That sounds straightforward, and we’ve tried some similar things, but the challenge is to come up with optics that don’t leave a gap or an overlap area.

          • wang2bo2 says:

            Have you heard about InfinitEye? It uses Fresnel lenses with 2 screens.

          • MAbrash says:

            I’ve heard of it, but don’t know enough to have an opinion.

  3. Ryan McClelland says:

    Thanks for posting the slides! Exciting times for VR.

  4. Jason says:

    Michael, as a long time VR enthusiast I’m really excited with all the new developments. One thing I was hoping to hear announced as part of the SDK was functionality related to full body positional tracking. Can you comment on whether the SDK will eventually contain support for full body positioning?

    • MAbrash says:

      Full body tracking would be awesome, but I don’t have anything to share at this time.

      –Michael

      • Jan Ciger says:

        Full body tracking is fairly pointless in most applications, as you don’t want to physically run and jump those kilometres in that FPS – few people are such good athletes, especially not gamers … Unfortunately most folks jazzed up over things like the Virtuix Omni treadmill or the various motion capture kits showing up on Kickstarter and elsewhere don’t realize this. That type of gear is also notoriously fragile and complex to set up (to don and doff by the user, to calibrate because each user is different and they wear it differently each time, etc.)

        Once you move towards symbolic interaction, where the movement merely triggers a pre-setup action (“gesture”), you lose the need for precise full body tracking – simple tracking like Kinect or even a regular webcam will suffice. That is something a lot more feasible for a game player than a full body motion capture/tracking setup.

        As far as tracking is concerned, the essentials to track are both hands (for interaction, both orientation and position, plus some provisions for pointing and grasping, e.g. buttons, triggers, a glove, etc.) and the head – there mainly orientation; position tracking is a nice-to-have, but it could be “faked” to a certain degree without being exceedingly disturbing.

        • MAbrash says:

          Agreed about interaction not requiring full body tracking. However, tracking the rest of the body is important for presence. If, when I was standing on the ledge, I looked down and saw my legs and feet, it would be a big win. Also, when multiple people are sharing a virtual space, it would make a huge difference to see their bodies move properly. However, how to do this is certainly not a solved problem.

        • Jason says:

          Jan, I agree with you that full body tracking is utilized in only a very limited way in most VR-related applications at present. However, I think that this is a temporary situation and that, due to the presence concerns Michael has mentioned, full body tracking will become increasingly important over time. I also think that, although FPS games are an obvious first use for full body tracking, I can envision a scenario where full body tracking is being driven by other types of games and experiences. My concern is that, without an abstraction layer for full body tracking, there will be fragmentation of the VR market in this area. There are already a number of body tracking systems that are available (e.g. Razer Hydra, Kinect) or in development (e.g. STEM, PrioVR, Kinect 2). Adding support for a wide variety of devices will become prohibitive for game and content developers beyond a particular threshold of tracking options. For these reasons I believe it makes a lot of sense for the VR SDK to eventually support full body tracking as well.

  5. Torkel says:

    No video of the talk?

  6. Rex says:

    Michael, will the refresh rate be programmable? In the case of watching videos in the Rift, the common video frame rate of 30Hz will require a 90Hz display refresh rate for smooth playback.

    • MAbrash says:

      What we’ve built is a prototype, so we haven’t had to worry about that. Certainly it’s possible to implement programmable refresh rate.

  7. Josh says:

    Hi Michael — Enjoyed reading your talk. I think you have captured the essence of the VR movement. Hope you’re wrong about the two-year estimate and we see consumer ‘presence’ VR by the end of 2014! :) Do you think there is a chance?

    • MAbrash says:

      It could happen if the necessary hardware components are already in the pipeline, but if not, it’ll take at least a year to design and build them.

  8. Flávio Ribeiro says:

    Thanks for being a part of this revolution and bringing people’s dreams into reality. VR can’t come soon enough!

  9. Gerred says:

    Very exciting times. Presence could fundamentally transform Steam’s interface, turning it into a virtual space where you open doors to games instead of launching them. There’s also the challenge of re-orienting users from game to OS to a VR app without causing sickness or panic. However, it may be faster to have voice commands or a virtual holographic UI floating in front of you.

  10. rogeressig says:

    I’ve been anticipating the next Rift by peering through my Rift A-lens at my Galaxy S4 AMOLED, and it looks a magnitude better than DK1. I’ve shown my Rift to over 500 people at various festivals and parties, and the reactions have been of astonishment – and that’s running with medium graphics settings off a laptop. It’s given me a strong indication of people’s seemingly inbuilt attraction to this type of experience. I just had the thought that diffused reflective material on the inner casing of an HMD might bounce light into the visual periphery, increasing perceived FOV… cool! …off to get some tinfoil.

    • MAbrash says:

      The problem with having really low-fidelity light in the periphery is that you can turn your eye 30 degrees either way reasonably comfortably, which brings the peripheral light close enough to the fovea so that you can see it pretty clearly, and it looks wrong. If only the lens moved with the eye, that wouldn’t be a problem. Still, maybe there’s a way to get low-res but not inaccurate pixels in the periphery, and that might work.

  11. Al Kaos says:

    Hi Michael,

    I’ve thoroughly enjoyed your blog posts. Thank you for those!

    In your presentation, you mention a 95Hz refresh rate. Will this require games to run at a locked 95 fps? It’s hard enough to achieve 60fps :)

    95 just seems like an odd number (as opposed to 90 or 120). Could you elaborate on this and on whether g-sync or equivalent technologies can lower the threshold required for presence?

    Thanks!

    • MAbrash says:

      95 Hz isn’t a magic number; it’s just as fast as we were able to drive our panels. 90 is probably fine. However, yes, it will be necessary to render at the refresh rate, unless some kind of interpolation can be devised that works well enough. That is challenging for 2014 games, but not for 2005 games, which were pretty good looking. For instance, TF2 runs at a couple of hundred Hz.

      Anyway, VR will definitely make the GPU vendors happy :)
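      For concreteness, the frame-time budget that refresh rate implies (plain arithmetic, nothing engine-specific):

      ```cpp
      #include <cstdio>

      int main()
      {
          // Time available to render both eyes, plus distortion correction, per refresh.
          const float rates[] = { 60.0f, 90.0f, 95.0f, 120.0f };
          for (float hz : rates)
              std::printf("%6.1f Hz -> %5.2f ms per frame\n", hz, 1000.0f / hz);
          return 0;   // at 95 Hz that's roughly 10.5 ms, every frame, with no hitches
      }
      ```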

      • JWR says:

        Has any thought been given to caching frames (possibly oversized for the viewport) locally at the display, and panning across the frame based on head tracking, to achieve the 90+Hz image refresh from a lower or unlocked scene-update rate?
        This would allow for smooth image movement with minimal processing and transmission delay, and would not require the graphics card to hit extremely low frame-processing times.

        • MAbrash says:

          Yes, this was discussed at length here, especially in the comments. There are potential issues with translation and spherical distortion, and especially with moving objects.
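          For anyone who missed that earlier discussion, the idea in question is roughly the rotation-only reprojection sketched below (simplifying assumptions throughout: ideal pinhole projection, no translation, no in-scene motion – which is exactly where it breaks down):

          ```cpp
          #include <array>

          struct Vec2 { float x, y; };
          struct Vec3 { float x, y, z; };
          using Mat3 = std::array<std::array<float, 3>, 3>;

          Vec3 mul(const Mat3& m, Vec3 v)
          {
              return { m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z,
                       m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z,
                       m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z };
          }

          // (u, v) are normalized coordinates in [-1, 1] for the image being scanned out
          // now; nowToCached rotates directions from the current head pose back into the
          // pose the cached frame was rendered from. Returns where to sample that frame.
          Vec2 reprojectRay(float u, float v, float tanHalfFov, const Mat3& nowToCached)
          {
              Vec3 ray = { u * tanHalfFov, v * tanHalfFov, -1.0f };  // ray through this pixel
              Vec3 old = mul(nowToCached, ray);                      // same ray, cached-frame axes
              return { old.x / (-old.z * tanHalfFov),                // back to normalized coords
                       old.y / (-old.z * tanHalfFov) };
          }
          ```

          Anything that moved within the scene between the cached frame and now gets reprojected as if it were static, which is why moving objects stutter and strobe.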

  12. Exciting stuff indeed, thanks for sharing.

    I’m curious about your comment that “normal maps don’t look good, and textures sometimes do and sometimes don’t” – can you expand a little more on that? Do you think it’s an artifact of the lighting model used, or something else?

    • MAbrash says:

      What I’ve seen is that normal maps don’t look good because in VR you can tell that they’re actually flat. However, someone’s recently told me that normal maps work well for them, so I guess that’s yet to be determined. Basically, the answer is that I don’t have any firm rules at this point. I’ll let you know when I do :)

      • I was just playing HL2 VR last night and saw the problem alright. It’s very obviously flat and in some cases the normal mapping almost makes it look worse due to the fact that the lighting contradicts what your depth perception is telling you. It seems that some variation of parallax mapping will be necessary at the very least. I’ll have to do some tests.

        • WormSlayer says:

          HL2 didn’t actually get normal mapping until quite late in development, so its use is a bit spotty and inconsistent throughout the content. Generally it doesn’t hold up well to close inspection though, so yeah, we’re going to have to use parallax or tessellation, or something.

      • hughJ says:

        Would definitely need to replace normal maps with something that’s perspective sensitive (parallax mapping, etc.). Lighting tricks obviously aren’t enough if each eye is seeing the same data. I suppose you could still use normal mapping for distant surfaces that lie outside the bounds of depth provided by stereoscopy.

      • ccsander says:

        Normal maps still look good in VR if they are on very small details (1-2mm max) like wrinkles on skin or grain on wood, so they will still have a place in VR for getting that extra fine detail on a character or model. This may become less true over time as the resolution of the panels goes up, but hopefully that goes up at the same rate as the system’s ability to render more geometry.
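        For reference, the parallax mapping mentioned in this thread boils down to a view-dependent texture-coordinate offset; it normally runs per pixel in a shader, but the math, sketched here with illustrative names, is just:

        ```cpp
        struct Vec2 { float x, y; };
        struct Vec3 { float x, y, z; };

        // uv: original texture coordinate; viewTS: view direction in tangent space
        // (z pointing away from the surface); height: height-map sample in [0, 1];
        // heightScale: small artist-tuned constant. Returns the shifted coordinate.
        Vec2 parallaxOffsetUV(Vec2 uv, Vec3 viewTS, float height, float heightScale)
        {
            float h = height * heightScale;
            return { uv.x + viewTS.x / viewTS.z * h,
                     uv.y + viewTS.y / viewTS.z * h };
        }
        ```

        Because the offset depends on the view direction, each eye gets a slightly different lookup, which is exactly the depth cue a flat normal map can’t provide.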

  13. Joe says:

    loving all of your posts. Thanks for all your work.

    In your slides you talk about the problems with optics, and I wondered if you’d seen the work a small group at NVIDIA has been doing on microlens arrays for displaying the whole light field. I unfortunately don’t have enough knowledge about light field displays to know if it’s a viable solution… but on the face of it, it seems to solve practically every problem with 3D display. The only issue I’m aware of that it creates is a need for massively increased resolution (by an order of magnitude or something) and an insane amount of processing power.

    Would love to hear your thoughts on the matter. I’m really not good at accepting that something may not be completely solvable and that there have to be trade-offs!

    for reference, here is a link to the research https://research.nvidia.com/publication/near-eye-light-field-displays

    • MAbrash says:

      Yes, I am familiar with that light field work, and it’s impressive. However, you nailed the problem – it does require very high resolution and tons of processing power. Given that we already are squeezed for processing power due to the need to maintain frame rate in stereo, and given that we already don’t have nearly enough resolution for a wide FOV, it seems like the time for VR light field displays has not yet come. The potential is huge, though.

  14. Erlend Wollan says:

    Really appreciate the excellent work that you and everyone involved at Valve and Oculus are putting into the VR movement.
    I’m curious about the stance to “help drive VR on the PC”. No question that it will be *the* platform to evolve and develop the VR experience for the coming years. It seems, though, that consoles like the PS4 are just within the ballpark spec-wise to drive certain VR content at 1080p/95Hz? Do you believe VR success on consoles would be counter-productive to the evolution of the format at this point?

    • MAbrash says:

      Simply put, I think VR is going to be so much better on the PC than anywhere else that it’ll be all that matters in terms of VR. Regardless of whether consoles are within the ballpark (obviously they are if you’re willing to simplify rendering enough), three years from now a high-end PC will have an order of magnitude more FLOPs than a console. It’s hard to argue with orders of magnitude, especially when you need all the processing power you can get. Also, hardware can evolve on the PC, but will be frozen on a console, if it even ever appears, and competition will eventually appear, driving that evolution.

  15. Carlo Rivera says:

    Great presentation. You’ve got me super pumped about getting into the VR software industry but I don’t even know where to begin!

  16. Scott Duensing says:

    It was a pleasure to see your talk at Dev Days. I can’t believe you only had 30 minutes! The entire conference was fantastic, but next year, maybe make it two weeks instead of two days? :-)

    Thanks again. Eagerly awaiting video so I can see the sessions I missed!

    • MAbrash says:

      Scott, I’m delighted you enjoyed the talk and the conference. It was a blast for me – just meeting so many interesting developers was great, and it was amazing to see that many people interested in VR!

    • Dmitry Lu says:

      The whole experience at the conference was mind-blowing. Valve made it all so personal and supportive. Not sure about two weeks – the first day was so intense for us that we were exhausted on the second day… You probably didn’t go to the afterparty on the first day :)

  17. Dmitry Lu says:

    Michael,
    I was quite confused… I felt like I was the only VR believer in the audience… How do you explain the 95% of empty seats at the VR Q&A?

    • MAbrash says:

      Q&A tends to be people with specific questions about things they’ve run into. Few people have specific questions at this point, since we talked about a bunch of brand new stuff. So it’s not surprising that not that many people showed up.

  18. Jonathan Huyghe says:

    The slides were very exciting to read! Is Valve also looking at other applications besides gaming for VR? Education is often mentioned, but do you think VR could be a significant improvement over video and images in classrooms?

  19. Michael Labbe says:

    Thanks for the exciting talk. After all these decades of VR hype, someone has publicly attempted to quantify what needs to be accomplished. Finally. :)

    The comment about audio being a multiplier on the experience later in the talk reminded me of an attempt to perform rotational modifications to the audio stream in a first person shooter, many years before VR, and with a simple 2D headset.

    The most immersive one was simulating the rattling of the earlobe that happens in a wind storm when you stand perpendicular to the direction the wind is travelling. As you rotate your head to face the direction of the wind, it gradually trails off.

    I can imagine needing to explore rotational attenuation all over again in VR. Very much looking forward to putting R&D time into such things.

    • MAbrash says:

      I’m really looking forward to the first time I get truly correct audio to go with the visuals – it will be another quantum step up in the experience.

  20. Roman Margold says:

    Hi Michael,

    I only got to experience your presentation via the slides, but I think it’s great! And, because this is the Internets, I have a few comments that no one asked for:

    “It’s worth noting that VR quality suffers noticeably when rendering doesn’t keep up with frame rate, and that it’s going to be a challenge to maintain 95 Hz stereo rendering, especially as resolutions climb.”

    I got to try the Rift at the last SIGGRAPH and I was completely amazed by the body reaction. In that respect, I really like your point about the distinction between immersion and presence. Personally, I don’t think the huge amount of detail that’s been added to games in the last decade added that much to the overall experience (is it heresy to say that as a rendering guy?). In many cases it just makes things much harder to see, forces game design to add visual cues (which further break the immersion), makes worlds smaller, and keeps the framerate lower, further making the experience hitchy and less enjoyable. And, at the same time, games from the [not too distant] past run at hundreds of FPS, so from my perspective, the only issue with respect to framerate is to actually make use of it in terms of low latency and smoothness.

    “..my body reacts as if I’m at the edge of a cliff. What’s more, that effect doesn’t fade with time or repetition.”

    I’m curious about what you have to support that statement. This is the one thing I was really worried about w.r.t. VR – shouldn’t the brain become trained over time to distinguish between this and a real situation (/threat)? After all, learning is what the brain is really good at. Also, could the situation be different if a person were exposed to this experience as a kid?

    “3D audio, haptics, body tracking, and input are going to be huge positives for presence, and they’re bigger and harder problems than head-mounted displays.”

    Here, on the other hand, is where I’m very optimistic, especially on the input front. Once VR gets proven as “a thing”, I believe the “community” will solve many other problems very quickly by the sheer amount of brainpower and creativity available.

    Finally, one general question – this being a very subjective experience, are you at all worried about the complexity of getting it tailored well for a high percentage of the population? I’d liken this to 3D imaging, which, in my opinion, is still mostly unsolved, given how many people get uncomfortable at least a couple of times watching a 3D movie (never mind a game!).

    Again, nice talk and thanks for the blogpost!

    Roman

    • MAbrash says:

      The basis for the statement that the effect doesn’t fade is that the effect hasn’t faded for me. Not a statistically significant finding :) But why should it? The brain might learn it wasn’t real if I kept bringing that to its attention, but within the VR experience there actually is a drop, for all intents and purposes. I guess if I stepped off the ledge many times to see that there wasn’t a drop, something might change.

      Agreed that once VR is rolling, a lot of amazing stuff could happen in a hurry.

      I don’t have an answer about tailoring VR across the population. Different people clearly react differently, but pretty much everyone experiences some degree of presence, and as VR technology improves, that should get stronger.

  21. Alan says:

    Are you or your team doing any work with facial expression capture in VR to enhance player-to-player communication? E.g. overcoming the fact that most systems require some sort of goggle… Maybe with Emotiv-like devices? Smiling at another player in a poker game, or perhaps a cheeky wink as you speed past them in a race?

  22. xa4 says:

    Hi,

    Any thoughts on the possibility of enhancing ‘presence’ with a sense of motion from 6DOF Stewart platforms? Especially for seated games in a VR cockpit – flight, race, mech, … games.

    • MAbrash says:

      Obviously they could enhance presence a lot, but I have a hard time seeing them being widely used, except in location-based entertainment – they’re just too big and/or klunky and/or expensive. I’d be happy to be wrong :)

  23. Erik Swan says:

    Thanks for the talk and the slides! These are crazy times.

    How much can you share with us about some of the other demos that you showed off (other than the ledge demo)?

    Do you have a personal favorite or is there one that people seemed to have a really great reaction to?

    Thanks!

    • MAbrash says:

      They mostly would sound pretty unexciting described in words. For example, there’s one where you’re standing in a 3D sea of cubes. Doesn’t sound like much, does it? But the rows and columns of cubes give an amazing sense of parallax.

      There’s no one demo that is the clear favorite. VR is surprisingly individual. For example, some people don’t care about heights, and to them the ledge isn’t a big deal. But I remember one person who refused to step off, and who couldn’t talk about anything else afterward, it affected him so much. It’s all in how you’re wired and how the virtual inputs stimulate your wiring.

  24. Grant says:

    Have you guys found any rules of thumb or tricks/hacks that can be applied to the design of simulations to help avoid breaking the presence/immersion?

    I’m thinking specifically of the visual instability you talked about in your last blog post, where dropping the persistence messed with people’s vision during saccades enough that they felt the room shift – would that suggest guidelines for early VR simulations, like where to put big visual cues around the player to keep them from running into those problems?

    • MAbrash says:

      In that particular case, lower contrast and lower frequency textures reduce the effect, and in normal scenes (as opposed to the blue-on-black grid I had used) you can sometimes see it if you look for it, but generally it’s not an issue. So that doesn’t seem to be a major concern, unless we learn something new. In general, we’re not at the point where we can say what best practices are, but when we figure something out, we’ll be happy to share it with all of you.

  25. Alex says:

    Michael, thanks for this fantastic presentation and the work you guys are doing over at Valve on VR. One comment you made that really jumped out at me was that “the per-degree pixel density of a 1K x 1K, 100-degree VR display is roughly one-seventh that of a big-screen TV, and about one-tenth that of the eye itself.” I’m wondering what implications you feel this has for non-gaming video content. For example, the other night I was watching a documentary about the aurora borealis on my Panasonic ST50 and couldn’t help but think how incredible this kind of content will be when experienced on VR HMDs such as the Rift. There’s no question that the wide FOV, tracking, etc. offered by these devices will make for a compelling experience, but do you think that current limitations around resolution largely offset this? In your opinion, what’s the resolution and/or pixel density that needs to be achieved as a baseline for this kind of content, and if we’re not already there, how far away are we from achieving this? How close do the Crystal Cove and anticipated consumer Rift come?

    • MAbrash says:

      Good questions, to which I don’t know the answers. The only thing I’m sure of is that we need a lot more resolution before you want to be programming using virtual screens in VR.
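      The headset side of the quoted figure is simple arithmetic (the TV and eye comparisons depend on viewing-distance and acuity assumptions, so only the HMD number is worked out here):

      ```cpp
      #include <cstdio>

      int main()
      {
          const float pixelsAcross = 1000.0f;  // ~1K pixels per eye across the view
          const float fovDegrees   = 100.0f;   // spread over a ~100-degree field of view
          std::printf("~%.0f pixels per degree\n", pixelsAcross / fovDegrees);  // ~10
          return 0;
      }
      ```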

  26. Samium Gromoff says:

    Michael, is there anything to be said about CastAR?
    What’s your opinion on it?
    Is there anything to be said about Valve’s relationship with them?
    Does Valve plan some sort of a generic API, which could cover both Oculus and CastAR?

    • MAbrash says:

      Nope, nothing to say.

      Valve has already created a generic API, Steam VR, which you can read about in Joe Ludwig’s slides that are linked from the post above.

  27. Janne says:

    Michael,

    When referring to 110 degrees as being ideal for a 2015+ HMD, are you talking about horizontal or diagonal FoV? As I understand it, the Rift devkit is roughly 90 degrees horizontal and 110 degrees diagonal. So, is there room for improvement in the consumer version?

    • MAbrash says:

      110 isn’t necessarily ideal for 2015 – it’s the widest I know can be implemented effectively, and wider is better.

      I’m referring to the FOV from one side of the lens to the other. The nose takes a chunk out on the lower inside edge, and the panel may not stretch quite to the edge of the lens, depending on the configuration and the eye position. So use 110 degrees as a rough guideline.

  28. Daneel Filimonov says:

    Hey Michael, VR has really piqued my interest since I first heard of it, and I’m glad to see you’re making good progress on the matter! I have a question about input tracking that you might be able to answer (or at least speculate on): what do you think it would take to have eye-motion/retina tracking on a VR device? I assume you’d need additional hardware real estate and some way of hiding such input within the VR headset. I’m sure you could use pixel space or install retina-tracking input into/onto the corners of the monitors.

    Also, since current VR devices are already taking a considerable amount of processing power just to display and calibrate their monitors, how much more processing power would you think it would take to gather retina/eye-movement input, if any at all?

    Cheers!

    • MAbrash says:

      I haven’t looked deeply enough into this to have a good answer. It’s obviously possible to put a camera in each side of an HMD, but getting good coverage over the range of eye motion is hard. It also adds heat and power and data transmission requirements, as well as cost. And if processing happens in an ASIC in the HMD, that adds more power and cost, while if it happens on the PC, it steals some amount of processing time (although I don’t know how significant that is). If the goal is to know where your avatar’s eyes should look, that seems doable. If the goal is something that requires very low latency, like foveated rendering, that’s hard; I’m not aware of any good head-mountable solutions to that. However, low-latency eye tracking would open up some interesting ways to improve VR rendering, including not only foveated rendering but also motion blur based on the velocity of the eye relative to each object in the scene.

      • Grant says:

        Do you need full eye coverage for a small camera tracking the eye? As long as you can calibrate it, why don’t you just treat it as an optical trackball? For 99% of cases you should even be able to make out the curvature of the iris for centering purposes, but I don’t even think that’s necessary once you’ve got a calibration button for it.

        I’m not sure what the glossiness of the eyeball would do, but optical mice are pretty good these days, if you could crack some open and figure out how to mount the sensors in a HMD…

        • MAbrash says:

          I’m not quite sure what you’re proposing – can you explain?

          • Grant says:

            Well a mouse/trackball sensor only cares about relative movement, it doesn’t know its absolute position on the coordinate space, but they’re still very accurate.

            So if you can see a small section of the eye from the side of the face, and there’s enough optical detail in the blood vessels in the sclera, and you can filter out eyelids blinking, then you could tell the user to look at a point on the display, press a button to calibrate, and then track the eye surface’s relative motion rather than the pupil’s absolute position.

            I don’t think a mouse sensor/controller would exactly be a plug & play solution at the moment because of the environment they’re engineered for, but if you could do a proof of concept with a software setup and a regular camera, you might be able to work closer to a hardware solution.

          • MAbrash says:

            That’s doable, but what would you use the relative motion for?

          • Grant says:

            That’s doable, but what would you use the relative motion for?

            That’s where the calibration of initial eye position comes in – a mouse uses relative motion to move a pointer around a 2D coordinate space, this would be using relative motion to dead-reckon the position of the pupil, for foveal rendering/relative motion blur etc.

          • MAbrash says:

            That’s absolute position, right? Since the camera wouldn’t move relative to the eye, it would all be absolute after the initial calibration. Anyway, what this really amounts to is putting an emitter and a camera in an HMD, which is exactly what current eye-tracking systems do. The question is whether good enough, affordable eye tracking is possible in an HMD, and in particularly whether it can be as accurate and low-latency as demanding applications like foveated rendering require. So not a bad idea, but basically just a way to prototype what eye trackers already do.
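            In other words, the proposal reduces to dead-reckoning gaze from relative deltas after a one-time calibration, something like the sketch below (names and units are illustrative; sensor drift is one reason real trackers solve for absolute pupil position instead):

            ```cpp
            struct Gaze { float yawDeg, pitchDeg; };

            class RelativeGazeTracker {
            public:
                // Called once while the user fixates a known calibration target.
                void calibrate(Gaze knownGaze) { gaze_ = knownGaze; }

                // Called for every relative-motion sample from the eye-facing sensor.
                void onDelta(float dYawDeg, float dPitchDeg)
                {
                    gaze_.yawDeg   += dYawDeg;     // accumulated error never gets corrected
                    gaze_.pitchDeg += dPitchDeg;   // without another calibration
                }

                Gaze current() const { return gaze_; }

            private:
                Gaze gaze_ { 0.0f, 0.0f };
            };
            ```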

  29. Kyle Marcroft says:

    Has there been any testing done with people who can see the flicker from 60Hz light sources? It is enough of a problem with just watching a TV that I bought a 120Hz display to ease the eyestrain. When I drive, I have to consciously stop from looking at LED Christmas lights because they register to my eyes as something moving near the road. Likewise I am concerned about the combined effects of head movement and eye movement while in VR.

    • MAbrash says:

      I have the same issues – LED Christmas lights are a problem, for sure – and that’s why our prototype is running at 95 Hz, a speed at which no one has detected flicker. Flicker varies with contrast, but I think the acceptable refresh for flicker will turn out to be lower than 95 and higher than 70. Probably something like 85 or 90, given that CRT flicker seemed completely solved at 85 Hz.

      • Kyle Marcroft says:

        Judging by TV showroom experience, I can see a huge difference between 60Hz and 120Hz monitors, a small difference between 120Hz and 240Hz, and little or no difference between 240Hz and 420Hz. But that is with static viewing. Hopefully 95Hz is enough for me, but I will need to try it out and see for myself eventually.
        Thank you for responding.

  30. Michael Blix says:

    “Virtual reality … over the next few years may be as exciting as anything that’s ever happened in games.” What if the ‘in games’ turns out to be redundant?

    It’s simply amazing to think of not only the longer term possibilities, but even just the near term impact, say within the first 6 months post consumer release. I’d bet only a few are really prepared at all for what’s coming with bringing presence mainstream. I’m strapping in.

    Joe’s slides state that the Steamworks VR API “allows multiple apps to talk to a single piece of hardware at the same time”. Is this as significant as I think it may be, akin to the importance of multitasking?

    • MAbrash says:

      In the long run, VR could affect all sorts of stuff, but games seem like the obvious entry point. Maybe just experiences will be enough, though.

      DirectX and OpenGL let multiple apps talk to the graphics hardware; the functionality you refer to is similar, basic resource management.

  31. JSeb says:

    Hello,

    I also found these new VR experiences exciting.
    I’m a graphics programmer at a game company.
    I’ve already worked with the first version of the Oculus dev kit.
    And although this version suffers from various things you’ve already talked about, the new dimension it gives is actually there.
    I’ve read the slides from your talk; the “Presence” thing looks very promising, and I hope we can try that soon :)

    Here I want to talk about a basic idea that came to me, one I’m pretty sure you’ve already thought about too:
    We could call it a “hemi-cube buffer”.
    I arrived at it for several reasons:

    a) When I used the SDK to make our game render to the Oculus, I realized that many pixels (at the edge of each eye’s image) are computed almost for nothing, due to the compression of the distortion that comes just after.
    I have the feeling the ‘classic’ projection we’re doing before the distortion is not very well fitted to the ‘physics’ of what the eye can see.
    For instance, dev kit 1 has a 1280×800 display, 640×800 per eye; due to distortion we have to render close to 1730×1080 pixels for a 110° FOV.
    (Sadly I don’t remember the exact scale values, but I’m pretty sure it was more than the one given in the Oculus SDK overview.)
    If the lens FOV were bigger than 110° we’d have to scale by a much bigger value (due to the nature of the projection: an edge pixel’s apparent size gets smaller as the FOV gets bigger), and the projection would again be a worse fit.
    This (mapping pixels to the eye) is a point that isn’t discussed in your blog, beyond “more resolution is required”.

    b) I understand that minimizing latency and maximizing FPS are key points.
    But like you said, raising the FPS means more CPU/GPU power and more required bandwidth for the link.
    If the device could rotate the displayed image (according to head rotation) for a very short period, without needing another buffer from the application, that could improve the perceived FPS/latency with a good approximation.
    (Here I assume that over a very short period the virtual world’s motion can be ignored while head rotation is taken into account; head translation creates parallax and must also be ignored.)

    ——————–
    So the idea is to send the device a buffer that looks like a “cube map”; I feel that a half-cube map (one face fully facing, and 4 half neighbouring faces) could be enough.
    A half-cube map is 180° FOV, which is quite a bit more than the current 110°, but:
    1) the pixels’ apparent angles vary less, so they fit better with the physics of what an eye can see
    (I haven’t done the math to get the actual needed resolution of this cube map, but I feel it shouldn’t be too high);
    2) those 180° give the device freedom for head rotation within the given frame.

    For instance, the device could refresh at 120Hz while requiring only 60Hz half-cube (world) updates; the interframe is done entirely in the device.

    Obviously that means the distortion correction must be done in the device itself (with the cube map buffer as input), but the current pixel shader formula doesn’t look complex to implement.

    Another benefit is that, as the distortion step is now hidden from the application, the display tech can evolve without changing applications (for instance a laser scanning a hemispherical screen in front of each eye – I like this idea too, but it’s another subject).

    Obviously there are some drawbacks, like:
    A) the storage of this half-cube (it doesn’t seem easy to cut a cube in half in graphics APIs);
    B) formatting it for the display (it could just be a shader that copies faces into a 2D format);
    C) the embedded distortion costs some engineering and adds to the final price;
    D) image-space post-FX shaders must take cube faces into account;
    E) head translation is skipped during the interframe: I expect this won’t hurt too much.

    Again, I’m not convinced it’s “the idea” that will improve VR devices, but I think it could help.

    • MAbrash says:

      It’s a clever idea, and one that was discussed at length in the comments here. If there’s not significant translation, it can work for static scenes, but it falls apart for dynamic scenes; we’ve tried it, and moving objects very visibly stutter and strobe, which is what you’d expect given that the fix-up for head motion (which is actually a proxy for eye motion) doesn’t account for movement within the scene.

  32. Dan Ferguson says:

    Excellent talk, Michael. I wish I were in a position to attend and participate. I especially liked the slide on page 32 showing a room full of fiducial markers. I tried doing something similar as my final project for my second-ever programming class in May, but my coding skills weren’t up to snuff. I’ve spent the last few months educating myself on tracking technologies and have decided that optical fiducials won’t work for my needs. I have some solid, testable ideas, but I’m curious; What tracking technologies are you currently investigating?

    I know everyone is eagerly awaiting the release of updated versions of the Rift, but I’m off the main stream and it would be nice to have something capable of producing “Presence” that could be used for research and development without waiting until 2015. Is there any chance you’ll share details on your prototype(s)? Even hints would be helpful :)

    • MAbrash says:

      We’re not ready to talk about tracking technologies, because we don’t yet know what will pan out. What was used for the demo was MPTAM and an IMU, with an EKF for sensor fusion.

      What would you like to know about the prototypes?

      –Michael

      • Dan Ferguson says:

        If you’re not ready to talk about tracking I’ll try to restrain my burgeoning curiosity. Is the M for “mini”, “mobile”, “multiple”? Are you able to maintain absolute 1:1 positioning? I was under the impression that you could only maintain relative positioning.

        On the prototype I’d like to know all the parts and model numbers, detailed build and calibration instructions, and source code for demos. But I’ll settle for anything. I guess my big question is, what display and driver are you using?

        • MAbrash says:

          ‘M’ is multiple. We do have absolute tracking and orientation, which is necessary for really good VR.

          The display is an OLED cellphone panel (two, actually). The driver is a custom board we made.

  33. Name says:

    Interesting talk!

    Although, as someone who hasn’t tried any HMDs yet, the idea of seeing a completely realistic and believable image while sitting still and interacting with the virtual environment with a mouse and a keyboard seems fishy at best, at least when it comes to shooters. Are you working on different VR input devices (something like the Wii Mote and the Power Glove perhaps?) at Valve as well?

    • MAbrash says:

      As I mentioned in the talk, input is a key area that’s pretty much totally unexplored. I’ll post when we have something useful to share about input.

  34. Shawn says:

    Awesome talk, my excitement about all of this makes 2015 feel so far away.

    Anyways, one of the things that I’ve spent some time thinking about in VR is experiences that might be a step or two beyond what the average consumer would consider putting into their home, but would greatly benefit from a level of presence like you describe. A basic example would be a helicopter simulation that placed the user on a chair attached to a mechanical base that moved and tilted to provide another level to the experience.

    Have y’all experimented with anything like that yet? Obviously it would require the software to differentiate between HMD movements due to the player’s head and movements due to the entire base moving, and I imagine that getting that even a little wrong could make the player very uncomfortable.

    All of my ideas along these lines create a bunch more problems to solve (although good full body tracking would help with a lot of them), but I think there’s some potential for VR + some well designed and integrated physical features to result in the creation of a new class of arcades that provide experiences that even the best consumer HMD couldn’t do alone.

    Keep up the good work, and thanks for sharing what y’all are learning.

    • MAbrash says:

      There’s certainly interesting potential there – I bet the Disney Imagineers could tell us a lot about such things. However, while location-based gaming certainly has interesting potential, our focus is on in-home gaming, and it’ll be a long time before most people have mechanical chairs in their homes.
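      On the tracking question raised above – separating head motion from base motion – one way to frame it: if the platform’s pose is known (tracked, or commanded and fed back), the renderer should use the head pose expressed in the cockpit’s frame rather than the room’s. A sketch under that assumption, with minimal hand-rolled transform math:

      ```cpp
      #include <array>

      struct Vec3 { float x, y, z; };
      struct Pose {
          std::array<std::array<float, 3>, 3> R;  // rotation
          Vec3 t;                                 // translation
      };

      static Vec3 rotate(const std::array<std::array<float, 3>, 3>& R, Vec3 v)
      {
          return { R[0][0]*v.x + R[0][1]*v.y + R[0][2]*v.z,
                   R[1][0]*v.x + R[1][1]*v.y + R[1][2]*v.z,
                   R[2][0]*v.x + R[2][1]*v.y + R[2][2]*v.z };
      }

      static Pose compose(const Pose& a, const Pose& b)  // a applied after b
      {
          Pose out;
          for (int i = 0; i < 3; ++i)
              for (int j = 0; j < 3; ++j)
                  out.R[i][j] = a.R[i][0]*b.R[0][j] + a.R[i][1]*b.R[1][j] + a.R[i][2]*b.R[2][j];
          Vec3 bt = rotate(a.R, b.t);
          out.t = { bt.x + a.t.x, bt.y + a.t.y, bt.z + a.t.z };
          return out;
      }

      static Pose inverse(const Pose& p)  // rigid-transform inverse
      {
          Pose out;
          for (int i = 0; i < 3; ++i)
              for (int j = 0; j < 3; ++j)
                  out.R[i][j] = p.R[j][i];
          Vec3 rt = rotate(out.R, p.t);
          out.t = { -rt.x, -rt.y, -rt.z };
          return out;
      }

      // worldFromHead: HMD pose as tracked in the room.
      // worldFromPlatform: the motion platform's pose in the room.
      // The result is the head pose relative to the cockpit, so the platform's own
      // motion doesn't get double-counted as head motion.
      Pose platformFromHead(const Pose& worldFromHead, const Pose& worldFromPlatform)
      {
          return compose(inverse(worldFromPlatform), worldFromHead);
      }
      ```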

  35. Paul says:

    Hi Michael, I was really impressed by your talk at Steam Dev Days, which I just got a chance to catch on YouTube. I’m 100% excited about VR and believe it is the future; I’m actually developing my own VR game in my spare time.
    For my day job I work in the games industry as a user interface artist at a larger developer. In traditional console/PC games the UI we make would ideally be diegetic (in-world, as in Dead Space) 100% of the time; however, much of what we would love to do does not always work in practice. For example, in a third-person game where the camera is freely moving rather than fixed, there are many readability concerns – you couldn’t read an ammunition display on a gun from 50 feet away. I believe that rethinking UI and HUDs will be crucial in VR, especially for non-first-person games. I’m interested to see if there is any documentation or research into this, or anywhere the conversation is being held, as I would love to be a part of it.
    Best regards, and keep up the great work – I can’t wait until I get to fly a TIE fighter with Vader.
    Cheers!

    • MAbrash says:

      VR UI is far trickier than people imagine, and as you say, UI should be in the world, but that’s easier said than done. (Everyone thinks it can just be the same HUD we’re used to – but when you put the same HUD display in each eye, that means that by stereopsis it’s at infinity, yet it draws in front of everything, which means it’s nearer than everything – and the brain really doesn’t like that!) I’m sure there is research on it, but I am not personally aware of it, although Joe Ludwig discussed this in the context of Team Fortress 2 VR in his talk at GDC.
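      To make the stereopsis conflict concrete: feeding the identical HUD image to both eyes means zero disparity, i.e. “infinitely far away”, while occlusion says “closer than everything”. Giving the HUD element a finite depth per eye resolves that; the numbers below are purely illustrative:

      ```cpp
      #include <cstdio>

      int main()
      {
          const float ipd      = 0.064f;  // typical interpupillary distance, ~64 mm
          const float hudDepth = 2.0f;    // place the HUD quad 2 m in front of the viewer

          // Apparent horizontal slope (x/z) of a centered HUD point as seen by each eye;
          // identical images in both eyes correspond to a slope difference of zero,
          // i.e. hudDepth -> infinity.
          float perEyeSlope = (ipd * 0.5f) / hudDepth;

          std::printf("At %.1f m the HUD point shifts %.4f (x/z) toward the nose per eye\n",
                      hudDepth, perEyeSlope);
          return 0;
      }
      ```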

  36. Jared says:

    Thanks for the talk and all the comments, Michael. On the topic of the normal maps problem, has anyone looked into alternative methods of 3D rendering besides triangle rasterisation, and how they relate to VR? Like sparse voxel octrees, for example? I know John Carmack has looked into this some and come up with the feeling that the tools are not quite there yet to utilize them, but this answer has left me very unsatisfied. I feel that as polygons get to the point where they are as small as pixels, they will not need to be polygons anymore, and genuine geometric detail, as opposed to faked detail (normal maps, parallax maps), will be needed as stereo rendering becomes popular (as VR will likely popularize stereo 3D). Additionally, it seems to me that one could use the LOD hierarchy of an SVO in the periphery of a VR display. Voxel cone tracing into a scene could cheaply provide the information to fill peripheral pixels at whatever LOD the hardware can afford (very accurately, but at very low resolutions if needed). I would think this ability to easily control the level of detail of rendering would be very valuable as peripheral vision and possibly future eye tracking become established concepts in VR rendering. So I am curious: is VR currently guiding the future of real-time 3D graphics rendering as it approaches alternatives to standard polygon rasterisation (SVOs, user-guided tessellation, etc.), and what might be the best option for VR? I personally am excited by the prospect of exploring highly detailed SVO worlds via VR. I would think it could easily affect features of the new Source engine.

    • MAbrash says:

      Variable level of rendering detail is certainly promising; foveated rendering is one example, although still polygon-based. And geometric detail does seem to often be relatively more effective in VR than on a screen. So good points. However, polygon rendering is currently still generally the fast way to do things, because it’s what the hardware’s built to accelerate most effectively, and also, as you say, tools support it. I don’t know what will be the best rendering approach in VR, and it’ll be fun figuring that out.

  37. Inspiring talk!
    With our setup at Immersix, we have better than 1mm accuracy on the x and y axes, and about 3mm on the z axis, for position. For orientation, it’s about 5mrad. All done with a regular IR camera and 6 LEDs. So it’s in the ballpark already. Joe Ludwig and Pravin Bhat saw it a few months ago.
    Our setup uses a TV as the display, transforming it into a window onto the virtual world. It’s like a hybrid presence, or a portal in your living room (no pun intended). 99% of the latency comes from the TV itself, but with this setup it is less problematic, since the objects have lower angular velocities, as the turning radius is larger.
    Nausea is not an issue, and the gamer is wireless and can walk around. Also, the product will be significantly more affordable.
    Moreover, you don’t need to wait for 2015, so it could also be a bridging technology for the pure VR experience.
    We would love to integrate it with Steam.
    You can check out our demo video with a mod of Half-Life 2: Deathmatch, and we will have a new video soon, for our Unity3D games.

    • Joe Ludwig says:

      You can experiment with all of the Steamworks VR games on your system by writing your own Steamworks VR driver. All of the source code for the runtime for that API is up on Github: https://github.com/ValveSoftware/steamworks-vr-api

      The driver API expects two eyes, so you might need to do some odd things with that second eye to get it working. If you have a one-pixel viewport in one of the corners it should all work OK, and you can see how well it works in practice.

  38. dxtr says:

    How about Euclideon’s software – don’t you consider it to be at least part of a solution for high-frequency 4K presentation of virtual worlds?

    http://www.youtube.com/watch?v=csvmRgi0gZQ&list=UUI9bdH0LDRCVxeeAFC1Sk3w&feature=c4-overview

    • MAbrash says:

      I don’t know enough about it. Maybe it could be one way to present data fast enough, depending on what it’s designed to deliver (for example, whether it could hold 95 Hz stereo all the time or only most of the time).

  39. Your explanation of presence, and the factors required, is so clear and makes so much senses (puuuuuuun). The part that you hit on that I would like to see more of is your take on 3D audio. I have been a fan of binaural recordings since the behind-the-scenes for Monsters, Inc. I have wondered a lot lately why we aren’t rendering sound the same way we render light or calculate physics. Please do an in-depth on 3D audio next? @danielshealey

    • MAbrash says:

      I think 3D audio will be a huge multiplier for presence. However, it turns out that acoustic propagation has complications that ray tracing of light doesn’t. For one thing, sound diffracts significantly in normal environments. For another, there’s an important time component to modeling sound, where light is instantaneous, for all practical purposes. The combination of those two factors makes good environmental sound modeling very expensive, so shortcuts are needed to do it in realtime. But I’m confident that will be worked out in the not too distant future, and the result will be amazing.
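      As a tiny illustration of that time component – just the direct path’s propagation delay and a simple distance falloff; diffraction, reflections, and head-related filtering (the genuinely expensive parts) are deliberately left out:

      ```cpp
      #include <cstdio>
      #include <cmath>

      int main()
      {
          const float speedOfSound = 343.0f;  // m/s in air at ~20 C; light is effectively instant
          struct Vec3 { float x, y, z; };
          Vec3 source   = { 10.0f, 0.0f, 5.0f };
          Vec3 listener = {  0.0f, 0.0f, 0.0f };

          float dx = source.x - listener.x, dy = source.y - listener.y, dz = source.z - listener.z;
          float distance = std::sqrt(dx * dx + dy * dy + dz * dz);

          float delaySeconds = distance / speedOfSound;              // ~33 ms at ~11 m
          float gain = 1.0f / (distance > 1.0f ? distance : 1.0f);   // inverse-distance falloff

          std::printf("distance %.1f m -> delay %.1f ms, gain %.3f\n",
                      distance, delaySeconds * 1000.0f, gain);
          return 0;
      }
      ```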

  40. Russ says:

    I think low resolution eye tracking could be done without a camera, just an IR sensor or two. Just sweep a portion of the eye during the off time of the LCD with IR (row of LEDs?) and detect the return with the IR sensor. It could be in one direction only (X positioning) or an X and an Y scan. You get blinks this way too. I don’t think this technique is used in other eye trackers because of interference from other light sources.

    And of course you have the option of electrooculography, which apparently has low computational requirements and is low latency. The headset would easily cover the electrode placements required.

    https://www.youtube.com/watch?v=-QXGiZBDkUw

    • MAbrash says:

      Interesting ideas, but whether low resolution would be useful depends on what you mean by low resolution. Accuracy to within, say, a degree would be okay; five degrees probably wouldn’t. And it has to be reliable under all circumstances. Also, you need to sweep the whole eye, because you need to find the pupil. In fact, to really know where the eye is pointing, you need both the bright pupil and the dark pupil.

      I haven’t seen anything about the accuracy of electrooculography, but I’d be surprised if it was degree accurate. Also, building the electrodes into an HMD and getting them to land properly on all the face morphologies out there would be quite a challenge.

      • Russ says:

        By low resolution, I just mean enough to convey the direction of someone’s gaze to another user in the same virtual space; for that you may not need to find the pupil, just the iris.

        I would also be shocked if electrooculography is in any way accurate, but again, it would just be used to show other users the direction of someone’s gaze.

  41. LunyAlex says:

    A shift in how developers approach mechanics, visuals, graphics, performance, and optimization will obviously have to take place for any VR-centric game title to come.

    I was wondering if you guys were thinking of initiating any “educational” projects to speed up the adaptation process later on.

    “Tips and Tricks on how to make VR Not Suck”

    “Idiot’s Guide to VR Games Development”

    For example: I’d assume geometrically simplistic game worlds with focus on stylized design will be a natural starting point, if only because everyone will want to make a Tron Game (heh…). But I’m not sure if that’ll really be necessary.

    • MAbrash says:

      Excellent point! I’m sure there will be those sorts of developer resources from several sources – just as soon as there’s a working body of knowledge about how to make VR experiences.

  42. Natalia says:

    Hello Michael, thank you for your inspiring talk at Steam Dev Days, and thanks for the slides and video; we made a Russian translation to let Russian developers look deeper into VR technology: http://valvetimes.com/4014-kakoy-virtualnaya-realnost-mogla-byi-dolzhna-i-budet-cherez-dva-goda-doklad-michael-abrash/

    I also have a question about health. Could VR cause mental distress? Is there a relationship between the amount of time spent in VR and the amount of time spent disoriented in the real world afterward? Do you propose any age restrictions for using VR at home?

    Thank you!
