Slides from my Game Developers Conference talk

Thursday I gave a 25-minute talk at Game Developers Conference about virtual reality; you can download the slides here. Afterward, I got to meet some regular readers of this blog, which was a blast – there were lots of good questions and observations.

Much of the ground I covered will be familiar to those of you who have followed these posts over the last year, but I did discuss some areas, particularly color fringing and judder, that I haven’t talked about here yet; I plan to post about them soon, in more detail than I could go into during a short talk.

Putting together the talk made me realize how many challenging problems have to be solved in order to get VR and AR to work well, and how long it’ll take to get all of those areas truly right; it’s going to be an interesting decade or two. At least I don’t have to worry about running out of stuff to talk about here for a long time!

Update: Here’s a PDF version of the slides. Unfortunately, I don’t know of any way to get the videos and animations to work in this version, so if you want to see those, you’ll have to use the PowerPoint viewer.

26 Responses to Slides from my Game Developers Conference talk

  1. Matthew Seabright says:

    Thanks a lot for posting that Michael. I really enjoy reading about the challenges of VR. Is there any chance we will be able to see Joe Ludwig’s slides? It sounds like they might be very interesting also.

  2. Random says:

    Dear Michael, could you share the slides in PDF format or something similar? Not everyone has Powerpoint. Thank you!

  3. Ryan says:

    Thanks for posting these! Really appreciate the speaker notes with each slide. Any chance of getting Joe Ludwig’s talk also?

  4. Yuval Boger says:


    Great presentation.

    I agree that cost and resolution are not the only barriers to adoption of consumer VR. To me, the ability to perform more meaningful interaction with the content beyond mouse and keyboard is key, as is the ability to integrate various sensors (motion, camera, position, biometric) into understanding the context of the user. See my post here for some thoughts.

    Also, what do you think about variable-resolution goggles like the old Fakespace Wide5, which had higher resolution in the center of the visual field and lower at the edges? Would that be useful in your opinion, or too much of a programming hassle?


    • MAbrash says:

      Higher resolution in the center is a good approach, and not a problem in terms of programming (it’s just an undistort pass), and in fact that’s true of the Rift, due to the lens distortion. However, remember the fovea can move 25-30 degrees in each direction, so the higher-res area has to be pretty big.

      Everything you mention can affect the VR experience, but some of it will only matter after the basic stuff (like HMDs) works well enough. That’s why I said that it’ll take decades to make VR great.
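      For the curious, the undistort pass mentioned above can be sketched as a simple radial remap. This is only an illustrative sketch; the `undistort` function and the coefficients k1 and k2 are placeholders, not the Rift’s actual distortion parameters:

```python
def undistort(x, y, k1=0.22, k2=0.24):
    """Radial distortion correction of the kind used for HMD lenses.

    (x, y) are coordinates normalized relative to the lens center;
    k1 and k2 are illustrative coefficients, not real device values.
    """
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# The center of the view is left untouched, while points farther from
# the lens center are pushed outward to cancel the lens's barrel
# compression; this is also why the center of the image ends up with
# more effective resolution than the edges.
```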


  5. James Abrahams says:

    Really interesting talk! Would love to learn more about those other problems you mentioned on the slides! :)

  6. Hiccup says:

    Hello Michael,

    Great talk.

    This is my first comment here, as I’m taking my first steps into VR/AR, though your talk gave me a head start on where we currently are. My understanding might be completely wrong here, though.

    You spoke about anomalies that occur when dealing with human perception. Considering that we currently don’t possess the technology to bring pure human perception into the world of VR/AR, can’t we use the anomalies/bugs to our advantage in some way?

    By that I mean: during the good ol’ days of PC game dev, many bugs/restrictions were turned to advantage, by spawning new tiles/players/opponents, or just covering the crack with paper, so to speak, so that the core bugs weren’t really discovered unless one beta tested a game very, very thoroughly.

    How far is that possible when you port a 3D game to VR/AR? Is it possible at all, or would it cause such a perceptual abnormality that it just wouldn’t be convincing to the human brain?

    (I hope I made some sense.)

    • MAbrash says:

      It’s an interesting thought, but I don’t know if there’s a way to apply it to VR. Of course, you could and would design games so that the art, animation, movement, etc., worked well with the limitations of the display, but I don’t think that’s what you mean. The thing is, the kinds of anomalies I’m talking about are core to the way we perceive the world, and perceiving the world as real is core to VR, so I’m not sure it’s possible to paper over them. But I’d be interested to hear any ideas people have!


  7. Brandon says:

    Hello Michael,
    Another great insight! I’ve recently found a newfound obsession in VR, and while I’m not much of a coder or hardware guy, I find your research and progress absolutely fascinating and informative. Not only are you very open about your findings, but I’m really blown away by the fact that you are open to discussion and so proactively involved with the community. I like the way you work.

    • MAbrash says:

      Thank you very much! I have to say it’s been a great experience exchanging ideas with readers of this blog.


  8. Inscothen says:

    Thank you for your research Michael.

    You mentioned the blurring or smearing in HMDs. Would raising the panel refresh rate to 75+ Hz and blinking or strobing the backlight, synced with the frames, reduce this smearing considerably?

    • MAbrash says:

      Excellent questions!

      The higher the refresh rate, the better.

      The shorter the persistence, the better for whatever your eye is tracking – less blurring – but the worse for whatever is moving relative to the eye – more strobing.
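      As a rough back-of-the-envelope illustration of that tradeoff (the velocity and persistence numbers here are just illustrative):

```python
def smear_degrees(eye_velocity_deg_per_s, persistence_s):
    # While a pixel stays lit, the tracking eye sweeps past it, so the
    # image smears across the retina by roughly this angle.
    return eye_velocity_deg_per_s * persistence_s

# Full-persistence 60 Hz panel (pixels lit ~16.7 ms per frame),
# eye tracking a target moving at 60 degrees/second:
print(smear_degrees(60, 1 / 60))   # 1 degree of smear

# Low-persistence strobed backlight, lit only ~2 ms per frame:
print(smear_degrees(60, 0.002))    # ~0.12 degree of smear
```

The short strobe cuts the smear by nearly an order of magnitude for whatever the eye is tracking, at the cost of strobing artifacts on objects moving relative to the eye.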


  9. Mark says:

    There are no animations in the PDF; you could link to them if you want, but this download is great.

  10. Chris says:


    Great and thoroughly fascinating presentation. I tried the Oculus Rift at GDC and then promptly ordered a dev kit – it’s an amazing experience.

    I wondered what your take on depth of field in relation to VR is. I noticed when trying the Hawken demo that when I would look at the cockpit overhanging my head, it looked a little flatter than it should as the background was not blurred in my view like it would be to the naked eye. Do you think that retinal tracking devices would need to be implemented in the VR device to fix this?

    • MAbrash says:

      Depth of field would be great – but is highly non-trivial. It seems to me that it would require both eyetracking and display devices that can actually display depth of field. Kurt Akeley did that by blending multiple screens, but that’s not a promising solution for an HMD. Depth of field isn’t a requirement, but I think it is one of the hard problems for making AR/VR great, and will take some time to solve.


  11. Brennan says:

    What about having the screen move with the eyes instead of with the head? That is, have the screen(s) track the eyes instead of being statically mounted to the face. If the physical screen pixels moved along with the eyes, then judder, color fringing, low pixel density, etc. wouldn’t be such issues, although it would add slightly more complexity to tracking and introduce new latency problems (and probably new cost and weight problems as well).

    • MAbrash says:

      Cool idea – and I think you summarized the problems pretty well. Moving a screen at several hundred degrees per second might cause whiplash :)


    • MAbrash says:

      Also, I don’t think there’d be any way to handle vergence – the panels would collide.

  12. STRESS says:

    In the meantime, Microsoft and the University of Illinois have released their research paper on IllumiRoom. I still think this is a far more sensible approach to this whole subject, at least in terms of VR. And it works today, with a much more limited set of problems, and could likely show up in the next Xbox platform.