Two Possible Paths into the Future of Wearable Computing: Part 1 – VR

Almost exactly twenty years ago, my family and I were living near Burlington, Vermont, and I was working remotely for a small graphics software company in California. They were great people to work for, but I had the sense that their business wasn’t doing well, and living as I was far from potential employers, I had to be proactive in anticipating problems. So I cast about for other work; two different lines of inquiry led to Microsoft, and I ended up as a contractor working on the VGA driver for the first version of Windows NT, which was a little more than a year from being released.

After a couple of months of that, I was offered the opportunity to interview for a full-time position. That was a harder decision than you might think, because if I got the job, we would have to pick up and move across the country. We liked Vermont; it was a good place to raise children, it was beautiful, my wife was working on a master’s degree and putting down roots, and it was a low-pressure lifestyle. I’d have more job security at Microsoft, and my work would be interesting, but we’d be giving up a lot.

In the end, I decided to interview, and when I was offered a job as the NT graphics driver lead, I took it. A few months later, the dev lead for GDI retired, and I ended up in that position for the first two versions of NT. Working on NT was a great experience on the whole. For one thing, Dave Cutler gave me a whole new perspective on what it meant to write good software, and what it took to do that. For another, I helped bring an operating system into existence that directly benefited a lot of people. In his book Flow, Mihaly Csikszentmihalyi describes work satisfaction as a function of having challenging but achievable goals combined with a sense that the work is worthwhile, and my work on NT fell right in that sweet spot.

How did it all work out in the end? Well, I’m not at Microsoft anymore, but we’re still content living in the Seattle area 20 years later, so the move was a good one. Equally important, prior to NT I had worked mostly on low-impact projects for small companies; starting with NT, I’ve gotten to work on interesting stuff that really matters. So things worked out well as far as I’m concerned.

I’d say it worked out well for Microsoft as well, since I played a significant role in getting GDI and the graphics drivers done and shipped, and NT became a huge success. (To be clear, I was just one programmer among many excellent ones on the NT team, and I’d love to work with most of them again. In fact, I’m working with two of them now at Valve; if you were part of that NT team and you’re ready for a new challenge in an amazing environment, drop me a line.)

But here’s the funny part: I completely bombed the first of the five interviews in my interview loop. The interviewer asked me Windows debugger questions; alas, I didn’t know much about Windows, let alone Windows debuggers, back then. I could not possibly have done worse. If they had continued with the usual interview process, there’s no way I would have gotten the job, because I just didn’t have the kind of experience Microsoft looked for in their standard interview loop. However, the next interviewer, Darryl Havens, said, “Okay, that was a waste of time. What do we have to do to get you here?”, and that set the tone for the rest of the day. (Thank you, Darryl!) Darryl knew that the graphics drivers had been badly neglected until I started working on them, and NT couldn’t ship until they were solid; hiring me was the fastest way to fix that.

If Microsoft had stuck to its interview process, I wouldn’t have been hired, and that would not have been good for either Microsoft or me. One implication of this that has stuck with me through the years is that it’s a bad idea to get too attached to a particular way of thinking about or doing anything. The world is complicated and constantly evolving, so it’s essential to constantly reexamine your plans, decisions, processes, assumptions, and mental models to see if they’re still tracking reality.

My first post on this blog talked about why augmented reality (AR) could well be the next great platform shift, and I still think that’s likely to be true. However, as I’ve worked on AR, I’ve been checking my assumptions, and as part of that process I’ve been thinking about whether a drive straight for AR or a path that includes VR as well – especially in the near term – makes more sense. There are good arguments for both sides, and it’s been an interesting exercise in visualizing the future. In this post, I’ll follow one line of thought that argues for an increased emphasis on VR; in the next post, I’ll follow another that concludes that AR should remain the dominant area for R&D, even in the immediate future. I don’t yet know what the correct choice is, so don’t expect any profound conclusions at the end, but the thought processes are interesting in their own right, and provide some insight into how the future of wearable computing could evolve.

As you read, please keep in mind that I’m not saying this is how it will be, but rather here’s a way it could be. The point is not to wrap things up with a neat bit of prophecy, since I don’t know what the future will hold, but rather to get you thinking, and to start a discussion that I’m looking forward to continuing in the comments.

Before I begin, I’d like to make it clear that this post and the next reflect my thinking, not Valve’s, and don’t represent a product or strategy announcement in any way. They’re just thought experiments on my part, trying to catch a glimpse of what promises to be a really interesting future.

A few definitions

If you’re not familiar with VR and AR, VR is the one where you sit down, put on a headset, and find yourself completely immersed in a virtual world like Snow Crash’s Metaverse or Ready Player One’s OASIS (and if you haven’t read Ready Player One, run, don’t walk; it’s a great read, especially if you grew up in the 80’s, but even if not – I didn’t, and I still loved it). AR is the one where you put on glasses and walk around, and find that the real world is still there, but modified to a greater or lesser extent, as in Rainbow’s End’s belief circles or the Rivet Couture virtual society of “To Hie from Far Cilenia.”

So with VR, you might take a seat at your computer, put on your VR headset, and find yourself in Middle Earth or a starship or a Team Fortress 2 level. With AR, as you walk down the (real) street wearing your AR glasses you might find that there are (virtual) aliens shooting at you, or that when you encounter (real) members of your Belief Circle they’re wearing (virtual) medieval costumes and glowing faintly, or, to continue the TF2 analogy, that everyone you see is wearing virtual hats.

The sort of AR I just described, which is what I’m going to talk about in this post, is unconstrained AR – what I call walk-around AR, the kind that works wherever you go. That’s certainly the long-term goal, because it’s a platform shift, but for the next few years it’s something of a strawman, because there are a lot of challenging technical issues to be ironed out before it’s ready for prime time. In contrast, highly constrained AR, for example tabletop or room-scale AR, is considerably more feasible than walk-around AR right now, and certainly has some potentially interesting uses. However, it’s obviously not as generally useful as walk-around AR, is less immersive than VR, and is currently farther from a consumer-ready product than VR, with less capable, more expensive hardware. Nonetheless, constrained AR is a strong counter-argument to VR’s near-term advantage, and will feature prominently in the next post.

There’s also a third sort of wearable display technology, which I’ll call HUDSpace, based on the display of 2D information on see-through glasses, much like having a phone or tablet in view at all times; this is the direction Google appears to be going in with Project Glass. I include in this category very lightweight AR such as having people’s names floating over their heads, arrows to guide you turn by turn to your destination, and information popping up when you’re near points of interest. There’s a great deal of value to this, and it’s clearly going to happen, but it’s considerably less technologically demanding than AR or VR, has little opportunity for deep entertainment experiences, seems largely like an extension of the smartphones we have today rather than a genuinely new platform, and is just way less cool to me, so I’m going to focus on AR and VR.

So if AR is where we’re all headed, why is VR worth bothering with? Two reasons: in the long run, VR-like experiences may be how we use our spiffy AR glasses much of the time, and in the short run, VR is poised to take off well before AR.

Why VR is interesting now

Right now, VR is much closer to becoming a consumer product than AR. Perhaps the biggest reason for this is that VR hardware is more capable and easier to make right now. The Oculus Rift, which is intended to ship at a consumer price, has a 90-degree horizontal field of view; in contrast, I’ve never heard of see-through AR glasses with anything like that field of view at any price, and while they may exist, it’s hard to see how they could be made at consumer prices with anything like current technology. (Video-passthrough AR glasses could of course have the same field of view as the Rift, since all that would be required would be to add a camera, but I don’t think video-passthrough AR will be good enough for a number of years, for reasons discussed here.) Also, because VR is used in a fixed location, it can be tethered, sweeping away a host of hard power problems that walk-around AR has to deal with, and enabling the use of far more powerful CPUs and GPUs. Alternatively, VR headsets can be designed to run for just an hour or two between recharges; in contrast, AR has to have the same order of battery life as a phone or tablet. Furthermore, because VR is restricted to one location, it’s much easier to develop tracking technology for. And since you’re not going to wear a VR head mounted display in public, or walk around with it, it doesn’t have to be as stylish, and while it still has to be light and comfortable, it is considerably less constrained than AR glasses that have to look like fat sunglasses. Finally, VR can use existing controllers initially; you’ll be able to play VR games with standard game pads, for example, although I think new VR input will have to evolve quickly in order for VR to really reach its potential. In contrast, the input scheme for AR is an open question.

In terms of hardware problems to be solved, VR is closely related to AR, and in many cases figuring something out in VR’s more tractable space will help in AR as well. In this respect, resources devoted to VR R&D aren’t subtracted from AR efforts; in fact, this may be the most effective way to make progress on technology related to AR, because VR hardware can be made fully functional and iterated on much more rapidly than AR at this point.

This is particularly true because a VR marketplace appears to be emerging as I write this, in the form of the Oculus Rift and support for it in Doom 3: BFG Edition, Hawken, and other games, while AR is still some distance from viable products. It’s far easier to push technology forward when there are real customers to provide feedback, real products to provide incentive for better, cheaper components, and real revenue to spur competition, and VR will likely have all those long before AR does.

VR is more approachable on the software side as well. New experiences often evolve from existing experiences; it’s hard to make a complete break with the past in every respect, if only because your audience will be confused, and also because it’s hard for developers to solve multiple problems in a new space simultaneously. There’s a direct path to at least some interesting VR experiences; PC and console games like first-person shooters and flight, space, and car sims are designed for immersion, and should seem like they’re on steroids in VR. It’s even more obvious what interesting HUDSpace experiences are; a few are listed above. However, it’s not at all clear what will constitute compelling walk-around AR experiences. I have no doubt that they exist, but they’re unknown right now. (It’s a lot clearer what might be interesting for constrained AR, and we’ll look at the implications of that in the next post.)

VR for the long run

So VR looks pretty good in the short run; how about after that? Even though I think it’s likely that in the long run (defined as five to ten years) AR will have a more radical effect on our lives, it’s possible that VR-like experiences will be where we will spend more of our time once we have really good AR glasses. The key is that AR glasses will be able to get darker or lighter on demand, because that’s necessary in order to work well in both dimly lit rooms and bright sunlight. That means they’ll be able to become almost completely dark at any time – and when they do, they’ll effectively be VR glasses. So your AR glasses will be able to provide both AR and VR experiences.

That’s interesting because VR experiences are richer in important ways. VR is more immersive, and that’s a big plus for many types of games. VR also has better contrast, since it doesn’t have to compete with photons from the outside world, so virtual images will look better. Because VR doesn’t have to interact with the real world, it doesn’t suffer from any of the inconsistencies that inevitably arise in AR; for example, lighting and shadowing in VR can be completely consistent. VR also avoids all the work that’s required in AR to figure out what real-world objects are in the field of view at any time, and to calculate how virtual and real images interact. Another point in VR’s favor is that it has no equivalent to the per-pixel opaquing limitation of AR, so VR software has complete control over the image that reaches the eye. Furthermore, small amounts of latency and tracking error may be more acceptable in VR, because the virtual images don’t have to match the real scene; since we’re not going to get to zero latency or perfect tracking anytime soon, that’s potentially a significant plus. (However, it’s also entirely possible that small amounts of latency and tracking error could cause simulator sickness under conditions of full immersion; this is one of many areas that we’re all going to learn a lot more about in the next few years.)
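To put a rough number on why latency matters so much more when virtual images have to register against the real world, here’s a back-of-the-envelope calculation (my own illustrative figures, not measurements from any particular system): the angular error is simply head angular velocity times end-to-end latency.

```python
# Rough illustration: how far a world-registered virtual image lags
# behind its correct position when the head turns during one interval
# of end-to-end (motion-to-photon) latency. Numbers are illustrative.
def angular_error_deg(head_turn_deg_per_s: float, latency_ms: float) -> float:
    """Angular registration error produced by latency during a head turn."""
    return head_turn_deg_per_s * latency_ms / 1000.0

# A moderate 60 deg/s head turn with 20 ms of latency leaves the image
# 1.2 degrees off, glaringly visible in AR, where it can be compared
# against the real world, but potentially tolerable in VR, where there
# is no real-world reference to compare against.
print(angular_error_deg(60.0, 20.0))  # -> 1.2
```

The same arithmetic is why AR tracking requirements are so punishing: in VR the whole scene shifts together, while in AR every misregistered pixel sits next to the real object it’s supposed to be attached to.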

So AR is the only way to go when you want the virtual and real worlds to interact, but VR and VR-like experiences seem best for purely virtual experiences. (Here, “VR-like” means AR when it dynamically becomes opaque enough so that the virtual world is visually dominant.)

And it’s arguable that you spend most of your time in experiences that are more virtual than real (or at least I do).

Our lives are more virtual than you might think

You’re probably thinking that you don’t spend any significant amount of time in virtual experiences, but consider: as you read this, you’re looking at a screen. Imagine you’re doing it on a head-mounted display, and you’ll see that it maps better to VR than to AR. Sure, you could have the text floating in your field of view while still seeing the real world, but why? It seems far more useful to just look at a virtual screen in VR, since all that’s of interest is the text. You could have lots of virtual screens up in 3-space around you, and you could have information presented in all sorts of other ways as well.

Similarly, the real world often doesn’t play an important role in watching TV or movies, or playing video games; certainly it does when you’re with friends, but when you’re alone, the real world doesn’t particularly enhance the experience. And if you ask yourself what percentage of your waking time you spend looking at a screen by yourself, you’ll find it’s a majority if you’re anything like me. So that’s why I say that VR-like experiences may be where we’ll spend a lot of our time once we have good AR glasses; until that time, this argues that VR by itself is interesting.

This is not to say being able to see the real world at the same time as the virtual world doesn’t have benefits; I’ll discuss that aspect in the next post. One thing that absolutely has to be figured out for VR is how to become not-blind instantly, for example by touching a control on the glasses that switches to a camera view; being unable to see without taking the HMD off just isn’t going to be acceptable in a consumer device.

Finally, there’s a wild card that could change the long-term balance between AR and VR dramatically. My thinking to date has assumed that AR will be a major platform shift that fundamentally changes the way we interact with computers, while VR won’t, except to the extent that VR-like experiences are part of the AR future. However, it’s possible that VR will be a major platform shift all on its own; we could all end up spending our time sitting in our living rooms wearing VR headsets and haptics, while the traditional movie, TV, and videogame businesses wither. (In fact, I’d be surprised if that weren’t the case someday, but I think it’ll be a long while before that happens.) We all know what that would imply, since we’ve all watched Star Trek – that way lies the Holodeck. If that happens, VR is more than interesting; it’s a big part of the future.

All of which implies that VR and VR-like experiences seem likely to be important in the long run.

Summing up the case for VR

None of the foregoing says that standalone VR is going to be more important or successful than AR in the next five to ten years, although that could happen. AR is most likely going to change the way we interact with the world, much as PCs and smartphones did, long before VR makes it to the Holodeck. However, it seems likely that VR is much closer to being deliverable in a truly workable form than walk-around AR, and it also seems likely that VR-like experiences will be an important part of the ultimate AR future. Given which, there’s a strong case to be made that while the long-term goal is to produce superb, do-everything AR glasses, VR and VR-like experiences are worth pursuing as well, both in the near term and down the road.

84 Responses to Two Possible Paths into the Future of Wearable Computing: Part 1 – VR

  1. Mike Smullin says:

    I’m still waiting for those keyboard pants to appear on kickstarter. I want a trackball sewn into the crotch too. Bluetooth not cords, of course :) cords went out a long time ago ;)

    • MAbrash says:

      Keyboard pants seem like a joke, but they’re as plausible as most input devices I’ve come across for walk-around AR. Solving input for AR is going to be a lot harder than most people think; there’s a reason why SF writers have yet to come up with even a plausible-sounding input system (as far as I know), unlike display systems.


      • Adam Dane says:

        Why isn’t some sort of non-invasive BCI (Brain-Computer Interface) (using EEG or similar) a good candidate for input?

        As long as you’re donning some sort of headgear anyway, why not have the controller built into it as well?

        • MAbrash says:

          It would be a good candidate – if any of that technology worked well enough right now. Unfortunately, our investigations found that that’s not the case yet.


      • Hugo says:

        Well it has to read your mind of course!

        • MAbrash says:

          See the previous replies – that would indeed be a great solution if it were feasible, but it isn’t now, and doesn’t look like it will be for some time.


      • I think that even if it isn’t possible to directly “realize” thoughts (think what you want to appear), a system that adapts to its user, much as today’s keyboards fit the writer, could be a feasible enough solution, I’d say.

        Still, there is ongoing research on thought-stealing technology, or mind control, that might be realized, for good or bad.

      • Will says:

        Charles Stross is vague about interfaces in his fabulous AR primer novel Halting State and its sequel Rule 34. his characters seem mostly to interface with their AR environments by waving their hands around awkwardly and blinking, and the world’s just kind of gotten used to that. one’s noted to have “microchips implanted in her finger joints–not a gamer interface.” airtyping seems common. a lot of the internet native generation is already comfortable touchtyping QWERTY on a keyboard that isn’t there (while trying to stay awake listening to lectures in college classes that won’t let you keep a laptop out, f’rex). from there, it’s just (“just”) a question of tracking the fingers and machines learning how individuals screw up their personal imaginary keyboards, I spose.

        isn’t it awesome that we’re living in a world where THIS has become a problem? this is the kind of problem societies want to have. your work is blisteringly exciting. look at these excited blisters. can’t wait to see it bear fruit.

  2. Some people might want to spend their work time in VR WORK SPACE but I, for one, work with my windows open – I like fresh air, sunlight, and the sounds of birds, cars, and whatnot. I can easily step away from the computer to stretch. Do you really think most people would prefer to, essentially, wear blinders while working? Sure, some people will. But I think I’d much prefer to have an AR screen tethered to some particular point in space than to only see it, and nothing but it, when I put on VR goggles.

    I *like* having peripheral reminders that there is a Real World.

    YMMV, of course.

    • MAbrash says:

      Hi Margaret,

      I like those things too! That’s a big part of why I spend a lot of time in my garden. But you know, the truth is that I work in a skyscraper, with windows that don’t open. The view out the window is nice, but most of the time I’m looking at the screen. So I’m not sure AR would really give me much of a better experience in terms of the outside world, and so long as I’m interested in technology, I think I’m going to be pretty removed from the natural world while I work. But de gustibus non est disputandum – “there is no arguing matters of taste” :)

      I will point out that thirty years ago, most writers would have said similar things about writing with typewriters or pens versus staring at a computer monitor, and in fact most workers would have hated the idea of being stuck in front of a screen all day. And without question most people would never have thought they’d be looking at a tiny phone screen a lot of the time, even while walking and, alas, driving, or that they’d be so insensitive as to take phone calls in the middle of conversations or while walking with their children. Not to mention reading books on electronic devices rather than paper. And yet all that is now routine. Times and social norms change, if only by the turnover of generations.


      • Oh, yeah, I’m not saying nobody would ever want to do this, ever!

        But let me suggest a different path for AR-vs-VR work usage. Let’s posit a couple more steps of Moore’s Law, so that J. Random User’s phone can do everything they’d ever want to do with a computer. They have a Bluetooth keyboard and AR glasses.

        And then to get work done, instead of hanging around in a little room in a skyscraper, sometimes they walk out to a park, find a comfortable place to sit, and pop up a couple of virtual screens.

        (Heck, there are some people who do this TODAY with their smartphones. Mostly writers, I believe.)

        Your vision of VR workspaces basically takes the “stuck in front of the screen all day” and turns it up a notch. Me, whenever it’s a nice day, I’m prone to undocking my laptop and going out to a cafe, or up to one of the many parks nearby. (Seattle has so many!) It’s not perfect – I have to leave my 24″ monitor behind, my laptop’s screen can’t fight direct sunlight, limited battery life, spotty net (sometimes that’s a plus) – but it’s pretty nice. Idyllic, even.

        There are certainly reasons to be in an office that will stick around for a while. Being able to talk to people in person is good, and having that “I am in the place where Work Happens” vibe is VERY useful. And any job that requires physical assets needs to have a specific place that it happens, whether it be a workshop, a workplace, or a studio. And there will be people who simply prefer to block off all external distractions some, most, or all of the time!

        But how many of today’s Internet companies were started by a couple guys hanging out in cafes with their laptops?

        I might be biased by the fact that I’ve made a deliberate decision to be more aware of the world around me after spending most of my life ignoring it in favor of the growing digital world. But I’m probably not the only one.

        And I should probably also wait to see what the promised part 2 of this post will be about. *grin*

        • MAbrash says:

          If there was perfect per-pixel occlusion in AR, your idyllic vision would work very nicely. There are ways to approximate that with areal opaquing, and they might work well enough, although bright sunlight will be quite a challenge. Anyway, once again I agree with you that this would be pretty great, but I think it’ll be a while before it works well enough for you to prefer it to your laptop.


        • George Kong says:

          Hi Margaret…

          It would be very easy to include virtual backgrounds that simulate idyllic (and even impossible) scenes while you work. Personally, I think that would be one of the nicest benefits of working in a VR space over a traditional monitor-based desktop.

          And when you get bored, you’d ideally be able to change the scenery as easily as you change the desktop background.

          Yesterday, I worked while looking up at the moons of Jupiter. Today, I’ll work from the top floor office of the Chrysler building… and tomorrow… well, who knows?

        • Ducky says:

          Alternately, you could put your VR goggles on and be doing your programming on your virtual 9×27″ screen setup whilst being in the middle of a virtual forest with virtual animals to keep you company.
          And in real life you’re just sitting in a windowless cubicle.

          Companies could allow people to do their work anywhere they like virtually, whilst cutting down on the space that employees use (all they need is a chair and their VR machine).

    • JazW says:

      I know it’s not quite the same thing, but you could have simulated environments of whatever you please in your VR workspace? :)

  3. Kyle LeFevre says:

    Great post! I think that AR and VR certainly go hand in hand. I hope that more people jump on this bandwagon. I’m working myself towards establishing a great VR system. Glad Valve continues to think of these items going forward.

  4. bgr says:

    What I’d like to see supported by VR headsets is eye pupil tracking, and it seems like the Rift isn’t going to have that; at least, there’s no word about it in any article I’ve read. I think that being able to detect where the user is looking would enable developers to make some neat new things, or at least a realistic depth-of-field effect for the first time ever :)

    • MAbrash says:

      Eye-tracking could definitely enable some new gameplay tricks. Depth of field could be trickier. Yes, you could look at the vergence of the two eyes to determine depth, and draw the scene accordingly, and that might be pretty cool. But the focal distance would still be at infinity, and I don’t know whether that would make your visual system unhappy or not. Yet another thing we’ll learn along the way!
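      (To make the vergence idea concrete, here’s an editor’s sketch of the trigonometry involved, with illustrative numbers; a real eye tracker’s geometry and calibration would be considerably more involved.)

```python
import math

# Sketch of estimating fixation depth from eye vergence, assuming a
# symmetric fixation point straight ahead. All numbers illustrative.
def vergence_depth_m(ipd_m: float, inward_rotation_deg_per_eye: float) -> float:
    """Distance to the fixation point, given the interpupillary
    distance and how far each eye has rotated inward from parallel."""
    half_vergence = math.radians(inward_rotation_deg_per_eye)
    if half_vergence <= 0:
        return float("inf")  # parallel gaze: looking at infinity
    # Each eye sits ipd/2 from the midline; the fixation point lies
    # where the two gaze rays cross.
    return (ipd_m / 2) / math.tan(half_vergence)

# With a 63 mm IPD, 2 degrees of inward rotation per eye puts the
# fixation point roughly 0.9 m away; 0.2 degrees puts it near 9 m.
print(round(vergence_depth_m(0.063, 2.0), 2))  # -> 0.9
```

      Note how quickly the angles shrink with distance, which is part of why vergence-based depth gets noisy beyond a few meters, and why the fixed focal distance Michael mentions remains an open question.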


  5. Bert DZ says:

    As an intermediate step to AR glasses, how about a VR headset that has external cameras and displays their video inside the headset? In principle it seems like something that would not be very difficult to build. It also means you could get away with nonzero latency, since both the real and virtual images could be displayed a bit delayed.

    • MAbrash says:

      It’s an option, but I discussed in the last post why I don’t think that would be satisfactory. You certainly wouldn’t want to use it for walk-around AR, because you wouldn’t have good peripheral vision, and the image quality would be poor at best, and also lagged, which is a recipe for simulator sickness. But it could be adequate to allow VR users to be not-blind on demand, and if it was there for that purpose, quite possibly it would be used for relatively crude AR, and that could set off a positive spiral.

      Now you can see why predicting this particular part of the future is challenging; there are many possible paths :)


      • Bert DZ says:

        Sorry, missed that part of the previous blog post. Perhaps such a video passthrough system might come in as a step beyond AR glasses, in that it could actually enhance vision once the technology is good enough.

        You could have 360° vision, built-in binoculars with extreme zoom, infrared/night vision, etc. I’m not sure those would be essential applications for me, nature perhaps would have evolved them already if they were so important, but who knows.

        • MAbrash says:

          I think it’s quite possible we’ll have all that eventually. I think, though, that we’ll need at least 8Kx4K 120 Hz cameras with at least 120 degree FOV for this to be good enough, and that’s going to be a while.
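          (Just to put numbers on why that spec is a ways off, here’s an editor’s back-of-the-envelope arithmetic for the raw, uncompressed data rate such a camera would produce; the exact pixel dimensions are an assumption.)

```python
# Raw data rate of a hypothetical 8Kx4K, 120 Hz passthrough camera,
# assuming 8192x4096 pixels and 3 bytes of RGB per pixel, uncompressed.
width, height = 8192, 4096
frames_per_second = 120
bytes_per_pixel = 3

bytes_per_second = width * height * frames_per_second * bytes_per_pixel
print(f"{bytes_per_second / 1e9:.1f} GB/s per camera")  # -> 12.1 GB/s
```

          A stereo pair doubles that, before tracking or rendering even enters the picture, which gives a sense of why “a while” seems right.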


  6. Alex Howlett says:

    Very insightful post, Michael. I’m really glad that you were careful to point out that VR involves the user sitting down. I can’t imagine a scenario in which it would be safe for a gamer to run around with completely-occluded vision. I’m not saying it could never happen, but it’s not something that seems realistic in the near future, and it’s not something I’d be excited about even if it were somehow made to be safe.

    Walk-around AR is much safer than walk-around VR, but just like you wouldn’t play Quake on a mobile phone, you wouldn’t play Quake using walk-around AR. The mobile phone interface is more conducive to simple gesture-based games like Angry Birds. Similarly, the mechanics of Quake just wouldn’t make sense anymore if you’re walking around. I agree with you that walk-around AR would not provide a new way to play existing video games. It could, however, add meta information to games you already play in real life, and potentially enable new types of reality-based games.

    For example, kids could be playing tag on the playground using an AR interface. The AR would highlight the person who’s “it” and the software would register every time a player gets tagged. Or maybe you could use AR to make it so people don’t have to bring a light and a dark shirt to your pickup ultimate frisbee game… assuming it can be integrated into your rec-specs. But I mostly don’t see the distinction between walk-around AR and what you call “HUDSpace.” I guess I’ll have to wait for your AR-centric blog post for that.

    AR provides a new way to interface with computers. VR really doesn’t. I think you were right on with your initial intuition that VR, at best, provides only an incremental platform shift. But that’s okay. VR enhances an existing interface. Switching from a monitor to a VR headset should be like switching from keyboard-only controls to keyboard/mouse controls. It can greatly enhance my experience with games I already play. It’s for this reason that I’m really excited about VR. That’s not to say I wasn’t geeked out the first time I saw a video of Sebastian Thrun wearing his Google Glass. Because I was. But VR can significantly improve a gaming experience I already love.

    Constrained AR, as you say, may often be used to provide VR-like experiences. You’d no longer need to make space for a monitor on your desk, and you could adjust the opacity of your virtual screens to see through to the real world behind them. You can give your laptop as big a virtual screen as you want. Wouldn’t it be cool if laptops evolved not to have screens? It would look like everyone was just carrying keyboards around. The opposite of a tablet? =P

    I’ll finish up this longer-than-I-thought-it-would-be comment with a question. You mentioned that VR can use existing controllers. I agree. But then you go on to say that new VR input will allow VR to reach its potential. What do you have in mind when you say “new VR input”? What can it do, and how would it be fun? I can’t imagine a scenario in which input from the HMD should be used for anything other than looking around. And this type of input can be used for first-person games only if the in-game character is also sitting (e.g., flight simulators, racing games, Descent, MechWarrior, or anything with a cockpit) rather than in Quake-type first-person shooters.*

    I have been a fan of yours for a while and I thoroughly enjoyed reading your articles from the 80’s and 90’s about x86, graphics, and VGA programming. It makes me happy that a person who works as hard as you is now putting his effort into a VR project. I’m pretty sure it’s a sign that awesome things are coming. Looking forward to hearing more! =)

    * – It makes me nervous that the Oculus Rift people are focusing on Doom 3, because I feel like that’s exactly the kind of game that can’t make good use of VR head motion tracking input. Hawken, being a cockpit-based game, gives me a little more hope.

    • MAbrash says:

      I likewise don’t imagine VR users moving around. Instead, I see an evolution toward better haptics to let them stay in place but experience virtual actions.

      HUDSpace is if, in your tag game, there’s a top-down map showing you the playfield and where you and “it” are, or if there’s an icon hovering roughly over “it”; AR is if “it” has the head of a dragon, or is glowing red, or if there are three “its” and you have to guess which one is the real one. Basically, AR is when the virtual world intermixes with the real world as if it were part of it; HUDSpace is when you see information that may relate to the real world but is clearly not part of it.

      It would be great to have laptops with no screens :) However, why would you want to see through your virtual screen to the real world? Trust me, I’ve done it, and readability is enormously dependent on the background. Regional opaquing could help, but then you wouldn’t really be seeing through your screen, just around it.

      The obvious VR input is haptic gloves. I’m not saying that would be a great input device, although it might be; I’m just saying it would enable new types of interaction in VR. Feeling the stick push back at you as you’re flying, or even just feeling the stick at all and controlling it with your hand, would be pretty novel.

      Why do you think Doom 3 can’t make good use of head motion tracking?

      Thanks for the kind words about my articles. Always nice to hear someone enjoyed them; makes all those nights and weekends writing them worthwhile :)


      • Alex Howlett says:

        I’ll address your points in reverse order.

        1. Within a first-person mouselook-style game, there’s no distinction between the character’s aim, the character’s head orientation, and the character’s body orientation. The mouse controls all of these in lock step.

        The most intuitive way to use head tracking data is to control the user’s viewing direction in a virtual world, but if you do this with Doom 3 (or any standard FPS), and assuming you’re still aiming with the mouse, the user’s viewing direction would have to be decoupled from the character’s aim. Essentially, VR would do no more than provide a virtual screen that surrounds the user. That might be pretty cool, but simulating a bigger monitor is not the most immersive way to use head tracking data.

        If you’re not aiming with the mouse, then you’re losing the game, so “head aim” is almost not worth discussing. Maybe some day when we have pupil tracking, people would enjoy using VR to shoot monsters with their laser vision, but I can’t imagine it would be fun playing games where the metaphor is that you’re floating around a virtual world with a gun attached to your forehead. You could give someone a Duck Hunt style gun to point and aim with while they use the head tracking to look around, but you’d still need a way to control the direction the character is running. But in the end, you’re sitting in a chair and the character is not, which breaks the illusion that you *ARE* the character.

        Cockpit-based games like Descent don’t suffer from this problem. The character is sitting and so are you. Your ship and your aim are locked together and controlled by your joystick/mouse/keyboard/controller, but you can be free to look around from within your cockpit. VR in this case would be smooth and it would be immersive. And, if done right, it would provide a significant competitive advantage.
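
        To make the decoupling concrete, here’s a toy sketch of the two control schemes (the function and parameter names are all my own invention, not from any real engine):

```python
def view_and_aim(body_yaw, mouse_yaw, mouse_pitch, head_yaw, head_pitch,
                 decoupled=True):
    """Toy model of the two control schemes (all angles in degrees).

    Classic mouselook: the mouse drives body, head, and aim in lock step.
    Decoupled VR look: the HMD offsets only the view; aim stays on the mouse.
    Returns (view_yaw, view_pitch, aim_yaw, aim_pitch).
    """
    aim_yaw = body_yaw + mouse_yaw
    aim_pitch = mouse_pitch
    if decoupled:
        # Head tracking adds a look-around offset without moving the gun.
        view_yaw = aim_yaw + head_yaw
        view_pitch = aim_pitch + head_pitch
    else:
        # "Gun attached to your forehead": view and aim are the same thing.
        view_yaw, view_pitch = aim_yaw, aim_pitch
    return view_yaw, view_pitch, aim_yaw, aim_pitch
```

        With decoupled=True, turning your head changes only where you look, not where the gun points, which is exactly the “virtual surrounding screen” behavior I described above.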

        2. Haptic gloves sound great, but why do they need to be tied to VR display devices? Seems like they’d be just as useful with a normal monitor. You could use them to make Wii-style games more realistic.* Bowling with a ball you can actually feel? Heck yeah! But how do you add the weight? I dunno. Anyway, for a joystick-based game, what’s wrong with just using an actual joystick that you can actually feel?

        3. I’ll trust you on your experience with see-through virtual screens as I’ve never used one. How did the one you used work? Was it an emissive OLED display with space to let light through between the pixels and then an adjustable LCD shade behind it that dampened incoming light? Or perhaps a projection onto a one-way mirror?

        4. Regarding the distinction between HUDSpace and AR, it sounds like what you’re really hoping for is hard AR. Until then, any dragon heads we might see are clearly distinguishable from the real world. We’ll probably achieve hard AR, hard VR, or immersive tactile interactivity by sending signals directly to the brain/nervous system and intercepting signals that the brain sends out. Maybe I tell my legs to walk, but since I’m plugged in, the signal goes to my character’s legs in the game instead of my own legs.

        I actually see something a little bit like this happening fairly soon. For one example, there could be software that reads your brain waves, which you train by typing on your keyboard. By watching you type, it learns to match the letters you type with your brain-wave patterns. It would be like training voice recognition software. Eventually, you could unplug the keyboard and the software would still know what you’re typing, and in time you wouldn’t even have to move your fingers anymore. The same technique could be used for controlling games too. And all this without being intrusive and without blocking any signals that your brain might send to your legs, etc.

        I’m not really sure whatever happened with the Jedi Mouse, but that’s the kind of thing that I think would be a realistic/useful input technology for all electronic devices.
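
        Just to sketch the training idea (the “features” here are synthetic stand-ins for whatever an EEG would really give you, and a nearest-centroid matcher stands in for a real classifier):

```python
from collections import defaultdict

def train_centroids(samples):
    """samples: (feature_vector, typed_letter) pairs captured while the
    user types on a real keyboard. Returns one average vector per letter."""
    sums = {}
    counts = defaultdict(int)
    for vec, letter in samples:
        if letter not in sums:
            sums[letter] = list(vec)
        else:
            sums[letter] = [a + b for a, b in zip(sums[letter], vec)]
        counts[letter] += 1
    return {k: [x / counts[k] for x in v] for k, v in sums.items()}

def predict(centroids, vec):
    """Decode a letter from a new feature window: nearest centroid wins."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda k: dist2(centroids[k], vec))
```

        Once trained, you unplug the keyboard and just keep calling predict() on each new window of brain-wave features.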

        * I was annoyed when the Wii controller was first advertised as a new way to play games. It’s not a new way to play existing games. It’s a way to play new kinds of games, and a way to add dumb gimmicks to existing games. When the Kinect was announced, it was clear that the most popular games would be the ones that didn’t make you pretend that you were holding something.

        • MAbrash says:

          It’s definitely true that cockpits are a more direct match between existing software and VR. As for aiming wearing an HMD, your thoughts are logical, but at some point it’s necessary to just go try things and see how they work, rather than deducing the answer, which is what we’re in the middle of doing.

          Haptic gloves don’t require VR, as you say. But you didn’t ask what required VR; you asked what kind of input I envisioned helping VR achieve its potential. But that’s just one obvious thing to try, not intended as any kind of definitive answer (although I’m pretty sure haptic gloves would in fact enhance the VR experience); I certainly don’t know what the answer will end up being.

          I’ve used several types of see-through virtual screens: LCOS into a waveguide, OLEDs into a beam-splitter, LCOS into a beam splitter, laser into a waveguide, and video passthrough from a camera into VR glasses.

          No, it’s still AR, not HUDSpace, if the dragon head is clearly distinguishable as virtual. The key is that it’s not just information floating somewhere in the right neighborhood, but an image that tracks precisely with the world. I described HUDSpace as including very lightweight AR, so all this is on a continuum, but if you saw a tank crawling over your desk and bumping into things, or even if you played Battlechess on your kitchen table, I’d call that AR. If you saw a chess game top-down in your glasses, with the location tied to the glasses, not the world, that’s HUDSpace, as would be up/down/left/right arrows giving you turn-by-turn directions. If I had to say what the key difference was, I guess I’d say it’s proper registration of virtual objects (not just text or icons) with the world. Maybe it’s not a generally useful distinction, but it’s interesting to me because AR seems to me to have more potential for entertainment than HUDSpace, and because I expect HUDSpace glasses to be common pretty soon, but there’s no particular reason they’d be AR-capable as I’ve described it, because AR has many more unsolved problems and would require more expensive hardware, at least initially.

          Direct brain control just keeps coming up in this particular set of comments :) It’ll be great when it happens, but it seems to be far enough out so that it doesn’t really have any bearing on my thinking right now.


          • Alex Howlett says:

            I’ve tried VR with head aiming in a few different forms. You can make it kind of work with a “clutch” control that disables the head tracking while you reposition your head. This allows you to turn further than your neck would normally allow. It’s analogous to lifting up the mouse to reset it at the other side of the mouse pad. It’s not pleasant to try to whip your head around as fast as you would whip the mouse around. And aiming by pointing your head feels weird. At least it felt weird to me.

            You’re right that I asked what kind of input would help VR achieve its potential rather than what input required VR. But I still don’t see haptic gloves being much help. What do you see as their benefits? Maybe for adult entertainment you could have a haptic sheath or something. I could see that potentially being popular.

            I understand where you’re coming from on the distinction between AR and HUDSpace. I think of them as a continuum that we’ll gradually progress along, analogous to the way we progressed from Pong to Crysis. My speculation is that, at least at the consumer end of things, it’ll start with a monochrome display that renders simple text, symbols, and icons, and we’ll slowly add in things like more colors, higher resolution, perspective projection, and positional tracking. The key to getting from here to there is having a compelling product the whole way through.

            I guess I’m just not as excited about SF-style AR as you are. But, as someone who plays sports, I can imagine scenarios in which HUDSpace could improve my recreational experiences well before it reaches the level that would satisfy your definition of AR.

            My impression right now is that useful direct brain control (as input from EEG) will arrive before haptic gloves achieve wide adoption for serious gaming.

          • MAbrash says:

            I’m certainly not claiming I know for sure about any of this, and I’m looking forward to seeing how things evolve. Do keep in mind that there’s potentially a big difference between HUDSpace and AR for gaming, just as there was between 2D and 3D gaming.

            Maybe you’re right about direct brain control. Do you have any references indicating it’s anywhere near good enough?


          • Alex Howlett says:

            (Michael, this is a reply to your comment asking for evidence that direct brain control is anywhere near good enough for gaming. For some reason, there was no link to reply directly to your comment. Perhaps there’s a cap on the nesting level?)

            No. In this regard, I’m probably speculating just about as much as you are. But these are the reasons why my gut tells me I should put effort into direct brain input rather than haptics:

            1. Gamers traditionally shun haptics. Force feedback messes up your aim, so the accepted practice is to disable it. For a while my favorite joystick was the Microsoft Force Feedback 2, but I disabled the game-related force feedback stuff. I did, however, set a custom level of resistance on my stick, so I suppose I was making use of a very basic form of haptics. But the actual force feedback forces weren’t anything more than a novelty.

            2. Game developers, as far as I know, haven’t found a good use for the haptics that have been available to them. We have vibrating controllers, but they very rarely add anything to gameplay. I can’t think of a situation in which a controller vibration would provide better information than a warning on the screen or a sound. You might argue that it provides a more varied experience, but that’s not what competitive gamers look for. They look for a way to have more precise control and a way to have as much information about the game state as possible.

            3. Haptics is limited by physics. For example, haptic gloves will never allow you to feel the weight of a heavy object. A game might squeeze or apply localized pressure to your hands and forearms, but that’s about it. You might be able to simulate something crawling on your arm, or, perhaps you have a quad damage powerup and your hands pulse. Something like that would work. But you couldn’t ever hold a convincing gun or any object that has any significant amount of mass. Even if you could, the gamers who are getting tired from holding up their guns are at a disadvantage to the ones who don’t have to hold anything up.

            4. Haptics require moving parts. When I was playing Descent seriously, I’d go through a Microsoft stick every two or three months. Incidentally, that’s currently how long it takes me to go through a pair of Nike cleats. When Microsoft discontinued their Sidewinder joystick line, I switched over to the Logitech Extreme 3D Pro, but those only last three weeks on average. Why? More moving parts. The Microsoft stick had a potentiometer for the twist, which is often what wore out first (competing with the buttons); the sensors for the x and y axes were optical. The Logitech stick uses all mechanical sensors. If we’re just talking about input, Kinect basically provides the same type of input that haptics can, and the only reason you’d want haptics is for the output/feedback. Gamers playing action games will break their gloves. Brain control would be input only, and it would use electrodes that don’t have any moving parts. All you have to do is sit there.


            Now I’ll play devil’s advocate here for a second:

            In Descent, there’s a setting for newbies called “auto-leveling” that gently rights their ship if they find themselves upside down. Basically, it’s all done in the software. But I could see an “auto-leveling” or even an “auto-aiming” feature where, instead of the software making adjustments directly in the game world, it gently applies force to your joystick to encourage you to point in the right direction. That could be more useful for newbies than traditional auto-leveling because instead of automatically fixing their problem, it teaches them how to flip back right-side up by nudging them through the motions.

            I don’t know of any games that make use of force feedback like this. I just thought of it while writing this comment. But I’ve been out of the gaming loop for quite some time and I wouldn’t be surprised to find there were some flight simulators or racing games that already do it. It makes sense.
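
            In case it helps, the nudge I have in mind could be as simple as a clamped proportional force toward level (the gain and force numbers here are completely made up):

```python
def leveling_nudge(roll_deg, gain=0.02, max_force=0.3):
    """Force-feedback auto-leveling: push the physical stick toward level
    instead of silently righting the ship in software.

    roll_deg: current ship roll (0 = upright, +/-180 = inverted).
    Returns a lateral stick force in [-max_force, max_force], where 1.0
    would be the actuator's maximum.
    """
    # Wrap so a ship rolled 350 degrees is treated as -10, not +350.
    err = ((roll_deg + 180.0) % 360.0) - 180.0
    force = -gain * err  # push opposite the roll error
    return max(-max_force, min(max_force, force))
```

            Ramping the gain down to zero as the player gains skill would turn the trainer off gradually.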

            I could be wrong about brain control coming before haptics. I envy your position of having so much time and resources to devote to VR. You’re a lucky guy. And by “lucky,” I mean you worked really hard to get where you are. I look forward to hearing about any progress you make on haptics.

          • MAbrash says:

            As usual, good points. I do think the first time you can reach out and grasp a sphere and feel it, even though it has no weight or inertia, it will make a huge difference, but I don’t know how that translates into gameplay. Also remember, though, that there are a lot of things to do in VR other than hardcore games. Haptics might be much more useful for construction sets, or puzzle games. As for existing haptics – sure, they’re not very exciting. But having a joystick push back or a controller vibrate is pretty removed from the virtual world; what I’m looking for is haptics that directly map to the virtual world. Anyway, I’m just speculating at this point, so take it for what it’s worth :)


          • Alex Howlett says:

            Good point about puzzle games and construction sets. But something important to keep in mind is that there is no way to physically prevent your haptic-glove hands from moving through objects in the virtual world. This especially includes objects that are being grasped by your other hand. In most cases, the software could move objects out of the way of your hands, but if you’re grasping an inertia-free sphere with one hand, the other hand would pass right through it. You could, I suppose, give the player some kind of sensation when his hand is passing through objects.

            Oh! You know what I just thought of? You might be able to use a haptic glove to help teach people sign language. It could give feedback on the screen with little pictures showing whether your hand positioning is correct and also gently nudge your fingers in the right direction.

            …and then I Googled it: force feedback piano lessons. That’s pretty cool too.

            Something I try to remember is that nobody wants to use a product that says, “You can see where we’re going with this,” but isn’t immediately useful. We’ve seen a lot of those products in the VR realm. Nobody 20 years ago said, “I want a smart phone” and then set about cobbling together a prototype smart phone that had all the elements but couldn’t really do anything well.* It evolved incrementally.

            Haptics isn’t ready for VR, but it’s ready for something. The better it succeeds in currently-applicable markets, the more the technology will evolve. I’m not sure whether you’re actually doing anything substantive with haptics over at Valve, but by attacking markets that benefit from existing haptic technology, we can encourage the technology to improve.

            Maybe start an ASL project. Maybe Guitar Hero with a real guitar and haptic gloves coaxing you into the proper fingerings. Or maybe “Mavis Beacon Forces Typing.” =)

            * – Or wait. Is that what the Newton was?

          • MAbrash says:

            You’d be amazed how many people have independently come up with the idea of AR guitar lessons :)


        • Codes says:

          I don’t know why there would be a distinction between FPS games and cockpit-style games when it comes to design targeted for VR.
          The mechanics are the same: the body and the gun are tied together in a mech, so why wouldn’t they be tied together in an FPS?
          The head tracking turns the game character’s POV, not the body or gun. That’s an immersive solution for both scenarios.
          As far as FPS pro-player adoption of the HMD goes, it does impact aiming mechanics, but it also gives a much “stabler” sense of space if done correctly.
          Imagine that there are a few enemy targets spread out at various horizontal and vertical separations…
          Without your head view moving at all, you can aim your gun separately and take them down without losing visual contact with any of them.
          This is just a very small part of the impact of freeing up your control system to do the aiming independently, and overall I think VR and FPS will have a very fruitful relationship because of it, in the very, very near future!

          • MAbrash says:

            Cockpit games provide you with a stable (tied to your body) near environment (the cockpit), which anchors you in the virtual world and may help a lot with simulator sickness. FPS games have nothing similar except the weapon, so you feel like you’re unanchored in the virtual world.

  7. Micah Rust says:

    Dear Michael Abrash, I have been reading your blog since it began and have gained an increased interest in VR/AR as it could or should apply to everyday life. Because of that interest, I have been indulging in some science fiction about VR and AR applications and their implications for our lives in the near future.

    Recently I read the foreign series “Sword Art Online,” in which VR/AR is accomplished through a combination of neural stimulation (possible, but hard/imprecise) and signal reading and blocking (reading is already possible for muscles). As this is rather far “out there,” I was at first very skeptical and considered it complete sci-fi rather than something with some possibility of existing in the future.

    I only reconsidered after hearing about the use of MEG (magnetoencephalography), in combination with existing EEG and MRI technology, to get an accurate “mapping” of the brain. One application uses this to counteract depression by stimulating different portions of a person’s brain so that they can escape their proverbial loop of depression (similar logic to shock therapy, but more precise).

    Now, I am no neuroscientist, so I won’t pretend to know exactly what I am talking about, but as a concept, given that stimulation, reading, and blocking (not yet possible, except through physical means) of the nervous system at various places on the human body are possible, this “Full Drive” should be possible, right?

    All this being said, I will simply ask your opinion: if the precision of MEG increases to the level required to accurately read human movements through the nervous system, and the shielding problem (electromagnetic noise) is “fixed,” do you think that this “Full Drive” (partial/total convergence of the five senses, overlapped and/or blocked), as the book calls it, will surpass the traditional concept of “2-D” VR/AR, where only sight and sound are virtually simulated?

    • MAbrash says:

      In the long run, direct neural connection may well be how we all interact with computers. But it’s far enough out that I don’t have an informed opinion. If you check out the work Sheila Nirenberg is doing at Cornell in decoding the message stream from the retina down the optic nerve, you’ll see that real progress is being made on this stuff, but we’re a long way from having it all figured out. I’m not a researcher, so I’ll wait until researchers get all this nailed down before thinking about it in any way except as an SF concept that may be interesting some day.


  8. Kajos says:

    Good article. However, I tend to be skeptical about people talking about AR, as well as about Google Goggles, since, as you said, the technical difficulties are tremendous. For one, with AR the computer needs to model the world to be able to overlay info and so on. This is extremely hard to do accurately with just a camera, let alone a Kinect (which would never be used with AR, since it uses infrared). Yes, you could do some basic stuff like translating text on the fly, or recognizing a face, but I think it won’t get much better than that in the coming ~10 years, and what does work will be iffy.

    • MAbrash says:

      It is indeed hard to do with a camera – but not impossible. And it’s a lot more manageable at room scale, or on a tabletop, rather than anywhere in the world at large. I think room scale AR is a solvable problem in the relatively near future.


  9. Alexandros Gekas says:

    In terms of entertainment I feel that VR is absolutely worth pursuing, not only in the short term but also for the future, for one main reason: most people are lazy when it comes to relaxing and having fun. Entertainment, whether we’re talking about movies, gaming or any other medium, is mostly passive in the sense that much of the person’s body doesn’t participate in the experience. Movies are an audiovisual experience so they engage your eyes and ears, games go one step further by engaging your hands and motion controls try to include more motions, but in every case the range of motions you have to execute remain limited. My guess is that this is not only due to technological reasons, but also because home entertainment is mainly about relaxing and having fun while not straining yourself.

    In that context, VR seems to me more attractive than AR. When you sit down to play an immersive game, you want to lose yourself in the game’s world and forget your everyday troubles. You also want to feel like the hero, and that is much easier to achieve through the abstraction of the protagonist’s skills, i.e., the hero being able to do some awesome stuff with only limited movement on the player’s part. To use an everyday analogy, many people go out to play football, but a huge number of people just sit at home and watch football on TV, because they don’t have the skills or the physical endurance to play football, or they just can’t be bothered.

    I don’t know if the above made any sense, so I’ll just sum it up with this: VR seems to me like the most attractive proposition for the future of home entertainment because it is all-inclusive, which means that most people will be able to use it. I don’t see AR replacing that for many, many years, so VR will be relevant. Besides, Picard had his fun in the holodeck from time to time, but when he really wanted to relax he would always pick up a book, pour himself a cup of tea and sit on the space sofa :) I think the future lies with whichever technology requires the least amount of effort to enjoy.

    Your article was truly fascinating, Michael, and I enjoyed reading it. Keep it up!

    • MAbrash says:

      Personally, I think people want to be seated most of the time when gaming, so I agree with you that that’s a plus for VR. However, it’s equally a plus for tabletop and room scale AR.

      You never know, though – maybe walk-around AR will be what gets people up and moving again, which would produce huge health benefits. I wouldn’t bet on it, but it could happen.


  10. Wai Ho says:

    Firstly, thanks for another great post. I have a silly grin on my face (according to my girlfriend) from reading your thoughts on VR. So many possibilities :)

    Anyway, I think there are a few more factors at play that _may_ finally make VR no longer “5 to 10 years away.” The short-term product will probably be VR head mounted displays (HMDs) that are low cost and have wide FOV and head tracking, as they can be easily integrated into existing technologies and content.

    Some pseudo-random thoughts about VR HMDs:
    1) Escapism is bigger than ever. People like to get away. This includes mental holidays through games, TV, movies and books. VR displays add to the immersion of any kind of escapism that is primarily visual, virtual and spatial-temporal. The fact that wide-angle VR display hardware is within the price range of even an academic like me makes it even more attractive :)

    2) There is confluence with new sensing technologies such as RGB-D (Kinect etc.) and services like Kinect@Home that offer 3D model building to the layman at low cost and without the need for prior training. This has the potential to add large amounts of content to VR environments, including content from one’s own home (or a friend’s home, office, you name it!). The good thing with this scenario is that the captured models do not have to be perfect (in fact, one may want to jazz them up for VR). This loosens the accuracy requirements on the modelling and makes low cost tools usable. I am playing around with this idea at the moment, and it seems pretty feasible.

    3) A portable wide FOV screen is awesome for work, especially if it provides more screen real estate than tablets or phones through higher pixel count and a virtual pan-tilt desktop driven by head tracking. This kind of display will actually help lower cognitive load, as I find (citation needed) that having more screen space and virtual desktops helps with a variety of tasks. Now that people have been consistently stuck with small screens with fewer usable pixels, VR displays are an obvious remedy. This also opens up new form factors for wearable computers, as you alluded to in your post.

    4) There are potentially really great applications that can only exist with a good VR HMD, such as useful telepresence (as opposed to teleconferencing via a narrow FOV webcam and display). A lot of robotics research uses wide FOV sensors to get around aperture (windowing) problems in sensing. Humans also use wide FOV sensors to navigate. VR HMDs may be the bridge to let humans really immerse themselves in VR environments that are built real time from real world environments. Imagine having a personal CAVE VR room in your backpack, linked to another part of the world (ignoring obvious latency issues)!

    5) Eye strain. I stare at monitors for hours on end, and most of my colleagues, friends and family spend _way_ too much time looking at digital displays. Many of them stare at tiny screens all the time. Being able to take off your glasses (if any) and have a diopter-adjusted VR HMD focused far away sounds fantastic for getting some eye rest. Colour calibration and proper illumination control (as in photons from screen to eye) are also easier with a VR HMD, as there are few to no stray photons from the environment around the user.

    6) Privacy. While privacy screens and filters exist, it is always tricky to use computers in public, especially when entering passwords. A touchscreen keyboard without any visual buttons (think Swype) would be very secure if the display is only an HMD.

    7) Travel. Having been on a few long flights recently (being Australian sucks in this regard), a VR HMD would have made passing the time a whole lot more fun and relaxing.

    8) Education. I see so many applications in this area. A lot of data and systems are 3D (or more). Diving into a model via tools like MATLAB is a great way to play around with maths and understand something. A lot of people I know in Engineering think visually. Low cost VR displays add to this kind of data visualisation experience.

    Sorry for the wall-o-text. Coffee plus the post really got me hyped up :)

    Wai Ho

    • MAbrash says:

      Lots of good points there.

      The one thing about captured VR models is that I don’t anticipate anyone ever walking around in them; being blind and walking around just don’t go together, no matter what’s being displayed. Sure, you could set up an empty CAVE-type room, but almost no one will ever do that in their house, so it’s a tiny potential market. To me, the interesting scenario is sitting in a chair or sofa and navigating virtual environments with a controller.

      Do you have a citation for #3? It would be useful to have some solid research backing this up. I will note that HMDs are not going to have high pixel density for a while. At 90 degrees horizontally, even 1080p is only 12 pixels per degree; the monitor I’m looking at now is about 50 pixels per degree. It’s physically possible to have higher HMD pixel densities, but the projectors aren’t there at this point.


      • Wai Ho says:

        Agreed about walking and VR. A chair or sofa with haptic feedback would be ideal for navigating VR environments. I was also thinking of scanning in objects and people as avatars, which is quite doable now. I always wanted to play Quake through virtualised real-world environments with avatars that look like my friends. It may now be possible, albeit with still-crude visual appearance.

        According to Wikipedia, we humans resolve ~50 cycles per degree (CPD), and I have seen numbers between 30 and 150 quoted, with the latter based on retinal photoreceptor density, which is probably not correct, especially when we are moving our eyes. This means we “max out” at ~100 pixels per degree, assuming no super-sampling effects, which may happen when we move our eyes. That would mean 9000 pixels over 90 degrees, or a bit more than an 8K UHDTV panel per eye. Not too crazy, but definitely difficult in the confined space of an HMD, especially in the next 5+ years.
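        As a sanity check on those numbers, here is the arithmetic, assuming the usual two pixels per cycle at the Nyquist limit (the function name is just for illustration):

```python
def required_panel_width(cpd, fov_deg, pixels_per_cycle=2):
    """Pixels needed across fov_deg degrees to resolve cpd cycles per degree.

    Nyquist: each cycle (one light/dark pair) needs at least two pixels.
    """
    return cpd * pixels_per_cycle * fov_deg

# ~50 cycles/degree acuity over a 90-degree horizontal FOV:
print(required_panel_width(50, 90))  # 9000 pixels per eye
```

        So roughly an 8K-class panel per eye, as above; dropping to 30 CPD would still demand 5400 pixels.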

        However, our retinal resolution is log-polar, and our eyes can only saccade so far away from facing front-on. This means we only need the high resolution near the center of the display, so it may be possible to achieve the required CPD with lower-density displays given the right optics and/or combination of displays. That may also help reduce the computational load of rendering and pre-warping so many pixels.

        I have definitely noticed better productivity (or at least feeling better when I work, also an important metric) with a laptop + 2560×1440 display than with just the 1080p display I use on my gaming machine. Anecdotal evidence from colleagues points to a max-useful-pixel cap of around 2x 2560×1440. I understand that this is by no means conclusive evidence, but the number of people who seem to prefer more screen and pixel real estate is certainly large.

        A quick Google search found these references re: multiple displays. I couldn’t find much in the peer-reviewed literature, which is surprising given how many people use more than one monitor (or virtual desktop).

      • George Kong says:

        While I wouldn’t ever discount the willingness of people to sit on a couch and twiddle their thumbs even when other options are available, it seems that if a functional full-body motion alternative were available, people would likely be as willing to engage in that as well.

        I mean… there’s certainly precedent for it in the form of the Kinect and, to a lesser extent, the PS Move and Nintendo Wii. The unfortunate manner in which those motion systems were implemented forced an either/or situation between motion controls and classic controls, creating an unnecessary dichotomy in the marketplace and in the culture between casual and hardcore gamers.

        As a result, most traditional game developers continued to base their games around classic game pad controls, with little movement in the motion area.

        But if it were done right – combining a Razer Hydra-style split motion controller + Kinect + VR – it would provide a substantially more valuable and immersive experience for gamers than being limited to controller inputs.

        The Kinect would track limb and body movement, while the motion controllers would allow you to navigate menus and control context in a much more responsive manner.

        The real trick to this proposed setup is to discard the notion that ‘it’s necessary to move forward’ in order to simulate movement. That is, allow the player to walk, jog, or run on the spot, and use the rate of movement of the knee relative to the hip to calculate the speed of movement of the character.

        The more accurate this virtual model of movement is, the more accurate the sensation becomes – i.e. as you lift up your leg slightly and slowly, your avatar mirrors your leg-raising movement, and as you drop it, so too does your avatar.

        Direction of movement can be coupled to the direction in which you push the analogue stick, or maybe made relative to a home point (i.e. moving a couple of feet from a predetermined center point determines your direction of travel, while the rate at which you move your legs determines your speed).
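        Roughly, the walking-in-place idea might be sketched like this (the gain, thresholds, and sample data are invented for illustration; a real system would use a tracked skeleton):

```python
def avatar_speed(knee_y_samples, hip_y_samples, dt, gain=2.5):
    """Map vertical knee motion, relative to the hip, to forward speed (m/s).

    Subtracting the hip height removes whole-body bobbing, so only the
    leg-raising motion drives the avatar. 'gain' is an invented tuning knob.
    """
    if len(knee_y_samples) < 2:
        return 0.0
    rel = [k - h for k, h in zip(knee_y_samples, hip_y_samples)]
    # Average absolute vertical velocity of the knee over the window.
    v = sum(abs(b - a) for a, b in zip(rel, rel[1:])) / ((len(rel) - 1) * dt)
    return gain * v

# Lifting the knee 0.3 m and dropping it again over half a second:
knee = [0.5, 0.65, 0.8, 0.65, 0.5]
hip = [1.0] * 5
print(avatar_speed(knee, hip, dt=0.125))  # ≈ 3.0 m/s
```

        Lifting the leg faster raises the average velocity and thus the avatar’s speed, which gives exactly the proportional mirroring described above.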

        This effect isn’t going to be as convincing as simply letting players roam freely with a backtop rig or some such – but it’ll be a significant step up from solely gamepad-based controls, and it provides a functional alternative to various omnidirectional treadmill designs.

        More to the point, it’s going to make movement and light exercise an implicit part of gaming – helping to reverse the decades-long trend of sedentary lifestyles and growing obesity. Well… that’s the dream anyway.

  11. George Kong says:

    Great thoughts Michael.

    I don’t think there can be any doubt, for anyone who considers the future of computing technology long and hard enough, that the obvious end point of converging technology is a duality of AR and VR. High-quality VR is basically post-scarcity technology at its best, allowing all sorts of experiences independent of material resources. If the cards are played right, the global socio-economic landscape will be rewritten through the emergence and iteration of these technologies.

    Sorry to pile the pressure on.

    As for AR input… it seems like the obvious solution to me would simply be some sort of camera based motion system akin to Kinect or the Leap motion sensor… no?

    Of course I imagine that there are power draw issues that complicate its use and inclusion at this stage – but ultimately, the ability to track your hands in virtual/augmented space would be the most logical way of interacting with those augmented elements.

    Sprinkle on a dash of voice, maybe some brain-wave EEG scanners, and even continue to use touch technology (either letting users navigate submenus on their phones, or via built-in touch strips à la Google Glass, or having such a thing on a pendant worn around the neck), and you’d have a very robust system of interaction for both AR and VR.

    • MAbrash says:

      Voice is great when it works and is appropriate, which is… sometimes.

      Brain waves don’t work well enough, at least not yet.

      Touch input would be fine, but in general you have to look at the screen or else limit interaction to swipe direction only, plus it ties up both hands.

      As for camera based motion… I’ll write about this one of these days, but it has a bunch of problems. First, there are occlusion problems if you only have head-mounted cameras. Second, you have to hold up your hands for this to work, which is tiring, often socially awkward, and not possible if your hands are otherwise occupied (remember, this is AR). Also, while it may be intuitive, it won’t be very satisfying without haptic feedback; it’ll lead you to expect experiences similar to the real world, which won’t be delivered. I’d be more enthused about pants with touchpads in the thighs (not that I know that that’ll be satisfying either, but it would at least allow for less tiring and less obvious motions, and will have at least some physical feedback to your fingers, much like a phone). But honestly, I don’t know what the answer is; actually, it’ll probably be multiple answers, with different interaction modes used in different situations.


      • George Kong says:

        This is why a pendant is an important part of the equation – you can offload weight and battery to the pendant AND include cameras that increase the FOV of the system. Use fisheye lenses to increase the FOV in both the head mount and the pendant, and suddenly a much larger area becomes detectable.

        Additionally, while details on the Leap Motion detector are scant at the moment, that direction seems pretty promising (at least from what I can infer about its FOV and functional detection area).

        Haptics can be a problem… but one possible solution (that is also cheap and easy to implement for the consumer) is providing appropriate audio/visual cues. It’s not a replacement, but by strengthening and increasing the ‘bandwidth of information feedback’, we get a far better sense than with little or no feedback.

        Literally, this would boil down to: as your fingers pass through a virtual button, it provides a small audio cue, and a visual cue showing that it’s been intersected. To interact with the button, you stab it through with your fingertips (or whatever other suitable interaction a designer comes up with), rather than simply passing through it from the side. That interaction of course has its own set of audio and visual cues (perhaps a meaty click instead of a soft ‘tink’, and the box or button lighting up, rather than just expanding slightly).
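        A rough sketch of that interaction logic (the depth thresholds and names are invented for illustration):

```python
# How far the fingertip has pushed past the button's front face, in metres.
HOVER_DEPTH = 0.005   # grazing the face -> soft 'tink' cue + highlight
PRESS_DEPTH = 0.020   # stabbed through -> meaty click + fire the action

def button_state(finger_depth):
    """Classify a fingertip's interaction with a virtual button."""
    if finger_depth < HOVER_DEPTH:
        return "idle"
    if finger_depth < PRESS_DEPTH:
        return "hover"    # play the soft audio cue, light the button up
    return "pressed"      # play the click and trigger the button

print([button_state(d) for d in (0.0, 0.01, 0.03)])
# → ['idle', 'hover', 'pressed']
```

        The two-stage threshold is what distinguishes an accidental side-swipe from a deliberate stab-through.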

        It’s not going to compare to the tactile feedback of a button laden keyboard, or even a smooth touchscreen glass – but then those devices won’t allow you to have the degree of flexibility that a virtual touch and motion gesture based interface will have, even in the near future.

        In the near and middle term, we’re going to have to weigh the things we gain against the things we’re giving up, and realize that if we’re gaining more than we’re giving up, the jump is worthwhile!

  12. Very interesting observations – good luck on your development.
    My area of interest is the out-of-home entertainment approach to this technology (I will pass on commenting on specific projects we are linked with, and also leave the AR side of the business to another discussion).

    Having worked in the VR scene since the VPL / Virtuality days, I was interested to see the Oculus Rift approach, and evaluated in detail all aspects of the prototype doing the ‘dog-and-pony show’ at present. You can see a detailed evaluation by me of the system’s limitations here:

    I would like to address two aspects of consumer VR that have been brushed under the carpet by a consumer game media hungry for anything of interest from E3, and desperate to take the gaze off the hemorrhaging consumer development/publishing sector.

    – Simulation Sickness (SS)
    I have only recently read, from PAX and QuakeCon reviews, about the recognition of the latency issue of the Rift prototype. I know John addressed it in his keynote, but the consumer media seemed to ignore or deny that this was an issue. Now we have had a number of editorials from less ‘vested’ sources that suffered nausea from trying the system. I hope that all recipients of the Rift SDK will seriously ‘lock down’ the performance specifications of their planned experiences to address the limitations of the hardware and negate SS.

    I oversaw development of two major VR attractions (still operating) that had to address the danger of “simulation sickness”, and created detailed guidelines for the software team to abide by to avoid the 60-second onset of “…confused-state-induced nausea…” – I would like to see consumer game developers avoid blindly jumping into developing content that breaks the rules that have already been established in the deployment of VR in training and simulation over the last 15 years (thinking they know best, only to hit the same walls).

    – Quality over Deployment
    My other key issue is the danger that Oculus will be forced to meet the $300 proposed retail price for the system, and so cut back on quality/performance to such an extent that VR could see another ‘Virtual Boy’ approach to the technology (hype-it-up-and-bang-it-out!).

    From my own vested interest, I hope that VR is first deployed more fully in a public-play (out-of-home) setting, so that an expensive HMD can be fielded without having to pander to demands from consumer game executives. My concern is also that the growing demand for the Rift to consider a console placement could see the project steered by a group of executives desperate for a ‘game saver’ against the less-than-stellar reaction from the investment community to their proposed Gen-8 consoles.

    Having stuck my head in the majority of HMD / VWD systems, I know the limitations, and also the thrall of a good virtual environment represented through a virtual viewer system. It is also an experience that can cause over-expectation and enthusiasm that can do more harm than good. Remember, for some of us this is the second time that VR has caused a stir (and we in training & simulation would like to share the lessons learned)!

    Anyway, those are my observations from a position of current experience with the high-end approach to VR; I will watch with interest how you approach the consumer deployment.


    • MAbrash says:

      Simulator sickness is absolutely an issue; in fact, I’m very sensitive to it personally, even when just playing FPSes on a monitor. However, as far as I’ve been able to learn, its causes are not well understood. There’s a theory that it results from a discrepancy between VOR (our built-in IMU) and OKR (our visual tracking), but it’s only a theory. It would be cool if you could provide a pointer to the “rules that have already been established in the deployment of VR in training and simulation over the last 15 years,” because I haven’t come across those yet.

      The problem with public play is that it’s just not what people do any more. That would have been the way to go in the 1980s.

      What aspects of an HMD do you think are cost-sensitive enough so that a consumer-priced HMD would not be good enough, but an expensive public play one would?

      I haven’t heard of a growing demand for the Rift to consider a console placement, and frankly it doesn’t make sense to me, given that there’s no magic technology in the Rift as far as I can see; any console manufacturer could prototype one in a few weeks at most. Can you point me to something that talks about that demand?

      It’s encouraging to hear that you have had great VR experiences!


      • williamepps says:

        Mr. Abrash, we have already been discussing the issues of “Valve motion sickness” from before the Rift, and now including the Oculus Rift. I am concerned developers are going to make the same mistakes all over again.

        I think Kevin makes a good point, and I have posted a few other threads over at OculusHub relating to these problems.

        When Palmer first came to me (I was martinlandau at that old forum – space1999), I pointed him to the information on Howlett’s LEEP VR site, but those links are dead now, so I cannot provide you with what I provided Palmer – sorry. I am not sure if that is what Kevin is referring to, but the LEEP people were pretty far ahead of everyone else. I would also be interested in anything Kevin could share beyond Howlett, Bourke, or others.

        • MAbrash says:

          We’ve been learning all we can from the work already done in the field; any pointers you want to post to public knowledge would be cool.

          I took a quick look at OculusHub, and saw you wrote this:

          “A lot of people get sick while playing fps’s because of the limited field of vision. Believe it or not, most fps’s have a fish eye lens, it’s just not as prominant as the alien’s in AvP. One game that doesn’t is Half Life and it’s sequels, because of this the Valve motion sickness is notorious”

          I’m not sure what it means. Could you explain?


        • Alex Howlett says:

          Oops. Which links to the LeepVR site are no longer working? After my father passed away over the winter, I went through and removed a lot of the marketing-type stuff from the site. I tried to leave up any information that I thought might be of use to anyone interested in VR or the LEEP stereo photography system.

          If there’s any information you need, I can probably get it back online. And if you were actually getting 404 errors, that’s a bug. =)

          • Security Guard Class 4 stanley tweedle says:

            Hey Alex, there is nothing I need, but thank you. Your father’s work was great! I would like to ask you, though, why you felt a need to change the links and “remove” the marketing. I say this as someone who worked for IBM, where some years 70% of our budget went to marketing, so marketing is very important – CEO Gerstner told me it was the MOST important! He confessed our entire PC division was run at a loss just to keep “marketing” forces leveled for the mainframe business (where the real money was).

            Specifically, these links below do not work or link the way I remember them. The post was from Aug 2009, so maybe you can go back to the archives of your site from that date?

            Re: HMD help, and introduction.
            Palmer start here

            Why full FOV is as important as stereoscopic 3d for our gaming experience.

            In fact just read all the links there, should learn a lot.

            Mr. Howlett, let’s look at the real history of what has transpired, from what I remember. You had this AWESOME marketing at those links, with the picture of an X-ray skull with the large lenses, and some great marketing material. It enthralled me, and when Palmer got sent to that same marketing he got very excited. If you care to search, there is a Palmer post at MTBS3D where he is wetting his pants looking at your marketing material at those links. It got him EXCITED; the marketing was very effective, and he was gushing about how it was finally revealed to him why and how large FOV was so important.

            Perhaps without the marketing, Palmer wouldn’t have been as excited, and the right neurons wouldn’t have gotten stimulated? And Carmack wouldn’t have gotten a Palmer “LEEP-marketing-inspired” Oculus device? Honestly, your cool marketing was the reason I was posting the links to various places around the web; without the marketing and that k3wl X-ray picture, blah.

            Anyway, while I have your attention: I already raised some questions to Mr. Abrash below and would like your take if you have the time. Thanks.

            Abrash just downplayed stereoscopy versus parallax, relating to the Rift I suppose, but below is one neuroscientist’s medical account of how stereoscopy is far more important. Now I am confused: which is more important, parallax or stereoscopy? Can anyone point me to more info? Thanks.

            M.Abrash:”Stereoscopy is not that important. Parallax is more important: we need good head tracking”


            The whole article is good, but here is the money paragraph:

            Entering awe-inspiring European cathedrals I always had to keep moving, sometimes to the annoyance of my companions, to appreciate the dimensionality of the space from parallax alone; I suspect that I could now experience them even better from stereo disparity alone. A year ago Dr. Suzanne McKee gave a masterful talk on stereopsis at our department; I objected that parallax could provide the same information, but she informed me that stereopsis provides finer-grained stereoscopic information than motion parallax, at lower thresholds. That took me aback, as I had thought and taught for years that the two sources of 3D information should be equivalent, one successive and the other simultaneous.

          • Security Guard Class 4 stanley tweedle says:

            Also Alex if you have the time,

            This is but one iteration of many; I was using a J-Dome clone myself.

            The “marketing” material at your old site convinced me I needed full-FOV gaming. I made some objections to Palmer many years ago that an HMD was too hot, too tiring, gave me a headache with the headband cutting off circulation, etc. I am lazy and like sitting back in my recliner, but wanted that full-FOV experience. I can’t understand why more people, after being exposed to the memes of full FOV, don’t want more of the TOOB- or J-Dome-like experiences; they never seem to take off. Maybe if you or Abrash leaked a few pictures (with k3wl marketing a consideration ;)) to the internet playing on one of these dome systems, it would generate excitement? I imagine if Valve would start developing for TOOB-like displays, we could bring full-FOV gaming to far more people than just the niche HMD community. What do you think?

          • STRESS says:

            @Security Guard Class 4 stanley tweedle

            Sorry, this is a reply to the above, but this strange blog commenting software doesn’t allow me to reply directly under your post.

            Also Alex if you have the time,

            Thank you for posting this, this looks very interesting.

            The “marketing” material at your old site convinced me I needed Full FOV gaming. I made some objections to Palmer many years ago that an HMD was too hot, too tiring, gave me a headache with the headband cutting off circulation etc.

            Exactly my reservations, and I think we’re finally getting to the heart of the problem; I am pretty sure that the majority of people agree with you on this one. But it looks like people like John and Michael are stuck on the HMD idea – I don’t understand why they are so keen to repeat the same mistakes from the past.

          • MAbrash says:

            I’ve been wearing the Rift prototype, and it seems pretty comfortable to me. But it’s certainly possible that HMDs won’t be the way to go. I just think this is too complicated, and technology is changing too quickly, to be able to figure it out deductively; the only way to find out is to try it. Which is what we, and a number of other people, are doing. Should be interesting and informative.

          • Alex Howlett says:

            You’re welcome. I’m glad to hear that our effort inspired you and others to get excited about VR. I cleaned up the site for various reasons, but mostly I wanted it to be useful, historical, and concise. It didn’t seem appropriate to be marketing a product that doesn’t exist and can never exist.

            LeepVR never had any money, so we never had a budget. But even if we did, spending 70% of the budget on marketing seems silly to me. Then again, I’m nowhere close to being a businessman.

            I guess I didn’t realize how many people cared. It’s much appreciated. Searching through MTBS, I see that Palmer was the one who broke the news of my father’s death to that community. I remember that thread and I really appreciated it at the time. He would be honored to know that his legacy in some way helped spark a new generation of VR fans. Thanks for doing your part to spread the word.

            There have always been plenty of VR fans and dreamers, but technology is in the right place now for something serious to happen. The world needs people like you, Palmer, Michael, John, and Mark Bolas to go out and get it done. =)

            Regarding the old version of the site, we’re lucky that the internet gods have provided us with the wayback machine. It means I don’t have to do any work to help you out! =D

            There’s the old version of the site in all its glory. My guess is that it’s not as awesome as you remember. Thanks for your compliments on the “x-ray skull” image. It took a lot of work to get it to look okay. I used actual LEEP lenses for the eyes. It’s a composite of two photographs with some serious touching up.

            As a bonus, here’s a YouTube interview with my dad from before he died:


            He doesn’t say much about VR, but I figured I’d share it in case you hadn’t seen it yet.

            Regarding “parallax vs stereoscopy,” it’s important to note that stereopsis is simply the result of the brain interpreting parallax (binocular disparity) between the two eyes. They’re both parallax. One is temporal and the other is positional. You get motion parallax in normal 3d-gaming (Quake). You get stereopsis if you pop on your 3d-glasses and flip the renderer over to stereo mode (or whatever the kids do these days).

            I wouldn’t say the two forms of depth perception are equivalent. For one, motion parallax applies to your entire field of view. Motion parallax is immersive. It provides you a visual sense of three-dimensional positioning within your world. Stereopsis is not immersive. It’s just a small area of your vision in the middle. On a standard monitor, your field of view isn’t wide enough for this distinction to matter.

            Does the world seem any less real when you close one eye? That’s the difference that stereo makes in terms of immersion. For me, the answer is “No; not really.” I agree with Michael on this and my father did too. The key is still field of view. If you don’t have a wide enough field of view, you lose your sense of being there.

            That’s not to say that stereo doesn’t matter. It does. When you’re looking at small objects, Dr. McKee is absolutely right that stereo provides more precise information. With stereo, your brain can better figure out the shape of objects in your hand and the distances of close objects. On the other hand, far away objects and big objects (cathedral, the moon, etc.) only provide a sense of depth when you move around.

            A problem I noticed when I went to see “How to Train Your Dragon” in 3D was that some of the scenes felt like I was staring into a miniature diorama. The more stereo effect you add to something, the smaller it appears to be. That’s because in real life, the stereo effect increases as objects get closer and smaller. One thing that I love about movies is that they seem larger than life. Not so much with this one.

            If you want the scale of your game to seem grand, you’ve got to minimize the stereo effect. If I get hit by a shrink ray (world-embiggener) in a game, the appropriate effect would be to bring my virtual eyes closer together and reduce the stereo. If I’m Mario and I get the Super Mushroom (or whatever it’s called), my stereo separation should increase so I feel huge and the world looks tiny around me. If you want your ship to seem fast, one way to do it is to make your environment seem bigger.
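            In renderer terms, the rule boils down to scaling the virtual camera separation with the player’s scale (the function and constant here are illustrative, not any real engine’s API):

```python
DEFAULT_IPD = 0.064  # metres; a typical human interpupillary distance

def eye_separation(player_scale):
    """Virtual camera separation for a player player_scale times normal size.

    Halving the separation makes the world read as twice as large;
    doubling it makes the world read as a miniature.
    """
    return DEFAULT_IPD * player_scale

print(eye_separation(0.5))  # shrink ray: narrower eyes, world feels gigantic
print(eye_separation(2.0))  # Super Mushroom: wider eyes, world feels tiny
```

            The same one-line scaling is why stereoscopic films that exaggerate disparity end up feeling like dioramas.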

            I’m not sure I understand the benefits of a mini-dome. Maybe I’ll have to read more about it, but I don’t see how the FOV would be any wider than that of a flat screen. In the omnimax dome, you’re looking out from the middle and your viewing position from within the dome hardly changes during the course of the movie. A mini-dome seems like it would be a very different experience. Besides, I don’t know if me being involved with a project has ever generated excitement. =P

            Unfortunately, I’ve been so far removed from the VR scene lately that I’m probably more of an outsider than most people here. I’ve been playing catch-up on the latest in VR for the past few days and I’m really excited about what the future holds.

      • Adam Kolar says:

        Well, the cause of simulator sickness seems obvious: when the information from various senses (mainly the inner ear and the eyes) doesn’t match up, the body concludes that the cause is probably some kind of poisoning and tries to detoxify itself by throwing out the contents of your stomach. :))

        • MAbrash says:

          Unfortunately, that doesn’t explain why some people suffer badly from simulator sickness while others in exactly the same situation have no problem at all. And why different people have problems with different things.

  13. JazW says:

    Hi Michael,

    Thanks for taking the time to write such lengthy posts!

    Decoupling looking, aiming and movement makes traditional input devices kind of useless and I was wondering what your plan is regarding interface hardware for your experiments?

    The Leap Motion controller looks like it has some amazing potential in a VR environment, but it’s not out until February next year, so the current front-runner among Rift enthusiasts is the Razer Hydra.

    • MAbrash says:

      You’re correct – a whole new syntax of looking, aiming, and moving needs to be worked out, much as the syntax of such things had to be worked out for FPSes. I remember when it wasn’t clear which way mouselook should move when you moved the mouse; we’ll have to work through the same things here. The best controller at the moment for VR games in general is probably the keyboard and mouse, but the problem is that you can’t see it with an HMD on.

      We’ve done some stuff with the Hydra and also with gamepads. Both work well for some things, not so well for others.

      See an earlier reply for a discussion of hand tracking of the sort the Leap does.

      I think in the end we’ll need a new input device for VR to be its best for gaming, and that’s a significant hurdle. I don’t know what that device would look like yet.


  14. STRESS says:

    Still, imho, the wrong direction. Pico-projector-based systems are the far more promising direction; check out work like this:

    • MAbrash says:

      Yes, you’ve been consistent and clear :) It’ll be interesting to see how things develop. Pico projectors have their uses, and solve some problems that are hard with HMDs, but I still have a hard time seeing how they’d be widely useful. I could be wrong, though.


      • STRESS says:

        Sorry to be so persistent about this. But I think it is not the technology that is the problem; it is the fundamental technology solutions chosen that are the actual problem keeping VR/AR out of the mainstream.

        Well, you limited your use cases a couple of answers ago to room-based AR; for those scenarios, pico-projector-based solutions blow wearable glasses-based solutions completely out of the water. I see the same case for VR when you’re restricted to not moving. Plus, pico projectors are more useful for non-interactive entertainment as well, where HMDs have only the privacy argument in their favor.

        • MAbrash says:

          I’m still waiting to see a pico projector seem even mildly interesting for gaming. If you’re playing Battlechess at your kitchen table, where is the projector projecting the game? If you’re playing a puzzle game with a friend in your living room, what surface in that room is a good projection surface? And then there’s the question of what you see when you turn your head.

          Pico projectors may be great for showing pictures or giving presentations, but I don’t see it for gaming.

          As always, I could be wrong :)


  15. Robert "Anaerin" Johnston says:

    It sounds like we’re starting to get back to where we were in the mid-’90s.

    I had (probably still have, if my parents haven’t thrown it out) a full VR system back in England. It was made by Forte Technologies, and was called the VFX-1.

    There’s a brief overview at the Museum of Interesting Technology:

    There were some technical limitations – the VR interface card was ISA based, and it used the VESA local bus connector on some graphics cards to transfer the video. Maximum resolution was 320x240x2 in 256 colours, and the head and puck tracking was done using an odd proprietary connection called the “Access-BUS”. The second release used a VGA output and the (brand-new at the time) USB connection rather than the ISA card, but this setup (The VFX-3D) is extremely rare to find these days.

    However, the HMD was truly awesome. Individually adjustable optics, integrated (high-quality) stereo headphones and a mic on the visor for full-duplex communication. The HMD also had full magnetic head-tracking in all 6 directions (using the earth’s magnetic pull), and it came with a “Puck” controller, that was also tracked in 2 directions as a virtual joystick, so you could look in one direction and shoot in another. The system was natively supported in Quake, QuakeWorld, Descent, Doom, DN3D, DirectInput and LOTS of other titles, and had supplied software that would let you “mouselook” with the HMD if your game wasn’t directly supported.

    While it was kinda heavy, it was extremely comfortable to wear once you had it on, and given the restrictions in resolution, it was a truly amazing experience to use. I’m certain it wouldn’t be that difficult to re-fit the device with better optics, although you would probably have to start from scratch in order to get the tracking systems working under any modern gaming title.

    Apparently, Forte Technologies’ assets were bought by Vuzix, but the VFX-1 units (and especially the VFX-3D boxes) are very rare and hard to find these days.

    Still, it’s interesting to see just how far back we’ve come. Perhaps this is the decade for at-home VR…

    • STRESS says:

      This is exactly the point I made a couple of posts ago. It would be worth analyzing why it failed so spectacularly.

      That would be my top-priority task, if you ask me; you should never be blinded by your own idealized world view.

      The ecosystem was starting to be there, so we are at the same point again, as you brilliantly pointed out. On the hardware side, technology hasn’t improved that significantly in the last 15 years when it comes to wearable displays. Resolution has improved, and tracking obviously has, but it is not a quantum leap forward. So if it didn’t fly in the past (people obviously were not convinced), it is very likely it will not fly again, unless you attack the fundamental reasons why people didn’t accept it.

      • MAbrash says:

        There’s a lot more to wearable than tracking and displays, although they’re important. CPU and GPU price/performance and performance/power have increased by orders of magnitude. Battery technology is vastly better. Wireless is ubiquitous and cheap. Everyone is used to being online almost everywhere and almost all the time. Those are huge differences for wearables compared with 10 years ago.


        • STRESS says:

          CPU/GPU performance is pretty much irrelevant in this case, as it just increases the realism of the visual presentation. That has nothing really to do with VR/AR per se. Games look better than they did in the past, but they look better in a non-VR environment as well. I am almost certain that people knew back then that graphics would improve, so I am a bit skeptical that this is the main reason behind the failure.

          Wireless and battery technology, yes, help to reduce the weight, especially the battery, which most certainly could be a positive factor in increasing comfort; but beyond that they do not add much in terms of VR, since you have already established a stationary situation.

          And finally, I have a hard time seeing what online connectivity has to do with VR.

          • MAbrash says:

            I’d say CPU and GPU performance could make a big difference for CV and tracking, as could the far better, lighter, and smaller cameras and IMUs that are available.

            Lower weight, especially for batteries, makes a big difference for VR if you want to avoid having wires, and weight makes a huge difference in general even when you’re stationary.

            I’d think online multiplayer gaming would be important for VR.


  16. Aaron Martone says:

    One of the things that has to be rather difficult and unpredictable in the development of AR/VR is how the end user will adapt to the early iterations of hardware and software. As I’m sure you’re well aware, predicting issues before they occur can be almost impossible (and sometimes issues arise that you’d never have guessed possible), such as the psychological and metaphysical ramifications. The old adage “Just because you CAN do it doesn’t mean you SHOULD do it” comes to mind.

    That being said, Valve (and their employees) have always exuded a sense of “Out-of-the-box” thinking that I think would go a long way in maturing the AR/VR experience. Technology has moved rapidly forward, but gaming in general has somewhat of an intangible bar when it comes to the user experience. How amazing and “world-altering” (literally) would it be to hurdle that bar with the myriad of opportunities that AR/VR can provide?

    Not a topic I think about often, but definitely one that seems infinitely expandable. Will be very interested in seeing where this tech takes gaming (and how it can be adapted to other fields as well).

    • MAbrash says:

      Yes, I think I said in one of the replies that you can only think about something deductively for so long, and then you just have to try it. We’re going to learn a lot of unexpected things once the hardware starts being sold.


  17. Ken Kopecky says:

    Hi Michael,

    This is my first time reading your blog, and I loved the post. I’ve been working with VR for about 8 years now, mostly with CAVE systems, and I think the points you make about VR being more accessible than AR are completely true.

    One thing I’ve noticed about VR is that actual stereo immersion goes a very long way towards compensating for low-quality art assets and visuals. These days, it seems like AAA game studios are so hung up on having better graphics that production costs are getting out of control. A single game that isn’t a huge success can put a studio out of business. High pressure to succeed is the mortal enemy of innovation! That said, do you think that the relatively easy jump to VR (imagine the HMD you could come up with if you spent $50 million on research rather than another Call of Duty game) could make existing, or even last-generation graphics “good enough” that game studios could catch their collective breath?

    Human-Computer Interaction PhD Candidate, Iowa State University
    Co-Founder, Hex-Ray Studios

    • MAbrash says:

      Interesting thought! I don’t know the answer yet, but it’s certainly possible.


    • STRESS says:

      Highly doubt it. Big studios are mostly run by accountants, and the only thing they are really scared of is taking risks. VR is a huge gamble and a high-risk factor, especially if you look at the past failures.

      This technology will most definitely not come from a major gaming software house; it has a better chance of succeeding if it is backed by a company with lots of manufacturing resources, or a company that is more diverse: Sony, Samsung, Apple, Microsoft, Intel, Google, or someone like that.
      But since Microsoft clearly sees a different path to VR and immersive gaming that doesn’t go through an HMD, you can count them out too.

      • MAbrash says:

        I don’t think he was saying the technology would come from a big gaming software company. He was just asking whether the pluses of a good VR HMD might make it possible to spend less on content and yet still have a compelling game that would sell in AAA quantities. Maybe not – but Angry Birds is pretty clearly less content-heavy than Call of Duty, so there is some precedent for a new platform allowing less expenditure on content while garnering large sales. Not that Angry Birds has revenue anywhere near COD, to be fair, but it has been a huge seller for new platforms.

  18. Marcus says:

    Dear Michael,

    I was late to jump in on this very riveting piece you wrote recently. I must say that the current developments going on at Valve and with the Oculus Team are wondrous and thought provoking. When someone of your caliber or the likes of John Carmack step onto the VR bandwagon so to speak, it sounds alarms of anticipation to all aficionados of technology like myself.

    I can’t help but feel we are stepping over a precipice, but not one that leads to a sheer drop; rather, one that will lift us up into the dizzying heights that are the Virtual Reality promised lands. It merely takes a few giants like yourself and others to show us the way.

    I remember the first time watching Quake 2 run natively in 3D: I had butterflies in my stomach, because the realization that this was the beginning of something truly significant and groundbreaking was upon us. Names like Voodoo Rush and Voodoo Banshee dominated the burgeoning products of those days. Now I hear names like Oculus Rift, and I can’t help but see some striking similarities. Not to romanticize the whole period of 3D graphics accelerators too much… but all great products seem to start off with these affectionately named labels, until they go back to a more formal, serialized naming convention like the current ATi 5000 or 6000 series. Back then there was a lot of cynicism about the need for a 3D graphics card at all.

    I can’t help but think that many years from now we will see the very same thing happen to VR units spurred into being by the Oculus Rift phenomenon. Within a little more than a decade, the home consumer market for 3D graphics cards shot up and the technology became very mature. When this happens with Oculus and yet-to-be-heard-of competitors, maybe from the likes of AMD or Nvidia, there is certainly going to be development and maturation of the technology unlike anything the world has yet witnessed. Michael, sorry to overflow with such nerdgasmic musings, but I just wanted to ask whether you can also envisage what I was trying to say about this wonderful new VR home-user industry in the making, and whether you have any thoughts on how it may parallel the 3D graphics market in its eventual success in the coming years.

    Thank you, and by the way, when is your next piece on this exciting subject due? It makes for engaging reading.

    • MAbrash says:

      VR could parallel 3D accelerators, but my guess is that it’ll go more slowly. 3D accelerators just made something that already existed work a lot better. HMDs enable new experiences, and it will take time for the software that provides those experiences to be figured out.

  19. Chad says:

    I just wanted to take the time to thank you for recommending Ready Player One. I finally got around to starting it yesterday, and I finished it early this afternoon. It was a fun read, and I probably wouldn’t have heard about it if you had not mentioned it.

    Reading about virtual realities is fascinating, and I love it when a new blog post from you shows up in my feed reader. I look forward to reading more of your posts.


    • MAbrash says:

      I thought it was one of the most enjoyable reads I’ve had in a long time – and it gave me new perspective about the potential for VR (some of it very long-run).

      Thanks for the kind words!


  20. Romert says:

    When I first heard about the Oculus Rift, I immediately thought about using it with an omnidirectional treadmill like this to control movement. The pricing of those might not get consumer-friendly for some time, but wow, how immersive would that be? I’ve always pictured future FPS games where you could both turn your head AND actually run around. It’s pretty amazing that it’s actually possible with today’s technology. I so much hope VR will go mainstream this time around.

    Very interesting reading by the way, waiting for part 2.