Valve in the New York Times

The Sunday New York Times business section has a nice piece about Valve that talks about our wearable effort, among other things, and includes a few quotes from me.

I’ve seen some speculation online that one of the pictures that accompanies the article, featuring Gordon Stoll wearing a headset, depicts a prototype of a Valve HMD. That might be more plausible if the headset didn’t clearly have the letters “NVIS” on it. It’s an off-the-shelf NVIS ST-50 see-through HMD, with a couple of mounts added and a camera mounted in one of them. It’s a useful platform to experiment on, but it’s definitely not a prototype. It’s also not a video-passthrough AR HMD, as some have guessed; it’s see-through, and the camera is used only for tracking.

Two Possible Paths into the Future of Wearable Computing: Part 1 – VR

Almost exactly twenty years ago, my family and I were living near Burlington, Vermont, and I was working remotely for a small graphics software company in California. They were great people to work for, but I had the sense that their business wasn’t doing well, and living as I was far from potential employers, I had to be proactive in anticipating problems. So I cast about for other work; two different lines of inquiry led to Microsoft, and I ended up as a contractor working on the VGA driver for the first version of Windows NT, which was a little more than a year from being released.

After a couple of months of that, I was offered the opportunity to interview for a full-time position. That was a harder decision than you might think, because if I got the job, we would have to pick up and move across the country. We liked Vermont; it was a good place to raise children, it was beautiful, my wife was working on a master’s degree and putting down roots, and it was a low-pressure lifestyle. I’d have more job security at Microsoft, and my work would be interesting, but we’d be giving up a lot.

In the end, I decided to interview, and when I was offered a job as the NT graphics driver lead, I took it. A few months later, the dev lead for GDI retired, and I ended up in that position for the first two versions of NT. Working on NT was a great experience on the whole. For one thing, Dave Cutler gave me a whole new perspective on what it meant to write good software, and what it took to do that. For another, I helped bring an operating system into existence that directly benefited a lot of people. In his book Flow, Mihaly Csikszentmihalyi describes work satisfaction as a function of having goals that are challenging but not overwhelming, combined with a sense that the work is worthwhile, and my work on NT fell right in that sweet spot.

How did it all work out in the end? Well, I’m not at Microsoft anymore, but we’re still content living in the Seattle area 20 years later, so the move was a good one. Equally important, prior to NT I had worked mostly on low-impact projects for small companies; starting with NT, I’ve gotten to work on interesting stuff that really matters. So things worked out well as far as I’m concerned.

I’d say it worked out well for Microsoft as well, since I played a significant role in getting GDI and the graphics drivers done and shipped, and NT became a huge success. (To be clear, I was just one programmer among many excellent ones on the NT team, and I’d love to work with most of them again. In fact, I’m working with two of them now at Valve; if you were part of that NT team and you’re ready for a new challenge in an amazing environment, drop me a line.)

But here’s the funny part: I completely bombed the first of the five interviews in my interview loop. The interviewer asked me Windows debugger questions; alas, I didn’t know much about Windows, let alone Windows debuggers, back then. I could not possibly have done worse. If they had continued with the usual interview process, there’s no way I would have gotten the job, because I just didn’t have the kind of experience Microsoft looked for in their standard interview loop. However, the next interviewer, Darryl Havens, said, “Okay, that was a waste of time. What do we have to do to get you here?”, and that set the tone for the rest of the day. (Thank you, Darryl!) Darryl knew that the graphics drivers had been badly neglected until I started working on them, and NT couldn’t ship until they were solid; hiring me was the fastest way to fix that.

If Microsoft had stuck to its interview process, I wouldn’t have been hired, and that would not have been good for either Microsoft or me. One implication of this that has stuck with me through the years is that it’s a bad idea to get too attached to a particular way of thinking about or doing anything. The world is complicated and constantly evolving, so it’s essential to constantly reexamine your plans, decisions, processes, assumptions, and mental models to see if they’re still tracking reality.

My first post on this blog talked about why augmented reality (AR) could well be the next great platform shift, and I still think that’s likely to be true. However, as I’ve worked on AR, I’ve been checking my assumptions, and as part of that process I’ve been thinking about whether a drive straight for AR or a path that includes VR as well – especially in the near term – makes more sense. There are good arguments for both sides, and it’s been an interesting exercise in visualizing the future. In this post, I’ll follow one line of thought that argues for an increased emphasis on VR; in the next post, I’ll follow another that concludes that AR should remain the dominant area for R&D, even in the immediate future. I don’t yet know what the correct choice is, so don’t expect any profound conclusions at the end, but the thought processes are interesting in their own right, and provide some insight into how the future of wearable computing could evolve.

As you read, please keep in mind that I’m not saying this is how it will be, but rather here’s a way it could be. The point is not to wrap things up with a neat bit of prophecy, since I don’t know what the future will hold, but rather to get you thinking, and to start a discussion that I’m looking forward to continuing in the comments.

Before I begin, I’d like to make it clear that this post and the next reflect my thinking, not Valve’s, and don’t represent a product or strategy announcement in any way. They’re just thought experiments on my part, trying to catch a glimpse of what promises to be a really interesting future.

A few definitions

If you’re not familiar with VR and AR, VR is the one where you sit down, put on a headset, and find yourself completely immersed in a virtual world like Snow Crash’s Metaverse or Ready Player One’s OASIS (and if you haven’t read Ready Player One, run, don’t walk; it’s a great read, especially if you grew up in the ’80s, but even if not – I didn’t, and I still loved it). AR is the one where you put on glasses and walk around, and find that the real world is still there, but modified to a greater or lesser extent, as in Rainbows End’s belief circles or the Rivet Couture virtual society of “To Hie from Far Cilenia.”

So with VR, you might take a seat at your computer, put on your VR headset, and find yourself in Middle-earth or a starship or a Team Fortress 2 level. With AR, as you walk down the (real) street wearing your AR glasses you might find that there are (virtual) aliens shooting at you, or that when you encounter (real) members of your belief circle they’re wearing (virtual) medieval costumes and glowing faintly, or, to continue the TF2 analogy, that everyone you see is wearing virtual hats.

The sort of AR I just described, which is what I’m going to talk about in this post, is unconstrained AR – what I call walk-around AR, the kind that works wherever you go. That’s certainly the long-term goal, because it’s a platform shift, but for the next few years it’s something of a strawman, because there are a lot of challenging technical issues to be ironed out before it’s ready for prime time. In contrast, highly constrained AR, for example tabletop or room-scale AR, is considerably more feasible than walk-around AR right now, and certainly has some potentially interesting uses. However, it’s obviously not as generally useful as walk-around AR, is less immersive than VR, and is currently farther from a consumer-ready product than VR, with less capable, more expensive hardware. Nonetheless, constrained AR is a strong counter-argument to VR’s near-term advantage, and will feature prominently in the next post.

There’s also a third sort of wearable display technology, which I’ll call HUDSpace, based on the display of 2D information on see-through glasses, much like having a phone or tablet in view at all times; this is the direction Google appears to be going in with Project Glass. I include in this category very lightweight AR such as having people’s names floating over their heads, arrows to guide you turn by turn to your destination, and information popping up when you’re near points of interest. There’s a great deal of value to this, and it’s clearly going to happen, but it’s considerably less technologically demanding than AR or VR, has little opportunity for deep entertainment experiences, seems largely like an extension of the smartphones we have today rather than a genuinely new platform, and is just way less cool to me, so I’m going to focus on AR and VR.

So if AR is where we’re all headed, why is VR worth bothering with? Two reasons: in the long run, VR-like experiences may be how we use our spiffy AR glasses much of the time, and in the short run, VR is poised to take off well before AR.

Why VR is interesting now

Right now, VR is much closer to becoming a consumer product than AR. Perhaps the biggest reason for this is that VR hardware is more capable and easier to make right now. The Oculus Rift, which is intended to ship at a consumer price, has a 90-degree horizontal field of view; in contrast, I’ve never heard of see-through AR glasses with anything like that field of view at any price, and while they may exist, it’s hard to see how they could be made at consumer prices with anything like current technology. (Video-passthrough AR glasses could of course have the same field of view as the Rift, since all that would be required would be to add a camera, but I don’t think video-passthrough AR will be good enough for a number of years, for reasons discussed here.) Also, because VR is used in a fixed location, it can be tethered, sweeping away a host of hard power problems that walk-around AR has to deal with, and enabling the use of far more powerful CPUs and GPUs. Alternatively, VR headsets can be designed to run for just an hour or two between recharges; in contrast, AR has to have the same order of battery life as a phone or tablet. Furthermore, because VR is restricted to one location, it’s much easier to develop tracking technology for. And since you’re not going to wear a VR head-mounted display in public, or walk around with it, it doesn’t have to be as stylish, and while it still has to be light and comfortable, it is considerably less constrained than AR glasses, which can’t be much bulkier than fat sunglasses. Finally, VR can use existing controllers initially; you’ll be able to play VR games with standard game pads, for example, although I think new VR input will have to evolve quickly in order for VR to really reach its potential. In contrast, the input scheme for AR is an open question.

In terms of hardware problems to be solved, VR is closely related to AR, and in many cases figuring something out in VR’s more tractable space will help in AR as well. In this respect, resources devoted to VR R&D aren’t subtracted from AR efforts; in fact, this may be the most effective way to make progress on technology related to AR, because VR hardware can be made fully functional and iterated on much more rapidly than AR at this point.

This is particularly true because a VR marketplace appears to be emerging as I write this, in the form of the Oculus Rift and support for it in Doom 3: BFG Edition, Hawken, and other games, while AR is still some distance from viable products. It’s far easier to push technology forward when there are real customers to provide feedback, real products to provide incentive for better, cheaper components, and real revenue to spur competition, and VR will likely have all those long before AR does.

VR is more approachable on the software side as well. New experiences often evolve from existing experiences; it’s hard to make a complete break with the past in every respect, if only because your audience will be confused, and also because it’s hard for developers to solve multiple problems in a new space simultaneously. There’s a direct path to at least some interesting VR experiences; PC and console games like first-person shooters and flight, space, and car sims are designed for immersion, and should seem like they’re on steroids in VR. It’s even more obvious what interesting HUDSpace experiences are; a few are listed above. However, it’s not at all clear what will constitute compelling walk-around AR experiences. I have no doubt that they exist, but they’re unknown right now. (It’s a lot clearer what might be interesting for constrained AR, and we’ll look at the implications of that in the next post.)

VR for the long run

So VR looks pretty good in the short run; how about after that? Even though I think it’s likely that in the long run (defined as five to ten years) AR will have a more radical effect on our lives, it’s possible that VR-like experiences will be where we will spend more of our time once we have really good AR glasses. The key is that AR glasses will be able to get darker or lighter on demand, because that’s necessary in order to work well in both dimly lit rooms and bright sunlight. That means they’ll be able to become almost completely dark at any time – and when they do, they’ll effectively be VR glasses. So your AR glasses will be able to provide both AR and VR experiences.

That’s interesting because VR experiences are richer in important ways. VR is more immersive, and that’s a big plus for many types of games. VR also has better contrast, since it doesn’t have to compete with photons from the outside world, so virtual images will look better. Because VR doesn’t have to interact with the real world, it doesn’t suffer from any of the inconsistencies that inevitably arise in AR; for example, lighting and shadowing in VR can be completely consistent. VR also avoids all the work that’s required in AR to figure out what real-world objects are in the field of view at any time, and to calculate how virtual and real images interact. Another point in VR’s favor is that it doesn’t share AR’s inability to opaque the real world on a per-pixel basis, so VR software has complete control over the image that reaches the eye. Furthermore, small amounts of latency and tracking error may be more acceptable in VR, because the virtual images don’t have to match the real scene; since we’re not going to get to zero latency or perfect tracking anytime soon, that’s potentially a significant plus. (However, it’s also entirely possible that small amounts of latency and tracking error could cause simulator sickness under conditions of full immersion; this is one of many areas that we’re all going to learn a lot more about in the next few years.)

So AR is the only way to go when you want the virtual and real worlds to interact, but VR and VR-like experiences seem best for purely virtual experiences. (Here, “VR-like” means AR when it dynamically becomes opaque enough so that the virtual world is visually dominant.)

And it’s arguable that you spend most of your time in experiences that are more virtual than real (or at least that I do).

Our lives are more virtual than you might think

You’re probably thinking that you don’t spend any significant amount of time in virtual experiences, but consider: as you read this, you’re looking at a screen. Imagine you’re doing it on a head-mounted display, and you’ll see that it maps better to VR than to AR. Sure, you could have the text floating in your field of view while still seeing the real world, but why? It seems far more useful to just look at a virtual screen in VR, since all that’s of interest is the text. You could have lots of virtual screens up in 3-space around you, and you could have information presented in all sorts of other ways as well.

Similarly, the real world often doesn’t play an important role in watching TV or movies, or playing video games; certainly it does when you’re with friends, but when you’re alone, the real world doesn’t particularly enhance the experience. And if you ask yourself what percentage of your waking time you spend looking at a screen by yourself, you’ll find it’s a majority if you’re anything like me. So that’s why I say that VR-like experiences may be where we’ll spend a lot of our time once we have good AR glasses; until that time, this argues that VR by itself is interesting.

This is not to say being able to see the real world at the same time as the virtual world doesn’t have benefits; I’ll discuss that aspect in the next post. One thing that absolutely has to be figured out for VR is how to become not-blind instantly, for example by touching a control on the glasses that switches to a camera view; being unable to see without taking the HMD off just isn’t going to be acceptable in a consumer device.

Finally, there’s a wild card that could change the long-term balance between AR and VR dramatically. My thinking to date has assumed that AR will be a major platform shift that fundamentally changes the way we interact with computers, while VR won’t, except to the extent that VR-like experiences are part of the AR future. However, it’s possible that VR will be a major platform shift all on its own; we could all end up spending our time sitting in our living rooms wearing VR headsets and haptics, while the traditional movie, TV, and videogame businesses wither. (In fact, I’d be surprised if that wasn’t the case someday, but I think it’ll be a long while before that happens.) We all know what that would imply, since we’ve all watched Star Trek – that way lies the Holodeck. If that happens, VR is more than interesting; it’s a big part of the future.

All of which implies that VR and VR-like experiences seem likely to be important in the long run.

Summing up the case for VR

None of the foregoing says that standalone VR is going to be more important or successful than AR in the next five to ten years, although that could happen. AR is most likely going to change the way we interact with the world, much as PCs and smartphones did, long before VR makes it to the Holodeck. However, it seems likely that VR is much closer to being deliverable in a truly workable form than walk-around AR, and it also seems likely that VR-like experiences will be an important part of the ultimate AR future. Given which, there’s a strong case to be made that while the long-term goal is to produce superb, do-everything AR glasses, VR and VR-like experiences are worth pursuing as well, both in the near term and down the road.

Virtual Insanity at QuakeCon

I should have posted this sooner, but it’s been a little crazy. It was a blast getting up on the stage with John and Palmer and talking about VR, but it was more than that as well. As I said during the panel, it felt like this might be one of those seminal moments when the world changes, the point at which a new technology that will change our lives started down the runway for takeoff. Of course, it’s entirely possible that that won’t happen, but it feels like the pieces are falling into place: affordable, wide-field-of-view, lightweight HMDs that can deliver a great experience; inexpensive tracking (cameras, gyros, accelerometers, magnetometers); and, critically, an existing software ecosystem – first-person shooters – that can readily move to VR (although that’s just a start; many other experiences more uniquely suited to VR will emerge once VR is established as a viable consumer technology). VR can only take off if all three pieces are working well, and we’re getting close on all three fronts. I don’t think we’re quite there yet, but the remaining issues seem solvable with time and attention, and once they’re solved, we may be off on a long, transformative journey. Where that ends, I have no idea, but I’m looking forward to the ride – and I think it might have started at QuakeCon.

An Interview from QuakeCon

This interview that I did at QuakeCon just got posted. It covers a lot of ground; there’s some personal history from the game industry, plus discussion of where AR/VR is headed and why the whole area is interesting to Valve.

Why You Won’t See Hard AR Anytime Soon

I’ve often wondered why it is that I’ve had the good fortune to spend the last 20 years doing such interesting and influential work. Part of it is skill, hard work, and passion, and a good part is luck – in other places or times, matters would have worked out very differently. (My optimization skills would certainly have been less valuable if I was working in the potato fields of Eastern Europe alongside my great-great-grandparents.) But I’ve recently come to understand that there’s been another, more subtle, factor at work.

I became aware of this additional influence when my father remarked that my iPad seemed like magic. I understand why he feels that way, but to me, it doesn’t seem like magic at all; it’s just a convergence of technologies that has seemed obvious for decades – it was only a matter of when.

When I stepped back, though, I realized that he was right. The iPad is wildly futuristic technology – when I was growing up, the idea of a personal computer, let alone one you could carry around with you and use to browse a worldwide database, would have ranked up there with personal helicopters on the improbability scale. In fact, it would have seemed more improbable. So why do I not only accept it but expect it?

I think it’s because I read science fiction endlessly when I was growing up. SF conditioned me for a future full of disruptive technology, which happens to be the future I grew up into. Even though the details of the future that actually happened differed considerably from what SF anticipated, the key was that SF gave me a world view that was ready for personal computers and 3D graphics and smartphones and the Internet.

Augmented reality (AR) is far more wildly futuristic than the iPad, and again, it doesn’t seem like magic or a pipe dream to me, just a set of technologies that are coming together right about now. I’m sure that one day we’ll all be walking around with AR glasses on (or AR contacts, or direct neural connections); it’s the timeframe I’m not sure about. What I’m spending my time on is figuring out the timeframe for those technologies, looking at how they might be encouraged and guided, and figuring out what to do with them once they do come together. And once again, I believe I’m able to think about AR in a pragmatic, matter-of-fact way because of SF. In this case, though, it’s both a blessing and a curse, because of the expectations SF has raised for AR – expectations that are unrealistic over the next few years.

Anyone who reads SF knows how AR should work. Vernor Vinge’s novel Rainbows End is a good example; AR is generated by lasers in contact lenses, which produce visual results that indistinguishably intermix with and replace elements of the real world, and people in the same belief circle see a shared virtual reality superimposed on the real world. Karl Schroeder’s short story “To Hie from Far Cilenia” is another example; people who belong to a certain group see Victorian gas lamps in place of normal lights, Victorian garb on other members, and so on. The wearable team at Valve calls this “hard AR,” as contrasted with “soft AR,” which covers AR in which the mixing of real and virtual is noticeably imperfect. Hard AR is tremendously compelling, and will someday be the end state and apex of AR.

But it’s not going to happen any time soon.

Leave aside the issues associated with tracking objects in the real world in order to know how to virtually modify and interact with them. Leave aside, too, the issues associated with tracking, processing, and rendering fast enough so that virtual objects stay glued in place relative to the real world. Forget about the fact that you can’t light and shadow virtual objects correctly unless you know the location and orientation of every real light source and object that affects the scene, which can’t be fully derived from head-mounted sensors. Pay no attention to the challenges of having a wide enough AR field of view so that it doesn’t seem like you’re looking through a porthole, of having a wide enough brightness range so that virtual images look right both at the beach and in a coal mine, of antialiasing virtual edges into the real world, and of doing all of the above with a hardware package that’s stylish enough to wear in public, ergonomic enough to wear all the time, and capable of running all day without a recharge. No, ignore all that, because it’s at least possible to imagine how they’d be solved, however challenging the engineering might be.

Fix all that, and the problem remains: how do you draw black?

Before I explain what that means, I need to discuss the likely nature of the first wave (and probably quite a few more waves) of AR glasses.

Video-passthrough and see-through AR

There are two possible types of AR glasses. One type, which I’ll call “video-passthrough,” uses opaque virtual reality (VR) glasses with forward-facing cameras mounted on the front; the camera video is then shown on the displays inside. This has the advantage of simplifying the display hardware, which doesn’t have to be transparent to photons from the real world, and of making it easy to intermix virtual and real images, since both are digitized. Unfortunately, compared to reality video-passthrough has low resolution, low dynamic range, and a narrow field of view, all of which result in a less satisfactory and often more tiring experience. Worse, because there is lag between head motion and the update of the image of the world on the screen (due to the time it takes for the image to be captured by the camera, transmitted for processing, processed, and displayed), it tends to induce simulator sickness. Worse still, the eye is no longer able to focus normally on different parts of a real-world scene, since focus is controlled by the camera, which leads to a variety of problems. Finally, it’s impossible to see the eyes of anyone wearing such glasses, which is a major impediment to social interaction. So, for many reasons, video-passthrough AR has not been successful in the consumer space, and seems unlikely to be so any time soon.

The other sort of AR is “see-through.” In this version, the glasses are optically transparent; they may reduce ambient light to some degree, but they don’t block it or warp it. When no virtual display is being drawn, it’s like wearing a pair of thicker, heavier normal glasses, or perhaps sunglasses, depending on the amount of darkening. When there is a virtual display, it’s overlaid on the real world, but the real world is still visible as you’d see it normally, just with the addition of the virtual pixels (which are translucent when lit) on top of the real view. This has the huge virtue of not compromising real-world vision, which is, after all, what you’ll use most of the time even once AR is successful. Crossing a street would be an iffy proposition using video-passthrough AR, but would be no problem with see-through AR, so it’s reasonable to imagine people could wear see-through AR glasses all day. Best of all, simulator sickness doesn’t seem to be a problem with see-through AR, presumably because your vision is anchored to the real world just as it normally is.

These advantages, along with recent advances in technologies such as waveguides and picoprojectors that are making it possible to build consumer-priced, ergonomic see-through AR glasses, make see-through by far the more promising of the two technologies for AR right now, and that’s where R&D efforts are being concentrated throughout the industry. Companies both large and small have come up with a surprisingly large number of different ways to do see-through AR, and there’s a race on to see who can come out with the first good-enough see-through AR glasses at a consumer price. So it’s a sure thing that the first wave of AR glasses will be see-through.

That’s not to say that there aren’t disadvantages to see-through AR, just that they’re outweighed by the advantages. For one thing, because there’s always a delay in generating virtual images, due to tracking, processing, and scan-out times, it’s very difficult to get virtual and real images to register closely enough that the eye doesn’t notice. For example, suppose you have a real Coke can that you want to turn into an AR Pepsi can by drawing a Pepsi logo over the Coke logo. If it takes dozens of milliseconds to redraw the Pepsi logo, then every time you rotate your head the Pepsi logo will appear to shift a few degrees relative to the can, part of the Coke logo will become visible, and the Pepsi logo will snap back into place when you stop moving. This is clearly not good enough for hard AR, because it will be obvious that the Pepsi logo isn’t real; it will seem as if you have a decal loosely plastered over the real world, and the illusion will break down.
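
To put rough numbers on that – and these are purely illustrative figures of mine, not measurements of any real hardware – the apparent misregistration is simply the head’s angular velocity multiplied by the motion-to-photon latency. Here’s the arithmetic as a minimal C sketch:

    #include <stdio.h>

    int main(void)
    {
        /* Illustrative assumptions, not measured values for any headset. */
        double head_velocity = 120.0; /* degrees/second, a brisk head turn */
        double latency = 0.050;       /* 50 ms from head motion to photons */

        /* The virtual logo lags the real can by velocity * latency. */
        printf("Apparent drift: %.1f degrees\n", head_velocity * latency);
        return 0;
    }

At 120 degrees/second and 50 ms, the logo swims about 6 degrees off the can – far more error than the eye will forgive.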

There’s a worse problem, though – with see-through AR, there’s actually no way to completely replace the Coke logo with the Pepsi logo.

See-through AR == additive blending only

The way see-through AR works is by additive blending; each virtual pixel is added to the real world “pixel” it overlays. For example, given a real pixel of 0x0000FF (blue) and a virtual pixel of 0x00FF00 (green), the color the viewer sees will be 0x0000FF + 0x00FF00 = 0x00FFFF (cyan). This means that while a virtual pixel can be bright enough to be the dominant color the viewer sees, it can’t completely replace the real world; the real-world photons always come through, regardless of the color of the virtual pixel. That means that the Coke logo would show through the Pepsi logo, as if the Pepsi logo were translucent.

The simplest way to understand this is to observe that when the virtual color black is drawn, it doesn’t show up as black to the viewer; it shows up as transparent, because the real world is unchanged when viewed through a black virtual pixel. For example, suppose the real-world “pixel” (that is, the area of the real world that is overlaid by the virtual pixel in the viewer’s perception) has a color equivalent to 0x008000 (a medium green). Then if the virtual pixel has value 0x000000 (black), the color seen by the viewer will be 0x008000 + 0x000000 = 0x008000 (remember, the virtual pixel gets added to the color of the real-world “pixel”); this is the real-world color, unmodified. So you can’t draw a black virtual background for something, unless you’re in a dark room.

The implications are much broader than simply not being able to draw black. Given additive blending, there’s no way to darken real pixels even the slightest bit. That means that there’s no way to put virtual shadows on real surfaces. Moreover, if a virtual blue pixel happens to be in front of a real green “pixel,” the resulting pixel will be cyan, but if it’s in front of a real red “pixel,” the resulting pixel will be purple. This means that the range of colors it’s possible to make appear at a given pixel is at the mercy of what that pixel happens to be overlaying in the real world, and will vary as the glasses move.
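
To make the arithmetic concrete, here’s a minimal C sketch of per-channel additive compositing – the 0xRRGGBB packing and helper names are mine, not any real display pipeline – showing that a channel can saturate upward but can never be made darker:

    #include <stdio.h>

    /* A channel of virtual light adds to the real-world channel and clamps
       at full brightness; it can never subtract, so nothing can be darkened. */
    static unsigned add_channel(unsigned real, unsigned virt)
    {
        unsigned sum = real + virt;
        return sum > 0xFF ? 0xFF : sum;
    }

    static unsigned see_through_blend(unsigned real, unsigned virt)
    {
        return (add_channel((real >> 16) & 0xFF, (virt >> 16) & 0xFF) << 16)
             | (add_channel((real >> 8) & 0xFF, (virt >> 8) & 0xFF) << 8)
             |  add_channel(real & 0xFF, virt & 0xFF);
    }

    int main(void)
    {
        printf("%06X\n", see_through_blend(0x0000FF, 0x00FF00)); /* 00FFFF: cyan */
        printf("%06X\n", see_through_blend(0x008000, 0x000000)); /* 008000: black is a no-op */
        printf("%06X\n", see_through_blend(0x00FF00, 0x0000FF)); /* virtual blue over real green: cyan */
        printf("%06X\n", see_through_blend(0xFF0000, 0x0000FF)); /* virtual blue over real red: purple */
        return 0;
    }

The last two lines show the same virtual blue producing different perceived colors depending on what it happens to overlay – exactly the background dependence described above.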

None of this means that useful virtual images can’t be displayed; what it means is that the ghosts in “Ghostbusters” will work just fine, while virtual objects that seamlessly mix with and replace real objects won’t. In other words, hard AR isn’t happening any time soon.

“But wait,” you say (as I did when I realized the problem), “you can just put an LCD screen with the same resolution on the outside of the glasses, and use it to block real-world pixels however you like.” That’s a clever idea, but it doesn’t work. You can’t focus on an LCD screen an inch away (and you wouldn’t want to, anyway, since everything interesting in the real world is more than an inch away), so a pixel at that distance would show up as a translucent blob several degrees across, just as a speck of dirt on your glasses shows up as a blurry circle, not a sharp point. It’s true that you can black out an area of the real world by occluding many pixels, but that black area will have a wide, fuzzy border trailing off around its edges. That could well be useful for improving contrast in specific regions of the screen (behind HUD elements, for example), but it’s of no use when trying to stencil a virtual object into the real world so it appears to fit seamlessly.
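
A rough geometric estimate shows just how bad that blur is. With the eye focused on the distant scene, a point occluder at distance d in front of the pupil is smeared over an angle of roughly pupil_diameter / d radians. The numbers below are my own back-of-the-envelope assumptions:

    #include <stdio.h>

    int main(void)
    {
        double pupil_mm = 4.0;  /* assumed typical pupil diameter */
        double dist_mm = 25.4;  /* occluding LCD about one inch from the eye */

        /* Angular blur of an out-of-focus occluder, in degrees. */
        double blur_deg = (pupil_mm / dist_mm) * (180.0 / 3.14159265358979);
        printf("Blur: ~%.0f degrees\n", blur_deg);
        return 0;
    }

That works out to roughly 9 degrees – consistent with the “several degrees across” blob described above, and hopeless for sharp-edged occlusion.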

Of course, there could be a technological breakthrough that solves this problem and allows true per-pixel darkening (and, in the interest of completeness, I should note that there is in fact existing technology that does per-pixel opaquing, but the approach used is far too bulky to be interesting for consumer glasses). In fact, I expect that to happen at some point, because per-pixel darkening would be such a key differentiator as AR adoption ramps up that a lot of R&D will be applied to the problem. But so far nothing of the sort has surfaced in the AR industry or literature, and unless and until it does, hard AR, in the SF sense that we all know and love, can’t happen, except in near-darkness.

That doesn’t mean AR is off the table, just that for a while yet it’ll be soft AR, based on additive blending and area darkening with trailing edges. Again, think translucent like “Ghostbusters.” High-intensity virtual images with no dark areas will also work, especially with the help of regional or global darkening – they just won’t look like part of the real world.

This is just the start

Eventually we’ll get to SF-quality hard AR, but it’ll take a while. I’d be surprised if it was sooner than five years, and it could easily be more than ten before it makes it into consumer products. That’s fine; there are tons of interesting things to do and plenty of technical challenges to figure out just with soft AR. I wrote one of the first PC games with bitmapped graphics in 1982, and 30 years later we’re still refining the state of the art; a few years or even a decade is just part of the maturing process for a new technology. So sit back and enjoy the show as AR grows, piece by piece, into truly seamless augmented reality over the years. It won’t be a straight shot to Rainbows End, but we’ll get there – and I have no doubt that it’ll be a fun ride all along the way.

The New Valve Economics Blog

It’s well worth checking out noted economist and author Yanis Varoufakis’ new Valve Economics blog.