Why You Won’t See Hard AR Anytime Soon

I’ve often wondered why it is that I’ve had the good fortune to spend the last 20 years doing such interesting and influential work. Part of it is skill, hard work, and passion, and a good part is luck – in other places or times, matters would have worked out very differently. (My optimization skills would certainly have been less valuable if I was working in the potato fields of Eastern Europe alongside my great-great-grandparents.) But I’ve recently come to understand that there’s been another, more subtle, factor at work.

I became aware of this additional influence when my father remarked that my iPad seemed like magic. I understand why he feels that way, but to me, it doesn’t seem like magic at all, it’s just a convergence of technologies that has seemed obvious for decades – it was only a matter of when.

When I stepped back, though, I realized that he was right. The iPad is wildly futuristic technology – when I was growing up, the idea of a personal computer, let alone one you could carry around with you and use to browse a worldwide database, would have ranked up there with personal helicopters on the improbability scale. In fact, it would have seemed more improbable. So why do I not only accept it but expect it?

I think it’s because I read science fiction endlessly when I was growing up. SF conditioned me for a future full of disruptive technology, which happens to be the future I grew up into. Even though the details of the future that actually happened differed considerably from what SF anticipated, the key was that SF gave me a world view that was ready for personal computers and 3D graphics and smartphones and the Internet.

Augmented reality (AR) is far more wildly futuristic than the iPad, and again, it doesn’t seem like magic or a pipe dream to me, just a set of technologies that are coming together right about now. I’m sure that one day we’ll all be walking around with AR glasses on (or AR contacts, or direct neural connections); it’s the timeframe I’m not sure about. What I’m spending my time on is figuring out the timeframe for those technologies, looking at how they might be encouraged and guided, and figuring out what to do with them once they do come together. And once again, I believe I’m able to think about AR in a pragmatic, matter-of-fact way because of SF. In this case, though, it’s both a blessing and a curse, because of the expectations SF has raised for AR – expectations that are unrealistic over the next few years.

Anyone who reads SF knows how AR should work. Vernor Vinge’s novel Rainbow’s End is a good example; AR is generated by lasers in contact lenses, which produce visual results that indistinguishably intermix with and replace elements of the real world, and people in the same belief circle see a shared virtual reality superimposed on the real world. Karl Schroeder’s short story “To Hie from Far Cilenia” is another example; people who belong to a certain group see Victorian gas lamps in place of normal lights, Victorian garb on other members, and so on. The wearable team at Valve calls this “hard AR,” as contrasted with “soft AR,” which covers AR in which the mixing of real and virtual is noticeably imperfect. Hard AR is tremendously compelling, and will someday be the end state and apex of AR.

But it’s not going to happen any time soon.

Leave aside the issues associated with tracking objects in the real world in order to know how to virtually modify and interact with them. Leave aside, too, the issues associated with tracking, processing, and rendering fast enough so that virtual objects stay glued in place relative to the real world. Forget about the fact that you can’t light and shadow virtual objects correctly unless you know the location and orientation of every real light source and object that affects the scene, which can’t be fully derived from head-mounted sensors. Pay no attention to the challenges of having a wide enough AR field of view so that it doesn’t seem like you’re looking through a porthole, of having a wide enough brightness range so that virtual images look right both at the beach and in a coal mine, of antialiasing virtual edges into the real world, and of doing all of the above with a hardware package that’s stylish enough to wear in public, ergonomic enough to wear all the time, and capable of running all day without a recharge. No, ignore all that, because it’s at least possible to imagine how they’d be solved, however challenging the engineering might be.

Fix all that, and the problem remains: how do you draw black?

Before I explain what that means, I need to discuss the likely nature of the first wave (and probably quite a few more waves) of AR glasses.

Video-passthrough and see-through AR

There are two possible types of AR glasses. One type, which I’ll call “video-passthrough,” uses virtual reality (VR) glasses that are opaque, with forward-facing cameras on the front of the glasses that provide video that is then displayed on the glasses. This has the advantage of simplifying the display hardware, which doesn’t have to be transparent to photons from the real world, and of making it easy to intermix virtual and real images, since both are digitized. Unfortunately, compared to reality, video-passthrough has low resolution, limited dynamic range, and a narrow field of view, all of which result in a less satisfactory and often more tiring experience. Worse, because there is lag between head motion and the update of the image of the world on the screen (due to the time it takes for the image to be captured by the camera, transmitted for processing, processed, and displayed), it tends to induce simulator sickness. Worse still, the eye is no longer able to focus normally on different parts of a real-world scene, since focus is controlled by the camera, which leads to a variety of problems. Finally, it’s impossible to see the eyes of anyone wearing such glasses, which is a major impediment to social interaction. So, for many reasons, video-passthrough AR has not been successful in the consumer space, and seems unlikely to be so any time soon.
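
To put rough numbers on that lag, here’s a back-of-envelope sketch of a capture-transfer-process-display pipeline. The per-stage times are my own illustrative assumptions, not measurements, but they show how quickly the delays add up.

```cpp
// Rough motion-to-photon budget for video-passthrough AR.
// All per-stage numbers are assumed for illustration, not measured.
#include <cstdio>

int main() {
    const double capture_ms  = 16.7; // camera exposure/readout at 60 Hz (assumed)
    const double transfer_ms = 5.0;  // moving the frame to the processor (assumed)
    const double process_ms  = 10.0; // compositing the virtual content (assumed)
    const double display_ms  = 16.7; // scan-out to the panel at 60 Hz (assumed)

    double total_ms = capture_ms + transfer_ms + process_ms + display_ms;
    printf("Motion-to-photon latency: ~%.0f ms\n", total_ms); // roughly 48 ms
    return 0;
}
```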

The other sort of AR is “see-through.” In this version, the glasses are optically transparent; they may reduce ambient light to some degree, but they don’t block it or warp it. When no virtual display is being drawn, it’s like wearing a pair of thicker, heavier normal glasses, or perhaps sunglasses, depending on the amount of darkening. When there is a virtual display, it’s overlaid on the real world, but the real world is still visible as you’d see it normally, just with the addition of the virtual pixels (which are translucent when lit) on top of the real view. This has the huge virtue of not compromising real-world vision, which is, after all, what you’ll use most of the time even once AR is successful. Crossing a street would be an iffy proposition using video-passthrough AR, but would be no problem with see-through AR, so it’s reasonable to imagine people could wear see-through AR glasses all day. Best of all, simulator sickness doesn’t seem to be a problem with see-through AR, presumably because your vision is anchored to the real world just as it normally is.

These advantages, along with recent advances in technologies such as waveguides and picoprojectors that are making it possible to build consumer-priced, ergonomic see-through AR glasses, make see-through by far the more promising of the two technologies for AR right now, and that’s where R&D efforts are being concentrated throughout the industry. Companies both large and small have come up with a surprisingly large number of different ways to do see-through AR, and there’s a race on to see who can come out with the first good-enough see-through AR glasses at a consumer price. So it’s a sure thing that the first wave of AR glasses will be see-through.

That’s not to say that there aren’t disadvantages to see-through AR, just to say that they’re outweighed by the advantages. For one thing, because there’s always a delay in generating virtual images, due to tracking, processing, and scan-out times, it’s very difficult to get virtual and real images to register closely enough that the eye doesn’t notice. For example, suppose you have a real Coke can that you want to turn into an AR Pepsi can by drawing a Pepsi logo over the Coke logo. If it takes dozens of milliseconds to redraw the Pepsi logo, then every time you rotate your head the Pepsi logo will appear to shift a few degrees relative to the can, and part of the Coke logo will become visible; the Pepsi logo will then snap back to the right place when you stop moving. This is clearly not good enough for hard AR, because it will be obvious that the Pepsi logo isn’t real; it will seem as if you have a decal loosely plastered over the real world, and the illusion will break down.
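
To see why “dozens of milliseconds” shows up as a visible shift, here’s a quick calculation; the head-turn rate and latency figures below are assumptions chosen for illustration, not measurements.

```cpp
// How far a virtual overlay trails the real world during a head turn:
// angular error = head angular velocity * motion-to-photon latency.
// Both inputs are illustrative assumptions.
#include <cstdio>

int main() {
    const double head_deg_per_sec = 100.0; // a brisk but ordinary head turn (assumed)
    const double latency_s        = 0.040; // "dozens of milliseconds" end to end (assumed)

    double error_deg = head_deg_per_sec * latency_s;
    printf("Overlay trails the can by about %.1f degrees\n", error_deg); // ~4 degrees
    return 0;
}
```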

There’s a worse problem, though – with see-through AR, there’s actually no way to completely replace the Coke logo with the Pepsi logo.

See-through AR == additive blending only

The way see-through AR works is by additive blending; each virtual pixel is added to the real world “pixel” it overlays. For example, given a real pixel of 0x0000FF (blue) and a virtual pixel of 0x00FF00 (green), the color the viewer sees will be 0x0000FF + 0x00FF00 = 0x00FFFF (cyan). This means that while a virtual pixel can be bright enough to be the dominant color the viewer sees, it can’t completely replace the real world; the real-world photons always come through, regardless of the color of the virtual pixel. That means that the Coke logo would show through the Pepsi logo, as if the Pepsi logo were translucent.

The simplest way to understand this is to observe that when the virtual color black is drawn, it doesn’t show up as black to the viewer; it shows up as transparent, because the real world is unchanged when viewed through a black virtual pixel. For example, suppose the real-world “pixel” (that is, the area of the real world that is overlaid by the virtual pixel in the viewer’s perception) has a color equivalent to 0x008000 (a medium green). Then if the virtual pixel has value 0x000000 (black), the color seen by the viewer will be 0x008000 + 0x000000 = 0x008000 (remember, the virtual pixel gets added to the color of the real-world “pixel”); this is the real-world color, unmodified. So you can’t draw a black virtual background for something, unless you’re in a dark room.
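
Here’s the same arithmetic as a minimal sketch; the channel-wise add-and-clamp below is just a model of additive blending for illustration, not the actual display hardware’s math.

```cpp
// Minimal model of a see-through display: each virtual pixel is added,
// channel by channel, to the real-world light behind it and clamped.
// Black (0x000000) adds nothing, so the real world shows through unchanged.
#include <algorithm>
#include <cstdint>
#include <cstdio>

uint32_t AdditiveBlend(uint32_t real, uint32_t virt) {
    uint32_t out = 0;
    for (int shift = 0; shift <= 16; shift += 8) {
        uint32_t r = (real >> shift) & 0xFF;
        uint32_t v = (virt >> shift) & 0xFF;
        out |= std::min<uint32_t>(r + v, 0xFFu) << shift;
    }
    return out;
}

int main() {
    // Blue real world + green virtual pixel = cyan.
    printf("%06X\n", (unsigned)AdditiveBlend(0x0000FF, 0x00FF00)); // 00FFFF
    // Medium-green real world + black virtual pixel = unchanged real world.
    printf("%06X\n", (unsigned)AdditiveBlend(0x008000, 0x000000)); // 008000
    return 0;
}
```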

The implications are much broader than simply not being able to draw black. Given additive blending, there’s no way to darken real pixels even the slightest bit. That means that there’s no way to put virtual shadows on real surfaces. Moreover, if a virtual blue pixel happens to be in front of a real green “pixel,” the resulting pixel will be cyan, but if it’s in front of a real red “pixel,” the resulting pixel will be purple. This means that the range of colors it’s possible to make appear at a given pixel is at the mercy of what that pixel happens to be overlaying in the real world, and will vary as the glasses move.

None of this means that useful virtual images can’t be displayed; what it means is that the ghosts in “Ghostbusters” will work just fine, while virtual objects that seamlessly mix with and replace real objects won’t. In other words, hard AR isn’t happening any time soon.

“But wait,” you say (as I did when I realized the problem), “you can just put an LCD screen with the same resolution on the outside of the glasses, and use it to block real-world pixels however you like.” That’s a clever idea, but it doesn’t work. You can’t focus on an LCD screen an inch away (and you wouldn’t want to, anyway, since everything interesting in the real world is more than an inch away), so a pixel at that distance would show up as a translucent blob several degrees across, just as a speck of dirt on your glasses shows up as a blurry circle, not a sharp point. It’s true that you can black out an area of the real world by occluding many pixels, but that black area will have a wide, fuzzy border trailing off around its edges. That could well be useful for improving contrast in specific regions of the screen (behind HUD elements, for example), but it’s of no use when trying to stencil a virtual object into the real world so it appears to fit seamlessly.
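
For a rough sense of how big that blur is: simple geometry says an occluder at distance d from a pupil of diameter p, with the eye focused far away, smears into a disc of roughly p/d radians. The numbers below are assumptions, but they land in the “several degrees” range described above.

```cpp
// Rough geometric estimate of the blur from an out-of-focus occluding pixel:
// blur angle ~= pupil diameter / occluder distance (eye focused far away).
// Both inputs are illustrative assumptions.
#include <cstdio>

int main() {
    const double pupil_mm    = 4.0;  // typical indoor pupil diameter (assumed)
    const double occluder_mm = 25.0; // LCD layer about an inch from the eye (assumed)

    double blur_rad = pupil_mm / occluder_mm;
    double blur_deg = blur_rad * (180.0 / 3.14159265358979);
    printf("Approximate blur diameter: %.0f degrees\n", blur_deg); // ~9 degrees
    return 0;
}
```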

Of course, there could be a technological breakthrough that solves this problem and allows true per-pixel darkening (and, in the interest of completeness, I should note that there is in fact existing technology that does per-pixel opaquing, but the approach used is far too bulky to be interesting for consumer glasses). In fact, I actually expect that to happen at some point, because per-pixel darkening would be such a key differentiator as AR adoption ramps up that a lot of R&D will be applied to the problem. But so far nothing of the sort has surfaced in the AR industry or literature, and unless and until it does, hard AR, in the SF sense that we all know and love, can’t happen, except in near-darkness.

That doesn’t mean AR is off the table, just that for a while yet it’ll be soft AR, based on additive blending and area darkening with trailing edges. Again, think translucent like “Ghostbusters.” High-intensity virtual images with no dark areas will also work, especially with the help of regional or global darkening – they just won’t look like part of the real world.

This is just the start

Eventually we’ll get to SF-quality hard AR, but it’ll take a while. I’d be surprised if it was sooner than five years, and it could easily be more than ten before it makes it into consumer products. That’s fine; there are tons of interesting things to do and plenty of technical challenges to figure out just with soft AR. I wrote one of the first PC games with bitmapped graphics in 1982, and 30 years later we’re still refining the state of the art; a few years or even a decade is just part of the maturing process for a new technology. So sit back and enjoy the show as AR grows, piece by piece, into truly seamless augmented reality over the years. It won’t be a straight shot to Rainbow’s End, but we’ll get there – and I have no doubt that it’ll be a fun ride all along the way.

87 Responses to Why You Won’t See Hard AR Anytime Soon

  1. Dave says:

    The same technology that goes into making the transistors that power all our computing devices might be useful here! When patterning transistors, very, very small lines and spaces need to be exposed onto silicon. This is usually done with ultraviolet light being blocked by a photomask. However, the same problem you describe above happens – the blocked light (the black pixel in your case) turns out “fuzzy” and doesn’t have a hard edge.
    Photolithography engineers rectify this by using lenses, numerical apertures, and varying illumination systems. These are probably bulky and impractical for eyewear (like you describe), but another technique is called Optical Proximity Correction. Go Google search that when you get a chance.

    I just thought you would like to have something cool to look at on Wikipedia….That is all :P

    • MAbrash says:

      Yes, there are certainly ways to do this – the problems revolve around cost, ability to see through it, weight, form factor, and response time.

  2. Great piece. As an entrepreneur in the industry, it’s interesting to see how a larger company examines the problems faced by technology that is about to converge. As a smaller company, we have to come up with a product that will stick in a related industry that will hopefully position us well when the time comes for true AR (hard or soft).

    “skate where the puck’s going, not where it’s been”

  3. Regarding your comment:

    “Worse still, the eye is no longer able to focus normally on different parts of a real-world scene, since focus is controlled by the camera, which leads to a variety of problems.”

    This is a solvable problem. Using new light-field cameras (see the $400 lytro at lytro.com), which capture not a static image but all of the information about light in a volume, it is possible to focus on any part of the scene you wish without altering the camera optics. So an eye-tracking sensor could determine where you are looking and dynamically refocus the scene in software. Light fields also yield 3D information about a scene, eliminating the need for two cameras in an AR system.

    • Ali E says:

      I believe the idea used in light-field cameras is having multiple cameras capture at different focuses and interpolating between these images to have a smooth range of focuses available. To apply the same to video would require serious processing power and would add to latency greatly, at least with today’s technology.

      • MAbrash says:

        Yes, it’s a great idea, but it’s not clear how it could be feasible for an HMD in the near future. I’d love to see it, though!

        • Evan Zimmermann says:

          The key to any processor-intensive technology that attempts to function as a human interface will be the development of neural feedback procedures. The processing power required increases based on rendering in multiple perspectives, but this can be reduced by responding to the user’s perception of an image.

          We already have technology that can capture signals from the brain and direct simple games based on it, and I’d wager that we can develop these to a level of precision that can distinguish between the brain processing an unclear vs. clear image. As someone with severe myopia, I can guarantee that I can take off my glasses and process my environment in a completely different way from how I would process it with glasses on. All we need to do is capture that difference reliably. That might be a difficult problem, but it’s solvable.

          This is only one example of signal analysis, but if we can find any reliable process of reading user experience, we have found a method of dramatically reducing the processing power necessary to direct it.

          A ten-year timeline for the development of this technology is very reasonable, but any previous generation might have considered such a quick development imminent.

  4. Gray says:

    Draw black with selective opacity LCD lenses. Still not perfect, and registration & lag are still big issues, but it helps mitigate the fundamentally additive nature of OST AR displays.

    • MAbrash says:

      Right, I noted that it would be an improvement – but even with perfect registration and latency, it wouldn’t enable hard AR.

  5. Toby says:

    Black out more than you need (fuzzy black pixels), and draw in more, including part of the reality blocked out?

    If you have to solve all of the other visual fidelity problems – tracking light sources, prediction, calculating effects due to motion, etc. – matting some part of reality back in along with the virtual image doesn’t seem like such a problem. The mask can easily be predicted – the machine ‘knows’ its own form. We can also arrange that the virtual image projector is higher fidelity than the blackout LCD.

    I think this falls into the trap of, “we tend to over-estimate what can be achieved in the short term (5 years) and under-estimate what can ultimately be achieved”.

    Take as inspiration the intro to Cook’s “The Reyes Image Rendering Architecture” paper, “In designing Reyes, our goal was an architecture optimized for fast high-quality rendering of complex animated scenes. By fast we mean being able to compute a feature-length film in approximately a year; high-quality means virtually indistinguishable from live action motion picture photography; and complex means as visually rich as real scenes.” This statement gave them a goal, and something to navigate by for the next ~15 years.

    What could we say of this project? “We want to calculate and project the ultimate wavefront to fool the viewer into thinking they are seeing a potential reality. We do this by a synthesis of negative and positive: remove unwanted areas of the scene and generate new inclusions to our design. Controlling artefacts is the key to maintaining the illusion for the viewer”. [Ed Catmull was especially hot on this last point].

    You can write a better one, I’m sure.

    • MAbrash says:

      Toby,

      That’s an interesting approach. However… if it worked well enough, then by extension video-passthrough would work well enough, right? And there’s a new problem. Consider what happens if part of a pixel is ambient and part is virtual (making up for the intensity loss from the opaquing fringe) – and the two parts have different focal depths. Also, even the slightest misregistration would show up as blurring.

      I like your project statement!

  6. Random Nutter says:

    Interesting post.

    I think see-through AR displays have a few more major hurdles to overcome in addition to what you’ve mentioned. e.g. Current displays have a rather limited range of accommodation.

    Say you had a working hard AR system that was fast enough to be seamless, could magically draw black, etc., but was based on current AR glasses technology, such as what Google’s Project Glass uses. Say this system is rendering a pencil on a table a few feet in front of you. You can move your head side to side and it looks perfect. Now, move your head in close to the table. Your eyes adjust their focus and the table remains in sharp focus, but the pencil turns into a blurry blob!

    This is what happens when you try to render objects in a context outside the accommodation range of the AR display. Fundamentally different display technologies are needed to overcome this problem, e.g. projecting an image directly onto the retina. People have been working on this sort of thing for probably over a decade now, but I’m not really up to date on it. However, this kind of technology might actually solve your black-drawing problem at the same time as the accommodation problem. This sort of system would necessarily rely on optics allowing you to place a controllable display element, such as a masking LCD panel, into focus regardless of what your eyes are looking at.

    Is this sort of display necessarily more than 5 years off? Probably not, if there were a good application for it and a lot of money to develop it. Such displays, even if not initially used for hard AR, would probably be less fatiguing to users than current limited-accommodation displays, e.g. if you were holding an object in front of your face you wouldn’t have to continually look away to see your AR display or hold the object far enough away to focus on both at the same time. If Project Glass catches on in a big way this might well be the 2nd-gen display technology Google goes for.

    Cheers!

    • MAbrash says:

      Excellent point – accommodation is indeed a problem. There are several possible ways to address it, some of which could potentially do per-pixel opaquing, but they’re all decidedly non-trivial. I also think it will be solved, although not necessarily within five years. This is one of the reasons it’s going to be so interesting watching AR evolve.

  7. Chuck says:

    Hmm, what about LCDs? Let me explain.

    A modern LCD displays ‘black’ by blocking all of the light coming from the backlight. It displays color by allowing some (or all) of the light through to illuminate its color dot. In hard AR you need an LCD between reality and you. When you are going to replace a ‘real’ pixel with one on the transparent display, you turn the LCD on to block light coming through at that point, and then you turn ‘on’ the pixel on the display. This works well for reflective laser-scanned displays, but not as well for displays where the light generation is in the display (an OLED display for example, although I’ve not seen a transparent micro OLED screen so they probably don’t exist yet).

    • MAbrash says:

      Chuck,

      As I explained, LCD pixels wouldn’t be in focus, so they would be translucent blobs; hence the inability to mask out specific pixels, as opposed to regions with fuzzy borders.

      • ObscureWorlds says:

        Is there no possibility of just sticking a focusing lens on top of the LCD screen? Similar to looking through a pair of binoculars backwards, but with the amount of lengthening so tiny that it wouldn’t be bulky or visually warping. This solution may also allow you to pack more pixels in than is currently possible and get a higher resolution!

        • MAbrash says:

          This is pretty much what the Oculus Rift does, but it does create warping. How would you avoid warping without bulkiness?

  8. Alex says:

    Forget about the fact that you can’t light and shadow virtual objects correctly unless you know the location and orientation of every real light source and object that affects the scene, which can’t be fully derived from head-mounted sensors.

    It seems like devices in a given area could use local networking to share information they’ve picked up about the lighting (and other factors) in the environment. This would allow for a more accurate simulation than could otherwise be provided by a head-mounted display in a single position. Also, the environment could be scanned constantly as the device is moved around.

    I suspect this won’t be an issue for several generations of the technology, because we still haven’t fully explored the implications of simple HUDs and cartoon-like game graphics.

    • MAbrash says:

      Alex,

      Agreed on both counts. Theoretically, there’s great potential for local information sharing, as well as for pre-mapping. In the next few years, we’ll just be trying to get standalone systems to work well in relatively limited ways.

  9. It’s a very interesting piece and a nice summary of what technology can do now. I was wondering, though, what are your thoughts on the technology and processes described in William Gibson’s “Spook Country”? Would localized router-like devices allow for better indoor and outdoor positioning? Would hard AR be doable if its effects were selected/restricted?

    • MAbrash says:

      I haven’t read Spook Country, so I don’t know. Precise tracking (and AR registration requires very precise location and orientation sensing) is hard, all the more so in unconstrained space. I’m not sure what you mean by effects being “selected/restricted,” but hard AR would work in, for example, a dark room (at least in terms of the no-black problem), so yes, restrictions could help, but such restrictions would be very limiting.

  10. jellydonut says:

    Weeell.. I don’t really care about imposing graphics on the world.

    Honestly, I just want a useful heads-up display.

    Put a rangefinder on the thing, a bluetooth input that tells it my vehicle’s speed, and let it tell me range and speed of other vehicles while driving.

    Things like that.

    • MAbrash says:

      Fair enough – I agree that a wearable tablet would be very useful. It’s different from AR, though, and supports a different set of uses.

  11. Leon says:

    Do we really want hard AR? I’m perfectly fine with soft AR. I don’t want virtual objects added to my reality that are indistinguishable from real objects. I just want a HUD overlay that displays information.

  12. jama says:

    While I’m truly excited about all the technology, and I do assume Valve will explore this field a lot more in the future too, I wonder how much this AR technology (e.g. the glasses) will strain our eyes. Your article makes some mention of that, but what came to my mind when I read it was how my short-sightedness could potentially become a lot worse because of these AR glasses as images are displayed not even an inch away from your eyes.

    Speaking as an educator: Integrating soft AR in classrooms would be nice though. After tablets, this seems to be the next thing that will “revolutionize” classrooms. Well, we’re still not there with tablets yet, but we’ll (probably) get there eventually. Exciting times indeed.

    Also, I missed a reference to Star Trek’s holodeck. ;)

    • MAbrash says:

      > Also, I missed a reference to Star Trek’s holodeck.

      Speaking of unrealistic expectations :)

      The key isn’t where the image is displayed, it’s how the light is focused. Current HMDs mostly focus at infinity, so your eye relaxes. However, that can be a problem too, when your eye is looking at a virtual object that by other depth cues is, say, a meter away, but still focuses at infinity, since that’s all the optics can do. This could indeed lead to eyestrain.

  13. Heinz says:

    Here is a thought about blacking things out:
    1) use the proposed LCD screen to black out areas – accept the rough edges around the sides. With the ever-growing density of LCD screens you can probably make ‘small enough’ pixels in time so that you don’t put a tombstone-sized black blob in with every pixel. Make them a bit wider than needed.
    2) add a second projector to the inside of the whole frame and re-project the real world ‘over’ the fuzzy edges to add the real world back in there.

    solved :) I hope I should not have patented that ;)

  14. Dan Lewis says:

    How far can you get with a high resolution model of the real world? It seems like this might be less intense than capturing the image in real time.

    If you crowdsourced this model it might not help in real time, but it could help the next guy to come along.

    N.B. There ain’t no apostrophe in Rainbows End.

    • MAbrash says:

      What do you want the model of the real world to help with? It could be helpful as a map to match images to in order to know where you are, but I’m not sure how it would help with per-pixel opaquing.

      Thanks for the correction on Rainbows End!

      • Evan Zimmermann says:

        The idea of a high-resolution model of the real world is intriguing, and might be a viable transition between normal vision and AR.

        Imagine that you are satisfied with the AR version of reality that blocks out your normal vision but makes a reasonable attempt to recreate your environment, missing some insignificant detail that you’d ignore anyway. Objects render based on their velocity, so if a car is coming at you quickly, it will be highlighted above all else to make sure you react to it. This could actually be an advantage to normal vision. Also, the crowdsourced model is intriguing. Someone is speeding on the highway and everyone ahead is notified by objectively-verified GPS velocity observations. Of course, by the time this becomes feasible, someone is overriding their self-driving car and this problem is already solved, but it’s an obvious application of the technology.

          Would you be willing to give up normal vision for a computer-generated model? It might not seem reasonable now, but what if dangers could be easily highlighted and avoided? If it were demonstrably safer than normal vision, quite a few people might latch on.

        • MAbrash says:

          Interesting thought, but what is the advantage of this over video-passthrough AR? With the latter, you get the same lag, but you get a correct view of the real world, rather than an approximation.

  15. Mike says:

    With “Video Passthrough”, I’ll bet you could mitigate simulator sickness if you moved the output image around using immediate accelerometer input. In other words, orient the image as the final stage of the pipeline.

    • MAbrash says:

      Good thought, but you’d be surprised at how hard that is to do properly. Your head can turn much faster than you think, and linear shifting of pixels only works well over a surprisingly small range.

    • Wai Ho says:

      I agree video pass-through is definitely easier, as alignment of real and virtual is solved by getting the user to adapt to seeing the world through a “window” (phone, tablet) and getting used to the camera(s) as their new eye(s).

      Having seen some recent research with diminished reality, there are probably good ways to fool users by replacing “real world” pixels with pixels that make the AR content look better.

      That said, I don’t think we can fully immerse fully sighted people in HMD-based AR until we figure out how to account for eye movements (saccades) and compensating for HMD-eye shifts when a person moves around (or moves their eye). While I never get motion sickness from my HMD use, I have colleagues who are AR researchers that get motion sick just from looking at scrolling POVs on a monitor!

      HMD-based VR sounds a lot more promising for the near future, especially with external high “frame” rate head tracking :)

      • MAbrash says:

        I certainly get motion sick easily – when I was working on Quake, I staggered out into the parking lot feeling ill at the end of pretty much every day :)

        VR is worse than AR for me for motion sickness, hands down. I’m okay with AR; I think it’s because I have stable input from the real world that matches what my other senses are telling me. However, VR is certainly more tractable as a technology right now, and I think it’ll get a good trial in the marketplace soon. I think good AR is a little farther down the road.

  16. Shaun Shull says:

    I agree, hard AR will take significant time. To me hard AR will be the end result of numerous soft AR and VR technologies that will launch within the next decade. This post rings true to so many of us. Like you I grew up on Science Fiction and have been connecting the mental dots of various technology platforms my entire life. Influences such as Rainbow’s End, Otherland, Snow Crash, Ender’s Game and of course Star Trek have shaped the way I look at future technology. I think most of us are witnessing the current hyper-competitive mobile race with a profound sense of excitement. For many SF fans it’s easy to see the inevitability of this lightweight, high-power, long-charge tech being adopted into the wearable computing space, a space that has been largely ignored for many years.

    In my opinion the push for hard AR will only come after the mainstream adoption of immersive VR. Once people have personal context regarding the practical uses of wearable visual technology there will be a mainstream demand for advancement of such technology eventually leading us to hard AR. When I describe VR to people the responses I receive are typically “people will not want to wear nerdy glasses”, or “no one will want to strap something to their head all day”. I find this funny considering we all go to the movie theatre and wear ridiculous so-called 3D glasses that provide minimal enhancement to the movie experience and afterwards we drive home so we can jump on our computer and use Facebook all day. We are already unintentionally being conditioned to use VR / AR technology. What they really should be asking themselves is what would they be willing to do to talk to a distant friend as if they were in the same room or experience the wonder of floating through the International Space Station as if they were really there. My guess is they would be willing to put on a pair of nerdy glasses for such experiences.

    Immersive VR will impact people in a significant way, far more than their current web browser. It’s easy for SF fans to understand why. VR and AR has the capability to transcend the way we currently search for and interact with information. If people thought the Internet was transformative then they haven’t seen anything yet. The Internet is just the plumbing below ground, VR / AR is the palace above it. Just like most technologies, the time has come to bring all of the pieces together, to connect the dots in a way that everyone can understand. I for one can’t wait to see what happens and look forward to contributing in whatever way I can.

  17. Dave Sutter says:

    If the light coming in were coherent you could darken a pattern that maps to a single pixel on the eye. It would look like concentric circles on the lens. This is how a hologram works. I’m not sure how well that would work with normal light. You can do more with 3D “darkening” patterns.

  18. DeftNerd says:

    Trying to overlay pixels onto the real world would indeed be difficult since all you can do is add more photons and make the world seem brighter and brighter as you try to overwhelm the real world with the information you’re trying to project onto it.

    I think a better idea would be to wait until crystalline photonics is a reality. It’s a way of slowing down or speeding up wavelengths. It can be done in bulk, but nobody has done it in a grid, let alone one that has small pixels.

    If someone could figure it out, you could have a clear lens that would let you alter the properties of all wavelengths passing through the material. Infrared could be up-converted to reds, you could visualize radio-waves or terahertz waves, etc.

    Using the same tech, indiscriminately altering pixels to alter the image would be easier, and you could also adjust the light levels by converting 90% of a bright pixel to radio waves, resulting in a low-visibility pixel.

    Basically, the future of augmented reality imaging won’t be just sensing light and projecting light, it’ll be altering the light before it enters your eye using meta-materials.

  19. Jason says:

    “Hard AR” is a bit dangerous, however. It implies an “unknowing” of the nature of reality around you. This can easily be co-opted by the ‘evil genius’ (http://plato.stanford.edu/entries/descartes-epistemology/#3.2) and your reality can be manipulated without your knowledge.

    I think for a good while, people will always want some kind of indication when they are receiving augmented data so they can choose to opt out whenever they desire to go back to a more primitive, real-world representation. Kind of like the option we have to turn off our iPads and hike in the woods.

    Then again, perhaps these issues were discussed in the Sci-Fi that you mention, many of which I have not read, yet.

    I too have attended every AR lecture at Siggy the past dozen years and love to see the latest progress in the industry. It is fun to see Google make such a spectacle to get people excited. Adding an extreme-sports angle to the hipness of the movement was brilliant, in my opinion.

    Thanks for such a nice summary. I especially appreciate the color mapping theories.

  20. Tyler says:

    If it’s not too much bother to explain, why don’t the translucent additive blending pixels of see-through AR have the same problem as opaque black pixels on an LCD screen on the outside of see-through AR glasses? I mean, I get that you can’t focus on pixels that close to your eye, but you wouldn’t be focusing directly on the black pixels, just like you’re not focusing directly on the translucent pixels used for additive blending. If the black pixels look like a speck of dirt on your glasses, then why don’t the translucent pixels look like, say, a speck of colored water on your glasses?

    • MAbrash says:

      Good question. The answer is that typically the optics are designed so that the virtual light rays are collimated, or parallel. So unlike opaque black pixels on an LCD screen, you don’t actually focus on virtual pixels at the distance of the glasses, but rather at infinity.

  21. Adam Menges says:

    Thanks for taking the time to write this up, I agree with most of your points. The future is exciting.

  22. Grant Husbands says:

    I agree that hard AR is some distance away, but I think you overly penalise video-passthrough AR: Virtual Retinal Displays will be able to solve the focus, resolution and intensity issues (some of which anyway apply to see-through AR). Abtin’s comment covers the incoming focus (though focus-tracking could use a normal camera). The mentioned motion problems already need solving for the other variants of AR.

  23. Tim Smith says:

    Fascinating stuff, thanks.

    “That means that there’s no way to put virtual shadows on real surfaces.”

    I suspect you’ll be able to fake it by adding a little light everywhere except where you want the shadow. It still won’t get you black, but I think perceived darkening will be possible.

  24. rdm says:

    So…

    Within a limited range of lighting conditions, we can block out a fuzzy region and replace it with an inferior image which represents what we would be seeing. Call early implementations “cartoon reality”, and “hard AR” becomes some hypothetical future version with more pixels, lower latencies and improved contrast, and [need terms here to describe the issues involved with changing light levels and unhandled light sources]?

    I think I’ve repeated back what you’ve written here?

    • MAbrash says:

      As I’ve described elsewhere, I don’t think replacing fuzzy regions with inferior images that represent what you would be seeing would look good enough – if it did, we could just do video-passthrough and have a much easier time of it. Plus this has the additional problem of requiring perfect registration or the virtual replacement will visibly diverge from the reality it overlaps. Otherwise, sure, initially it won’t look great, and it will improve over time, which I think is what you’re saying.

  25. monsto says:

    Excellent writeup. Good to see technologists making games and game makers thinking about technologies. Tech conglomo’s that aren’t thinking like this have no place in… well MY future at least. And thx for the book list :-)

  26. Bruce Cohen says:

    Excellent summary of the current state of the technologies and analysis of the tradeoffs. Obviously the update latency problem can be (mostly) solved by using faster computing hardware and more efficient object recognition and rendering algorithms, and I expect that a lot of that will come out of the continuing competition between mobile device vendors; smart phones and tablets need a lot of the same kinds of performance enhancements.

    A partial solution to the pixel replacement problem is to increase the brightness of the displayed images on top of and next to the region you want to replace; the eye will adjust to the average brightness and make the incoming light seem darker, pushing the replaced pixels’ perceived colors closer to the values of the replacing image. Not a perfect solution; and obviously not very effective in bright external light, but it would improve the AR effect in many cases.

  27. Cory Bloyd says:

    Maybe wishful thinking, but I wonder if the new research in “Tailored Displays to Compensate for Visual Aberrations” ( http://tailoreddisplays.com/ ) could be adapted to help the fuzzy black pixels problem. It seems that instead of light fields, what we need are “dark fields” that compensate for our “hyperopia” at the 4cm range.

  28. Boff says:

    Heads-up displays and the meta world come to life would be awesome, and I’ve seen various prototypes with projectors or mobile apps to “approximate” such things.

    however I say skip the screens

    hack the brain.

    Most of what you “see” is a facsimile, an approximation reproduced (or rendered, if you will) in the brain.
    The center of the eye sees more colour, the outer (peripheral) region sees more contrasting light – hence tunnel vision in life-threatening situations – the brain just switches off the additional processing power used to fill in the blanks.
    You have a glaring hole where the optic nerve meets the retina, and both eyes are constantly flicking all over the place.

    Your brain just re-renders the information, and it’s scary to think that a lot of it is “impressions” of what is really there, so the brain doesn’t have to spend too much processing power actually deducing 100%; it just assumes – a lot.

    Our brain is constantly suggesting what “might” be there based on a loose collection of visual stimulus. Hnece yuo cna raed tihs eevn tohugh it is spelt incorrectly.
    All you need to do is jack into the brain’s rendering system via the optical nerve.
    And input the basic imagery straight into the optical nerve fiber.
    pixels, anti-alias colour, lighting etc, *could* just be filled in automatically just by the sheer suggestive gullibility of our brain.
    You could go one step further and start tinkering with associative information.
    So when you see a Pepsi logo, your internal database returns Coca-Cola instead (a sort of aphasia being handed over to the PR people).

    Love the post, I’ve been involved loosely and have had contact with a number of pioneering flash/web AR projects

    Flash, with its current GPU-accelerated Stage3D being rendered UNDER the GPU-accelerated video layer, which is in turn under the standard CPU layer of rendering which sits in the foreground, effectively killed off a lot of AR production for the Flash community – which what’s left of the Flash community is trying to hack around, sending pixels here and there through various APIs. (Good job, Adobe – Flash was already under threat without you poisoning the damned well, despite 99.9% market penetration compared to the 50% of HTML5 browsers capable of rendering the markup, all because mobiles just weren’t “strong enough” for the first couple of years. Not to mention Apple forced people to become “Apple developers” to make apps and sell them on their store, instead of people surfing in and playing the same games free of charge on a website (ignoring the weakness of having a platform (Flash) within a platform (Safari), the parent of which may already be draining huge amounts of resources) – okay, rant over ;)

  29. NPQ83 says:

    You’ll have to excuse me here if I sound naive, waif-like or foolish, however: Your argument seemingly ignores the concept of *networked* AR.

    Consider a current public space: if you’re in the Developed world, in a major urban space, chances are that you’re in the F.O.V of at least three cameras, at least six sensors (including those that track: traffic, crowd-through-flows, density of radio signals, mobile sensors and so forth) and at least twelve other beings who potentially could share your AR hardware. This isn’t to mention the imperceptible bands of (spatially blocked, potentially) non-visual frequencies that you’re traveling through: WiFi, Phone signals, 3G etc (not to mention the higher-order types such as satellites).

    So – we have multiple “eyes” to enhance the subject’s F.O.V and we have multiple wavelengths to transfer the data to our subject’s AR hardware. Today: no additional SciFi required.

    Now: if we posited enough computational power to throw at your glasses (let us assume our AR glasses are magical quantum computers, or the subject has a secondary processor embedded in them that interfaces with the AR unit) it could be conceivable that all of the above could be used as both data sources and mediums of exchange, in *better than real-time*. i.e. your soft AR could easily block the subject’s viewpoint, and replace it with data from another sensor within range, not only “when viewed” but *predictively*, based on the total F.O.V that the other sensors are providing.

    In addition to this, recent research has shown that direct reading of a subject’s brain patterns can show (fuzzy / bad) images ~ in our SciFi projections of what AR is to become, there’s no evidence that the input *has* to be purely visual. e.g. Let us remove skin-tone differentiation by rendering all skin-tones as the same color within the subject, by direct stimulation, if we wished to medicate a virulent racist & make them a productive citizen [of course: there are a multiplicity of inputs to determine our example, but not so many in the case of a Brand].

    ~

    Hope that helped. And all of the above can be sourced, using *current* papers.

    Be seeing you.

    • MAbrash says:

      The problem with networked AR sensors is latency. Even 30 ms lag makes for a poor experience, and getting data wirelessly from public sensors would surely take longer than that.

      I’m not sure what this means: “your soft AR could easily block the subject’s viewpoint, and replace it with data from another sensor within range”. The whole post was about why it’s not feasible to block the subject’s viewpoint, either wholesale (because video-passthrough isn’t good enough) or per-pixel (because you can’t focus at that distance). So I don’t see how that would work.

  30. John S says:

    What about using one-way mirror film, perhaps in addition to a second opaque LCD? The film becomes opaque/reflective to whichever side has less ambient light. So for an AR display, your mini projector would only have to overcome the ambient light of the environment (versus the light of your eye) to get a more solid image. An additional LCD would help significantly to reduce the ambient light on the other side, dramatically strengthening the effect of the AR. Granted, it’s not close to perfect, but it could help by being a transitory solution until a real one materializes.

    We talked about this issue a little on the MTBS3D forums, but no satisfactory solution was ever reached: http://www.mtbs3d.com/phpBB/viewtopic.php?f=138&t=15131

    Man, now I realize why scientists have such a hard time understanding the evolution of the eye; so many factors had to come together and be just right before it could be even remotely useful.

    • MAbrash says:

      Wouldn’t this have exactly the same properties as a per-pixel opaquing LCD panel – in particular, a big blurry border?

  31. Andrian Nord says:

    That’s all nice and fine, but I don’t get one point – why would consumers even want or need hard AR?

    If I were ever asked what I wanted to wear – some magical hard AR glasses that can render things that don’t exist in reality (and hide things that do), or “weak” soft AR glasses incapable of such magical effects – I would choose soft AR.

    There are many problems with the idea of trying to occlude or replace real vision: it could be abused (most obviously), with varying degrees of harm; it could become defective, so that it doesn’t actually draw whatever the programmer or user intended; it could turn off at the end of the day.

    Consider applications of such things, not harmful or commercial ones – like redrawing a weak bridge across an abyss as a nice, strong one, to give the user courage to cross it. And suddenly that thing fails (look at modern cellphones – this could happen anytime; the more complex a thing is, the more problems it has) and the user finds himself halfway across a weak, swaying bridge… This could not end well.

    The point is, I suppose, that any such technology may never be viable, not only because it is technically difficult to implement, but because the tech would never be (in any near or middle future) reliable enough to make it safe to mess with something as vital as vision. Without some really urgent need, that is.

    So I don’t see a point in even thinking about hard AR now – all we want is to have some additional helpful information as close as possible, directly before our eyes, without being able to fool ourselves, isn’t it? And I actually can’t imagine any “good” application for such tech, but can imagine an awful lot of “bad” applications.

  32. Ben Reierson says:

    Just wanted to add my appreciation for this post. Sometimes I wonder if I’m alone in thinking contact-lens displays will eventually replace most of the need for embedding displays in everything else. I certainly wish I could be as optimistic as your 10 year guess though. I try to remind myself that we tend to underestimate what can be done in the long-term.

  33. Evan Zimmermann says:

    The problem of focusing on close images seems to have been solved by using regular contact lenses. Innovega has been working with DARPA on this.

  34. Isla S. says:

    Concerning the matter of video-passthrough, does anyone know the amount of lag humans are willing to tolerate? Some people might be willing to put up with that limitation, if, for example, it allowed people who were visually impaired to see the area around them.

    • MAbrash says:

      John Carmack says it’s around 20ms for VR. However, it could be quite a bit different for video-passthrough for walk-around AR.

    • Wai Ho Li says:

      I had a look at this for Simulated Prosthetic Vision (SPV) research a while back where users with HMDs had to move around an environment while seeing really constrained (resolution, FOV and intensity-levels) images from a head-worn camera.

      If you look on Google Scholar using terms such as “latency” and “video” there are a whole bunch of papers on the topic. Some of these papers have to do with acceptable latency of video-based UIs linked with how a user can complete a task (click on a moving icon or drag-and-drop folders).

      I think the problem is that while very high latencies (>250ms) can be OK for a task-oriented scenario (teleoperators in robotics regularly deal with this), it won’t be any fun for games. Latency also makes the world seem “fake”, which again is a problem for game immersion.

  35. Noah Smith says:

    Two words: ARTIFICIAL EYES.

  36. Anthony says:

    Sooner than you think, remember that technology advances exponentially. http://www.kurzweilai.net/matt-mills-image-recognition-that-triggers-augmented-reality

    • MAbrash says:

      That works nicely for things that are dependent on gate counts on chips (DRAM size, for example), but less well for things that require breakthroughs (battery technology, curing cancer, doing per-pixel opaquing).

  37. Guido Kuip says:

    Great read!

    While reading the article and the comments below it, I wondered whether it would help to simply use shaded glasses rather than the fully transparent ones like Google’s glasses. My own eyes are unfortunately fairly sensitive to light, and I can’t usually go out into the full sunlight without sunglasses on without risking a headache – adding the AR light to this is likely going to make things worse, so I probably wouldn’t be able to wear them outside without some kind of shades anyway.

    Besides that though, it would enable you to create shadows of virtual objects by simply lighting up every other part of the screen but the shadow (or by allowing more light to filter through, but that would probably require LCD-like technology). It won’t enable you to create full black, but it seems a cheap option compared to the other technologies mentioned.

    I assume this idea isn’t new at all but I don’t think it was mentioned up there yet.

    Also, in regards to the latency in hard AR devices (and full VR): how does the human brain process our visual input so fast and is there any way in which we can emulate and/or augment this with current or near-future technology rather than rely on ever faster digital calculating powers of electronic circuitry to process and overlay what we see before it reaches our eyes? And what about ways in which our eyes/mind can be tricked in order to save calculating powers (like not registering color in the corners of our visual range but rather supplying this with our brain and memory for example)?

    Looking forward to future posts :]

    • MAbrash says:

      Yes, darkening to improve contrast is essential, preferably variable darkening to reflect ambient lighting conditions.

      The human brain processes input so fast by doing a huge amount of parallel processing in the retina, and then doing a lot of interpolation and filling in in the brain itself. One thing that helps a lot is that there’s no scan-in time from the camera (the eyes) or scan out time to the screen (the brain) – that can cost up to two frame times in silicon. (Yes, there is time to get information down the optic nerve; I don’t know how long that is.) And then a lot of processing that the retina does in parallel (and I’d assume the brain too) gets done serially in silicon.

  38. Bill says:

    As a user of a CAVE (think holodeck without the physical touch) I can assure you that current technology can minimize lag enough to not induce simulator sickness. Granted, in a CAVE you don’t have the real world moving to draw attention to any differences in perspective. However, you do have to deal with minimizing differences in perspective as detected by the inner ear and eyes.

    As for additive blending only, I don’t see any reason why you couldn’t add a mask layer, I believe some cameras already do this for various reasons. Some scientific uses as well as just preventing a flashlight from blinding a camera.

    All in all the human brain is rather adaptable. I’ve seen people convinced that we have physical feedback through our wand because the visual interaction is so convincing.

  39. happladatple says:

    I think the next big revolution in technology will be with metamaterials (a vague term) and synthetic biologicals. There are materials which can, for example, be used to bend light to cloak other objects. These types of properties may let you decouple the display solution from processing limitations, bringing truly darkened ‘aug’ pixels into the ‘real-light’ datastream without any lag or focal issues. We may also be able to grow new types of retinas that will help facilitate AR technology. Of course these are a ways off. If you could custom order a physical property to help solve the issues in these technologies, would that be useful and what would it be?

    Reading about the constraints of current soft-AR technology, it seems that ‘magic’ would be a good art direction, especially considering these technologies are ‘like magic.’ If ghosting is a problem, then “Waking Life” inspired graphics are a solution. Maybe a game based on distinguishing the localized darkening clusters from a sparkly, noisy, ‘real world’ backdrop would be fun. Or vice versa.

    Floating people’s names above their image raises privacy concerns.

    • MAbrash says:

      I don’t know what physical property I’d custom order, although it’s a nice offer :) All I can do is specify functionally useful characteristics, such as per-pixel opaquing and per-pixel focus.

  40. Hoffmann says:

    I wonder if there is any kind of film that is transparent but becomes opaque when an electrical current passes through it. That would at least eliminate the additive-blending problem when you want to make virtual reality games and applications, as opposed to augmented reality ones.

    • MAbrash says:

      Sure, it’s not a problem to do globally variable opaquing; that can be done with an LCD panel or an electrochromic screen.

  41. Robert M says:

    Partial solution to the issue of drawing black:

    The human eye has a very wide dynamic range. Everywhere except sunny outdoors, the eyes get relatively little light. Stepping into a moderately lit office from the sunshine causes a marked feeling of darkness before the eyes have a chance to accommodate.

    This also means that darkness can be created by floodlighting the areas of the retina corresponding to the non-dark regions. While in the general case this only reduces the problem to video pass-through AR, there may be cases where it’s good enough – for example, watching a movie and “darkening out” the surroundings, or using the real world as a faint backdrop to the alternate reality (i.e., the reality augments the virtual reality).

    Also, has anyone tested what happens if white light is selectively added to the scene? Maybe color and contrast would be preserved by the brain compensating for the difference, while the area not flooded with white light would appear darker (i.e., a shadow or even a dark grey might be possible). We shouldn’t rule out approaches like this without trying them.
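    As a sketch of the floodlighting idea (purely illustrative, assuming an additive display and made-up luminance values): add a uniform boost everywhere except the “shadow” region, so the shadow becomes darker only relative to its now-brighter surroundings.

```python
import numpy as np

# Sketch of "darkness by floodlighting" on an additive display: you can't
# subtract light, but you can brighten everything except the region you want
# to look dark and let the eye's adaptation do the rest. Values are illustrative.

h, w = 480, 640
real_world = np.full((h, w), 100.0)         # assumed uniform real-scene luminance

shadow_mask = np.zeros((h, w), dtype=bool)  # True where the virtual shadow falls
shadow_mask[200:280, 260:380] = True

flood_boost = 300.0                         # assumed added luminance outside the shadow
added_light = np.where(shadow_mask, 0.0, flood_boost)

perceived = real_world + added_light        # additive display: light can only be added

print("inside shadow: ", perceived[shadow_mask].mean())    # unchanged real-world level
print("outside shadow:", perceived[~shadow_mask].mean())   # flooded, much brighter
print("relative contrast:", perceived[~shadow_mask].mean() / perceived[shadow_mask].mean())
```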

  42. Haniff Din says:

    ‘640K is more memory than anyone will ever need.’ – attributed to Bill Gates (a quote he has denied making).

    The researchers who contributed to the film ‘Minority Report’ said we’d need gloves to interact. You don’t need gloves. Technology is moving a lot faster than you can predict – especially in the AR market.

    • MAbrash says:

      What do we need to interact? I’m not at all sure what the solution(s) will be for AR. I’m not a big believer in the kind of dramatic arm motions used in Minority Report, but hand gestures seem promising, and using gloves is one way to capture those.

  43. uh20 says:

    I would prefer see-through AR, just because it would be creepy to have the software encompass everything you see, even when you’re just trying to view the naked world.

    My initial idea for see-through AR would be tiny shades that switch to face the wearer and display a few pixels of color. The problem is that you would need a lot of them – probably one for every 4–8 pixels to look realistic against the surroundings – all electronically driven to face the user and snap back without being very visible.

    • uh20 says:

      Point is, it’s probably still going to take a while.

      I’ll be optimistic here and say the first consumer version will be out in 6 years.

  44. Fred says:

    I wonder if see-through AR could ever solve the typical issue with stereoscopic 3D – the fact that virtual pixels/objects break the normal relationship between eye focus distance and eye convergence. It already causes a lot of visual strain when watching a movie; I can’t imagine what it will be like when mixing real and virtual pixels.
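    For a concrete sense of the mismatch described above: with a fixed focal plane, the eyes must accommodate to the display’s optical distance while converging on the virtual object’s rendered distance. Below is a rough sketch of that conflict; the interpupillary distance and focal-plane distance are assumed values for illustration.

```python
import math

# Rough sketch of the vergence-accommodation conflict on a fixed-focus display.
# The interpupillary distance and focal-plane distance are assumed values.

ipd_m = 0.063         # assumed interpupillary distance (~63 mm)
focal_plane_m = 2.0   # assumed optical focus distance of the display

def vergence_deg(distance_m, ipd=ipd_m):
    """Convergence angle between the two eyes' lines of sight, in degrees."""
    return math.degrees(2.0 * math.atan((ipd / 2.0) / distance_m))

def diopters(distance_m):
    return 1.0 / distance_m

for virtual_m in (0.5, 1.0, 4.0):
    conflict = abs(diopters(virtual_m) - diopters(focal_plane_m))
    print(f"virtual object at {virtual_m} m: vergence {vergence_deg(virtual_m):.2f} deg, "
          f"focus conflict {conflict:.2f} D")
```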

  45. Fred says:

    To me the biggest deal about soft AR would be making physical displays such as laptop screens, TVs, etc. obsolete.
    I could just sit down with a physical keyboard and “summon” in front of me a virtual desktop with as many virtual displays as I want (all at “retinal” resolutions). They would be fixed in physical space, about two feet away from me, and I could reposition them at will.
    There would be fewer focus/accommodation issues, since those virtual displays would be flat and at a fixed distance. Precise head positioning would still be needed, but maybe that could be solved by placing a physical marker as an anchor in the real world (e.g., the physical keyboard in front of me could play that role).
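    A minimal sketch of what that anchoring might look like computationally, using 4x4 homogeneous transforms (the names and offsets below are hypothetical): track the keyboard marker each frame, then chain its pose with each display’s fixed offset from the marker.

```python
import numpy as np

# Sketch: pin virtual displays to a tracked physical marker (e.g. a keyboard).
# Poses are 4x4 homogeneous transforms; names and offsets are hypothetical.

def translation(x, y, z):
    t = np.eye(4)
    t[:3, 3] = (x, y, z)
    return t

# Fixed offsets of two virtual monitors relative to the marker,
# chosen once by the user (e.g. above and behind the keyboard).
marker_T_displays = [
    translation(-0.3, 0.4, -0.6),
    translation( 0.3, 0.4, -0.6),
]

def display_poses_in_world(world_T_marker):
    """Recompute each display's world pose from the latest tracked marker pose."""
    return [world_T_marker @ m_T_d for m_T_d in marker_T_displays]

# Example: the tracker reports the keyboard 0.7 m in front of the world origin.
world_T_marker = translation(0.0, 0.0, -0.7)
for i, pose in enumerate(display_poses_in_world(world_T_marker)):
    print(f"display {i} position in world: {pose[:3, 3]}")
```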

  46. Paul says:

    Very interesting article, this is one of the trends that I have been watching avidly, and I greatly appreciate your explanation of some of the technical issues.

    You mentioned Oculus in an earlier comment, and I’ve seen a quote from you saying that you’re looking forward to getting your hands on one — does this product address any of your issues with pass-through AR? In particular, the Oculus literature states that the head-tracking is fast enough to get around the issues with lag-induced motion sickness. Resolution seems to be fairly high on these units (although not Retina-quality, obviously). I’d be very interested to know if you can wire up an external camera and wander around with one of these headsets on without falling over or getting ill (I’m sure that’s the first thing you’ll do when you get one :) ).

    However, you state that there’s a bigger problem than motion sickness: “Worse still, the eye is no longer able to focus normally on different parts of a real-world scene, … which leads to a variety of problems”. Can you please elaborate a little on what sort of issues this causes? Is it to do with eye strain/fatigue, or something else?

    Finally, has any research been done into what would be perceived if one eye was covered with a pass-through video headset, and the other one left unobstructed? I suspect this setup would be particularly sensitive to lag, but you might be able to usefully perceive a HUD and some other sorts of overlay with this (but obviously not hard-AR, with full replacements of objects).

    Cheers!

    • MAbrash says:

      Good questions.

      The tracking on the Rift is an IMU. The gyro tracking will be good, and will not drift too much, but the accelerometer tracking will be horrible as always, due to the double integration from acceleration to velocity to position, so position will be wrong on all but tiny time scales, and position matters a great deal. And even the gyros do drift, especially with fast rotation. So the Rift won’t address the issues with passthrough AR, at least not off the shelf, unless they put in an awesome tracking system (and I’m not even sure how they could do that at this point). Which is unlikely, since they don’t need a tracking system that good for VR gameplay.
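      A minimal sketch of the double-integration problem: even a small, constant accelerometer bias – the value below is assumed purely for illustration – produces position error that grows quadratically with time.

```python
# Sketch: positional drift from double-integrating a biased accelerometer.
# The bias value is an assumed illustration; real error sources (noise, scale
# factor, misalignment) only make things worse.

bias_mps2 = 0.05     # assumed constant accelerometer bias, m/s^2
dt = 1.0 / 1000.0    # 1 kHz IMU samples
velocity = 0.0
position = 0.0

for step in range(2001):                     # simulate 2 seconds
    t = step * dt
    if step % 500 == 0:
        analytic = 0.5 * bias_mps2 * t * t   # closed form: error grows as t^2
        print(f"t={t:.1f}s  integrated drift={position * 100:.2f} cm  "
              f"analytic={analytic * 100:.2f} cm")
    velocity += bias_mps2 * dt               # integrate acceleration -> velocity
    position += velocity * dt                # integrate velocity -> position
```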

      Btw, resolution is not high at all on the Rift. Each eye has 640 pixels horizontally over 90 degrees – that’s 7 pixels per degree!

      I’m not sure what the set of problems from the eye not being able to focus normally is. Maybe it won’t be an issue at all, but my guess is that it will in fact lead to eye strain/fatigue.

      I don’t know about one-eye research, although I’m sure it’s been done, since the military uses one-eye HUD setups. I would guess it would be the worst possible situation, since each eye would experience/report different conditions.

  47. STRESS says:

    AR is, imho, pointless – not as pointless as VR was 10 years ago, but pretty close. And btw, for VR we heard the same arguments you bring up for AR: that it’s always just 5 years away from being mainstream. But after 20 years it has never become mainstream. There are simply no killer apps for AR besides the standard information overlay.

    And seriously who wants to wear goofy glasses all day?

    Strangely enough, however, VR makes more sense in the long term than AR does – at least parts of its research are applicable to a bigger goal.

    The main issue with AR is that it needs physical reality and needs to interact with physical reality; but once computing power reaches a level where physical existence itself is rather pointless, so will AR be.

    I am really disappointed with your essay on AR. As the hard-SF fan you seem to be, you would know that for the real futurist there is only one goal: uploading your existence into a digital network, free from any physical-body constraints.

  48. Leopold says:

    I’m reminded of a science exhibit I saw some years ago at the Boston Science Museum. It consisted of a white surface onto which an image was projected through a piece of glass. A second projector could be turned on to cast a separate image onto the glass, producing optical interference (OI) with the first projection – the result was a completely different image on the white surface, formed by the interference pattern of the two projections. Using OI, you can change the intensity of portions of the image by cancelling out the light waves before they reach the viewer.

    • MAbrash says:

      I do suspect non-intuitive stuff like that will be important in future wearable computing, but of course, that’s hard to predict :)
