When it comes to resolution, it’s all relative

The first video game I ever wrote ran in monochrome, at 160×72 resolution. My next four games moved up to four colors at 320×200 resolution. The game after that (Quake) normally ran with 256 colors at 320×200 resolution, but could go all the way up to 640×480 on the brand-new Pentium Pro, the first out-of-order Intel processor.

Those sound like Stone Age display modes, now that games routinely run in 24-bit color at 1600×1200 or even 2560×1600, but you know what? All those games looked great at the time. Quake at 640×480 would look pathetically low-resolution now, but when it shipped, even 320×200 looked great; it’s all a matter of what you’re used to.

That’s relevant just now because the first generation of consumer-priced VR head-mounted displays is likely to top out at 960×1080 resolution, for the simple reason that that’s what you get when you split a 1080p screen across two eyes, and 1080p is probably going to be the highest-resolution panel available in the near future that’s small enough to fit in a head-mounted display. At first glance, that doesn’t seem so bad; it falls short of 2560×1600, or even 1600×1200, but it’s half of the latter, so it’s in the same resolution ballpark as monitors. And besides, it’s way higher-resolution than any of my earlier games, and in fact it’s higher-resolution than anything that was available for more than 15 years after the PC was introduced, and, as I noted, those lower-resolution graphics looked great then. By analogy, VR should be in good shape at 960×1080, right?

Alas, it’s not that simple, because when it comes to resolution, it’s all relative. What do I mean by that? There are two very different interpretations, both applicable to the present discussion. We’ve seen the first one already: how good a given resolution looks depends on what you’re used to looking at. 160×72 looks great when the alternative is a text-based game, but less so next to a state-of-the-art game at 2560×1600. This first interpretation applies to VR in two senses. The first is that VR will inevitably be compared to current PC graphics – clearly not a favorable comparison. However, the second is that, like my early games, VR will also be judged against previous VR graphics in the PC space, and that’s a favorable comparison indeed, since there are none. For the latter reason, if VR is a unique enough experience, people will surely be very forgiving about low resolution; the brain is very good at filling in details, given an otherwise compelling experience, as happened, for example, with Quake at 320×200.

Another way to think about resolution, however, is relative to the field of view the pixels are spread across. The total number of pixels matters, of course, but the density of the pixels matters as well, and it’s here that VR faces some unique issues. Let’s run some numbers on that.

My very first game ran on a monitor that I’d estimate to have a horizontal field of view of maybe 15 degrees at a normal viewing distance. At 160×72, that’s about 11 pixels per horizontal degree.

A 30” monitor at 2560×1600 has about a 50-degree field of view at a normal viewing distance. That’s roughly 50 pixels per horizontal degree, and approximately the same is true of a 20” monitor at 1600×1200.

The first consumer VR head-mounted displays should have fields of view of no less than 90 degrees, and I’d hope for more, because field of view is key to a truly immersive experience. At 960×1080 resolution, that yields slightly less than 11 pixels per horizontal degree – the same horizontal pixel density as the CP/M machine I wrote my first game for in 1980, and barely one-fifth of the horizontal pixel density we routinely use now.

And that’s only the horizontal pixel density. The vertical pixel density is the same, and in combination they mean that a first-generation consumer head-mounted display will have about one-twentieth of the two-dimensional pixel density of a desktop monitor. As another way to understand just how low a wide field of view drives pixel density, consider that the iPhone 5 is 640×1136 – two-thirds as many pixels as the upcoming head-mounted displays, packed into a vastly smaller field of view; at a normal viewing distance, I’d estimate the iPhone has roughly 100 pixels per degree, so overall pixel density could be close to one-hundred times that of upcoming VR head-mounted displays.
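Those per-degree figures are simple arithmetic; a few lines capture them, along with the angular size of a single pixel (the field-of-view numbers are the rough estimates used above, not measurements):

```python
# Pixels per degree for the displays discussed above; FOV values are the
# rough estimates from the text, not measured values.

def pixels_per_degree(h_pixels, h_fov_deg):
    return h_pixels / h_fov_deg

displays = [
    ("1980 CP/M monitor", 160, 15),
    ("30-inch 2560x1600 monitor", 2560, 50),
    ("960x1080 VR HMD, 90-degree FOV", 960, 90),
]

for name, pixels, fov in displays:
    ppd = pixels_per_degree(pixels, fov)
    # 60 arc-minutes per degree gives the angular size of one pixel
    print(f"{name}: {ppd:.1f} px/deg, {60 / ppd:.1f} arc-min per pixel")
```

The ratio of the monitor’s ~51 pixels per degree to the HMD’s ~11, squared, is where the roughly twenty-to-one areal density figure comes from.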

It is certainly true that the brain can fill in details, especially when viewing scenes filled with moving objects. However, it would be highly optimistic to believe that a reduction in pixel density of more than an order of magnitude wouldn’t be obvious, and indeed it is. It’s certainly hard to miss the difference between these two images, which reflect the same base image at two different pixel densities:

And that’s only a 4X difference – imagine what 20X would be like.

If there were no monitors to compare to, low pixel density might not be as noticeable, but there are, not to mention omnipresent mobile devices with even higher pixel densities. Also, games that depend on very precise aiming may not work well on a head-mounted display where pixel location is accurate to only five or six arc-minutes. For that reason, antialiasing, which effectively provides subpixel positioning, will be very important for at least the first few generations of VR.
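A toy example of how antialiasing provides subpixel positioning: a one-pixel-wide dot centered at a fractional position can be split between the two nearest pixels by coverage, so its apparent position shifts in increments much finer than a whole pixel. (The `splat` helper below is purely illustrative, not from any real renderer.)

```python
# Coverage-weighted "splat" of a 1-pixel-wide dot at fractional position x:
# the fractional part determines how the dot's intensity is shared between
# the two nearest pixels, encoding subpixel position.

def splat(x, width=8):
    row = [0.0] * width
    i = int(x)
    frac = x - i
    row[i] += 1.0 - frac      # left neighbor gets the larger share
    row[i + 1] += frac        # right neighbor gets the remainder
    return row

print(splat(3.25))  # dot centered a quarter pixel past pixel 3
```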

That’s not to say that the upcoming VR head-mounted displays won’t be successful; a huge field of view, together with high-quality tracking and low latency, can produce a degree of immersion that’s unlike anything that’s come before, with the potential to revolutionize the whole gaming experience. But I can tell you from personal experience that the visual difference between a 960×1080 40-degree horizontal field of view head-mounted display and a 640×800 90-degree HFOV HMD (both of which I happen to have worked with recently) is enormous – what looks like a blurry clump of pixels on one looks like a little spaceship you could reach out and touch on the other – and that’s only a ten-times difference.

So I’m pretty confident that we’ll be begging for more resolution from our head-mounted displays for a long time. Obviously, that was also the case for decades with monitors; the difference here is that every day we’ll encounter much higher pixel densities on our monitors, our laptops, our tablets, and even our phones than on our head-mounted displays, and that comparison is going to be a challenge for the consumer VR industry for some time to come.

Given which, the obvious question is: how high does VR resolution need to go before it’s good enough? I don’t know what would be ideal, but getting to parity with monitors in terms of pixel density seems like a reasonable target. Given a 90-degree field of view in both directions, 4K-by-4K resolution would be close to achieving that, and 8K-by-8K would exceed it. That doesn’t sound all that far from where monitors are now, but actually it’s four to sixteen times as many pixels; there’s no existing video link that can pump that many pixels – in stereo – at 60 Hz (which is the floor for VR), not to mention the lack of panels or tiny projectors that can come close to those resolutions (and the lack of business reasons to develop them right now), so pixel density parity is not just around the corner. However, if VR can become established as a viable market, competitive pressures of the same sort that operated (and continue to operate) in the 3D graphics chip business will drive VR resolutions, and hence pixel densities, rapidly upward. VR could well become the primary force driving GPU performance as well, because it will take a lot of rendering power to draw 16 megapixels, antialiased, in stereo, at 60 Hz – to say nothing of 64 megapixels.
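To put numbers on the video-link claim: at 24 bits per pixel, uncompressed, in stereo, at the 60 Hz floor, the raw bandwidth is enormous. A quick sketch (blanking intervals and link overhead ignored):

```python
# Raw uncompressed bandwidth to drive a stereo HMD at a given per-eye
# resolution; assumes 24 bits per pixel, 60 Hz, and no blanking overhead.

def stereo_gbps(width, height, hz=60, bits_per_pixel=24):
    return width * height * bits_per_pixel * hz * 2 / 1e9  # 2 eyes

for side in (4096, 8192):
    print(f"{side}x{side} per eye: {side * side / 1e6:.0f} megapixels, "
          f"{stereo_gbps(side, side):.0f} Gbps")
```

For scale, dual-link DVI carries under 8 Gbps and DisplayPort 1.2 about 17 Gbps, so even the 4K-by-4K case is several times beyond any current link.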

Believe me, I can’t wait to have a 120-by-120-degree field of view at 8K-by-8K resolution – it will (literally) be a sight to behold. But I’m not expecting to behold it for a while.

56 Responses to When it comes to resolution, it’s all relative

  1. I really can’t imagine it’s that far off, though. My phone, an HTC One X, has a silly number of pixels (1280×720) packed into a few inches – what’s to stop the next iteration of head-mounted displays from using two displays taken from smartphones?

    • MAbrash says:

      If the next iteration of HMDs used two HTC One X screens, you’d get 1280×720 per eye – which is about the same 1 megapixel I described as the high end (at best) for the upcoming generation of VR HMDs. Conceivably you could use one 1080p screen per eye, but there are complications to doing so. And 1080p is as high as phone screens seem likely to go in the near future.

  2. All your comments seem appropriate for the case in hand: a VR headset mounting a static display, over which your eyes can roam freely.

    But what if we could feed the eye positions back to the renderer in time to adjust what was rendered and control where the screen appeared to be? Use refraction to redirect the eyes back at a high-resolution screen, potentially with a low-resolution surround, and have the renderer compensate for the eye position. It might mean we could pump much less data at the headset and yet provide a much higher apparent resolution, since we’d only be targeting pixels that would be seen by the fine-detail-resolving center of focus. Even if it weren’t super quick at handling the eye tracking, it might merely look like moving your eyes makes the scene resolve (in sub-second time, as the refractor clicks to the new rendered position).

    Just an idea, and probably complicated by eye distances, eye shapes, and other parameters that hamper laser-into-eye rendering.

    • MAbrash says:

      That’s a great idea, and one we’ve thought about. There are two problems.

      First, while it’s easy to say “using refraction,” I’m not aware of any real-world, head-wearable, reasonable-cost system that would allow redirecting the image so that the high-resolution part falls on the fovea. I’m not saying it’s not possible, just that at this time, I don’t know of a workable system that could be made at a low enough cost and with a small enough size and weight to be used in a consumer HMD. (I do know of at least one large, extremely expensive system along these lines.)

      Second, the lag in feeding back changes in eye position is a huge problem. The eye is really good at detecting shifts in position, and when you move your eye, the inevitable lag through the eye tracking, the mechanical image repositioning to put the highest-resolution part over the fovea, and the rerendering to match the new position would be very hard to keep to levels that weren’t detectable. Again, I’m not saying it’s not possible, just that the difficulty shouldn’t be underestimated. Just tracking the eye accurately enough would be a challenge.

  3. I think the reason people are pointing it out so much is that it’s the only aspect of VR we collectively have a grasp on. I’m sure we’ll eventually be comparing tracker update rates and grumbling over degrees of field of view, but right now resolution is the only point of reference we all have. This is why every journalist has written the same “ha, remember the 90s, this is kinda cool, I can see pixels, John Carmack!” (“…puke”) review of the Rift – they have no idea how else to review it.

    As for whether the resolution is important, it really comes down to how it conveys the game’s information. It’s like a novel: when you read one, all you have are words, but as you read them you’re picturing the scene and playing out the story described in the text, and it’s from there that you enjoy the book. Games are the same, except the information is packaged in a more digestible format – ‘animated interactive pictures’ – but the same rules apply, just more subtly.

    This is why Quake at 320×200 worked: it created a framework for our minds to fill in and enjoy. I could understand I had a gun, and I could understand that the monster had ill intentions. Where this breaks down – and where resolution starts to matter – is when the dragon looks like a duck (Adventure, Atari, 1979). At that point the resolution and graphics are misrepresenting what’s going on in the game, which breaks the immersion.

    The funny thing is that, in my own experience, the quality of a game’s graphics matters for about the first two minutes; after that I don’t even notice, good or bad.

    • MAbrash says:

      Agreed that quality of game graphics tends not to matter much after a little while, unless it’s integral to the gameplay, and I really like your analogy of games to novels. However, you may be underestimating exactly how low the resolution is here. When you played Quake at 320×200, it had twice the pixel density of the VR HMDs I described. I think latency and FOV matter more than resolution in general, but below a certain point resolution is hard to ignore.

      • I did overlook the pixel density; it’s easy to forget that 720p is great until it’s stretched into a wraparound view. Despite this, I feel it comes down to the game design itself: Snake works fine at 32×32, and the original Game Boy games did well with 160×144. When creating a VR game, developers should look to what works well in VR and move away from high-fidelity gameplay elements that might work badly (fine text, long-distance views). After all, ‘art thrives on its limitations’ – perhaps this low resolution will force developers to think about what makes a good VR experience.

        Speaking of art, another good parallel to the immersion effect is Impressionism: despite the “low resolution” of the paintings, we can still see, understand, and enjoy Claude Monet’s works. Interestingly, the low fidelity of Impressionism actually enhances the experience, because it is uncluttered by irrelevant visual detail.

        If I had my way, it would be mandatory for developers to visit Impressionist art galleries so they could learn the difference between graphics and aesthetics – specifically, that the former gets far too much attention and the latter needs far, far more.

  4. George Kong says:

    Tim Sweeney made a comment regarding this stuff at a conference earlier this year…
    http://www.youtube.com/watch?v=XiQweemn2_A

    The gist of it was: 8k×4k @ 72fps is near the limits of human perception; photoreal graphics at that resolution would require 2000 times our current (early 2012, at the time of that statement) processing power, which is about 10–20 years away at our current rate of progression.
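The 10-to-20-year range follows directly from that 2000x figure: it’s about eleven doublings, so the timeline depends only on the assumed doubling period. A quick check:

```python
# 2000x more processing power is ~11 doublings; the years required depend
# only on the assumed doubling period (Moore's-law-style growth assumed).
import math

doublings = math.log2(2000)  # about 11
for months in (12, 18, 24):
    print(f"{months}-month doubling period: ~{doublings * months / 12:.0f} years")
```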

    I’m inclined to believe that this will happen sooner rather than later; technology has a tendency to hit inflection points that result in surges of commercial viability, a significant quickening of development, and an influx of technical and monetary resources.

    VR is a good killer app for driving this – for the next couple of years it’ll be an enthusiast product; as the resolution increases, as the amount of software that supports it increases, and as the feature set grows, it becomes attractive to a larger range of people.

    There are many people outside of gaming waiting for (or at least ready to pounce on) the technology once it is more mature – design, construction, manufacturing, medicine… any industry that relies on spatial visualization will benefit greatly from it.

    And when it starts to become realistically viable for telecommunication and telecommuting (as in, not just acceptable relative to what’s around today, but preferable to what will be available via other means in the future), the demand for the technology and all its constituent parts will ratchet up significantly, driving a synergistic development of all the other parts.

    We’re going to be hitting these milestones of commercial usefulness and viability all along the way to 8k×4k (or 8k×8k). And when you combine that with other changes in the tech paradigm – such as graphene processor technology – that aren’t directly on the same research path but will benefit VR and end users immensely, then… well, the future for the convergence of computing is promising indeed.

    Both VR and AR sit at the nexus of that convergence, at least as far as user interface hardware and UI goes – so it is inevitable in my opinion that excitement, use and development for it will accelerate.

    • MAbrash says:

      George, I love your vision of the future of AR, VR, displays, and processing power – and I sure hope you’re right!

      –Michael

    • STRESS says:

      Actually, I am not so sure we’ll hit that target sooner – we might not hit it at all in 10 years. Looking at recent progress in compute power density, progress has definitely slowed down. It has been slowing for years now in the CPU realm, and it has started to slow in the GPU realm as well. The high-end compute market that is necessary to keep the game going is dwindling rapidly, thanks to consumers switching to more low-end devices, so-called cloud solutions, and the simple absence of applications on the horizon that justify this amount of local processing power.

  5. Nukemarine says:

    There’s a slight flaw in your “need” for higher-density pixels. With human eyesight, we demand more at the center of the field of view and less at the periphery. So while you might “average” 5 pixels per degree of view, the reality, thanks to lens technology, is that you’ll have more pixels focused at the center of your view, with the periphery of your vision getting a lower pixel density.

    Your argument is true that we’re at the beginning stages of user-friendly VR; with more demand will come higher-density displays. However, you cannot compare displays that sit 3 feet away from a user to displays that rest 1 inch away and use special optics to focus them for the eyes. I think the audio comparison would be headphones versus speakers, in that headphones offer a more immersive experience.

    Remember, you physically have a limited FOV. Your eye sockets limit it somewhat, and your nose bridge especially blocks quite a bit. On top of that, you have only a limited overlap allowing for a limited 3D view.

    • MAbrash says:

      The trick is – how are you going to keep the high-pixel-density part of the image focused on the fovea as the eye moves?

      I’m not sure why you say it’s not appropriate to compare monitors to near-eye displays. They seem directly comparable in terms of pixels per degree, since both types of displays end up putting pixel images on the retina. Sure, HMDs are more immersive, but that seems orthogonal to pixel density.

      While it is true that humans have limited FOVs, they’re limited way beyond 90 degrees. Your total field of view is about 210 degrees. I’m not sure what relevance that has to pixel density, though.

      Similarly, I’m not sure how the fact that the overlap area in which stereoscopy is possible is limited is relevant to pixel density. Note that stereoscopy is only one aspect of 3D viewing; parallax is another, and an earlier comment thread discussed which is more important. Occlusion and relative size are powerful 3D cues as well.

      • Sean Barrett says:

        While eye-tracking would be “optimal”, it’s easy to imagine an HMD with higher resolution in the center, say, 20-40 degrees FOV and lower resolution outside that. This changes the math of resolution requirements pretty significantly.

        Obviously, the narrower the high-resolution region, the less effective eye motion is and the more you force people to use head motion instead; this might be a little weird, but we might decide it’s a better trade-off than uniform resolution (especially if people learn to do it naturally and it doesn’t feel uncomfortable; e.g., Wikipedia’s saccade article says shifts of more than 20 degrees tend to include head movements anyway).

        (Certainly at a minimum we only need high resolution within the range of eye motion + the width of the eyes’ high-resolution perception, which will be much less than 210 degrees.)

        • MAbrash says:

          Hi Sean – great to have you commenting.

          That’s definitely an interesting idea, although a number of experiments would be needed to figure out if it works. My biggest concern is whether the image changes resulting from the virtual image sliding across areas of different resolution would be picked up by the peripheral parts of the retina, even if the resolution was adequate for static viewing. The peripheral part of the retina is very good at picking up motion, and aliasing/shifting as the image moves could appear as motion. Also, of course this doesn’t reduce resolution nearly as much as a setup that follows the fovea; if you have full resolution for +/-25 degrees, that’s more than a quarter of a 90×90 FOV, although as you point out the relative savings increase as the FOV gets larger.

          Certainly more easily doable than eyetracking and moving the image with the fovea, though, and if it works it would be a significant resolution win.

          –Michael

  6. jspenguin says:

    640×800… Does this mean that Valve games will be supporting the Oculus Rift?

    I like Doom 3, but being able to use it with Half Life or Portal would be awesome.

  7. Do you think some of these problems could be mitigated with some kind of eye-tracking feature that changes the pixel density depending on where you’re looking? Obviously we can’t change the pixel density of the screen itself, but it might be possible to save on processing power if the image is rendered at maximum density in the center of what we’re looking at and that density gradually drops off as the image gets further away from the fovea. The net result would be a very sharp image centered on the fovea that gradually blurs as the pixels become larger toward edge of the screen.

    The reason I think this could be feasible is that our eyes work in pretty much the same way so it’s possible that we wouldn’t even notice that this blurring is occurring at the edges. That being said, it would remain to be seen if the processing power needed for eye-tracking and re-adjusting the image would actually be less than the power needed to render the entire image at full density.
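As a rough sketch of the savings such a scheme could buy – all the numbers here are illustrative assumptions, not a real design – render full density only inside a central region and a reduced resolution outside it, and the rendered-pixel count drops sharply:

```python
# Two-tier foveated rendering estimate: full density inside a central square
# region, reduced linear resolution outside it. Illustrative numbers only.

def rendered_fraction(fov_deg, inner_fov_deg, outer_linear_scale):
    """Fraction of full-density pixels actually rendered."""
    full_area = fov_deg ** 2
    inner_area = inner_fov_deg ** 2
    outer_area = (full_area - inner_area) * outer_linear_scale ** 2
    return (inner_area + outer_area) / full_area

# Full detail across the central 40 degrees of a 90-degree view,
# quarter linear resolution everywhere else:
print(f"{rendered_fraction(90, 40, 0.25):.0%} of the pixels rendered")
```

Even this crude two-tier split cuts the rendering load to about a quarter, though it does nothing about the display’s native pixel count.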

    • MAbrash says:

      I can see this is going to be a popular suggestion :) And it’s a good one in theory, but see my earlier response to the same suggestion for details about why it’s challenging. Note that the problem doesn’t have to do with the power to render at full density, although that could certainly be a challenge – it’s about the ability to display at full density, because there just aren’t any head-wearable displays that can display that many pixels.

      –Michael

  8. Hesham Wahba says:

    Thanks for the great article. It confirms pretty much what I expected; the real question is how quickly the industry can ramp up the resolution. Even though they already have a 9.6″ 4K display, I still don’t think it will be enough – and the graphics power needed to drive that at 120Hz side-by-side rendering? The next few years will need to see a huge ramp-up in display and graphics technology, especially if we want to be driving these displays from cell phones (which is what I want to be using!). Also, a quick plug: I care not just about gaming but about VR desktops, as I wrote the Ibex VR desktop for Linux and would really love to have a nice VR environment to work in (http://hwahba.com/ibex). Looking forward to any updates you have on this in the future, and to seeing how the Oculus does as well :)

    • Michael says:

      I think before the industry can ‘ramp up’ they need a viable product to sell.
      Otherwise there’s nothing to fund this ramping up (or even to persuade the game industry in general to make their games support headsets)

      I think that’s behind Michael’s points about us being happy playing Doom. If we weren’t happy, we wouldn’t have spent nearly three decades throwing money at computer hardware and buying these low-resolution games – money which in turn led to faster graphics cards, faster CPUs, and higher resolutions.

      Plus, to some extent, general-purpose consumer computing took as much advantage of the increasing performance as gamers did. In the past, computers were pretty slow and clunky affairs for everyone, and even doubling the chip speed meant they were still slow, so millions and millions of people upgraded at the first opportunity. The thing about doubling performance is that at first it isn’t really a lot of difference – 100MHz to 200MHz – but eventually it’s a huge one: 1GHz to 2GHz.

      Today, although there’s a niche hardcore of gamers who still upgrade like it’s the 90s, that’s no longer typical. The people who just “browse the web” don’t need a faster computer, and it seems unlikely they need a triple-SLI water-cooled hyper-mega-powered graphics card.

      So, unless VR is a viable, affordable application with current technology that shedloads of gamers want to rush out and buy, it won’t reach the point where it gets ramped up. If it needs much higher resolutions than we can manage today, then Carmack et al. are wrong, and now is still not the right time to bother with it.

      The question then is, are there other consumer applications for these 4k or 8k displays (and the graphics cards to render that big) that will push the tech forward before VR exists?

      AR? I don’t think so, I think this is just technology for people who want to walk into traffic, walls and each other and then the last thing they’ll see before the ambulance carries them off is an advert from claims direct about suing google for compensation :-)

      I’m certainly happy playing games now at 1080p on a monitor. If someone started to sell 4K monitors, I’d think: WTF for? Why would I pay more for that? Although I suppose some people do game higher and bigger, I don’t think there’s a huge call for it. TV and film could be made at 4k×4k, but the TV broadcasters and cable companies are barely there with 1080i yet; they haven’t got to 1080p, and I doubt they’ll go higher any time soon.

      If a headset is going to cost significantly more and require a graphics card two or three times more powerful than whatever the card du jour is for rendering DirectX version 12 (or 13, or 14) at 1080p, it’s only going to be niche at best.

      If various people can create a product that’s more on par with buying a monitor and that works well with a reasonable middle-of-the-range gaming PC, then I’d probably consider buying one, and I can imagine plenty of other people would too. (Although presbyopia has caught up with me rapidly in the last year or so; thanks to the modern trend of reading everything on a PC monitor I haven’t bothered with glasses, but I’m not sure I’d want the hassle of wearing reading glasses and a headset.)

      Although quite a few people seem to be throwing time and money at R&D (and buying Rift kits), perhaps it is viable now even at these lower resolutions – certainly a few who tried the Rift seemed to think it was a game changer.

      • MAbrash says:

        You’re in luck – HMDs are all focused at infinity, which means that presbyopia is not an issue. Myopia is, however, and since my experience is that the majority of geeks are nearsighted, that matters.

        –Michael

  9. Aaron Martone says:

    The latter part of your article is what has really piqued my interest in VR. It could very well become the driving force that gives the PC gaming industry a much-appreciated kick in the pants. Gamers have been a major driver of GPU architecture advancements, and sometimes it takes the craze around the latest gadget or technology, along with adoption by the masses, to propel advancements in those fields by leaps and bounds.

    I’ve personally always felt that immersion would be paramount in the next ‘evolution’ of gaming. I’ve always loved simulation games, and the best ones out there try to make you feel as if you’re in the game world they’re emulating. I can only imagine what VR can (or will) do to help sell that experience to the consumer.

  10. William says:

    Hey Mike,

    How does this discussion apply to AR? Is AR mainly projection onto goggles, or does pixel density also come into play on screens that provide some sort of pass-through vision? In previous articles you’ve discussed the pros and cons of AR emerging before VR – could this be another pro, or does AR suffer the same issues?

    • MAbrash says:

      Good question! Pixel density would apply to AR pretty much the same way it applies to VR, except that while you would want wide FOVs with AR, you probably can’t have them for quite a while. Oculus has shown that it’s possible to get to 90 degrees in VR right now, but I’ve never seen an AR HMD that can get near that FOV. And even if it could, it probably wouldn’t be suitable for wearing while you moved around, which you would want to do with AR, unlike VR. So whatever AR glasses we see in the near future will probably have much more limited FOVs, on the order of 30-50 degrees, so they’ll have something like 5X the pixel density of VR HMDs – but at the cost of narrow FOVs.

      –Michael

      • George Kong says:

        You might be somewhat familiar with the tech already, Michael, but Innovega

        http://innovega-inc.com/how-it-compares.php

        is making pretty bold claims about lightweight, high-FOV glasses. I think this could be a promising offshoot of AR technology; once the rest of the AR/VR ecosystem ratchets up, something like this would play well with an enthusiast market until the non-contact-based tech catches up on the FOV spec.

        On a personal level, I wear glasses right now and would happily switch over to prescription contact lenses with this tech… as long as the wearable glasses were as fashionably lightweight as shown in their visualization.

        • MAbrash says:

          I am familiar with Innovega’s technology, and it’s very interesting. There’s obviously a big question about what percentage of people would be willing to wear contact lenses; responses I’ve heard range from yours to, “You’ll never get me to put something in my eye!”, but I agree that there is considerable potential there, especially in terms of FOV and weight. I’m looking forward to seeing what products emerge from their technology.

  11. Tyson Kubota says:

    Great post, Michael.

    I’m wondering about the possibility of varying pixel density within a single VR display. It seems that your peripheral vision would accept a lower resolution than the center of your field of view, which would typically be the main focus of attention. So, in a spherical projection of the screen, putting the largest pixels at the edges of the sphere and the smallest near the center might afford higher perceptual quality while keeping the overall pixel count down.

    Here’s a rough sketch of what I mean, where each white circle represents a pixel.

    I have no idea whether current display manufacturing is able to accomplish this, but it might lead to interesting technical possibilities!

    • MAbrash says:

      This is clearly the theme comment of the day!

      The Rift actually already does this to some extent; because the lens is distorting (which is corrected for in software before displaying each frame), the pixel density is higher at the center than toward the edges. However, your eye isn’t always looking at the center of the lens; it can easily move 20-30 degrees off-center, and then what? With the Rift it’s not a problem because the pixel density doesn’t vary that much or that rapidly, but with a pixel density gradient high enough to really make a difference in terms of rendering load and overall pixel count, it would be critical for the high-density area to always land on the foveal area – and it’s not clear how that could be accomplished.

      –Michael

  12. Flavio Meibach says:

    I’ve been thinking that you really need a higher density of pixels where your eyes are focused. Maybe, with an eye-tracking mechanism, it would be possible to direct rendering resources to the general area the user’s eyes are looking at. Sort of like VR’s LOD or mipmapping.

    You would still need an 8K-by-8K display, because physically moving pixel density around the field of view would be next to impossible, but you could avoid rendering 64 megapixels. Maybe even the frame rate could be lower outside the eye’s focus area without much noticeable drop in image quality.
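    A gaze-contingent scheme like this can be sketched as a simple eccentricity falloff (a hypothetical illustration – the band edges and scale factors below are made up for the sketch, not measured values):

```python
def render_scale(pixel_angle_deg, gaze_angle_deg):
    """Resolution scale factor (1.0 = full res) for a pixel at
    pixel_angle_deg from screen center, given the tracked gaze angle.
    The eccentricity thresholds are illustrative placeholders."""
    eccentricity = abs(pixel_angle_deg - gaze_angle_deg)
    if eccentricity <= 10.0:    # roughly foveal: full detail
        return 1.0
    elif eccentricity <= 30.0:  # near periphery: half resolution
        return 0.5
    else:                       # far periphery: quarter resolution
        return 0.25
```

    The renderer would shade each region at its scale factor and upsample, so most of an 8K-by-8K panel’s pixels would never need to be shaded directly.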

    • MAbrash says:

      All true. But rendering load is the second problem; the first problem is getting that many pixels, period.

      Reminds me of the Steve Martin joke about how to turn one million dollars into two million dollars – first, get one million dollars :)

      –Michael

  13. Wai Ho Li says:

    I think Perry Hoberman’s keynote at ISMAR this year touched on an important idea: we are getting displays with higher and higher DPI (440+ isn’t difficult) but are still stuck viewing them as tiny windows for virtual or augmented content. The FOV2GO project, which Palmer Luckey also had a hand in, is designed to turn phones and tablets into VR goggles:
    http://projects.ict.usc.edu/mxr/diy/fov2go/

    I don’t think getting the resolution up to 8K per eye is that difficult over the next 5 to 10 years. Japan Display already has a 600dpi+ 3.2″ display that seems to work IRL; colour and contrast unknown:
    https://www.youtube.com/watch?v=CuBevxQG6eo

    Assuming that fabrication technology progresses as it has in the past, it is only a few more doublings in resolution from the current generation, ignoring the possibility of just stitching more than one display together. The nice thing about HMD panels, assuming we use the Oculus Rift type and not smaller displays like LCoS, is that their yields are quite high compared to large-screen LCD TVs. I think in general, manufacturing panels at the “meso” scale is easier than micro and macro products like LCoS and TVs. I am quite optimistic about this :)

    I have a lot of thoughts about the use of GPUs in the context of a system that has the potential to sense a lot about its environment. It is basically a robotic sensing system where the human is the robot moving sensors around; kind of a human-in-the-loop teleoperation scenario where a virtual world is being actuated instead of a robot. This kind of research has been going on quite a lot in sensing (Kinect Fusion being a recent example) for a long time; plenty of literature to draw on there.

    At the moment, we are only doing head tracking; the logical extension is eye tracking. I have worked on gaze-locked and gaze-contingent systems (a side FPGA project where I was supervising an undergrad student) where rendering depends on the outputs of an eye-tracking system. The same has been done in several research labs to limit the bandwidth required to transmit video: just paint the foveal (central vision) region in high resolution and scale back in the other areas. Cheap tricks such as image pyramids (similar to mipmapping) seem to work OK for low-bandwidth video streaming, but I’m not sure if they are good enough for gaming scenarios. We humans are basically detail-blind outside ±10 degrees of our foveal vision, so why waste GPU compute on those areas?

    Assuming the VR community sticks with ski-mask-like goggle designs, relatively accurate eye tracking – at least accurate enough to save rendering compute – should be achievable, as the system will not move much after initial calibration. Small head-worn eye trackers are too expensive now, but I don’t really see a technological limitation given what I have tried in the past, assuming the system can live off mains power. These eye trackers are basically camera + IR LED systems with some custom processing, where the latter is the expensive bit.

    Another cool thing to do would be to ship a full 3D model to the VR goggles instead of just the GPU’s 2D render buffer. This has a lot of advantages, such as on-goggle view generation based on a much lower-latency, potentially all-hardware feedback loop that includes head tracking and, later on, eye tracking, as well as a lot of cool geometric tricks done “client side” on the goggle end. The distribution of computation across devices will be a very interesting area of R&D in the near future.

    Regarding the resolution limitation, I think it’s not that big a deal as long as the experience or “gameplay” is compelling. I have done a lot of human-factors trials as part of a visual prostheses research project, where users are limited to around 600 pixels of resolution in different spatial configurations, with pixel intensity ranging from 1-bit (black and white) to 4-bit greyscale; many users (usually gamers, including myself) get hooked on the difficulty of recognizing objects or accomplishing tasks such as aligning objects despite the lack of resolution.

    Maybe we will get a return to more gameplay to help cover up the lack of graphical details, especially mechanics that require a lot of visual search and tight feedback loops between view movement and action? Cognitive load is a good way to hide that lack of visual details.

    • MAbrash says:

      Lots of good thoughts there. The only comment I’ll make is that if you want to do rendering on the HMD, you might want to think about exactly how many watts of power you’re planning to dissipate right next to someone’s head. Could be a little warm :)

      Also, the bandwidth requirements for a 2D framebuffer are predictable and can be guaranteed. Not so for a 3D description.
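      For instance, scan-out bandwidth for a panel is a fixed function of its mode (a back-of-the-envelope sketch, using the split-1080p numbers from the post):

```python
def framebuffer_gbps(width, height, bits_per_pixel=24, refresh_hz=60):
    """Raw, uncompressed bandwidth to scan out a 2D framebuffer.
    This number is known in advance and never varies -- unlike a 3D
    scene description, whose size depends on scene complexity."""
    return width * height * bits_per_pixel * refresh_hz / 1e9

print(framebuffer_gbps(1920, 1080))  # ~2.99 Gbit/s for 1080p at 60 Hz
```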

      –Michael

      • Wai Ho Li says:

        Agreed about heat and HMDs; I guess the computational split between HMD and desktop/laptop/mobile-device will need to balance a lot of parameters.

        I think fairly fixed-bandwidth 3D is probably doable, although the hardware pipeline may look very different from a 2D frame buffer’s. Tree structures (quad, oct) or surfels may be useful, since multi-scale approaches can be dynamically adjusted to fit certain bandwidth requirements. Very tempted to go work out how much data is needed for a “dense-enough” 3D model to be shipped to an HMD…

  14. Tiffany Pagni says:

    I read your article yesterday about how black is impossible in AR. I think the solution is obvious, just not simple. If black being the value of zero is a problem, then don’t make black zero. At first I thought to reverse the values of black and white so that black had the max value; however, then you would lose white. So that only gets you so far. Then I thought to actually add an extra value to the color palette so it would really range from #000000 to #FFFFFF and have black start at #100000. You would just have the program only select values from #100000 to #FFFFFF. That way you always have a value. Am I making much sense? Probably not. Am I basically suggesting you re-write the color palette system to suit your needs? Yes. ;)

    I hope my crazy ramblings were some kind of help. LOL

    • MAbrash says:

      The problem is actually that the RGB intensity of black is zero. That is, black is represented by the absence of photons. That means that whenever you draw virtual black over a real world scene in AR glasses, the real world scene will show through unchanged; therefore, there is no way to draw virtual black. You can represent black with whatever value you want in the framebuffer, but it’ll still be zero photons when it’s drawn, because that’s what black is.
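      A toy additive-compositing model (purely illustrative – real optics are more complicated) makes the point concrete:

```python
def ar_composite(real_rgb, virtual_rgb):
    """An optical see-through display can only ADD light: per channel,
    the viewer sees the real scene plus the emitted virtual pixel."""
    return tuple(min(r + v, 255) for r, v in zip(real_rgb, virtual_rgb))

bright_wall = (200, 180, 160)
# "Drawing" black emits zero photons, so the wall shows through unchanged:
print(ar_composite(bright_wall, (0, 0, 0)))  # (200, 180, 160)
```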

      –Michael

      • Sean Thompson says:

        I can imagine a transparent LCD that goes opaque; we’ve had self-opaquing glass for a while now, and the ability to control it at something like the size of a pixel may well be possible.

  15. Dexter says:

    I don’t really think 4K and 8K are as far off as many people think.
    For one, Intel has recently announced that they are making a push to 4K rather quickly between 2013-2015, with “Premium” devices planned to support the resolution soon: http://www.tomshardware.com/news/Intel-Higher-Resolution-Displays-Coming,15329.html
    Pushing said hardware into mass market beyond the expensive solutions already available out there: http://en.wikipedia.org/wiki/4K_resolution#List_of_4K_monitors_and_projectors
    This will also be true for smaller tablet devices; for instance, SHARP is already ramping up production of 10″ 2560×1600 displays: http://www.sharp-world.com/corporate/news/120413.html
    And has just released some “Premium” 4K displays for the Japanese market: http://www.engadget.com/2012/11/27/sharp-pn-k321-4k-igzo-lcd-monitor/

    For another, the TV market is heading that way, and it seems sooner rather than later; UHDTV has long been standardized, with 4K and 8K display prototypes by SHARP/Samsung/Panasonic shown off between 2008 and 2011.
    The last Summer Olympics were test-broadcast in UHDTV at 8K, which they are calling “Super Hi-Vision”: http://www.guardian.co.uk/media/2011/aug/28/bbc-3d-vision-london-olympics

    Some of the first test broadcasts are also already going on satellites in 4K, for instance:
    http://i.imgbox.com/adtKNFeI.jpg
    http://i.imgbox.com/aclZ8dZ6.png
    http://i.imgbox.com/abfHJtQ7.jpg
    Of course there are still problems with broadcasting and compression being worked on, but I think a change to 4K content may come sooner than one would believe (2-3 years).

    It’s really sad that the push for higher resolutions has been held back as long as it has. I remember Carmack’s monitor back in 1995, and IBM releasing their first shortly before they closed shop on producing hardware components.
    Oh yes, and here was an interesting rant regarding resolution on PCGamer recently:
    http://www.pcgamer.com/2012/06/29/why-the-macbook-pro-makes-me-angry/

  16. Kamila says:

    New Sharp 6.1-inch screen coming soon, 2560×1600 / 498 ppi: http://youtu.be/_5zOEf692vY
    I think we live in times when small screens with great resolution are having their five minutes – tablets, smartphones, and now very promising VR projects on the horizon could make the development of higher-quality pocket displays happen faster than predicted. I really hope so.
    All my gaming friends are much into VR, and the number of Rift backers from Kickstarter seems to confirm that common interest. Looking forward to seeing what Valve is up to :)

  17. Having recently played around with the PS3’s 3D capabilities on my normal PC 3D screen (1080p, 24″, 120Hz), I can say that it wasn’t the drop to 60Hz (30fps per eye) over HDMI that caught me out but the resolution drop. Playing Motorstorm – which tries to maintain a 1024×720 rendering but dynamically adjusts down for performance reasons in 3D, and has some post-process AA – the difference in perceived polygon and sprite edges was enough to ruin the 3D effect. This has not been a problem for me at 1080p (even without any AA) in PC titles, where I have only had such difficulty resolving the world due to rendering glitches (shadows not projected in exactly the same direction for both eyes, or only rendered in one eye), and that gives me some cause for concern about the first few generations of VR headsets. As you say, a split 1080p panel is all we’re likely to get for a while, and aiming for a high FoV (well beyond my monitor’s visible extent) brings that pixel density down further.

  18. Getting AA right is going to be critical for quality with VR devices. The way to maximize image quality on these 640×720 or 960×1080 per-eye displays is to shade at either 2 or 4 samples per pixel on a rotated grid, then run a Gaussian (for performance) resolve filter which uses the 16 (for 2x) or 24 (for 4x) nearest samples. The resolve needs to compute the Gaussian filter weights per resolved pixel, based on the distance from each sample to the resolved pixel. On top of this, with the Rift, the image warp should be done during the resolve, which means warping the filter kernel distance calculations as well. To improve on this, take the display’s color grid into consideration and compute the above per color channel per pixel.
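    The resolve step described above can be sketched like this (a minimal NumPy illustration of per-pixel Gaussian weighting – the sigma and the brute-force loop are placeholder choices, not a tuned GPU implementation):

```python
import numpy as np

def gaussian_resolve(sample_pos, sample_color, pixel_centers, sigma=0.5):
    """Resolve shaded samples to pixels: weight each sample by a
    Gaussian of its distance to the resolved pixel center, normalize.
    sample_pos: (N, 2), sample_color: (N, 3), pixel_centers: (M, 2)."""
    out = np.empty((len(pixel_centers), 3))
    for i, center in enumerate(pixel_centers):
        d2 = np.sum((sample_pos - center) ** 2, axis=1)
        weights = np.exp(-d2 / (2.0 * sigma ** 2))
        out[i] = (weights[:, None] * sample_color).sum(axis=0) / weights.sum()
    return out
```

    Folding in the Rift-style warp would mean computing the distances in warped coordinates, and a per-channel variant would call this once per color plane with that channel’s subpixel centers.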

  19. R@ndom says:

    Your example pictures are B&W; that is not the case with modern games.
    The mass market recently welcomed the new CoD, which essentially rendered at 960×540 on PS3.
    The source image for VR could be 2560×1600 before it is supersampled down to 2×640×800.
    So yeah, in terms of pixels/degree it’s a setback, but in terms of overall picture detail and motion reproduction, supersampled 640×800 does not seem worse than the current generation.

  20. Phil S says:

    From a physics point of view, the maximum resolution of the eye could be found using Rayleigh’s criterion. It does depend on the size of the aperture in the eye, which of course changes depending on the light level. Basically it is all about the resolvability of two point sources (two pixels). Hopefully if you aren’t already familiar with this, you’ll read up on it soon! It puts a more formal definition on the number of pixels per degree you need! Apologies if you’ve been using this already!
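    To put a rough number on it (an illustrative calculation, assuming 550 nm light and a 3 mm daylight pupil – both assumed values):

```python
import math

def rayleigh_pixels_per_degree(wavelength_m=550e-9, pupil_m=3e-3):
    """Minimum resolvable angle by Rayleigh's criterion,
    theta = 1.22 * lambda / D, expressed as the pixel density
    (one pixel per resolvable angle) needed to reach that limit."""
    theta_deg = math.degrees(1.22 * wavelength_m / pupil_m)
    return 1.0 / theta_deg

print(rayleigh_pixels_per_degree())  # ~78 pixels/degree for these values
```

    That lands in the same ballpark as the commonly quoted figure of roughly 60 pixels/degree for the eye, and either way is far beyond near-term HMD panels.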

    • MAbrash says:

      Phil, thanks for the info. I haven’t dived into this beyond looking up a few posts about the approximate resolution of the human eye, because we’re so far from having resolution high enough that this matters that it’s purely theoretical in the HMD space right now, but it’s something I should look into for completeness.

      –Michael

  21. Chris Rojas says:

    To the point of using lenses to create the illusion of higher density in the center and shifting that center. Have you or your team looked into using liquid lenses to shift the center based on eye tracking as well as for reducing weight and correcting for any differences in a user’s vision? I’ve talked with other researchers about such a system and it seems like a natural fit that I haven’t seen anyone suggest specifically for HMDs.

    • MAbrash says:

      Interesting. I don’t know if that would work, and there’s always the issue of latency from eye tracking, but worth some thought.

      –Michael

  22. Sean Thompson says:

    Good post! Thankfully, pixel density is actually easier to raise for smaller panels than for larger panels and displays. I have some fortunately-not-THAT-vague hope for 4K-per-eye displays in the not terribly distant future (a decade? 7<x<14 years?). But that says nothing about how adoption will fare until then, and thus how fast such a thing might be produced. Certainly displays for cellphones have hit the limit of what we might truly care for them to be. I largely see the continued rush past 300+ PPI as useless for such displays, and wouldn’t be surprised if it dropped off fairly soon in favor of other areas of improvement. The question of what, exactly, will drive us toward the kind of high-PPI displays needed for great VR is a serious one. If the Oculus Rift doesn’t generate enough interest, it may be a while until we get to that 4K number.

    • MAbrash says:

      Well said – that is exactly my concern. Oculus is riding the mobile display wave, but without the massive R&D mobile displays get, and the huge production numbers, costs go up amazingly quickly. There needs to be some mass market to drive development of HMD displays, and if it’s not VR itself, that will limit what’s available to work with to stuff that’s also suitable for mobile.

      –Michael

  23. Richard says:

    Here is a short 10-minute programme about real prototype 8K displays in Japan, and other displays near the limit of human vision, including some apparently already on sale.

    URL of article:
    http://news.bbc.co.uk/2/hi/programmes/click_online/9774380.stm

    Thanks for your blog Michael, it is very interesting.

  24. cr0sh says:

    Regarding resolution, something I remember from back in the 1990s with low-resolution HMDs and early VR was a concept called “looking past the pixels”. Back then, you were lucky to have a 320×240 LCD or CRT display for both eyes, let alone for each one. Even pro-level HMDs maxed out at about 640×480 (though a few went beyond this level).

    I was in my late-teens/early-20s, and was enamored with the idea of virtual reality. I hacked Power Gloves, I played with Rend386, I wrote a series of articles on low-cost VR (which can be found on my website – they are very dated, though). I hacked on the Victormaxx Stuntmaster and hooked it up to my Amiga. All low res, very ugly. I spent way too much money playing VR games at a local Virtuality pod arcade.

    Anyway – back to “looking past the pixels”. This concept, which I don’t know whether the research literature has explored much, was the idea that despite the low resolution of the displays used, the brain would “fill in” the gaps after you played for a while, as long as you didn’t concentrate on trying to see the pixels. You weren’t defocusing your eyes or anything; you were simply ignoring the lower resolution, and after a while – subjectively – you would perceive the virtual world at a seemingly higher resolution than what was actually displayed.

    To me – subjectively, of course – this is what I remember when I think about playing Dactyl Nightmare way back then. The displays of the original bulky Visette (given its size, that was really a misnomer!) were very low-resolution; I honestly think that once the Rift gets into people’s hands and onto their heads, they very well might experience the same subjective jump in quality of the display – provided they aren’t trying to look at the pixels, but rather concentrating on what is around them in the game. The brain will make up the rest (we know this is how the brain works anyway – what you see isn’t really what you see!)…

    • MAbrash says:

      “Looking past the pixels” is a good phrase, and it’s what I hope will happen. The question is what happens when there’s something to compare to, in the form of computers, tablets, and phones. In my case, I tend to switch from a 40×30-degree FOV at 1280×1024 to a 90×90-degree FOV at 640×800, and it’s very hard to look past the pixels given the A/B comparison. But it may be that switching between an HMD and a monitor or tablet isn’t perceived as an A/B comparison by the brain, since they’re very different experiences.

      –Michael

      • cr0sh says:

        I think part of it has to do with immersion as well – once the FOV gets past a certain point (about 45 degrees horizontal or so), things begin to shift.

        I own a Cybereye HMD; it has a fairly high resolution per eye, but the FOV is small (something like 20 or 25 degrees per eye) to keep the display sharp. Looking past the pixels or not, you’re staring down a couple of toilet paper tubes with that HMD. My experience with a Forte VFX-1, though, was slightly different. Its FOV is somewhat larger, but the LCDs are lower resolution; even so, playing DN3D (or Quake) isn’t a terrible experience, IMHO. I think it’s because of the larger FOV.

        Once you get to something with a really large FOV like the Virtuality HMD (which had an FOV way beyond even the Rift’s), even the lower resolution wasn’t terrible (now, don’t get me wrong – I wouldn’t want to be stuck in that thing for longer than 15 minutes, tops). I think there is some strange subjective shift that happens when you fool the brain into thinking that what it is seeing is “reality” by using a large FOV for greater immersion: resolution isn’t as big a deal, because the brain can “fill it in”. I am sure this is merely subjective, though, and likely varies per individual, too.

        Of course, given that we now have many low-cost and near-ubiquitous high-resolution 3D displays, my opinion may well change once I get the Rift (or better, when I build my retro rig for playing DN3D and Quake using my VFX-1). I might end up having the same issue you describe – an inability to “look past the pixels” after not having seen them in a long while.

        Maybe I just need to go back to playing old games on my TRS-80 Color Computer system for a while…

        • MAbrash says:

          I’ve heard the immersion magic happens around 80 degrees, but haven’t had anything between 90 and 40 to test that theory with. Anyway, I agree things shift somewhere in there, and certainly it is easier for the brain to fill in with immersion. I’ve noticed the coarse resolution largely because I’m switching between two HMDs with more than an order of magnitude difference in pixel density, which normal users won’t. However, they will switch between that and their monitors and mobile devices, and I’m not sure whether that will have a similar effect or not. I hope not, but I’m waiting for the data.

          –Michael

  25. STRESS says:

    Very good article – it nails the problem on the head. And it shows the lack of progress in HMDs over the decades as well: total resolution has hardly doubled in more than 10 years, which is pretty pathetic. I guess that’s one of the major reasons everyone moved away from HMDs in the serious VR community.
