The first video game I ever wrote ran in monochrome, at 160×72 resolution. My next four games moved up to four colors at 320×200 resolution. The game after that (Quake) normally ran with 256 colors at 320×200 resolution, but could go all the way up to 640×480 on the brand-new Pentium Pro, the first out-of-order Intel processor.
Those sound like Stone Age display modes, now that games routinely run in 24-bit color at 1600×1200 or even 2560×1600, but you know what? All those games looked great at the time. Quake at 640×480 would look pathetically low-resolution now, but when it shipped, even 320×200 looked great; it’s all a matter of what you’re used to.
That’s relevant just now because the first generation of consumer-priced VR head-mounted displays is likely to top out at 960×1080 resolution, for the simple reason that that’s what you get when you split a 1080p screen across two eyes, and 1080p is probably going to be the highest-resolution panel available in the near future that’s small enough to fit in a head-mounted display. At first glance, that doesn’t seem so bad; it falls short of 2560×1600, or even 1600×1200, but it has roughly half as many pixels as the latter, so it’s in the same resolution ballpark as monitors. And besides, it’s way higher-resolution than any of my earlier games, and in fact it’s higher-resolution than anything that was available for more than 15 years after the PC was introduced, and, as I noted, those lower-resolution graphics looked great then. By analogy, VR should be in good shape at 960×1080, right?
Alas, it’s not that simple, because when it comes to resolution, it’s all relative. What do I mean by that? There are two very different interpretations, both applicable to the present discussion. We’ve seen the first one already: how good a given resolution looks depends on what you’re used to looking at. 160×72 looks great when the alternative is a text-based game, but less so next to a state-of-the-art game at 2560×1600. This first interpretation applies to VR in two senses. The first is that VR will inevitably be compared to current PC graphics – clearly not a favorable comparison. However, the second is that, like my early games, VR will also be judged against previous VR graphics in the PC space, and that’s a favorable comparison indeed, since there are none. For the latter reason, if VR is a unique enough experience, people will surely be very forgiving about low resolution; the brain is very good at filling in details, given an otherwise compelling experience, as happened, for example, with Quake at 320×200.
Another way to think about resolution, however, is relative to the field of view the pixels are spread across. The total number of pixels matters, of course, but the density of the pixels matters as well, and it’s here that VR faces some unique issues. Let’s run some numbers on that.
My very first game ran on a monitor that I’d estimate to have a horizontal field of view of maybe 15 degrees at a normal viewing distance. At 160×72, that’s about 11 pixels per horizontal degree.
A 30” monitor at 2560×1600 has about a 50-degree field of view at a normal viewing distance. That’s roughly 50 pixels per horizontal degree, and approximately the same is true of a 20” monitor at 1600×1200.
The first consumer VR head-mounted displays should have fields of view of no less than 90 degrees, and I’d hope for more, because field of view is key to a truly immersive experience. At 960×1080 resolution, that yields slightly less than 11 pixels per horizontal degree – the same horizontal pixel density as the CP/M machine I wrote my first game for in 1980, and barely one-fifth of the horizontal pixel density we routinely use now.
And that’s only the horizontal pixel density. The vertical pixel density is the same, and in combination they mean that a first-generation consumer head-mounted display will have about one-twentieth of the two-dimensional pixel density of a desktop monitor. As another way to understand just how far a wide field of view drives down pixel density, consider that the iPhone 5 is 640×1136 – two-thirds as many pixels as the upcoming head-mounted displays, packed into a vastly smaller field of view; at a normal viewing distance, I’d estimate the iPhone has roughly 100 pixels per degree, so overall pixel density could be close to one-hundred times that of upcoming VR head-mounted displays.
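The arithmetic behind all of these density figures is simple enough to sketch out. Here’s a minimal calculation using the numbers above (the viewing-distance fields of view are the estimates from the text, not measured values):

```python
def ppd(pixels, fov_degrees):
    """Pixels per degree, assuming pixels are spread evenly across the field of view."""
    return pixels / fov_degrees

# Figures from the text; the monitor and first-game FOVs are rough estimates.
first_game = ppd(160, 15)     # ~10.7 px/deg, the 1980 CP/M machine
monitor_30 = ppd(2560, 50)    # ~51 px/deg, a 30" desktop monitor
hmd_gen1   = ppd(960, 90)     # ~10.7 px/deg, a first-gen consumer HMD

# Two-dimensional density ratio between the desktop monitor and the HMD:
density_ratio = (monitor_30 / hmd_gen1) ** 2
print(round(density_ratio))   # 23 -- roughly the "one-twentieth" in the text
```

Squaring the linear ratio is what turns a roughly five-times horizontal shortfall into a twenty-odd-times shortfall in total pixel density.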
It is certainly true that the brain can fill in details, especially when viewing scenes filled with moving objects. However, it would be highly optimistic to believe that a reduction in pixel density of more than an order of magnitude wouldn’t be obvious, and indeed it is. It’s certainly hard to miss the difference between these two images, which reflect the same base image at two different pixel densities:
And that’s only a 4X difference – imagine what 20X would be like.
If there were no monitors to compare to, low pixel density might not be as noticeable, but there are, not to mention omnipresent mobile devices with even higher pixel densities. Also, games that depend on very precise aiming may not work well on a head-mounted display where pixel location is accurate to only five or six arc-minutes. For that reason, antialiasing, which effectively provides subpixel positioning, will be very important for at least the first few generations of VR.
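The five-to-six arc-minute figure falls straight out of the pixel density; a quick sketch of that arithmetic, using the same 960-pixel, 90-degree numbers as above:

```python
def arcmin_per_pixel(pixels, fov_degrees):
    """Angular size of one pixel in arc-minutes (60 arc-minutes per degree)."""
    return fov_degrees * 60 / pixels

print(arcmin_per_pixel(960, 90))  # 5.625 arc-minutes per pixel
```

For comparison, a 50-pixels-per-degree monitor puts a pixel at just over one arc-minute, which is why precise aiming feels so much crisper there.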
That’s not to say that the upcoming VR head-mounted displays won’t be successful; a huge field of view, together with high-quality tracking and low latency, can produce a degree of immersion that’s unlike anything that’s come before, with the potential to revolutionize the whole gaming experience. But I can tell you from personal experience that the visual difference between a 960×1080 head-mounted display with a 40-degree horizontal field of view and a 640×800 HMD with a 90-degree HFOV (both of which I happen to have worked with recently) is enormous – what looks like a blurry clump of pixels on one looks like a little spaceship you could reach out and touch on the other – and that’s only about a ten-times difference in pixel density.
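Running the same pixels-per-degree arithmetic on those two head-mounted displays shows where the ten-times figure comes from:

```python
# Pixels per horizontal degree for the two HMDs described in the text:
hmd_a = 960 / 40   # 24 px/deg   (960×1080 panel, 40° HFOV)
hmd_b = 640 / 90   # ~7.1 px/deg (640×800 panel, 90° HFOV)

# Square the linear ratio to get the two-dimensional density difference:
print(round((hmd_a / hmd_b) ** 2))  # 11 -- close to the ten-times figure
```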
So I’m pretty confident that we’ll be begging for more resolution from our head-mounted displays for a long time. Obviously, that was also the case for decades with monitors; the difference here is that every day we’ll encounter much higher pixel densities on our monitors, our laptops, our tablets, and even our phones than on our head-mounted displays, and that comparison is going to be a challenge for the consumer VR industry for some time to come.
Given which, the obvious question is: how high does VR resolution need to go before it’s good enough? I don’t know what would be ideal, but getting to parity with monitors in terms of pixel density seems like a reasonable target. Given a 90-degree field of view in both directions, 4K-by-4K resolution would be close to achieving that, and 8K-by-8K would exceed it. That doesn’t sound all that far from where monitors are now, but actually it’s four to sixteen times as many pixels; there’s no existing video link that can pump that many pixels – in stereo – at 60 Hz (which is the floor for VR), not to mention the lack of panels or tiny projectors that can come close to those resolutions (and the lack of business reasons to develop them right now), so pixel density parity is not just around the corner. However, if VR can become established as a viable market, competitive pressures of the same sort that operated (and continue to operate) in the 3D graphics chip business will drive VR resolutions, and hence pixel densities, rapidly upward. VR could well become the primary force driving GPU performance as well, because it will take a lot of rendering power to draw 16 megapixels, antialiased, in stereo, at 60 Hz – to say nothing of 64 megapixels.
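To get a feel for why no existing video link is up to the job, it’s worth working out the raw pixel throughput involved. A rough sketch, assuming “4K-by-4K” means 4096×4096 per eye (the round-number interpretation is my assumption):

```python
def pixels_per_second(width, height, eyes=2, hz=60):
    """Raw pixel throughput for a display at a given refresh rate."""
    return width * height * eyes * hz

# 4K-by-4K per eye, in stereo, at 60 Hz:
print(pixels_per_second(4096, 4096))          # 2,013,265,920 -- ~2 billion px/s

# A single 2560×1600 monitor at 60 Hz, for comparison:
print(pixels_per_second(2560, 1600, eyes=1))  # 245,760,000 -- ~8x less
```

And that’s before accounting for per-pixel color depth or any antialiasing overhead; 8K-by-8K quadruples it again.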
Believe me, I can’t wait to have a 120-by-120-degree field of view at 8K-by-8K resolution – it will (literally) be a sight to behold. But I’m not expecting to behold it for a while.