How About a 40-Megapixel Smartphone?
By Stephen P. Atwood
In today’s second keynote, “Enabling Rich and Immersive Experiences in Virtual and Augmented Reality,” Google’s Clay Bavor discussed several aspects of his company’s strategy to develop immersive VR/AR applications and the hardware that enables them. Google has rather firmly focused its efforts on an architecture that utilizes commercially available smartphones. Though Bavor mentioned dedicated VR headset development and showed one image of a notional device, the company’s emphasis is clearly on making smartphones work for VR applications.
There are challenges, including latency and resolution. Latency creates a discontinuous experience during head movement, and limited resolution creates effects similar to having poor vision in real life. Both of these challenges can be addressed, as Bavor explained, but what really got my attention was his announcement that, with an unnamed partner, Google has developed a smartphone display with 20-megapixel resolution per eye! That presumably makes for at least a 40-megapixel total display, and it’s OLED as well. That’s the good news.
The bad news is that supplying content to that device at the required frame rates of 90 to 120 Hz means a raw data stream approaching 100 Gb/s. Yikes! That’s not going to happen tomorrow, although in our current issue of Information Display we have a related article about high-performance video data compression, titled “Create Higher Resolution Displays with the VESA DSC Standard.” Maybe that’s a path forward.
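That bandwidth figure is easy to sanity-check with back-of-the-envelope arithmetic. The bit depth below (24-bit RGB, no compression or blanking overhead) is my own assumption for illustration, not a number from the keynote:

```python
# Back-of-the-envelope uncompressed video bandwidth for the display
# described in the keynote. Assumed: 20 MP per eye, 24 bits per pixel
# (8-bit RGB), no compression.
pixels_per_eye = 20e6
bits_per_pixel = 24
total_pixels = 2 * pixels_per_eye  # both eyes

for refresh_hz in (90, 120):
    gbits_per_sec = total_pixels * bits_per_pixel * refresh_hz / 1e9
    print(f"{refresh_hz} Hz: {gbits_per_sec:.1f} Gb/s")
```

Under these assumptions, 90 Hz comes to about 86 Gb/s and 120 Hz to about 115 Gb/s, neatly bracketing the roughly 100 Gb/s figure.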
Google’s path forward is what it calls “foveated rendering,” an approach that uses eye tracking to determine where your gaze is directed and then renders a small region at the center of your vision at full resolution. The rest of the scene is rendered at lower resolution. Presumably, if you looked at the same spot long enough, the periphery would also fill in at high resolution. Bavor also alluded to the need for an algorithm that could anticipate where your eye will move next, much as a surfer anticipates a coming wave and gets up to speed as the crest arrives. Any algorithm performing this kind of advanced processing would presumably need to predict what the observer is about to do; otherwise the reaction time and resulting latency would ruin the magic of the experience.
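The core idea can be sketched in a few lines: given a gaze point from the eye tracker, pick a rendering resolution for each screen region based on its distance from that point. The region radii and scale factors here are purely illustrative assumptions, not Google’s actual parameters:

```python
import math

def detail_scale(px, py, gaze_x, gaze_y,
                 fovea_radius=200, mid_radius=600):
    """Foveated-rendering sketch: choose a resolution scale for a
    pixel (or tile) based on its screen distance from the tracked
    gaze point. Returns 1.0 for full resolution in the foveal
    region, smaller values for coarser peripheral rendering.
    Radii and scale factors are illustrative assumptions."""
    dist = math.hypot(px - gaze_x, py - gaze_y)
    if dist <= fovea_radius:   # foveal region: full detail
        return 1.0
    if dist <= mid_radius:     # near periphery: half detail
        return 0.5
    return 0.25                # far periphery: quarter detail

# A tile right at the gaze point renders at full resolution;
# one far away renders at a quarter of it.
print(detail_scale(960, 540, 970, 545))   # prints 1.0
print(detail_scale(50, 50, 1800, 1000))   # prints 0.25
```

A real implementation would work in visual angle rather than pixels and blend smoothly between zones, but the structure, full detail only where the fovea is pointed, is the same.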
Whether or not this reaches commercial viability in the near future, it’s really exciting to think about the challenges that were overcome to make even a few prototypes at this resolution, and how it pushes beyond the very high pixel densities already in place. It’s clearly an exciting target with a killer application. Time to start paddling -- the next wave is coming.