
AR/VR has an image problem. Holographic light fields could be the answer

George Lawton, June 20, 2024
Summary:
Most Virtual and Augmented Reality vendors pretend that better stereograms will solve the nausea problem. Swave is betting that holographic light fields are a better approach, and that they can be small and low-power, too.


The Virtual Reality and Augmented Reality (VR/AR) industries have an image problem. One aspect is that the current generation of heavy, large, power-hungry, expensive devices is not exactly engendering a mass market. Apple has reportedly dialed back work on a more advanced version of Vision Pro after slow sales and high returns.

Aside from public perception, it also has an actual image problem, one that causes nausea or cybersickness, particularly for older folks and those trying to work on virtual screens for extended periods. Vendors have imagined that these problems can be solved if only they increase the resolution, framerate, or tracking fidelity. Few have considered solving the mismatch between how our eyes focus (accommodation) and how they converge on objects at various distances (vergence).

Mike Noonen, CEO of spatial computing specialist Swave, says:

The display technology that all of these systems have has not been well suited for 3D applications that behave the way the human vision system expects them to behave. One of these phenomena is vergence-accommodation, where things you expect to be at a certain focus aren’t. That’s the source of people feeling seasick or nauseous. The various attempts people have previously employed have been cumbersome at best and ineffective.
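
To make that mismatch concrete, here is a back-of-the-envelope sketch, with assumed numbers of my own rather than anything from Swave, of how a conventional stereoscopic headset pulls the two cues apart: the optics fix accommodation at a single virtual distance while the rendered content drives vergence to other depths.

    import math

    # Illustrative assumptions; not specs from Swave or any particular headset.
    IPD_M = 0.063          # typical interpupillary distance, about 63 mm
    FOCAL_PLANE_M = 2.0    # fixed focal distance of a typical stereoscopic HMD

    def vergence_angle_deg(distance_m: float) -> float:
        """Angle between the two eyes' lines of sight when fixating at distance_m."""
        return math.degrees(2 * math.atan(IPD_M / (2 * distance_m)))

    def vac_diopters(content_m: float, focal_m: float = FOCAL_PLANE_M) -> float:
        """Vergence-accommodation conflict: the eyes converge on the rendered
        depth but must keep focusing on the fixed display plane (in diopters)."""
        return abs(1 / content_m - 1 / focal_m)

    for depth_m in (0.5, 1.0, 2.0, 10.0):
        print(f"content at {depth_m:>4} m: vergence {vergence_angle_deg(depth_m):5.2f} deg,"
              f" focus conflict {vac_diopters(depth_m):4.2f} D")

Only content rendered at the display's own focal distance produces zero conflict; vision researchers often cite mismatches beyond roughly half a diopter as a trigger for the kind of discomfort Noonen describes.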

Swave is commercializing a low-cost, low-power, high-resolution holographic rendering engine that could change that. The startup is a spin-out of the Interuniversity Microelectronics Centre (imec), the Belgian nanoelectronics hub that coordinates research for the semiconductor industry. Swave's three essential innovations are a custom holographic processing chip, a nanopixel display that can be mass-produced on a low-cost CMOS process, and a set of real-time holographic processing algorithms.

These can be baked into a 50-gram pair of glasses (battery included) with a $50 bill of materials, a daylight-bright display, and all-day battery life. The display bounces red, green, and blue lasers off a phase-change chip holding billions of nanopixels, and the resulting image reflects off a half-mirrored coating on the glasses. Future versions could project onto walls or vehicle windshields. Development kits are expected later this year, with commercial products to follow in 2026.

The illusion of depth

There are many ways of creating the illusion of depth for still and moving imagery. The most common is to generate separate images for each eye that mimic the output of two cameras spaced slightly apart. In the mid-1800s, stereoscopic glasses were all the rage for displaying 3D views of the world captured with special pairs of cameras. They take advantage of one aspect of perceiving 3D: the angle at which our eyes converge to fixate on objects at different depths, called vergence.

Another aspect of perceiving depth lies in the way the ciliary muscles stretch the lenses of our eyes to focus at different distances, a process called accommodation. When these muscles lose their range of motion, glasses can adjust the focus to a range we can accommodate. This is where light fields come in. Unlike an image rendered in flat pixels, a light field also carries the direction of the light reflected off objects at different distances, which is what lets our eyes refocus within the scene.

There are many ways to create a light field. A mirror, for example, allows you to adjust your focus to different objects in a room, unlike a picture, which renders objects at one depth sharply while progressively blurring others the further they sit from the focal plane. In 1908, Gabriel Lippmann invented integral imaging, in which images captured with a fly-eye array of lenses could render a 3D light field when viewed through the same lens array. However, much of the directional information is lost when the film dots are reduced to intensity and depth. Modern lenticular films, popular for advertising displays, use rows of such lenses to show some depth, but they use so few views that they are little better than stereograms.
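
A toy example helps show what that extra directional information buys. The following sketch, with made-up dimensions and data purely for illustration, applies the classic shift-and-add refocusing trick to a Lippmann-style light field: one capture supports many focal depths, a choice a flat photograph bakes in at exposure time.

    import numpy as np

    # A light field stores radiance along rays: L[u, v, s, t], where (u, v)
    # indexes the viewpoint (the lenslet) and (s, t) the pixel beneath it.
    # Dimensions and data here are made up purely for illustration.
    U = V = 5     # 5x5 grid of viewpoints, like Lippmann's fly-eye lens array
    S = T = 64    # spatial resolution under each lenslet
    light_field = np.random.default_rng(0).random((U, V, S, T))

    def refocus(lf, alpha):
        """Shift-and-add refocusing: shift each viewpoint's image in proportion
        to its offset from the central viewpoint, then average. Varying alpha
        moves the focal plane after the fact."""
        nu, nv = lf.shape[:2]
        cu, cv = (nu - 1) / 2, (nv - 1) / 2
        out = np.zeros(lf.shape[2:])
        for u in range(nu):
            for v in range(nv):
                shift = (round(alpha * (u - cu)), round(alpha * (v - cv)))
                out += np.roll(lf[u, v], shift=shift, axis=(0, 1))
        return out / (nu * nv)

    near_focus = refocus(light_field, alpha=2.0)    # focal plane on near objects
    far_focus = refocus(light_field, alpha=-1.0)    # focal plane on far objects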

In 1947, Dennis Gabor invented another way of capturing light fields: holograms, which use diffraction patterns to record the amplitude and phase of light rather than just its intensity. Noonen explains:

This is looking to replicate the real world by shaping light in a 3D manner, so you are actually translating 2D pixels into voxels that are pixels in space. By having this 3D rendering, you get rid of these artefacts in vergence and accommodation that cause a claustrophobic feel. The reason this has been challenging is partly computational, but also to render a hologram itself, we need a very tiny pixel pitch on the order of half the wavelength of light.

Swave’s technology uses a new technique for mass-producing dynamic phase-change materials to create 250-nanometer pixels, roughly half the wavelength of visible light. Swave has also developed a real-time holographic processing algorithm that enables dynamic depth, and a dedicated chip to reduce processing costs.
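
The grating equation explains why that particular pitch matters. Here is a quick sanity check, where the micron-scale comparison pitches are illustrative stand-ins for conventional display pixels rather than quoted vendor specs:

    import math

    def max_steer_deg(pitch_nm, wavelength_nm=520.0):
        """Grating equation at the Nyquist limit: sin(theta) = wavelength / (2 * pitch).
        Returns the widest angle a pixelated diffractive display can steer light."""
        s = wavelength_nm / (2 * pitch_nm)
        return 90.0 if s >= 1.0 else math.degrees(math.asin(s))

    # 250 nm is Swave's stated nanopixel pitch; the micron-scale pitches are
    # illustrative stand-ins for LCoS/DLP-class pixels, not quoted specs.
    for pitch_nm in (250, 3000, 8000):
        print(f"{pitch_nm:>5} nm pitch -> steers green light up to +/- {max_steer_deg(pitch_nm):.1f} deg")

At a 250-nanometer pitch and green light, the ratio reaches one and the display can, in principle, steer light across the full hemisphere; micron-scale pixels are confined to a cone of a few degrees, which is why wide-field holograms are so hard for them.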

Other vendors are finding creative ways to shape traditional imaging techniques built on micro-LEDs, liquid crystal on silicon (LCoS), and digital light processing (DLP) engines into light fields. These technologies start with pixels whose pitch is more than ten times that of Swave’s nanopixels. Resolution is also lost in the translation from 2D pixels to 3D light fields, further reducing quality: Noonen says about a hundred pixels are required to create a single voxel.

A lot of power is also lost in the process. Noonen says that traditional approaches may use a hundred to a thousand units of power to generate the equivalent unit of brightness. Hence the big batteries and short runtimes.
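
Putting Noonen’s two ratios together gives a feel for the overhead. This is rough arithmetic on an assumed 4K microdisplay, not vendor data:

    # Rough arithmetic from the ratios Noonen cites; the 4K panel resolution
    # is an illustrative assumption, not a quoted spec.
    PIXELS_PER_VOXEL = 100           # ~100 2D pixels consumed per 3D voxel
    POWER_OVERHEADS = (100, 1000)    # power units per unit of brightness, traditional engines

    panel_pixels = 3840 * 2160       # ~8.3 million pixels on an assumed 4K panel
    voxels = panel_pixels // PIXELS_PER_VOXEL
    print(f"{panel_pixels:,} pixels collapse to roughly {voxels:,} true 3D voxels")

    # At 100-1000x overhead, every milliwatt of delivered brightness costs
    # 100-1000 mW upstream, hence the big batteries and short runtimes.
    for overhead in POWER_OVERHEADS:
        print(f"{overhead}x overhead: 1 mW of brightness needs about {overhead} mW of power")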

A spectrum of markets

The 3D display market has been evolving into three broad categories:

  1. Fully immersive virtual reality worlds like Meta Quest,
  2. Augmented reality information overlays such as the Ray-Ban Meta Smart Glasses, and
  3. Mixed reality overlays for placing virtual objects over the real world, such as Apple’s Vision Pro.

Noonen says Swave is initially focusing on the augmented reality use case, which is the least complex of the three. This could include creating a floating computer screen, showing directions, indicating which box to pick, or displaying a manual during a repair. Down the road, the firm is working on more advanced graphics processing for the mixed reality use cases: visualizing how changes might look in a physical space, providing better guidance in the flow of work, and supporting some really cool games.

My take

Pointing at the vergence-accommodation gap in VR, AR, and their friends feels a bit like being the little boy who noticed the emperor looks pretty naked in his magical new and expensive ‘clothes.’ It’s mind-blowing that Meta and Apple are spending billions of dollars on this stuff without really sorting out the seemingly obvious disconnect between the way humans actually see the world and the haphazard shortcuts we take to create better illusions.

It probably does not help that we have grown accustomed to staring at computer, mobile, and TV screens at a fixed distance for long periods. One consequence is the enormous sums spent on eyeglasses, contacts, and LASIK surgery. Various market researchers estimate we spend about $180 billion a year on these vision crutches that help us see better without fixing the underlying problem.

Dynamic glasses that help us learn to actually see better, like physical therapy, would certainly grow the VR/AR addressable market. I don’t expect the first true holographic light field displays to fix this problem from day one. But it sure would be cool if they could give us a little more variety and exercise in how our eyes work as we interact with computers and mobile experiences.
