Holography gets up close

Near-eye displays tuned for holograms look to solve one of VR’s big problems, as Chris Edwards explains.

Virtual reality is the hardy perennial of the technology sector. It has made it through numerous winters and endured long enough to convince Facebook to rebrand, along with the technology, around the concept of the “metaverse”. But it’s an application that still faces many problems, not least those posed by the display technology on which it relies.

Using conventional 2D displays in a stereoscopic configuration is far from ideal when trying to render synthetic 3D imagery. There are two issues at work. One is response lag: the graphics failing to keep up with rapid head or eye movements.

The second is more subtle: the vergence-accommodation conflict. Stereoscopic displays have a focal plane fixed at the display surface itself. The brain infers depth from the difference in position each eye sees, so the eyes converge on virtual objects that appear at different distances, but each eye's lens must stay focused on the fixed screen. The mismatch makes these displays far more fatiguing to use than interacting with objects in the real world.

One answer is to move to a true 3D-rendering technology in the shape of real-time holography. This uses interference patterns to recreate at the user’s eye the same light field that the real objects would produce were they in view.

“There is so much more in an image when you perceive it in three dimensions. The depth cues used by the brain: holography does them all because it produces the light that would be there,” says Tim Wilkinson, professor of electrical engineering at the University of Cambridge.

Almost six years ago, Microsoft Research built a prototype of a wearable near-eye holographic display that has inspired much of the work on this kind of VR technology.

The key to the device was an arrangement of lenses and mirrors, folded into a relatively small space, that bounces light off a spatial light modulator (SLM). This alters the amplitude or phase of light from a laser at the pixel level. The result is a shaped wavefront that presents the hologram to the user’s eye.

As with stereoscopic displays, the setup uses eye-tracking to adapt the focus of different objects in the scene, so they blur naturally as the user’s gaze shifts. Though the simplest setup uses a single laser, three lasers in an RGB configuration make it possible to render scenes in colour.

Image quality depends on the SLM’s ability to control light across the pixel array, although the nature of holography means stuck-pixel defects and similar problems are nowhere near as troublesome for yield as they are with panels designed for 2D use. Wilkinson recalls experimenting with a projector in which every fifth row did not work properly. “But you could see no effect in the replay field.”

The reason lies in the way holographic rendering relies on Fourier transforms to manipulate the light field into a recognisable image. In principle, light scattered from every pixel contributes to every ray projected into the eyebox: the space within which the eye can be placed while viewing the rendered scene. It is the sum of all those interfering light cones that produces the observed output. As a result, individual pixel errors largely diffract out of view and wind up as little more than background noise. That’s the good news.
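
Wilkinson’s observation can be demonstrated in a few lines. The numpy sketch below is my illustration rather than his setup: it builds a crude phase-only Fourier hologram of a test image, kills every fifth row of pixels, and compares the replay fields. The image size, test pattern and single-pass phase-retrieval step are all illustrative assumptions.

```python
import numpy as np

# Sketch: in a Fourier hologram, the replay field is the FFT of the SLM
# plane, so every pixel contributes to every image point and dead pixels
# mostly become background noise rather than holes in the picture.

N = 256
rng = np.random.default_rng(0)

# Hypothetical target image: a bright square on a dark background.
target = np.zeros((N, N))
target[96:160, 96:160] = 1.0

# One Gerchberg-Saxton-style step gives a crude phase-only hologram
# (a real renderer would iterate; one pass is enough to illustrate).
field = np.fft.ifft2(target * np.exp(2j * np.pi * rng.random((N, N))))
hologram = np.exp(1j * np.angle(field))   # keep phase, discard amplitude

defective = hologram.copy()
defective[::5, :] = 0.0                   # every fifth row dead

replay_ok = np.abs(np.fft.fft2(hologram)) ** 2
replay_bad = np.abs(np.fft.fft2(defective)) ** 2

# The square survives, merely dimmed; the defects' energy diffracts into
# faint ghost orders and diffuse background.
roi = (slice(96, 160), slice(96, 160))
print("signal retained:", replay_bad[roi].sum() / replay_ok[roi].sum())
```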

The bad news is that SLM quality and resolution have yet to reach the level required to render photorealistic scenes. And the processing power needed to create those scenes using conventional methods is beyond most portable computers and smartphones.

There are other problems, as Jonghyun Kim, a senior research scientist at Nvidia, explained at the company’s technology conference. “The laser causes speckle, and the SLM suffers from low efficiency and crosstalk.” The SLM tends to create high-order shadow images and adds noise that results in poor contrast. Filtering out these last two adds weight to the headset.

Improving SLM images

Over the past few years, software techniques have emerged that can improve the perceived quality of the SLM images. “Recent computation capabilities with AI have completely changed the display game,” Kim claims.

An approach used by Stanford University researchers, more recently working with Nvidia, is to use camera-in-the-loop measurements to calibrate a machine-learning model that applies corrections in real time as the image is rendered. This has improved contrast somewhat and reduced laser speckle. Recent work with LEDs suggests these partially coherent sources might also be able to support holography.
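
The camera-in-the-loop idea can be sketched compactly. The toy below is my simplification, not the Stanford or Nvidia code: the physical light path deviates from the ideal model, here by a single invented defocus parameter, and the calibration fits the model so its prediction matches a “captured” image. Real systems fit a neural network to thousands of camera photographs rather than one scalar.

```python
import numpy as np

# Toy camera-in-the-loop calibration: fit an unknown quadratic (defocus)
# phase in the simulation so it reproduces the measured replay intensity.

N = 128
rng = np.random.default_rng(1)
y, x = np.meshgrid(np.linspace(-1, 1, N), np.linspace(-1, 1, N), indexing="ij")
r2 = x**2 + y**2

hologram = np.exp(2j * np.pi * rng.random((N, N)))   # arbitrary test pattern
TRUE_DEFOCUS = 3.0                                   # unknown to the model

def replay(defocus):
    """Simulated optical path: SLM pattern -> extra phase -> far field."""
    return np.abs(np.fft.fft2(hologram * np.exp(1j * defocus * r2))) ** 2

captured = replay(TRUE_DEFOCUS)   # stands in for a physical camera capture

# Calibrate by scanning candidate defocus values against the measurement.
candidates = np.linspace(0.0, 5.0, 101)
losses = [np.mean((replay(c) - captured) ** 2) for c in candidates]
print("estimated defocus:", candidates[int(np.argmin(losses))])   # ~3.0
```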

The loss of coherence makes the interference less predictable, but software corrections such as those developed by Nvidia and Stanford could help make consumer-level headsets viable.

Contrast remains a problem. One answer may lie in Michelson holography, which uses a second SLM to provide an additional source of lightwave interference. To render dark areas, the second SLM’s lightwaves are phase-shifted relative to those from the primary SLM so the two cancel. However, this leads to its own set of aberrations: ripples emanating from points in the image. Again, Kim sees machine learning driven by camera-in-the-loop calibration as a way round this problem.
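
The underlying principle is plain destructive interference, as the toy below illustrates. It is my sketch, not Kim’s optics: the real system combines the two SLMs through a beamsplitter and must contend with the aberrations mentioned above, whereas here the two fields simply add per pixel.

```python
import numpy as np

# Toy Michelson-holography principle: two phase-only SLMs, each emitting
# unit amplitude, sum at the image plane. One SLM alone cannot make a
# pixel truly dark, but shifting the second field by pi radians cancels
# the first wherever darkness is wanted.

phase_a = np.zeros(4)                # primary SLM drive (radians)
field_a = np.exp(1j * phase_a)

want_dark = np.array([False, True, False, True])
phase_b = phase_a + np.where(want_dark, np.pi, 0.0)   # pi shift where dark
field_b = np.exp(1j * phase_b)

intensity = np.abs(field_a + field_b) ** 2
print(intensity)   # ~[4, 0, 4, 0]: destructive interference gives true black
```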

However, all this computation comes on top of what is already a very compute-intensive process.

The traditional algorithms for computer-generated holography rely on brute force: summing the contribution of every scene point at every pixel of the hologram, a workload that can tax even flagship GPUs at video rates.
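
The classic point-source method makes the cost obvious. In the numpy sketch below, which is my illustration rather than any product’s code, each hologram pixel accumulates a spherical wavelet from every scene point, so the work grows as pixels times points; the wavelength, pixel pitch and scene are all assumed values.

```python
import numpy as np

# Brute-force point-source CGH: O(pixels x points) accumulation.

wavelength = 520e-9                  # green laser, metres
k = 2 * np.pi / wavelength
pitch = 8e-6                         # SLM pixel pitch, metres
N = 512                              # N x N SLM

y, x = np.meshgrid(np.arange(N) * pitch, np.arange(N) * pitch, indexing="ij")

# Hypothetical scene: a few 3D points as (x, y, depth) in metres.
points = [(1.0e-3, 2.0e-3, 0.10),
          (2.0e-3, 1.0e-3, 0.12),
          (1.5e-3, 1.5e-3, 0.15)]

field = np.zeros((N, N), dtype=complex)
for px, py, pz in points:            # real scenes have millions of points
    r = np.sqrt((x - px) ** 2 + (y - py) ** 2 + pz ** 2)
    field += np.exp(1j * k * r) / r  # spherical wavelet from this point

hologram_phase = np.angle(field)     # pattern to drive a phase-only SLM
```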

Work by Liang Shi and colleagues at the Massachusetts Institute of Technology has again used machine learning as a possible workaround. The AI model they developed can run on iPhone-class hardware in real time, though critics argue that their approach is not full holography but post-processing of conventional 3D rendering, using Fresnel optics to simulate depth and focus blur. That limits the field of view, but in a system where the eyebox is small this may not be an issue in practice.
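
Whatever the verdict on whether this counts as full holography, the underlying operation is standard wave optics. The sketch below, my illustration under assumed parameters rather than the MIT code, uses the angular-spectrum method, the usual numerical route to Fresnel propagation, to defocus a rendered layer to a chosen depth.

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a sampled complex field a distance z (angular spectrum)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    fxx, fyy = np.meshgrid(fx, fx, indexing="ij")
    # Transfer function exp(i*2*pi*z*sqrt(1/wl^2 - fx^2 - fy^2)),
    # clamped to drop evanescent components.
    arg = np.maximum(0.0, 1.0 / wavelength**2 - fxx**2 - fyy**2)
    kernel = np.exp(2j * np.pi * z * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(field) * kernel)

# Example: a point source defocused by propagating 5 mm.
N, pitch, wl = 256, 8e-6, 520e-9
src = np.zeros((N, N), dtype=complex)
src[N // 2, N // 2] = 1.0
blur = np.abs(angular_spectrum(src, wl, pitch, 5e-3)) ** 2   # focus blur
```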

Like the Stanford team, Shi and colleagues are working to incorporate SLM correction into their algorithms.

Machine learning is not the only way to cut the processing overhead. University of Cambridge spinout VividQ has developed fast holographic rendering techniques optimised for augmented reality, where there will typically be only a few virtual 3D objects in the scene, in contrast to the MIT system, where a full virtual space is rendered.

Though VR provides an obvious market for wearable holographic displays, AR may be more amenable to what today’s hardware can deliver. The idea is that these headsets would support industrial designers and service technicians by incorporating virtual objects into a real scene without the vergence-accommodation problems of conventional transparent headsets. VividQ sees automotive head-up displays, projected from the top of the dashboard, as a potentially large adjacent market.

If a market for wearable holography does develop, hardware changes will still be needed, not least the availability of SLMs at price points that can support consumer-level products. Though SLMs are at their core liquid-crystal displays, the ideal phase-only reflective form developed for holography remains an expensive R&D-level tool.

“Amplitude-only SLMs are basically all the displays we use today: LCDs,” says Gordon Wetzstein, associate professor of electrical engineering at Stanford University. “Phase-only SLMs are typically certain types of LCoS displays that use the tunable birefringence properties of liquid crystals to control the phase delay of a coherent wave. I don't think there is anything fundamentally more challenging in designing and manufacturing phase-only SLMs, it's primarily a matter of scale. There just isn't as much demand today for phase-only SLMs.”

If a market opens up at least for AR that can use relatively simple holographic rendering technology, it may be enough to push more suppliers into the market for SLMs. That in turn may provide a path towards more extensive VR systems.

Holographic displays face heavily entrenched competition from stereoscopic 2D, but if VR again fails to take off because of its continuing image problems, that may provide the opening holography needs.