Eye strain, motion sickness, and fatigue are frequent physical complaints that limit how long users can spend in a VR environment.
According to the team, most current 3D VR/AR displays present two slightly different 2D images, one to each eye, which the viewer's brain combines into an impression of the 3D scene.
When looking at a real object, a person's eyes converge or diverge so that both point at it, and at the same time accommodate, adjusting their focus to its distance; the two responses are normally locked to the same distance.
Stereoscopic 3D images, however, are displayed on a single surface and only slightly offset from each other to create the 3D effect. The viewer's eyes therefore have to work differently than usual, converging towards a point that appears farther away while focusing on an image that is centimetres from the face, a mismatch known as the vergence-accommodation conflict.
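To put rough numbers on that mismatch, here is a minimal Python sketch comparing the two cues; the 63 mm interpupillary distance, the 2 m virtual depth, and the 5 cm screen distance are illustrative assumptions, not figures from the article.

```python
import math

def vergence_angle_deg(distance_m, ipd_m=0.063):
    """Angle between the two eyes' lines of sight when fixating a point
    at `distance_m` (an IPD of 63 mm is a typical assumed value)."""
    return math.degrees(2 * math.atan((ipd_m / 2) / distance_m))

def accommodation_diopters(distance_m):
    """Focal demand on the eye's lens, in diopters (1 / distance in m)."""
    return 1.0 / distance_m

virtual = 2.0   # metres; assumed apparent depth of the rendered object
screen  = 0.05  # metres; headset display a few centimetres from the eye

print(f"vergence cue: {vergence_angle_deg(virtual):.2f} deg (object at {virtual} m)")
print(f"focus demand: {accommodation_diopters(screen):.0f} D (screen at {screen*100:.0f} cm)")
# The eyes converge as if the object were 2 m away (a 0.5 D demand) but
# must focus on the screen at 5 cm (20 D): a ~19.5 D mismatch between
# the vergence and accommodation cues.
```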
To overcome these stereoscopic limitations, the researchers created an optical mapping near-eye 3D display method that divides the digital display into subpanels. A spatial multiplexing unit (SMU) then shifts each subpanel's image to a different depth, providing correct focus cues for depth perception.
But unlike the offset images of the stereoscopic method, these depth images, the researchers say, have their centres aligned by the SMU and are blended together into a single seamless image.
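As a rough picture of what dividing the display into subpanels involves, the sketch below models the geometry in Python; the panel count, depths, and display size are made-up examples, and the real system performs the shifting and recentring optically via the SMU rather than in software.

```python
def subpanel_layout(display_h, display_w, n_panels, depths_m):
    """Conceptual sketch of the optical-mapping idea (an assumed layout,
    not the authors' actual optics): tile the display vertically into
    subpanels, then report the recentring shift an SMU-like element
    would apply so every subpanel's centre lands on the shared optical
    axis, letting the depth images blend into one aligned picture."""
    axis_y = display_h / 2          # optical axis (display centre)
    panel_h = display_h / n_panels  # height of each subpanel
    layout = []
    for i, depth in enumerate(depths_m):
        centre_y = (i + 0.5) * panel_h       # this subpanel's centre row
        layout.append({
            "panel": i,
            "depth_m": depth,                      # plane it is relayed to
            "shift_px": (axis_y - centre_y, 0.0),  # (dy, dx) to recentre;
        })                                         # columns already align
    return layout

# Four assumed depth planes on an assumed 1200x1080 per-eye display:
for p in subpanel_layout(1080, 1200, n_panels=4, depths_m=[0.25, 0.5, 1.0, 4.0]):
    print(p)
```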
"People have tried methods similar to ours to create multiple plane depths, but instead of creating multiple depth images simultaneously, they changed the images very quickly," assistant professor Liang Gao said in an OSA news release. "However, this approach comes with a trade-off in dynamic range, or level of contrast, because the duration each image is shown is very short."