Solving the Metaverse Technology Puzzle

What you will learn:

  • The technological obstacles that need to be overcome.
  • How position tracking and virtual object placement affect development.
  • How 3D scanning, time-of-flight sensing, and human-machine interfaces (HMIs) will help solve key issues.

The metaverse promises to transform our very concept of virtual reality. Before it can do that, however, it has some very real problems to solve.

Interoperability is a major issue, as many mega-players compete to establish platforms and operating standards. Display and optical solutions also need to become more practical: unless size, weight, and appearance issues are solved, VR headsets will continue to be used primarily for games and specialized applications.

There are also technical issues to overcome. From a purely operational perspective, two of the biggest challenges for developers are position tracking and virtual object placement.

The metaverse will be orders of magnitude more complex than traditional virtual environments. It will include purely imaginative worlds as well as worlds that mirror their real-world counterparts in clarity, detail, and size. For the metaverse to be useful, users inhabiting a virtual environment must know exactly where they are positioned and where they are moving, regardless of distance or speed.

Placement of virtual objects is the other critical challenge. Metaverse applications will need to understand the surrounding environment to place items accurately. As with position tracking, precision is essential. Satisfying one requirement or the other is not enough – for the metaverse to succeed, it must accomplish both.

3D technology will be vital

Fortunately, hardware innovations are on the way to support metaverse tracking and positioning. One breakthrough area is precise scanning technology for digital duplication, a goal pursued through 3D scanning.

3D scanning can be performed with depth cameras based on several different technologies. Take structured light, for example: a known light pattern is projected onto an object and read by a 3D imaging camera. The camera detects distortions in the pattern caused by differences in distance and uses them to recreate the object with high fidelity. In this way, 3D scanning digitally duplicates a person or object so they can be inserted into the virtual environment.
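To make the triangulation behind structured light concrete, here is a minimal sketch in Python, assuming a simple pinhole camera model; the focal length, baseline, and disparity figures are illustrative assumptions rather than the specs of any real camera.

```python
import numpy as np

# Minimal sketch of structured-light triangulation (assumed pinhole model).
# The projector casts a known pattern; the camera measures how far each
# pattern feature shifts ("disparity") because of the object's depth.

FOCAL_LENGTH_PX = 800.0  # camera focal length in pixels (assumed value)
BASELINE_M = 0.08        # projector-to-camera baseline in meters (assumed)

def depth_from_disparity(disparity_px) -> np.ndarray:
    """Depth via triangulation: depth = focal_length * baseline / disparity."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        # Pixels with no detected shift get infinite (unknown) depth.
        return np.where(disparity_px > 0,
                        FOCAL_LENGTH_PX * BASELINE_M / disparity_px,
                        np.inf)

# A feature shifted by 32 px lies at 800 * 0.08 / 32 = 2 m.
print(depth_from_disparity([32.0, 64.0]))  # prints [2. 1.]
```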

Another positioning innovation is 3D time-of-flight (ToF), which allows metaverse applications to track real-world objects for precise placement as they move. A ToF sensor emits pulses of light into a scene and reads their reflections. With ToF, moving objects can not only be recognized but also digitally integrated into the virtual environment. Everything from tools and vehicles to animals and even people can be identified.
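The principle behind pulsed ToF ranging reduces to a single formula: distance = c × t / 2, since the measured time covers the round trip. A minimal sketch, with the 20 ns pulse time chosen purely for illustration:

```python
# Minimal sketch of pulsed time-of-flight ranging. The sensor measures the
# round-trip time of a light pulse; distance follows from d = c * t / 2,
# because the pulse travels to the object and back.

SPEED_OF_LIGHT_M_S = 299_792_458.0

def distance_from_round_trip(round_trip_s: float) -> float:
    """Distance in meters from the measured round-trip time in seconds."""
    return SPEED_OF_LIGHT_M_S * round_trip_s / 2.0

# A pulse returning after 20 ns (an illustrative figure) puts the object
# roughly 3 m away.
print(f"{distance_from_round_trip(20e-9):.2f} m")  # prints 3.00 m
```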

Unlike LiDAR (light detection and ranging) – another type of 3D scanning – ToF depth-sensing cameras provide full-field information. Conventional LiDAR relies on lasers that scan from top to bottom and left to right, and producing a full field this way is so expensive that it is out of reach for most metaverse applications.

Display technologies are also evolving to support 3D. Many of the device types intended to appeal to metaverse users have been around for years, yet most still have significant limitations.

Google Glass and Snap Spectacles have had, at best, mixed success due to cost, functionality, and privacy concerns. And even the most advanced VR headsets serve only a narrow range of uses. While popular with gamers and other niche users, they cut the wearer off from the real world, making it difficult to interact with anything other than the virtual setting.

Ergonomics and Safety

Satisfactory performance and usability standards will be essential for wide acceptance of the metaverse. Improvements in human-machine interfaces (HMIs), data rates, and rendering, among other areas, will be needed to create seamless interactions with the metaverse.

For displays, most experts agree that 60 pixels per degree of field of view is the essential minimum for rendering video that matches real-world conditions, a requirement that only the best optical devices meet today.
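As a back-of-the-envelope check of what that threshold implies, the sketch below multiplies 60 pixels per degree by a few horizontal fields of view; the FOV figures are illustrative assumptions, not taken from any specific headset.

```python
# Back-of-the-envelope check of the 60 pixels-per-degree target: the
# horizontal resolution a display needs is pixels_per_degree * field_of_view.
# The field-of-view figures below are illustrative assumptions.

PIXELS_PER_DEGREE = 60  # the "matches real-world conditions" threshold

def required_horizontal_pixels(fov_degrees: float) -> int:
    return round(PIXELS_PER_DEGREE * fov_degrees)

for fov in (90, 100, 110):
    print(f"{fov} deg FOV -> {required_horizontal_pixels(fov)} px per eye")
# 90 deg FOV -> 5400 px per eye
# 100 deg FOV -> 6000 px per eye
# 110 deg FOV -> 6600 px per eye
```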

Other types of HMIs will also soon undergo an incredible transformation. Last November, Meta’s Reality Labs division demonstrated a prototype haptic glove with ridged plastic pads that allow the wearer to “feel” surfaces. With the bar raised for sound, sight, and touch, metaverse dwellers are sure to enjoy increasingly immersive experiences.

Yet even the most advanced technology manufacturers have found that comfort and capability are key to the acceptance of wearable devices. VR/AR (augmented reality) glasses will need to be light enough to be worn all day, attractive enough that people want to wear them, and capable of delivering a dazzling user experience, all at an affordable price.

One breakthrough, surface-relief waveguide technology, can meet this need by providing transparent waveguides on high-refractive-index glass. This solution supports imaging applications in VR and MR (mixed reality) displays as well as 3D sensing and even automotive head-up displays.

Finally, developers need to address privacy and security issues. Although data security remains a challenge, 3D scanning is poised to help mitigate privacy risks. Unlike their 2D equivalents, which record facial images, 3D scanners record only anonymous "point cloud" data for authentication. No images or other humanly identifiable recordings are created.
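One way to picture image-free authentication is to compare a freshly captured point cloud against an enrolled template using a simple nearest-neighbor distance. The sketch below is a toy illustration under assumed parameters (the 2 mm threshold, the synthetic clouds), not a description of any vendor's actual biometric pipeline.

```python
import numpy as np

# Toy sketch of image-free authentication with point clouds (assumed
# approach): score a captured cloud against an enrolled template by the
# mean distance from each captured point to its nearest template point.
# Only 3D coordinates are involved; no facial image is stored.

def mean_nearest_distance(captured: np.ndarray, template: np.ndarray) -> float:
    """Mean nearest-neighbor distance; both arrays have shape (N, 3).
    Brute force is fine for small clouds."""
    diffs = captured[:, None, :] - template[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)  # (num_captured, num_template)
    return float(dists.min(axis=1).mean())

def authenticate(captured, template, threshold_m=0.002):
    # The 2 mm threshold is an assumed tuning parameter.
    return mean_nearest_distance(captured, template) < threshold_m

rng = np.random.default_rng(0)
template = rng.random((500, 3))                                   # enrolled scan
probe = template + rng.normal(scale=0.0005, size=template.shape)  # same person, sensor noise
print(authenticate(probe, template))                 # True
print(authenticate(rng.random((500, 3)), template))  # almost certainly False
```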

A balancing act

The next few years will surely be crucial for the realization of the metaverse. Innovators will benefit from seeing the journey as a series of small steps, with each victory building on the last breakthrough.

For each piece of metaverse hardware, developers will need to find a balance point between form factor, data quality, computing power, power consumption, and bandwidth limitations. Overcoming these obstacles will make the metaverse, the ultimate virtual experience, a practical reality.