Here’s how Apple is making AR objects far more believable

Apple is using machine learning to make augmented reality objects more realistic in iOS 12, with smart environment texturing that can predict reflections, lighting, and more. The new feature, being added in ARKit 2.0, uses on-device processing to better blend virtual objects into their real-world surroundings, blurring the line between what's computer-generated and what's real.

Currently, if you place a virtual object in a real-world scene, like a metal bowl on a wooden table, the wood's texture won't be reflected in the metal bowl. Environment texturing picks up details from the surrounding physical surfaces and maps them onto the virtual objects. So the metal bowl would subtly reflect the wooden surface it's sitting on; if you put a banana down next to the bowl, a yellow reflection would appear as well.
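
From the developer's side, that reflection behavior hinges on two things: turning on environment texturing for the AR session, and giving the virtual object a physically based, metallic material so the captured surroundings actually show up on its surface. Here is a minimal Swift sketch using ARKit and SceneKit; the APIs (ARWorldTrackingConfiguration's environmentTexturing option, SCNMaterial's physically based lighting model) are real, while the function names and the assumption of an existing `sceneView` are illustrative.

```swift
import ARKit
import SceneKit

// Enable ARKit 2.0 environment texturing for the session.
// .automatic lets ARKit place its own environment probes and fill
// their cube maps from captured camera imagery.
func startSession(on sceneView: ARSCNView) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.environmentTexturing = .automatic
    sceneView.session.run(configuration)
}

// A reflective "metal bowl" stand-in: a chrome-like sphere whose
// physically based material reflects whatever the environment
// texture contains (the wooden table, the banana, and so on).
func makeReflectiveSphere() -> SCNNode {
    let sphere = SCNSphere(radius: 0.1)
    let material = SCNMaterial()
    material.lightingModel = .physicallyBased
    material.metalness.contents = 1.0   // fully metallic
    material.roughness.contents = 0.0   // mirror-smooth, so reflections are obvious
    sphere.materials = [material]
    return SCNNode(geometry: sphere)
}
```

With `.automatic`, ARKit decides where to place environment probes on its own; the app only has to opt in and use materials that respond to reflections.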

Environment texturing gathers texture information about the scene, generally, though not always, representing it as the six sides of a cube. Developers can then use that cube map as reflection data for their virtual objects. Using computer vision, ARKit 2.0 extracts texture information from the camera feed to fill the cube map, and once it's filled, each facet tells the renderer what should be reflected on the corresponding part of the AR object. Because the camera never sees the entire environment, machine learning fills in the portions of the cube map that haven't been directly observed.
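
For apps that want direct control over that cube map, ARKit 2.0 also offers a manual mode in which the developer places environment probe anchors and ARKit fills their six-sided textures over time. The sketch below uses the real AREnvironmentProbeAnchor and ARSessionDelegate APIs; the class name, the chosen probe extent, and the logging are illustrative assumptions rather than anything from the article.

```swift
import ARKit
import Metal

// Sketch of the cube-map side of environment texturing, assuming the
// session was configured with environmentTexturing = .manual: the app
// places a probe anchor and ARKit fills its cube map from camera
// imagery as the scene is observed.
final class ProbeObserver: NSObject, ARSessionDelegate {

    // Place a probe covering roughly a 1 m cube around the virtual object.
    func addProbe(to session: ARSession, at transform: simd_float4x4) {
        let probe = AREnvironmentProbeAnchor(transform: transform,
                                             extent: simd_float3(1, 1, 1))
        session.add(anchor: probe)
    }

    // ARKit updates the anchor as it gathers more of the scene.
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for anchor in anchors {
            guard let probe = anchor as? AREnvironmentProbeAnchor,
                  let cubeMap = probe.environmentTexture else { continue }
            // cubeMap is a Metal cube texture: one facet per direction,
            // which the renderer samples to decide what each part of the
            // AR object should reflect.
            print("Cube map updated: \(cubeMap.width)x\(cubeMap.height) per face")
        }
    }
}
```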

