The Future of Mobile 3D Scanning is Software — and Real Time

It’s Mobile World Congress in Barcelona this week. Interestingly, for a 3D Scan Expert a mobile technology event like MWC is becoming just as valuable as industrial trade shows like Formnext, which focus on manufacturers of industrial 3D printing and 3D scanning hardware, especially because I cover 3D capture solutions for both professionals and consumers.

But while MWC is still largely perceived as a hardware release show, an increasing share of the actual innovation is found in software. This is arguably most noticeable in mobile augmented reality (AR), which seems to have become the new buzz technology after a few years of mobile virtual reality (VR).

Software-only is starting to make sense

For a short while, the idea was that a phone needed special hardware for reliable AR tracking. That’s why Google developed Project Tango, which put a tiny depth sensor next to the regular smartphone camera. But as of March 1, Project Tango will no longer be supported. That’s no surprise, because with the introduction of Apple’s ARKit for iOS and Google’s own ARCore for Android, the idea of building depth sensors into smartphones for AR has been surpassed by the concept of doing it with just software.

But while Tango was marketed as AR technology, many people, including me, were also interested in the ability to make 3D scans with the depth sensor. That’s obviously still a niche, and it’s not unlike stand-alone depth sensors like the Structure Sensor and Intel RealSense, which are used mainly for tracking purposes but can also be used for 3D scanning.

The idea of building depth sensors into smartphones for AR has been surpassed by the concept of doing it with just software.

But dedicated sensors cost money. And both the add-on and built-in versions make mobile phones considerably less portable for a feature many owners won’t use a lot. That’s why apps that promise to do 3D capture with regular 2D photos combined with smart software — a technology called mobile photogrammetry — have become popular in recent years.

Mobile photogrammetry’s Achilles’ heel

But there is a catch with mobile photogrammetry (and it’s not 123D Catch, because that popular app was discontinued): I get many questions from people who downloaded (almost) Free Photogrammetry software, shot a bunch of photos and didn’t get the results they were hoping for — or no results at all. On the other side, I’ve handed an iPad with a Structure Sensor and the itSeez3D app to children and they made their very first 3D scans within minutes.

In a world where social sharing either happens right after the moment or even in real time, 3D capture doesn’t really fit in — yet.

So in my consulting work and talks I often explain that while photogrammetry is awesome, it’s also hard and has a steep learning curve. It’s essential to know how it works so you can take the right number of photographs, from the right angles, with the right camera settings. And even if you have become a photogrammetry Jedi master, the long processing times make for a nerve-wracking experience.
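To make “the right number of photographs” a bit more concrete: photogrammetry guides typically recommend something like 60 to 80 percent overlap between neighbouring shots. The back-of-the-envelope sketch below (my own simplification, not taken from any particular app) estimates how many photos a single orbit around an object takes for a given camera field of view and overlap target.

```python
import math

def photos_per_orbit(horizontal_fov_deg: float, overlap: float) -> int:
    """Rough estimate of how many photos one full orbit around a subject needs.

    horizontal_fov_deg: the camera's horizontal field of view in degrees
                        (a typical smartphone main camera is roughly 65 degrees).
    overlap:            desired overlap between neighbouring photos, 0..1.
    """
    # Each extra photo only contributes the non-overlapping slice of its view,
    # so the usable angular step shrinks as the overlap target grows.
    step_deg = horizontal_fov_deg * (1.0 - overlap)
    return math.ceil(360.0 / step_deg)

for overlap in (0.6, 0.7, 0.8):
    print(f"{overlap:.0%} overlap -> {photos_per_orbit(65, overlap)} photos per orbit")
```

At 70% overlap that already works out to roughly 20 photos for one orbit, and most subjects need two or three orbits at different heights, which is exactly why newcomers underestimate the effort.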

In my own experience, I do a lot of 3D scanning and photogrammetry in my studio, but I almost never capture stuff in 3D while traveling or when I’m outdoors in general. And when I do shoot photos — or more often 4K videos, which are great for photogrammetry — I just do the capturing in the moment and the processing later. This means I almost never share my captures on social networks, because the moment has passed. So in a world where social sharing either happens right after the moment or even in real time, 3D capture doesn’t really fit in — yet.

This is about to change, because developers are starting to use the great cameras and fast mobile processors (which are increasingly fine-tuned for graphics performance) not just for taking perfect photos and powering AR, but for 3D capture.

For the first time, mobile photogrammetry works more or less in real time, through user interfaces that are designed well enough to make 3D capture understandable for newcomers.

Sony brings real-time 3D capture to Android

The best example of this is Sony’s 3D Creator app. It was introduced with the Xperia XZ1 last year and marked the first time a smartphone brand pushed 3D capture in its main marketing. The new Xperia XZ2, released at MWC this week, can also capture 3D with its front-facing camera, allowing the creation of 3D selfies.

AR is great for guiding consumers to make good 3D captures.

Post-MWC update:

I officially got scanned at the Sony booth at MWC. Below is a video of a face scan, which takes less than a minute from start to end result:

And here is the result on Sketchfab (which the app can directly export to) for your viewing pleasure. It’s the full quality output of the app.

If you dig a bit deeper, the 3D Creator app is more than just a fun feature. Sony has a dedicated 3D Creator YouTube channel with video tutorials that explain how to make the best captures of various subjects. While this proves that 3D capture still has a learning curve when it comes to things like ideal lighting conditions, the visually guided capturing process is a leap forward in ease of use. It also makes clear that AR is great for guiding consumers to make good 3D captures, something I also discovered when reviewing HP’s Sprout G2 all-in-one PC.

Of course it’s unfortunate for many that 3D Creator only works on Sony’s flagship phones. But that choice does make clear that software-only 3D capture currently requires a specific combination of hardware and software to function in (near) real time, something that’s even harder on the fragmented Android platform.

Capture small objects with Qlone, if you have a printer

iPhone users have been able to do mobile photogrammetry with Trnio for a while now. But while that app does offer basic tracking guidance for the capture stage, processing is done in the cloud. And if you’ve ever tried it, you know the multiple waits for file transfers can be frustrating.

Qlone was the first app to offer real-time 3D capture without special hardware. It has been available for iOS for a while, and a preview is now also available for Android.

Tracking alone is very demanding for even the latest flagship phones, let alone running photogrammetry algorithms at the same time.

But to make this technology work, the app relies on a grid that you need to print. This AR Mat, as it’s called, not only limits the possibilities for spontaneous 3D captures, it also limits the subjects to small objects, since most people don’t have a large-format printer — if they still have a printer at all.

Post-MWC update:

I dropped by the Qlone booth at MWC last week to see the app in action on an iPhone X. Here’s a video of the process:

And here’s the final result from that very scan session:

It’s not hard to imagine that the printed pattern will become obsolete in the future, now that ARKit and ARCore can do marker-less tracking in real time. But that kind of tracking alone is very demanding for even the latest flagship phones, let alone running photogrammetry algorithms at the same time.

The present and future of 3D capture and social sharing

It’s clear that real-time, software-only 3D capture is now becoming a realistic task for smartphones. Although it will not give you the detailed geometry of a dedicated 3D scanner, the combination of a low-poly 3D mesh with a high-res photographic texture is good enough for many purposes.

In fact, for consumers it’s probably better that 3D captures are forced to be low-poly, because this makes them compatible with viewing in VR and AR and with sharing on social networks. For example, Facebook introduced 3D Posts last week, which let you share 3D models directly to the News Feed. The Sony 3D Creator app I mentioned earlier is actually one of the first apps to support this direct-sharing feature.


Post-MWC update: The embedded Facebook post was made right after scanning at the Sony booth at MWC using the new direct-to-Facebook export feature:

Interestingly, Facebook’s 3D Post format is currently limited to just 3MB in file size — including the texture image! This might not be a big issue for simple 3D models created with Minecraft or Google Blocks, but it requires an awful lot of optimization of even the most basic 3D captures. According to this review of the Xperia XZ2, even Sony has to decimate the output of 3D Creator to publish directly to Facebook. Heck, you’ve been able to share 15MB GIFs on Twitter since 2016!
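To give a sense of what that optimization involves, here is a minimal sketch of decimating a scanned mesh with the open-source Open3D library until the exported file fits under a size budget. The halving strategy and the 3 MB figure are illustrative assumptions (and the texture, which also counts toward Facebook’s limit, is not handled here); this is not how Sony or Facebook actually do it.

```python
import os
import open3d as o3d  # open-source 3D processing library

def decimate_to_budget(in_path: str, out_path: str, budget_bytes: int = 3_000_000):
    """Halve the triangle count until the exported file fits under the size budget."""
    mesh = o3d.io.read_triangle_mesh(in_path)
    target = len(mesh.triangles)
    while True:
        mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=target)
        o3d.io.write_triangle_mesh(out_path, mesh)
        size = os.path.getsize(out_path)
        # Stop once the file fits, or bail out before the mesh degrades completely.
        if size <= budget_bytes or target < 1_000:
            break
        target //= 2
    return mesh

# Example: decimate_to_budget("scan.obj", "scan_decimated.obj")
```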

An even more interesting subject is file formats. Facebook only supports the new glTF 3D file format, which very few 3D scanning and editing programs can currently export. And even within that format there’s a list of restrictions, such as texture resolution. I’m well aware that glTF is meant to become the first truly standardized 3D file format, but it currently isn’t. Luckily, it’s easy to convert OBJ to glTF.
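As an example of that last point, here is a minimal sketch of converting an OBJ to binary glTF (.glb) with the open-source trimesh Python library; the Cesium team’s obj2gltf command-line tool is another option. The file names are placeholders, and whether the texture survives the round trip depends on how the OBJ’s materials are set up.

```python
import trimesh  # pip install trimesh

# Load a textured OBJ from a photogrammetry app and re-export it as
# binary glTF (.glb), the self-contained flavour that web viewers expect.
scene = trimesh.load("scan.obj")
scene.export("scan.glb")  # output format is inferred from the file extension
```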

So while 3D Posts have a lot of potential for the future, for professionals it’s currently more convenient to upload your 3D scans to Sketchfab and share them on social media from there, since that service supports over 50 3D file formats and even the free plan allows uploads up to 50 MB. And their viewer embeds natively into the timelines of both Facebook and Twitter.
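Sketchfab also offers a Data API, so the upload step can be scripted instead of done through the website. Below is a minimal sketch based on my reading of the v3 models endpoint; the token and file name are placeholders, and the exact field names should be double-checked against Sketchfab’s current API documentation.

```python
import requests  # pip install requests

API_TOKEN = "your-sketchfab-api-token"  # placeholder: found in your account settings

def upload_to_sketchfab(path: str, name: str) -> str:
    """Upload a model archive to Sketchfab and return the URI of the new model."""
    with open(path, "rb") as model_file:
        response = requests.post(
            "https://api.sketchfab.com/v3/models",
            headers={"Authorization": f"Token {API_TOKEN}"},
            files={"modelFile": model_file},
            data={"name": name},
        )
    response.raise_for_status()
    return response.json()["uri"]

# Example: upload_to_sketchfab("scan.zip", "MWC 2018 booth scan")
```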

Truth be told, I did just get a flashback to that exciting moment when Tumblr announced 1MB GIF support in 2012. For consumers, 3D could perfectly well become the new GIF. I recently had a lot of fun with Google’s new AR Stickers on my Pixel 2, and that made me realize that making fun videos with AR is a perfect place for user-generated 3D content.

Wrapping up

Mobile photogrammetry is great because it puts 3D capture technology in the hands of everyone with a smartphone. If you think about what that has done for digital photography, it’s hard not to get excited. And with smartphones getting incredibly powerful, with dedicated chipsets optimized for complex visual processing, it’s now becoming possible to perform photogrammetry in (near) real time and produce usable 3D models.

And when even a large smartphone manufacturer like Sony jumps on the bandwagon, 3D capture might just have a chance to reach consumers in the near future. I really hope they keep developing this app, because Microsoft had ambitious plans for mobile 3D capture too but has been silent about them ever since they were revealed.

But mobile photography wouldn’t have become this successful if it weren’t so easy to share photos online and directly to social networks. That all comes down to the power of standardized file formats like JPG for 2D stills and MP4 for video (and let’s not forget GIF). Luckily, glTF is on a good path to becoming the standard format for 3D, and it’s nice that Facebook has embraced it for its new 3D Post format. But restricting a feature to just one format (which isn’t yet supported by many 3D creation tools) and a 3MB size limit (which even a single 2D photo from most smartphones exceeds) doesn’t leave a lot of room for experimentation.


I will be in Barcelona on Wednesday to visit MWC and chat with people in the mobile phone industry about the future of mobile 3D capture.

Follow me on your favorite social network for updates from MWC 2018. Maybe I’ll even share some stuff in 3D!

