The 3DRV journey is all about capturing the stories of people using 3D technologies. But a big part of the trip is also exploring the process of “reality capture” that lets me turn images of objects and people into 3D models. This process is known as photogrammetry.
Let me share the more esoteric, perhaps complicated, definition from Wikipedia:
“Photogrammetry is the science of making measurements from photographs, especially for recovering the exact positions of surface points… [It] may employ high-speed imaging and remote sensing in order to detect, measure and record complex 2-D and 3-D motion fields (see also sonar, radar, lidar etc.). Photogrammetry feeds the measurements from remote sensing and the results of imagery analysis into computational models in an attempt to successively estimate, with increasing accuracy, the actual, 3-D relative motions within the researched field.”
Phew. If you are still with me, I can make this way simpler. Before putting that definition, and the process behind it, to use, let me explain what I understand and give kudos where it is due: Autodesk and the Reality Computing team have created the software that makes all of this easy and fast. The software is Autodesk ReCap, and there is also an app called 123D Catch that makes it possible with just a smartphone camera. The Autodesk ReCap team likes to summarize the idea of taking something from the physical world and making it digital as: Capture, Compute, Create. They do it with laser scanning and with photogrammetry, two different methods, but I’m focused on the latter in this post.
Photogrammetry is a big word; I prefer “photo scanning” (even though it doesn’t cover everything the more technical term does).
First, you can use a regular digital camera, a GoPro, or a smartphone (we were loaned an iPhone 5C and 5S for the trip by Cricket Wireless, and their coverage/service has saved us in remote locations) to capture the photographs that software will stitch together into a 3D model. If you have ever used the panoramic function on a digital camera, you have a rough idea of how this will look.
Second, you take a bunch of photos of an object or person. There are many tips available to help you create the best 3D model; in general, the better your camera, the better the 3D result. You can capture most objects, or even a person (if they hold very still), with this “reality capture” process.
Third, the software does the rest. You upload the photos to the ReCap service or 123D Catch, and it stitches those photos together so that you can now see them in full three-dimensional perspective. It is similar to Google Street View, where you can pan around an entire location – you make your own “street view” around the object. ReCap allows you to do some or all of it manually, picking the actual locations or spots that overlap one another, but most of us will skip that and let the software do the heavy lifting. The free account allows up to 50 photographs, more than sufficient for consumer and small-business use.
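To give a flavor of what “spots that overlap one another” means under the hood, here is a toy sketch of one classic technique, normalized cross-correlation, which scores how well a small patch from one photo matches each location in another. This is my own hypothetical illustration of the general idea, not how ReCap actually works:

```python
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation between two equal-sized patches:
    # 1.0 means a perfect match, values near 0 mean no similarity.
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)

rng = np.random.default_rng(0)
img = rng.random((40, 40))        # stand-in for "photo two"
patch = img[10:18, 22:30]         # an 8x8 region taken from "photo one"

# Slide the patch over the image and keep the best-scoring position.
best, best_pos = -1.0, None
for r in range(img.shape[0] - 8):
    for c in range(img.shape[1] - 8):
        score = ncc(img[r:r + 8, c:c + 8], patch)
        if score > best:
            best, best_pos = score, (r, c)

print(best_pos)  # (10, 22) -- the true location of the overlapping patch
```

Real photogrammetry software uses far more robust feature matching than this brute-force scan, but the core question is the same: which pixels in two photos show the same physical spot?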
Let’s briefly talk about “compute.” Data from the physical world, captured via your camera, is uploaded to the cloud (it takes a lot of computing power, more than your typical desktop/laptop can handle) and the ReCap Photo service does the work. The desktop version of ReCap handles laser-scanning data, but you need the cloud for the intense work of matching and stitching photos, at least for now.
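The heavy lifting in that compute step boils down to geometry: once the software knows which points in two photos correspond, it can triangulate their 3-D positions, which is exactly the “recovering the exact positions of surface points” from the Wikipedia definition. Here is a minimal, hypothetical NumPy sketch of that triangulation with toy camera matrices and noise-free data, nothing like a production pipeline:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) triangulation: each 2-D observation contributes two
    # linear constraints on the homogeneous 3-D point X; solve A X = 0.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                    # null-space vector = the solution
    return X[:3] / X[3]           # back from homogeneous coordinates

def project(P, X):
    # Pinhole projection of a 3-D point into a camera's 2-D image plane.
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras: identity intrinsics, second one shifted along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.3, -0.2, 4.0])
x1, x2 = project(P1, X_true), project(P2, X_true)

X_est = triangulate(P1, P2, x1, x2)
print(np.round(X_est, 6))  # recovers the original 3-D point
```

Do this for thousands of matched points across dozens of photos (while also solving for where each camera was) and you can see why the cloud earns its keep here.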
Finally, for most uploads, you will get back the 3D model in less than an hour. So if you often start from a blank screen to create an object, stop. You can photo scan your way to a great model that you can modify, tweak, and change, speeding up your design process. You get to the “create” phase much faster this way. The model might remain digital for a game or comic, or you might make it physical again with a 3D printer. Keep us posted on your photo scanning “reality computing” adventures by tagging them with #3DRV.
Here is a great video of the process using a GoPro and a UAV (drone):
Follow along at 3drv.com! #3DRV ■