Pete Kelsey is becoming quite famous here at 3DRV. He makes regular appearances on the 3DRV journey now, well, because he’s willing, memorable, and has lots of good stories while he’s training us. I’m still catching up from our visit that resulted in the Dirty Socks and Drones post a few weeks ago. If you’re looking for the foibles of travel, head over to that post.
As you might remember, Pete gave us some UAV/drone training with the 3D Robotics IRIS quadcopter. Pete provided the large acreage needed to help train a new RC pilot, that is, me. Thankfully, and I'm sure Pete is sighing with relief, our second training session was far less dramatic. Both were productive and beneficial, of course.
Our second training session was not on flying drones, but scanning with the FARO Focus3D X 330 laser scanner. Due to schedules and travel times, we picked one of the more historic buildings in Bozeman – a new Starbucks café! The reason we chose it is simple – the manager said yes.
So, we scan the building from four different positions. We’re not trying to get the entire building, but just a couple of corners and areas for this first effort. The scan part is relatively easy as FARO has made it pretty user-friendly. We set it to “outside, far distance,” which captures everything within 300 meters. We’ll edit out what we don’t want later. Each scan takes about 12 minutes.
An hour later, we're inside Starbucks learning how to import the files into Autodesk's Reality Computing software. ReCap is the one we're focused on for the first training session. Pete guides us through the whole process, starting with the import steps. Importing takes some time because the files are large – each scan gathers millions of data points to form the point cloud.
The real work begins when you “register” the scans. This is the process where we pick the surfaces within the scan data to connect the four scans to one another. I’m pleased to report that it is quite easy and the software gives you a score that helps you decide if that registration is “good enough” or if you need to pick other surfaces or objects. Your goal is to tie the scan images together.
The easiest way for me, a beginner, to explain this process is to compare it to a panoramic photo. Before the technology that now exists, if you wanted a long, landscape-style photograph, you would take 8 to 10 photos, panning across a scene. You would then lay those prints side by side, finding the right overlap spots to form one image.
ReCap lets you do something similar with simple clicks. Your goal is to connect all the scans by placing colored digital markers that match or connect in each scan. So if you see the Starbucks sign in one scan image, and you find it in the next scan image, you would mark that same area in each separate scan.
The colored markers would be green, blue, then red – so you would have 2 greens, 2 blues, then 2 reds. If you have not picked areas that match well enough, the software tells you that you have a poor match. If you pick well, then you get a better score and you can accept that match and move on to your next image. Essentially, the software is stitching those images together much like you would lay out a panoramic photo manually.
- Scan Image 1
- Scan Image 2
- Scan Image 3
- Scan Image 4
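For the curious, here is a rough sketch of the math that happens behind those colored markers. This is not ReCap's actual code – it's the classic best-fit rigid alignment (the Kabsch/Procrustes solution): given matched marker points in two scans, find the rotation and translation that line scan B up with scan A, and report a residual error that plays the role of the "score."

```python
# Sketch of what a pairwise scan registration does under the hood.
# Hypothetical, simplified example -- not ReCap internals.
import numpy as np

def register(points_a, points_b):
    """Best-fit rigid transform (R, t) mapping points_b onto points_a."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    ca, cb = a.mean(axis=0), b.mean(axis=0)      # centroids of each marker set
    H = (b - cb).T @ (a - ca)                    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ca - R @ cb
    # RMS distance between matched markers after alignment -- the "score"
    residual = np.sqrt(np.mean(np.sum((a - (b @ R.T + t)) ** 2, axis=1)))
    return R, t, residual

# Three matched markers (think: the green, blue, and red pairs) in Scan 1:
scan_1 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])

# The same markers seen from a second scanner position (rotated and shifted):
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
scan_2 = scan_1 @ Rz.T + np.array([5.0, 3.0, 0.0])

R, t, score = register(scan_1, scan_2)
print(round(score, 6))  # near 0.0: a "good" registration
```

If you had clicked a marker in the wrong spot, the markers wouldn't line up under any rigid transform, the residual would jump, and you'd get the "poor match" warning – which is exactly the feedback loop you see in ReCap.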
It takes time in each scan project to do this matching or stitching process, known as registering the scans. Each time you complete a registration of two scan images, you go through a refining process that lets you approve or accept the matching points. Once Scan Image 1 and Scan Image 2 are joined, the software moves them to one side of the viewing panel.
Then you go through and create matching points in Scan Image 3, then Scan Image 4, until you have created one set of images that reference one another.
Once you register all of your scans, you tell ReCap to “index” them and it does its magic of stitching it all together into one unit. This takes some time, which is why you’ll now appreciate my earlier post on the power of HP and Nvidia together in one laptop (in my case) or desktop: Test, Test, and Test Those Laptops And Workstations: HP Fort Collins. You need serious crunching power to do this sort of work.
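Conceptually, the "magic" of that final step is simple even if the crunching is heavy: every registered scan now has a pose (a rotation and translation) into the shared coordinate frame, so the software can transform every point and combine the clouds into one. A toy sketch of that idea, with made-up data and nothing like ReCap's actual indexing format:

```python
# Toy sketch of merging registered scans into one point cloud.
# Hypothetical example -- real indexing also builds spatial lookup
# structures so millions of points stay fast to navigate.
import numpy as np

def merge_scans(scans, poses):
    """scans: list of (N_i, 3) point arrays; poses: list of (R, t) pairs."""
    merged = [pts @ R.T + t for pts, (R, t) in zip(scans, poses)]
    return np.vstack(merged)  # one combined cloud in the shared frame

rng = np.random.default_rng(0)
scans = [rng.random((1000, 3)) for _ in range(4)]   # four fake scans
identity_pose = (np.eye(3), np.zeros(3))            # pretend all registered
cloud = merge_scans(scans, [identity_pose] * 4)
print(cloud.shape)  # (4000, 3)
```

Scale those four fake scans up to the millions of points per scan from the Focus3D and you can see why the hardware matters.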
These four scans are almost one gigabyte of scan data crunched into one image, not unlike a Google Maps view where you move around an area – the big difference here is that there are millions of underlying points making for a very detailed and true 3D image. You can look at it in a photo-like view or open up the point cloud version, which allows you to do all sorts of work.
Once they are all indexed, and your laptop is nice and toasty warm, you launch your project and get to see views like you see in this short video below. I’m moving around the scan data as Pete continues to instruct me. We’re getting more competent with each scan and each ReCap project we start.
In a future post, I’ll show you how we’re stitching together photos. Plus, I’m starting to do some product design work with Fusion 360 after being inspired by Paul Deyo’s post: Creating Bike Lugs in Fusion 360. Fun times ahead, in and out of the RV. Thanks for reading, viewing, and learning along with us! Check out the gallery photos just below of things people are making in Fusion. Pretty inspiring.
And yes, I am still working on the post, via Pete, about his and Shaan Hurley’s trip to Hawaii to share the cool project 3D scanning and documenting the USS Arizona.
Follow along at 3drv.com! #3DRV ■