Hello everyone – my name is Charles Walbridge, and I’m the Lead Collections Photographer at the Minneapolis Institute of Art, which we call ‘Mia’. Mia is an encyclopedic art museum with a collection of about 90,000 objects from around the world. We’re on Sketchfab at sketchfab.com/artsmia and our website is at artsmia.org.
Here at Mia we’ve been learning about 3D scanning for cultural heritage for the last several years, and we’ve put most of our research into photogrammetry techniques – with photogrammetry, we can capture tiny objects like netsuke, human-sized sculptures, and even whole galleries:
We’re excited for the possibilities that 3D scanning and sharing have for museums – we think it’s a great way to share the art with people who can’t get here. We know it’s a great way to expand the experience of being with the art in the gallery. And we think it’s going to be great for documentation and conservation of the art.
We’ve learned a lot about cultural heritage photogrammetry from our friends at Cultural Heritage Imaging – at CHI they concentrate on accurate, repeatable results and solid data sets. They’re a tiny nonprofit, and a great resource for photogrammetry, both on Vimeo and in their in-person training for cultural heritage professionals.
In this blog post I’d like to share our turntable photogrammetry workflow, including a 3D focus-stacking technique that helps us get more accurate models.
In the photography studio at Mia we have a robot turntable and swing arm combination that we use for small and medium-sized objects, sized from about 3 cm to 1 m. For larger objects like human-sized sculptures, we’ll do our photogrammetry in the larger photo studio or in the galleries.
We use an older Canon camera (a 5D Mark II) with the turntable. The turntable is about 80 cm in diameter, and is rated for about 200 kilos, but we’ve never tested that limit and we don’t plan to. The pedestal that supports the swing arm can rise and fall about 30 cm, so you can change the point around which the camera pivots. In this photo, the camera will pivot around the square part at the middle of the vase, but if we wanted to pivot around the top of the vase we could raise the pedestal another 10 cm.
Our turntable is manufactured by photorobot.com, a Czech company, and we use their control software (Basip) to move the turntable and the swing arm, to take the photos, and to name the photos. We generally take 36 photos at the lower swing-arm angles and reduce that number as the camera moves over the top of the object. We use Capture One to process the images from CR2 raw files to high-quality JPEGs, and to batch-apply metadata, exposure, white balance, and chromatic aberration correction, but not lens correction – PhotoScan will do that.
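A capture plan like the one described above can be sketched in a few lines of code. This is an illustrative sketch, not our Basip configuration – the swing-arm angles and the reduced shot counts at the upper rows are hypothetical, except for the 36-shot rows at the lower angles:

```python
# Sketch of a turntable capture schedule: more shots per full rotation at
# low swing-arm angles, fewer as the camera moves over the top of the object.

def capture_schedule(rows):
    """rows: list of (swing_arm_angle_degrees, shots_per_rotation)."""
    plan = []
    for arm_angle, shots in rows:
        step = 360 / shots  # degrees of turntable rotation between shots
        plan.extend((arm_angle, i * step) for i in range(shots))
    return plan

# Hypothetical rows: 36 shots at the lower angles, fewer near the top.
rows = [(0, 36), (20, 36), (40, 36), (60, 24), (80, 12)]
plan = capture_schedule(rows)
print(len(plan))  # 144 photos in this hypothetical plan
```

Each entry in the plan is a (swing-arm angle, turntable angle) pair, which is essentially what the robot steps through for every row.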
We photograph the empty set from every camera angle (every swing-arm height) so PhotoScan can knock out the background, if necessary. That’s only nine shots in all – one per swing-arm height – because each empty background serves as the mask for every photo in its row.
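The row-to-mask pairing above is simple enough to sketch. The filenames here are hypothetical – the point is just that one empty-set shot per swing-arm row masks every photo in that row:

```python
# Sketch: pair every photo with the empty-set background shot for its
# swing-arm row, so the background can serve as a mask. Filenames are
# hypothetical, assuming names like 'row3_012.jpg'.

def mask_for(photo_name, backgrounds):
    """Look up the empty-set shot for this photo's row."""
    row = photo_name.split("_")[0]  # 'row3_012.jpg' -> 'row3'
    return backgrounds[row]

# One empty-background shot per swing-arm height (nine in all).
backgrounds = {f"row{r}": f"empty_row{r}.jpg" for r in range(1, 10)}

print(mask_for("row3_012.jpg", backgrounds))  # empty_row3.jpg
```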
If the underside of the art is interesting, and if the art will tolerate it, we’ll make a smaller set of images of the underside. We’ll usually build a separate ‘chunk’ in PhotoScan with these photos, and merge the chunks later.
Here’s how that workflow looks in PhotoScan, with one additional step: because the Standing Arhat is tall and thin, we’ll focus stack the captures, but in a photogrammetry-friendly way:
In the above photos, the first image has its focus centered around the arhat’s waist; in the second and third, the center of focus is near his shoulders. In the fourth image, the center of focus is on the arhat’s head. When we change the focus we also move the object a few degrees before starting its full rotation. That way PhotoScan can separate the camera positions in space – it won’t be confused by photos from different focus passes taken at identical positions.
In this screenshot PhotoScan is showing its calculated camera positions for the second focus setting (in pink), and you can see how the cameras are offset a bit from the ‘focus 1’ images, in blue, in the fourth row from the bottom.
Because the arhat is so tall and thin, we changed focus a third time to make sure we had good focus on the top of the arhat’s head.
Because it was relatively easy and the art would tolerate it, we laid the arhat on its back and did a short series of photos to capture the underside. The camera’s rotation around the object was centered on the very bottom, not the middle, of the sculpture.
The camera positions highlighted in pink are from when the sculpture was on its back. The blue camera positions are from the series of captures when the arhat was lying on its front.
Altogether, we took 330 automated photos of the arhat and its setup to make the model. We generally build only medium-resolution dense clouds in PhotoScan, because our deliverable models are either one million polygons (for our curators and conservators) or between 60,000 and 100,000 polygons for sharing on Sketchfab. With these techniques, we find that model cleanup in PhotoScan is minimal for sculptures without much self-occlusion. For more complex models we use ZBrush, MeshLab, or Blender – or all three – for post-processing.
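To put those deliverable sizes in perspective, here is a rough decimation-budget sketch. The two target face counts come from the workflow above; the source mesh count is a hypothetical placeholder, since the raw reconstruction size varies per object:

```python
# Sketch: what fraction of a dense reconstruction survives decimation to
# each deliverable. Target counts are from our workflow; the source count
# is hypothetical.

deliverables = {
    "curators_and_conservators": 1_000_000,
    "sketchfab": 80_000,  # within the 60,000-100,000 range we share online
}

source_faces = 4_000_000  # hypothetical face count of the raw dense mesh

for name, target in deliverables.items():
    print(f"{name}: keep {target / source_faces:.1%} of faces")
```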
In the near future in the Mia photo studio, we’ll be experimenting with very small objects, and we’ll test photogrammetry and laser scanning. Our photogrammetry results for simple, small objects have been promising – here are a couple of netsuke captured with photogrammetry:
Here’s Shoki and Oni again:
And here’s a curled rat:
We’ll share our results with the Sketchfab community when we can – thanks for reading!