My name is Tom Goskar and I’m an archaeologist and audiovisual specialist at the Curatorial Research Centre. I’ve been working with 3D data in archaeology since 2000, and I’ve been lucky to have used 3D technologies to make a number of significant archaeological discoveries, including at Stonehenge and across Cornwall—both prehistoric rock art as well as early medieval inscriptions and decorated stones.
The Curatorial Research Centre develops and provides educational, research, and facilitation services and products. Our work is underpinned by a unique philosophy and methodology centred in the curatorial space—between knowledge creation and communication.
Beyond the pretty render
The presentation of beautifully rendered 3D data is now wonderfully simple on Sketchfab. The talents of artists and 3D scanning professionals and enthusiasts can shine brightly. We can share incredible-looking objects, monuments and even whole landscapes using physically based rendering (PBR).
Sketchfab’s Augmented Reality (AR) tools even allow any model to be viewed on your desk or dining room table. When I started out with 3D techniques in archaeology, these were genuinely the things I dreamed of!
As an archaeologist, I have always seen 3D scanning as a powerful way to record and survey, study and learn, and digitally preserve something. Presentation has, to me, been a useful by-product, perfect for interpretation and accessibility. Sketchfab allows the entire ‘pipeline’ of capture, study and presentation to be communicated globally.
Scanning gets easier
It’s now getting easier and easier to scan anything. With the 2020 introduction of LiDAR sensors on the iPad Pro and iPhone 12 Pro, millions of people already have a simple 3D scanner in their pockets. The underlying geometry still has a long way to go, as subtle details aren’t yet captured, but it will get there.
Photogrammetry is becoming more accessible too. With both affordable and free software out there, anyone with a recent laptop can have a go. For now, the results are mostly better than smartphone LiDAR in terms of detail and resolution. You can even use photos taken with the camera on your smartphone (tip: shoot using an app that allows RAW/DNG format) to create very densely detailed models.
Here we will look at scans beyond photorealistic texture maps and delve into measuring and enhancing them.
Photogrammetry – measurements and scale
Photogrammetry data is not usually scaled in real-world units, at least not at first. The model might look good (with luck, just like the object you scanned) and be correctly proportioned, but you can’t just start measuring. So what next?
Some photogrammetry software, such as RealityCapture, provides tools to convert arbitrary units into real measurements. A popular choice is the Standard edition of Metashape, which can be bought for a one-off fee (currently $179) with no ongoing pay-per-input costs. It’s powerful software. However, for archaeologists like me, the downside is that neither Metashape nor the open source Meshroom includes an accurate tool to convert models into real-world units (in my case, metres) or to take measurements.
But all is not lost. If you included a scale in your scan or took a series of physical reference measurements, you can use an external editor such as the open source CloudCompare or MeshLab to transform your model into those real-world units. Then the analytical fun can begin.
Converting to real world units in CloudCompare
Let’s scale a mesh in CloudCompare. In this example it is a churchyard memorial tomb in Penzance, Cornwall, UK. You can download it and follow along, download one of the many Creative Commons licensed models on Sketchfab, or work with your own scan.
Get your calculators ready.
Open your model in CloudCompare and use the Point Picking tool’s point-to-point measurement option to measure your scale, or another element with known dimensions, in the geometry. Here, that element is the tomb’s top slab, which is exactly 1.8 m long (handy tip: keep a tape measure in your camera bag!). Be careful here: we’re working in 3D space, so once you’ve made your measurement, rotate the model a little to check you haven’t picked the wrong points.
The tool will give a distance in arbitrary units, which could be any number, large or small. These units are relative, and we need to scale the model to reflect real-world units. To do this, we need a scale factor.
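If you ever want to sanity-check the tool’s output, the point-to-point distance is simply the Euclidean distance between the two picked vertices. A minimal sketch in Python (the coordinates are invented for illustration):

```python
import math

def point_distance(p1, p2):
    """Euclidean distance between two 3D points."""
    return math.dist(p1, p2)

# Hypothetical picked vertices in the model's arbitrary units
a = (0.0, 0.0, 0.0)
b = (6.323777, 0.0, 0.0)
print(point_distance(a, b))  # 6.323777
```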
The slab is 6.323777 units long, and its real length is 1.8 metres, so we need to scale 6.323777 to 1.8. Simply divide the real-world distance by the arbitrary-unit distance to get a scale factor.
- Distance divided by Units = Scale Factor
- 1.8 / 6.323777 = 0.284640018141057
- Scale factor = 0.284640018141057
The maximum precision in CloudCompare is 8 decimal places, so we round the scale factor to 0.28464002.
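The arithmetic above is trivially scriptable if you prefer to skip the calculator. A quick check in Python, using the values from this example:

```python
real_distance = 1.8        # metres, measured with a tape on site
unit_distance = 6.323777   # the same span measured in CloudCompare

scale_factor = real_distance / unit_distance
print(round(scale_factor, 8))  # 0.28464002
```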
Now that we have the Scale Factor we can move to actually scaling the model.
To scale the model, make sure it is selected and click: Edit > Scale / Multiply (note that this tool may be called “Multiply / Scale” in some versions).
Paste the scale factor (in our example this would be 0.28464002) into Scale(x) and click OK.
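Behind the dialog, a uniform scale simply multiplies every vertex coordinate by the factor. A minimal sketch in Python (the vertex values are invented for illustration):

```python
scale_factor = 0.28464002

# Hypothetical vertices (x, y, z) in the model's arbitrary units
vertices = [
    (0.0, 0.0, 0.0),
    (6.323777, 0.0, 0.0),  # the two ends of the 1.8 m slab
]

# Uniform scale about the origin: multiply every coordinate
scaled = [tuple(c * scale_factor for c in v) for v in vertices]
print(scaled[1][0])  # ~1.8 metres
```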
Our tomb is now correctly scaled. Check by re-measuring using the point-to-point measuring tool. It will probably be difficult to select the exact vertices or polygons but you should be able to see that your selection is roughly the right measurement. Check CloudCompare’s scale tool at the bottom right of the window—this should now be accurate, too. At this stage save your model, perhaps adding ‘scaled’ to the filename.
Now that we have scaled the model to real-world units (in this case, one unit = one metre), we can start to study the scanned object. In archaeology, we often use cross-sections to show an outline and include them as illustrations in a report. Let’s take a cross-section of the memorial tomb.
Select the cross-section tool in CloudCompare. Remember to make sure the model is selected in the DB Tree panel.
Drag the chunky 3D arrows on the gizmo tool to control where you would like to ‘cut’ the model. Pull each side in until you have a thin outline.
If the model is ‘watertight’ (a closed mesh with no holes, as if shrink-wrapped) you can create a polyline (a continuous line composed of one or more line segments) from your cross-section. The polyline can be exported to CAD or design software for presentation.
If the model isn’t completely watertight and has holes (you can’t scan what you can’t see), then you can export your cross-section as a new point cloud which can still be exported as an image for presentation.
Experiment with the Slices options in the Cross Section tool to either extract a single slice or to make slices at set intervals. The latter is especially useful for creating landscape contour lines or lines for boat lofting, which can help in building accurate replicas of real boats.
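Slicing at set intervals amounts to choosing a series of parallel cutting planes along one axis. A sketch of how those plane heights could be generated (the interval and range are invented for illustration):

```python
def slice_heights(z_min, z_max, interval):
    """Heights of the horizontal cutting planes between z_min and z_max."""
    heights = []
    z = z_min
    while z <= z_max:
        heights.append(z)
        z += interval
    return heights

# e.g. contour planes every 0.25 m up a 1.8 m tall model
print(slice_heights(0.0, 1.8, 0.25))
# [0.0, 0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75]
```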
To take a screenshot of your cross section: Display > Render to File
At this stage, you can now take any extra measurements that you might need from the cross-section or the scaled model. In the Point Picking tool you can click the disk icon to save measurements and label them.
We can go a step further and start to enhance the surface using CloudCompare’s tools. The writing on the memorial panel is badly eroded, but if the scan has enough resolution and detail, we can colour the surface by distance, in effect a depth map, to bring the lettering out.
Select your model so that you can see the yellow bounding box and click Tools > Projection > Export coordinate(s) to SF(s). ‘SF’ stands for Scalar Field.
In this example, we need to make sure that we select the axis that is facing us. You can see in this screenshot that the axis gizmo in the bottom right shows Z (blue) pointing up, and Y (green) facing right. The red (X) axis isn’t visible as it’s facing directly toward us. This is the direction we would like to colour by. Make sure in the dialog box that the axis facing you is selected.
Your model should now be displayed in psychedelic false colours. Scroll down the Properties panel to adjust the colours and their display ranges: uncheck the ‘Normals’ box, change the current colour scale to grey, and drag the display-range arrows to create, in effect, an interactive depth map.
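What the grey scalar-field display is doing, in effect, is normalising one coordinate across the mesh and mapping it to a grey level. A simplified sketch of that idea (the depth values are invented, and in this sketch the nearest point maps to black, the deepest to white):

```python
def depth_to_grey(depths):
    """Map each depth to a 0-255 grey level: nearest = black, deepest = white."""
    d_min, d_max = min(depths), max(depths)
    span = (d_max - d_min) or 1.0  # avoid dividing by zero on a flat surface
    return [round(255 * (d - d_min) / span) for d in depths]

# Hypothetical X coordinates (depth toward the viewer) of a few vertices
print(depth_to_grey([0.0, 0.25, 1.0]))  # [0, 64, 255]
```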
In this example, we can now start to read the eroded memorial inscription. The name Henry Boase Esq “of this town” (Penzance) and the date 1827 can easily be read. Until writing this tutorial I didn’t know whose memorial this was, as I couldn’t quite read the inscription in the available light when I scanned it in the churchyard. Making discoveries from data is exciting stuff for archaeologists!
With more time and enough resolution perhaps the whole inscription could be read, and even the depth of the lettering could be established.
Now you can convert your scans into real-world units, whether for study, for 3D printing exact scale replicas, or for quantifying fine details.
Remember to take lots of reference measurements when you scan (note them down, and take photos outside of your photogrammetry photo set with tape measures visible), or include a scale or object of known size, so that you can check your measurements for accuracy later.