Combining Manual and Digital Archaeological Illustration Techniques in 3D


About

My name is Hugh Gatt; I'm an archaeological illustrator and digital artist, and I have just completed my Masters in Digital Heritage at the University of York. I've been working with photogrammetry for a few years now, and prior to that I studied Digital Art and Game Design at the Royal Melbourne Institute of Technology. I've always been passionate about my art, but it is only fairly recently that I started working with archaeological illustration and 3D modelling for cultural heritage. Following a placement with the British Museum in the first year of my BA in Archaeology at Cardiff University, I became fascinated by the intricate linework and complex visual information of traditional pen-and-ink archaeological illustration, and I have been developing my skills as an archaeological illustrator ever since.

As part of my Masters dissertation, I've been exploring how the fidelity of a photogrammetric 3D model affects its ability to transmit interpretations to the viewer. As part of this, I've been experimenting with incorporating traditional artefact illustrations into the 3D model, creating a hybrid output that combines the 3D materiality of a photogrammetric model with the intricate interpretive linework of a traditional artefact illustration. On their own, photogrammetry and other 3D capture technologies can struggle to accurately capture the fine details on the surface of an archaeological artefact that are necessary for a detailed, accurate interpretation of the object's past history. This is why I've been working on blending the accurate 3D forms of photogrammetry with the high level of surface detail captured through RTI (Reflectance Transformation Imaging) and the interpretive graphical analysis of traditional archaeological illustration. Through this, I am attempting to increase the ability of the digital 3D model to convey the diagnostic details necessary for a visual understanding of the artefact, while also reinforcing that this 3D visualisation is an interpretive representation and cannot be seen as purely objective.

Workflow

My photogrammetry workflow is fairly straightforward. I use a range of cameras for photogrammetry and RTI, depending on the specifics of the project: primarily a Nikon D7200, though I also use a Nikon D3300 and a Pentax K-5 as availability dictates.

As for software, I use Agisoft Metashape for photogrammetry, 3ds Max for any 3D modelling or animation, and Unity for real-time visualisations as well as for quickly examining work-in-progress materials. For the illustrations, I use technical pens supplemented with brushes.

The primary challenge that I've encountered with my methodology is accurately incorporating the RTI data into the photogrammetric model. The current process involves adding the RTI source photographs to the photogrammetric alignment, in order to situate the RTI camera in 3D space relative to the model. From this, I can take the model and camera positions into 3ds Max and project the RTI normal map onto the surface of the photogrammetric mesh. However, if the camera's position and field of view are not estimated accurately, the projection will not line up with the mesh surface. Usually this can be fixed by careful alignment in Metashape, but on occasion I have had to manually align the camera in 3ds Max and tweak the projection in Photoshop.
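
To give a rough sense of why the field of view matters so much here, a simple pinhole-camera calculation shows how quickly a small error shifts the projected texture. The NumPy sketch below is purely illustrative (the distances, resolution and angles are made-up numbers, not measurements from any particular setup):

```python
import numpy as np

def project(points_cam, fov_deg, width, height):
    """Project camera-space points (metres) to pixel coordinates with a
    simple pinhole model; focal length is derived from the horizontal FOV."""
    f = (width / 2) / np.tan(np.radians(fov_deg) / 2)
    x = f * points_cam[:, 0] / points_cam[:, 2] + width / 2
    y = f * points_cam[:, 1] / points_cam[:, 2] + height / 2
    return np.stack([x, y], axis=1)

# A point on the artefact 5 cm off-axis and 40 cm from the camera (illustrative values)
vertex = np.array([[0.05, 0.0, 0.40]])

print(project(vertex, 40.0, 4000, 3000))  # the "correct" field of view
print(project(vertex, 38.0, 4000, 3000))  # a 2-degree FOV error shifts the point by roughly 40 px
```

A drift of a few dozen pixels is more than enough to slide fine surface detail off the features it belongs to, which is why the careful alignment step is worth the effort.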

Photogrammetry

When possible, I prefer to position the object on a turntable within a lightbox. At the University of York we have a nice set of photography lights which makes this much easier, though I have had to make do with whatever diffused lights I can find for a few models.

Typically I'll record an object in a few passes with the camera fixed on a tripod, adjusting the angle between passes to ensure good coverage. I also use a delayed shutter to prevent any camera shake in the source images.

RTI Incorporation

In addition to the photogrammetric images, I have been incorporating RTI surface normal data to produce better surface details. The University of York has an RTI capture dome, so it’s a pretty quick process to generate the RTI across the primary faces of the object.

RTI dome

RTI works by recording an object dozens of times from a fixed location while changing the position of the primary lightsource between shots. From this, you can create a 2.5D image of the subject by extracting the lighting information from each image. The final result is a photograph of the object where you can reposition the lightsource, highlighting fine details.
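
The per-pixel surface information that falls out of this process can be sketched with a simplified, photometric-stereo style least-squares fit. To be clear, this is a stand-in for illustration only, not the PTM/HSH fitting that dedicated RTI software performs; the array names are placeholders:

```python
import numpy as np

def estimate_normals(images, light_dirs):
    """Per-pixel least-squares estimate of surface normals from images lit
    from known directions (a simplified Lambertian model, for illustration).

    images:     (N, H, W) array of greyscale captures
    light_dirs: (N, 3) array of unit light-direction vectors
    """
    n_imgs, h, w = images.shape
    intensities = images.reshape(n_imgs, -1).astype(np.float64)  # (N, H*W)
    # Solve light_dirs @ g = intensities, where g = albedo * normal per pixel
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)
    albedo = np.linalg.norm(g, axis=0) + 1e-8
    normals = (g / albedo).T.reshape(h, w, 3)
    return normals, albedo.reshape(h, w)

# Usage (hypothetical): pack the RTI captures and the dome's light directions, then
# normals, albedo = estimate_normals(images, light_dirs)
# and remap the normals from [-1, 1] to [0, 255] to export them as a normal map.
```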

The difficult part is in properly integrating the RTI with the photogrammetric model. By including photos taken with even, diffused lighting in Metashape, I am able to situate the RTI image in 3D space, relative to the 3D model. This usually works fairly well, especially for relatively flat surfaces, but Metashape has had difficulty in aligning shots taken with a large depth of field. This is a bit awkward, as the really important bits to capture, such as the edges of a handaxe, are then very difficult to properly align.
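
For anyone who would rather script this step than work through the interface, Metashape Pro also exposes a Python API. A minimal sketch might look like the following; the project path and filename are hypothetical, and the exact call options vary between Metashape versions:

```python
import Metashape  # requires the licensed Metashape Pro Python module

doc = Metashape.Document()
doc.open("handaxe_project.psx")  # hypothetical existing photogrammetry project
chunk = doc.chunk

# Add the evenly lit RTI frame(s) alongside the photogrammetry images, then
# re-run matching and alignment so the RTI camera is placed in the same space.
chunk.addPhotos(["rti_diffuse_top.jpg"])  # hypothetical filename
chunk.matchPhotos()
chunk.alignCameras()

doc.save()
```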

If everything goes smoothly, I am able to extract a normal map from the RTI image and then project this map onto the mesh UVWs, effectively adding the crisp surface data onto the low-res model in the form of a detailed normal map. First, though, the texture needs to be calibrated to the contours of the mesh by combining it with a set of surface normals rendered from the mesh onto a flat plane oriented to the RTI image. This rendered normal map contains the large forms that were captured through photogrammetry. By inverting its red and green channels in Photoshop and blending the result with the RTI normal map at 50% opacity, we can extract the fine surface detail from the RTI normal map without including the basic forms of the object.

Left: RTI normal map. Middle: Inverted mesh normals. Right: Combined texture.
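
In case it helps to see the channel arithmetic spelled out, here is a small Python sketch (using Pillow and NumPy) of that Photoshop step: invert the red and green channels of the mesh normals, then average the result with the RTI normal map, which mirrors a 50%-opacity normal-mode layer blend. The filenames are placeholders, and both maps are assumed to be exported at the same resolution and orientation:

```python
import numpy as np
from PIL import Image

rti = np.asarray(Image.open("rti_normal.png").convert("RGB"), dtype=np.float32)
mesh = np.asarray(Image.open("mesh_normal.png").convert("RGB"), dtype=np.float32)

# Invert the red and green channels of the mesh normals (flip X and Y),
# leaving the blue (Z) channel untouched.
inverted = mesh.copy()
inverted[..., 0] = 255.0 - mesh[..., 0]
inverted[..., 1] = 255.0 - mesh[..., 1]

# Averaging with the RTI normal map cancels the coarse forms shared by both,
# leaving the fine RTI detail centred on the neutral normal-map colour.
combined = 0.5 * rti + 0.5 * inverted

Image.fromarray(combined.astype(np.uint8)).save("combined_normal.png")
```

Strictly speaking the blended vectors should be renormalised before use, but for display purposes the straight average is usually close enough.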

Illustration

To illustrate the models, I use a combination of digital and physical techniques. First I sketch in the rough forms of the object, plotting out the locations and directions of any flaking or other detail. This is done digitally in Photoshop using an old drawing tablet, and is constantly checked by applying the sketch to the model in Unity. This process allows me to visualise the sketch as it applies to the 3D model; one of the downsides of drawing the object to fit the model is that the flat illustration is always going to be somewhat warped, and it only reads correctly once it is applied to the model. Once everything has been sketched, I print out the plan and trace over it with archival ink on Permatrace.

Working with traditional pen and ink allows me to introduce a lot more gestural expression into the final illustration, compared with performing it all digitally. The final illustration is then scanned, edited and applied to the final model alongside the finished normal map.

Tweaks and Double Checking

Throughout this process, I am constantly checking and double checking everything to ensure that all of my work properly aligns with the contours of the model and the surface of the original object. When discrepancies arise in the textures, I’ve found that the Puppet Warp tool in Photoshop is the best way to get everything back on track.

Looking Forward

The software for producing photogrammetric models and other 3D visualisations is easily accessible, and while it might seem quite daunting trying to learn how to use a 3D modelling package from scratch, there is an enormous library of tutorials and guides all over the internet that can help you get started.

More importantly though, I feel the best way to get started with photogrammetry is to think about why you are producing the models. Photogrammetry, like photography, is never purely objective. A huge number of artistic and archaeological decisions are made when you record an object, and this artistic intention should be made more visible in your work. Photogrammetry is a medium through which artistic, as well as scientific, interpretations of a subject can be generated. As such, be creative with your 3D work! Try new things and incorporate ideas and influences from a wide range of sources, including purely artistic ones, in your models.

While my project started out as a way to create more embodied, interactive illustrations, I found that there were several parallels between archaeological visualisations and Minimalist art, with an emphasis on the reduction of complex ideas to simple forms and the reliance on external information to produce a greater whole. By incorporating a wider range of influences in our visual work, and acknowledging its artistic value, I hope that heritage organisations can create a whole new range of stimulating archaeological knowledge.

Hugh’s Website

About the author

Hugh Gatt

Graphics Officer at Britannia Archaeology



  • noel carroll says:

    You have hit upon a good thing here Hugh, an excellent use of mixed media. The combination of illustration and photogrammetry allows the viewer a greater understanding of the manufacturing process.
