Hi, my name is David Fletcher. I have been working in the video games industry at Sony Interactive Entertainment for 18 years as an environment and technical artist. About eight years ago we started looking at photogrammetry as a tool for creating game assets more quickly with a particular focus on organic and complex objects like rocks and trees.
Starting Out And Drones
At the same time as experimenting with photogrammetry at Sony, I began to use it in my spare time to record heritage locations near me in Cambridge and London. I have a non-professional but deep interest in history and archaeology and really enjoy the overlap between computer graphics and heritage. I began to share my 3D scans on my personal website with a desire to distribute the photos and 3D models as freely as possible. Later I found sharing them through Sketchfab to be far quicker and the viewer to give much higher quality results so I now publish all my models here.
My early models were put together in either Agisoft Photoscan or Autodesk Recap/Photofly and involved little or no hand clean-up. They were also lower resolution in triangle count and texture size than today's, which reflects advancements in my own processing hardware as well as the capabilities of WebGL and Sketchfab’s viewer. I also used to own a drone with an attached GoPro camera and produced some scans with that setup. Having children and becoming a stay-at-home dad, as well as drone usage becoming more antisocial, put an end to my drone photography. I now concentrate on ground-level and monopod scanning.
My photogrammetry work is ad hoc and hobby-focused; the places I tend to photograph can be difficult to access due to remoteness or difficult to photograph well due to crowds and traffic. I began to think of this as Guerrilla Photogrammetry: something done quickly and often, always looking to improve. I keep a custom Google map of all the locations I would like to scan and delete each marker once I have scanned it. This helps me add locations that I hear or read about without worrying about when I might be able to visit or forgetting to do so. I follow a large number of historians and bloggers on Twitter who often mention locations I’d never heard of but think would make interesting 3D scans. I also add to my map lists of heritage features that would make a great Sketchfab collection, such as London Metropolitan drinking fountains or the remaining sections of London’s Medieval and Roman wall.
Amara West at The British Museum
A few years ago I also began working in my spare time on a project at the British Museum, under the direction of Neal Spencer, to document the archaeological excavation of the ancient Egyptian town of Amara West in Sudan. This is a fascinating project that has involved merging hundreds of separate room scans into one giant scene that is then published using the Unity game engine. In time it will hopefully become more publicly explorable in its entirety.
On Sketchfab my most popular models have proven to be interiors of historic buildings, and my ability to work with large datasets has facilitated this. I’m particularly proud of the work I undertook scanning rooms at the Charterhouse in London. I worked closely with Kayden Rodger, who was a visitor host there at the time. She provided excellent access to the locations during quiet times and also wrote the historical information to attach to the models. This allowed me to tailor the models to exactly what they wanted to show off about their buildings and heritage.
St Giles Cripplegate
I also really enjoyed working on the scan of St Giles-without-Cripplegate Church in the Barbican, London. I had previously scanned St Peters Church in Devon and St Bartholomew the Great in London, which taught me a great deal about how to photograph churches for photogrammetry. They can be dark and difficult to access but at St Giles I was very lucky to get full access to any part I wished to visit with the help of Jake Kirner, the church administrator.
I find that a key part of the photogrammetry process is being good with people. Being friendly, polite and ‘not a bother’ really helps to make sure you can spend the time you need to get good and thorough photos. With Jake’s help, I managed to get into the store and utility rooms under the church as well as the off-limits staircases and chambers in the tower. I think viewers of the final model particularly enjoy seeing these spaces revealed. For the custodian of the building I think it is valuable too. They may not have time or budget to give these spaces the love they need, so recording them in 3D may preserve a moment in time that would otherwise be lost. A record of simple things such as what old machinery was lying around in a usually ignored room may prove useful in the future when it is no longer present. It may tell a story of the usage of the location that we don’t yet know is important.
It’s worth noting that I had to process the central nave of the church in two halves, because the full photo set would have required more than 128GB of RAM to process at normal quality in Metashape.
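That two-halves trick can be sketched in a few lines of Python. This is a minimal illustration, not part of my actual Metashape project: the photo names and the overlap fraction are made up, and the key idea is simply that the two batches must share some photos so the aligned chunks can later be merged on common tie points.

```python
def split_with_overlap(photos, overlap=0.15):
    """Split an ordered photo list into two batches that share a
    central overlap region, so the two separately aligned chunks
    can later be merged on common tie points.
    `overlap` is the approximate fraction of photos shared."""
    n = len(photos)
    half = n // 2
    pad = max(1, int(n * overlap / 2))   # photos shared on each side of the midpoint
    first = photos[: half + pad]
    second = photos[half - pad :]
    return first, second

# Hypothetical photo set, shot in a continuous walking order
photos = [f"IMG_{i:04d}.ARW" for i in range(1, 11)]
a, b = split_with_overlap(photos, overlap=0.2)
```

Each batch then fits within the machine's RAM budget, and the shared photos give the software enough common geometry to align the two halves back together.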
I’m hugely interested in the Thames foreshore and I keep a careful eye on the tide times so I can visit and photograph there as often as possible. The foreshore is a place of archaeology in action where sites and finds are being exposed, eroded and lost in real-time. With that in mind, photogrammetry feels like the perfect tool for aiding in documenting these fast-disappearing heritage assets. I feel particularly proud of the work I’ve done scanning locations here, such as the Victorian slipways, and will do all I can to help Thames Discovery in their brilliant and invaluable research.
How I Work With Big And Messy Data
Within the last year, I’ve massively increased the quantity and quality of my 3D scans. These improvements came about through two advancements.
Firstly, I built a very powerful computer for processing my 3D scans in Agisoft Metashape and Reality Capture. The computer is an Intel i9-9900X with 128GB of RAM and 2 x GTX 1080 Ti graphics cards. It has several standard SATA SSDs as well as NVMe drives for fast data reads and writes. The large amount of RAM is essential for processing locations shot with thousands of photos. For capturing the photos I use a Sony a6000 mirrorless camera, mostly with a 10-18mm wide-angle lens. This is a good value camera for the quality, and the lens works really well in low-light interiors. In an ideal world, I’d use a Sony a7sII for shooting interiors because its low-light sensor would work especially well.
Secondly, I have improved my workflow for manually editing models and iterating on the processing result. These are my steps for producing complex room interior models.
- I photograph the interior room by walking up and down shooting photos straight forward. I then repeat but with the camera pointing three quarters up. I then repeat with the camera pointing three quarters down. I then photograph arcs and circles around key features in the space like pillars and statues. Whenever possible I try to get up high to shoot down into the space. In churches, this might mean climbing into pulpits, balconies, and triforiums.
- I download all the photos onto my processing computer and sort them into a consistent folder structure.
- If there are fewer than 2500 photos I process in Reality Capture due to my license having that restriction. If more photos, I use Metashape. I currently prefer Reality Capture because it supports processing raw (ARW) photos and often gives higher quality results, particularly for thin features. Metashape works well for large data sets with multiple sections that need aligning and combining. Sometimes I’ll even process in both and then mix together whichever bits of mesh I think are of the highest quality.
- Once the model is ready I export it as an obj and jpg texture and then import it into Autodesk Maya. I group the model and then centre and rotate it so it is orientated correctly. It is important to do this on a group above the model and maintain this transform so that it can be undone later.
I delete any free-floating and unneeded bits of mesh that aren’t representative of the source. I then delete the areas of processing noise where the photogrammetry software has been unable to calculate depth properly such as on plain white walls, windows, and shiny surfaces. This leaves holes in the model which I fill using a mixture of the ‘Bridge Edges’ and ‘Fill Hole’ tools. This is time-consuming and fiddly work but essential for producing clean results.
For very large areas of distorted geometry, such as poorly scanned or shiny floors, or church pews (my personal nemesis), I create new geometry using cubes and planes to merge into the model. I make sure these have enough subdivisions for the photogrammetry software to UV and texture them correctly later. I have also sometimes used the sculpting tools in Maya to smooth and ‘fix’ areas of mesh. ZBrush can be used for a more advanced version of this process.
The more that this hand editing is done, the more I recommend noting where you have done it in the Sketchfab description. This will help viewers of your model understand what is pure scan and what is artistic interpretation, something that is key in the analysis of heritage data.
- I untransform the model using the group node described earlier, re-export it as an obj from Maya and then re-import it back into Metashape or Reality Capture. I then unwrap it (Reality Capture only) and recalculate the texture. This allows the filled holes and new geometry to be textured correctly. This is the key iteration loop that allows me to improve my models beyond what the photogrammetry software alone is capable of.
- The model is then re-exported, straightened in Maya again, checked for any remaining errors and exported as a new obj. This final obj and its texture are then uploaded to Sketchfab.
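As a small illustration of the download-and-sort step at the start of this workflow, here is a minimal Python sketch that files a shoot’s photos into per-format subfolders. The raw/jpg folder names are my own assumption for the example, not a requirement of Metashape or Reality Capture, and the demo runs on throwaway dummy files.

```python
import shutil
import tempfile
from pathlib import Path

def sort_shoot(src: Path, dest: Path) -> None:
    """Copy a shoot's photos into raw/ and jpg/ subfolders under dest.
    Folder names are illustrative, not required by any processing package."""
    for photo in sorted(src.iterdir()):
        ext = photo.suffix.lower()
        if ext == ".arw":                  # Sony raw files
            sub = dest / "raw"
        elif ext in (".jpg", ".jpeg"):     # camera JPEGs
            sub = dest / "jpg"
        else:                              # skip videos, sidecars, etc.
            continue
        sub.mkdir(parents=True, exist_ok=True)
        shutil.copy2(photo, sub / photo.name)

# Demo on a throwaway directory with dummy files
src = Path(tempfile.mkdtemp())
dest = Path(tempfile.mkdtemp())
for name in ("DSC00001.ARW", "DSC00001.JPG", "clip.MP4"):
    (src / name).touch()
sort_shoot(src, dest)
sorted_files = sorted(p.relative_to(dest).as_posix()
                      for p in dest.rglob("*") if p.is_file())
```

Automating even this simple step keeps every session consistent, which matters when a single location can span thousands of photos across multiple visits.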
Pushing The Limits
I have also been experimenting with what is possible with some of my models. Recently I wondered whether a photogrammetry scan of a building as large as St Paul’s Cathedral (inside and out) would be possible. Luckily for me, photography is allowed at St Paul’s, and buying a ticket lets you return as often as you want for a year. I therefore visited the cathedral many times for short sessions just after morning opening. I wandered around like a tourist, each visit concentrating on a particular area. I shot photos upwards, forwards and downwards. I often had to wait for other visitors to move out of shot, both for their privacy and to get the best scan possible. You have to be very patient for this to work! I then aligned each set of photos in Metashape and aligned the sets together using hand-placed markers.
The quality so far isn’t as good as I would like, but I will try to improve it in time. The crypt of St Paul’s is very dark, so my handheld, quick shooting approach has sometimes led to blurry photos, which obviously don’t align well. Again, in an ideal world a camera with a highly light-sensitive sensor like the Sony a7sII would help enormously (if anyone wants to donate one to me, you’d make the biggest improvement to the quality of my scans!). I still need to photograph the exterior of the cathedral, but I’m waiting until winter when the surrounding trees have lost their leaves. A further problem is that items such as pianos and chairs often move around between separate scanning sessions, so it is important to look out for those. I’ll be returning to the interior again soon to shoot more areas, and I’d love someone from the cathedral to help me with access to areas usually only accessible on private tours, if at all possible.
I’m also looking at how to improve the user experience of the location on Sketchfab. I do try to set up the scene to work in VR. The Museum of London Docklands has a recreation of an old London neighbourhood. In this area, I scanned a tavern interior. Since they also have an audio recording of what the tavern may have sounded like, after scanning I recorded a short movie in the space and then extracted the audio. The audio was added to the Sketchfab model and I think it really helps recreate the experience (particularly in VR) that the museum was trying to achieve.
I love cross-section drawings and have started to recreate these in 3D scan form. Years ago I scanned the exterior of The Jewel Tower, London, and recently I returned and scanned the interior. Putting the two scans together allowed me to cross-section the model and animate it opening using Maya. I duplicate the model, create a couple of locators to mark where the cut line will be, and then cut each copy in half using the Multi-Cut tool (set to delete faces), which produces the two halves. To fill the negative space inside the cross-section I use the ‘Bridge’ and ‘Fill Hole’ tools. I love the final result and will be doing more of these.
Modern Era Heritage
I have now also turned my attention to surviving ruins from World War 2. I’ve started mapping pillbox locations with the intention of scanning plenty of them. Pillboxes are small fortifications, usually made from thick concrete or brick and often situated near rivers, roads and railways, that were designed to defend against a potential Nazi invasion of Britain. For example, I recently visited Woolwich on the Thames to scan some pillboxes, one of which is beginning to crumble into the river.
I will continue to scan around London as frequently as I can, both indoors and out. If you have any tips for where to look next, please get in contact.