In this post, reposted with permission from Pete McNally’s Digital Human Experiments, we learn how this high-quality scan was taken to the next level through animation.
Took me a while to get this update together! This time, I’d like to share an experiment in single-camera photogrammetry. I was very inspired by some of James Busby’s recent work on Sketchfab, so much so that I went back to try scanning another human subject. My Dad dropped by the house one evening and I asked him to sit for me outside, just as the light was fading. I was fairly sure the effort would be wasted due to the failing light, but photogrammetry has surprised me before, so I carried on. I shot about 45 photos in raw, handheld on my phone, a Samsung Galaxy S8. From there, I processed the raw files in a free version of DxO, correcting vignetting and chromatic aberration and recovering shadows and highlights where possible, but leaving lens distortion untouched. Here is the mesh output at normal detail in Reality Capture.


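As a side note for anyone who would rather script the raw-development step than run it through DxO’s interface, a rough equivalent of the basic demosaic-and-export pass can be put together in Python with rawpy and imageio. This is only a sketch under my own assumptions (the folder names are placeholders, and it does not reproduce DxO’s vignetting or chromatic aberration corrections):

```python
# Minimal sketch: batch-develop raw captures into 16-bit TIFFs for photogrammetry.
# Assumes a folder of .dng files from the phone; paths are placeholders.
# Vignetting/CA correction, as done in DxO, would still need a dedicated tool.
from pathlib import Path

import imageio.v3 as iio
import rawpy

RAW_DIR = Path("raw_photos")       # hypothetical input folder of .dng captures
OUT_DIR = Path("developed_tiffs")  # hypothetical output folder for Reality Capture
OUT_DIR.mkdir(exist_ok=True)

for raw_path in sorted(RAW_DIR.glob("*.dng")):
    with rawpy.imread(str(raw_path)) as raw:
        # 16-bit output keeps as much shadow/highlight detail as possible;
        # lens distortion is deliberately left alone, as in the original workflow.
        rgb = raw.postprocess(
            use_camera_wb=True,   # keep the phone's white balance for consistency
            no_auto_bright=True,  # avoid per-image exposure shifts between frames
            output_bps=16,
        )
    iio.imwrite(OUT_DIR / f"{raw_path.stem}.tiff", rgb)
```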
I laid out UVs on the low-poly mesh in 3ds Max and baked albedo, thickness and normal maps from there, then went over to Knald to generate high-frequency detail normals, AO and cavity maps. Substance Painter and Photoshop were used to paint out shadows and highlights, fill in gaps in the textures, and hand-paint specular and glossiness maps to control which parts of the skin would look oily. I used Marmoset Toolbag 3 for look development; check out some of the textures below.


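The shadow and highlight clean-up above was done by hand in Substance Painter and Photoshop, but as a rough illustration of the idea, here is a small NumPy sketch that compresses the darkest and brightest parts of a baked albedo map toward the midtones. The file names and strength value are placeholder assumptions, and it is nowhere near a substitute for proper de-lighting or hand painting:

```python
# Rough sketch: soften baked-in shadows and highlights in an albedo texture by
# pulling the luminance extremes toward the texture's mean colour.
# File names and the strength value are placeholders, not from the original workflow.
import imageio.v3 as iio
import numpy as np

# Hypothetical baked albedo texture; keep only the RGB channels.
albedo = iio.imread("head_albedo.png")[:, :, :3].astype(np.float32) / 255.0

# Per-pixel luminance (Rec. 709 weights) used to judge how "extreme" a pixel is.
lum = albedo[..., 0] * 0.2126 + albedo[..., 1] * 0.7152 + albedo[..., 2] * 0.0722

# Blend each pixel toward the mean colour, more strongly the further its
# luminance sits from mid-grey - a crude stand-in for painting out shadows/highlights.
strength = 0.5  # 0 = no change, 1 = full flattening of the extremes
weight = np.clip(np.abs(lum - 0.5) * 2.0, 0.0, 1.0) * strength
mean_colour = albedo.reshape(-1, 3).mean(axis=0)
flattened = albedo * (1.0 - weight[..., None]) + mean_colour * weight[..., None]

iio.imwrite("head_albedo_delit.png",
            (np.clip(flattened, 0.0, 1.0) * 255).astype(np.uint8))
```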
So I ended up modelling a tight-fitting wooly hat over the top of the head and texturing it traditionally, and I was a lot happier with the result. Here is the head model, screen-grabbed from Sketchfab:

Bonus: a blooper with displacement on the wooly hat.
Thanks, Pete!




