Johnson Martin and Duncan Irschick from digitallife3d.org take us through their amazing work capturing 3D freeze-frames of live animals using photogrammetry – and then making them move again!
About the Digital Life Project
The Digital Life Project was started in November 2016 as a non-profit initiative within the University of Massachusetts at Amherst with one simple goal: to create accurate, high-resolution 3D models of life on earth. Prior to its inception, the art and science of creating 3D animals was largely the domain of 3D animators. While hand-crafted animation will remain a vital practice, the possibility of scanning real animals remained largely untapped.
A digital “ark” of living organisms would provide an invaluable resource for educators, scientists and conservationists interested in recreating and studying life in all of its manifestations. Imagine a website or VR experience that could put millions of living animals at the fingertips of students. Or consider how scientists could use digital animals as a 3D museum of sorts, an exciting possibility in an era when traditional museums are financially challenged and often unable to meet their basic mission of documenting life on earth.
This idea’s fruition can be traced to developments in 3D scanning over the last few years that have made it viable to capture living animals through photogrammetry, something that until recently was impossible. One of the main pillars of this work is the Beastcam™ technology developed at the University of Massachusetts at Amherst: a series of custom multi-camera devices designed for capturing a variety of living animals, from small insects to large sharks and sea turtles.
Pushing the Project to the Next Level
Recently, we’ve been interested in pushing this project to the next level. We began this process in August, when one of us (Duncan Irschick), the project director, contacted the other (Johnson Martin) about building a more powerful resource: physically accurate animations based on the scans the project was already producing. Prior to this, most of the 3D scans were static, hemispherical views of organisms.
Since then, we’ve been working on developing the proof-of-concept 3D model and animation presented in this post with the intention of basing its look and movement on scientific data. The idea behind this approach is to apply scientific rigor not only to scanning real organisms but also to reconstructing their movements from data. Thus, artists can use a more accurate representation of an animal in their projects, and educators can use the model in demos knowing that it faithfully represents how the animal looks and acts. The advantage of this perspective is that scientists, educators, conservationists, and animators can all use the same model as a resource, rather than it catering to a specific user group.
Developing the Proof-of-Concept
For our first model, we chose to use one of Digital Life’s most popular photogrammetry models on Sketchfab, the Tokay Gecko. The animal had been previously captured using the Beastcam ARRAY technology. The system of 30+ cameras enabled very high resolution photo-capture (image below), and provided us with substantial 3D data to begin the process.
The images from the capture were processed and reconstructed using Autodesk ReMake before being published on Sketchfab.com and the Digital Life website. ReMake was a great choice for us and gave us the ability to work with higher resolution scans than non-commercial hobby applications such as 123D Catch.
The 3D scan was an ideal starting point for the project and furthered our goal of using as much real-world data as possible for the project. From there, we began processing the model into a full-fledged animation.
Looking at the scan, it was immediately clear that we’d need more data for modeling and texturing the underside of the animal. Luckily, Big Mama, the Gecko captured in the scan, was still in Duncan’s lab at the University of Massachusetts Amherst. So with the help of several UMass students in his laboratory, we photographed the ventral side (underside) of the animal. We also used video footage of her climbing a glass surface as reference for reconstructing her movements. From there, we began re-creating the model using the 3D scan and the images we took.
Above are some of the images and videos we used as reference for realistically re-creating the animal’s body shape, scalation and color in the 3D software as an aid to the initial 3D scan. We used images taken through a glass sheet to accurately re-build the ventral side, and used other top-side images to tweak the color of the texture map and fill in any missing or blurry parts of the dorsal (top) side.
We used Blender, the open source 3D creation suite, for the majority of the project, with Adobe Photoshop and ShaderMap 4 as supplements for texturing and baking. We chose Blender as our primary tool because of its flexibility as a software package and its availability as an open source project. The process went essentially as follows (leaving out a few failed approaches):
- Retopology: Remodeling the existing geometry so it is better optimized for animation, and modeling any missing parts, such as the underside of the animal, the tail, and the toes.
- UV Mapping: We knew that the model would need very clean UV mapping (the coordinates used for applying textures) once we got into texture painting, so we made sure to spend plenty of time optimizing the map in Blender.
- Baking: From there, we started the process of baking the color and normal maps to the new UV map from the old model’s UV map. This preserved most of the detail lost in retopology, so we didn’t feel that a displacement map was necessary. After baking, we took the color map into Photoshop to remove any residual lighting from the scan data.
- Texture Painting: There was a lot of missing detail from the underside that needed to be filled with the reference photos taken earlier. Also, some minor errors from baking needed to be fixed. We also took the color map into ShaderMap to develop the missing normal info on the underside.
- Neutral Pose: We created a simple FK rig to move the Gecko from its original position into a more neutral pose for further rigging and animation. Afterward, some tweaks to the texture maps were needed to fix some seams that had become visible.
- Rigging: The FK rig developed for the neutral posing was too primitive for the level of animation we needed, so we developed an IK rig for each limb and toe. It may seem excessive, but due to the unique way Tokay Geckos peel their toes from the outside in, we needed as many bones as we could get.
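To give a feel for what an IK limb setup does under the hood, here is a minimal, self-contained Python sketch of an analytic two-bone IK solver using the law of cosines, the basic math behind the IK constraints found in tools like Blender. This is an illustrative toy, not our actual rig or Blender’s solver; the function name and angle conventions are our own.

```python
import math

def two_bone_ik(target_x, target_y, len1, len2):
    """Solve a two-joint limb (e.g. upper + lower leg) so its tip
    reaches a 2D target point.

    Returns (root_angle, elbow_angle) in radians. Raises ValueError
    if the target is outside the limb's reach.
    """
    dist = math.hypot(target_x, target_y)
    if dist > len1 + len2 or dist < abs(len1 - len2):
        raise ValueError("target out of reach")
    # Bend at the middle joint, from the law of cosines.
    cos_elbow = (len1**2 + len2**2 - dist**2) / (2 * len1 * len2)
    elbow = math.pi - math.acos(cos_elbow)
    # Root angle: direction to the target minus the triangle's
    # interior angle at the root joint.
    cos_inner = (len1**2 + dist**2 - len2**2) / (2 * len1 * dist)
    root = math.atan2(target_y, target_x) - math.acos(cos_inner)
    return root, elbow
```

With a rig like this, the animator keyframes only the foot target; the solver places the intermediate joints every frame, which is why IK was worth the extra setup over the earlier FK rig.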
Animating the Gecko
Animating the model proved to be one of the most challenging parts of the project. We knew from the beginning that we had placed a lot of responsibility on ourselves by presenting our work as scientifically accurate. Fortunately, one of us (Duncan) has studied gecko locomotion for over 25 years and had a wealth of data and knowledge to apply toward creating lifelike motion. With this in mind, we worked with as much real data as we could get our hands on, including online reference footage and footage recorded from the animal in the lab. These data proved invaluable for faithfully recreating the animal’s movements, such as the curling of the toes, which geckos use to engage and disengage their toes from a surface.
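The toe curl is a good example of how a single animation control can drive many joints. Below is a minimal Python sketch, purely illustrative and not our actual Blender rig, of spreading one 0–1 “peel” value across a toe’s joints so the tip disengages first, mirroring how geckos peel their toes from the tip inward; the joint count and maximum angle are made up for the example.

```python
def toe_curl_angles(peel, n_joints, max_angle=60.0):
    """Distribute a 0..1 'peel' amount across toe joints, tip first.

    Geckos disengage their adhesive toes by peeling from the tip
    inward, so distal joints should reach full curl before proximal
    ones begin. Returns per-joint curl angles in degrees, tip first.
    """
    angles = []
    for i in range(n_joints):
        # Each joint owns a 1/n_joints slice of the peel range;
        # joint 0 (the tip) animates first, the base joint last.
        start = i / n_joints
        local = min(max(0.0, (peel - start) * n_joints), 1.0)
        angles.append(local * max_angle)
    return angles
```

In practice a driver like this means the animator keyframes one value per toe instead of hand-animating every joint on every frame, which mattered given how many toe bones our rig ended up with.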
Sketchfab was our main resource for collaborating on the model because it allows artists to easily upload models directly to the website (with the permission of the model holder) and ask for feedback within a couple of minutes. This made the often tedious back-and-forth feedback cycle almost effortless and certainly sped up our progress.
For our final animation, we chose to animate the Gecko crawling upwards on a vertical glass surface. We chose this movement because it highlights what makes Geckos so unique compared to other reptiles: their ability to climb steep, smooth vertical surfaces with ease, an ability Duncan’s lab has researched extensively.
The Tokay Gecko
We finished the animation by the end of October and uploaded the final model to Sketchfab for publication. Sketchfab has served as the best platform for us to publish on, since it’s already such a thriving 3D and VR community and allows users to download our models for noncommercial use in 3D, educational, and VR applications.
The final model was a big achievement for us. It proved that we could feasibly take a 3D scan of a live animal, and accurately “bring it to life” in a way that preserved its movements and overall shape and posture.
We hope that this model can be used by many others for a wide range of purposes. Perhaps more importantly, we are excited about the broader process of live-animal scanning and full-body 3D reconstruction with accurate recreation of movements. Once we have perfected this method, it raises the prospect of literally transporting an organism into our laptops for further analysis. With such models, scientists can model the locomotion of various species in a new way and test hypotheses about movement, for example by altering the limb dimensions or other body parts. Educators, on the other hand, can use the model to teach students about animal body form, movement, and biodiversity; with this 3D gecko, for instance, they can demonstrate how geckos climb in a new, interactive way. Finally, 3D artists and animators can use the model as a reference for creating lifelike animals for non-profit films or other multimedia presentations.
We also believe this model is a step forward in bridging the disciplines of 3D modeling and animation with functional morphology. By allowing the scientist to take advantage of the expertise of the artist, and the artist the knowledge of the scientist, both can achieve a better result.
Now that we have a solid proof-of-concept and workflow, we hope to move forward by creating similar animations for a wider range of organisms. Eventually, such animations could form the basis of a “Biology Lab of the Future” in which people everywhere can access and learn about animal form and movement. We’ve already started applying this workflow to the Sea Turtle project that the Digital Life team has been working on for the past few months, and we hope to have more news to share in the coming months! The possibility of “recreating” life in all of its amazing complexity is now within our reach, and we are excited to continue to explore this space.
If you want to learn more about the Digital Life project and everything we’re doing in the realm of conservation and education, visit our website or any of our social channels: digitallife3d.org / Sketchfab / Facebook / Twitter / Vimeo