The Making of Playroach

3D Artist Christian Lipski stops by to talk about how he created his Playroach scene, which is a composite of a 3D scan and traditional 3D modeling. This article was originally published on Christian’s website.

A few months back I decided to create a 3D scene based on one of my Inktober drawings. I anticipated it would take a lot of time to finish, and of course it took even longer than my initial estimate. Now that I’m done, people have started asking all sorts of questions about the project, so I decided to write up a high-level tutorial. I won’t go into too many technical details: for every relevant sub-topic there is at least one in-depth tutorial on the internet, and I’ll do my best to link to them all.

I had three main goals in mind when I started out:

  1. Create a high-detail reconstruction (good enough to be printed at human life size) of a very small and challenging object (<5 cm).
  2. Combine the 3D scan with traditionally modeled objects and achieve a homogeneous look for the scene.
  3. Keep track of my progress and make sure I could actually finish. I counted the hours I spent on each individual step to give you an idea of how the workload breaks down into individual tasks.

Concept (8h)

The inspiration came on October 15th, 2017. I was participating in the Inktober daily drawing challenge; if you don’t know what Inktober is, watch Jake Parker’s explanation on youtube.com.

You basically commit to drawing one piece of artwork each day with ink and posting it online.

I got really pumped when the suggested topic for October 15th was #filthy.

I had recently gotten back into the habit of drawing insects, so I spent the whole evening drawing a human-sized cockroach that enjoys reading his “Playroach” magazine in the middle of his filthy apartment (the TV is still running, a half-eaten dinner is still on the table).

My Inktober sketch from Oct 15, 2017

At first I just wanted the cockroach to look as disgusting as possible, but I somehow grew fond of him and decided to invest more time in the “Playroach” concept. The next day I added a simple background, and in early November I colored the whole thing with Spectrum Noir™ markers.

Colored “Playroach” concept artwork from November 2017

The idea of a digital makeover started developing, so in early 2018 I began rebuilding the whole scene in 3D. I really liked the concept; it was the Inktober drawing I got the most feedback and emotional reactions for. It also seemed to be an ideal subject for my photogrammetry experiments.

Initial Research (10h)

The most critical part of the whole project was the actual scan of the cockroach. From past projects I knew that my camera gear would not work that well with small objects, so I invested in a used Tamron macro lens for my Canon EOS 1000D. I also bought some extension tubes for the lens. Those provided the best value for money when it comes to reconstructing small objects; anyone interested in similar work who does not want to buy a whole new lens should try them. Another alternative is screw-on collecting lenses. These produce very good-looking macro photos, but I can’t really recommend them, since the resulting images don’t conform well to the pinhole camera model used by RealityCapture and other photogrammetry software. I ended up using the weakest collecting lenses for the close-ups of the cockroach’s head, but got some serious alignment problems and artifacts in the texture. After ordering all this gear, I did some small photogrammetry sessions on very few images just to see if I could expect usable results. In the end I convinced myself that the project would succeed.

The cockroach arrived in a test tube; the picture was taken after I accidentally broke off a few limbs

Capture Stage Construction (6h)

I had previously used a rather big 10″ revolving table for my photogrammetry projects. This time I bought a small aluminum version on Amazon. I wanted to be able to see the cockroach from all sides without picking it up and moving it physically, so I designed a cockroach holder in Blender and had it printed on Shapeways. One benefit was that it also provided a scale reference for the photogrammetry software.

I used some hot glue to hold everything together and attached a needle to hold the cockroach in place. I put the turntable on movable L-brackets that allowed me to rotate it around another axis. This way I could change the pitch and yaw of the whole setup without physically touching the camera or the cockroach.

Capture stage: you can see the cockroach on its holder on the right, and the EOS 1000D with the extension tubes, macro lens, and polarization filter in the center. In the background is my lighting soft box with its own polarization filter

Cross-Linear Polarization Lighting (6h)

Next on my list was lighting: I was interested to see whether cross-polarization photography helps in reducing the amount of specular highlights in the scene. It turned out to be a vital tool for the project. The method is relatively inexpensive: you need a screw-on polarization filter for your DSLR and a linear polarization filter for your lights. Unfortunately I only had my very big soft boxes for lighting, and the filter I bought did not cover their full area, so I crafted a contraption out of cardboard (see the photo of the setup above). Next time, I would definitely experiment with different lights. The soft boxes were quite bright but could not be placed too close to the cockroach (because of their size). I would recommend trying other options, such as LED ring lights around the lens. If you’re interested in using linear polarization for your own project, here’s a good hands-on tutorial.

Photography and Focus Stacking (25h)

One of the most important steps in my processing pipeline was focus stacking, which produces sharp and clear images from all directions. I had previously read a few blog posts about it, but found this tutorial of a hamster skull the most thorough and helpful:

The reason I had to rely on this technique is that capturing such a small object with such a big optical system produces a lot of out-of-focus blur. Even with a very small aperture, the blur is quite substantial, and other harmful effects kick in: unbearably long exposure times and diffraction artifacts.

Examples from my early focus stacking experiments. Top: one of the many input images; the fly is in focus, the aluminum background is blurred. Bottom: by combining in-focus areas from all images, all pixels appear to be “in focus”

In the end I set the aperture to values around f/16 and the ISO to 100, and kept the exposure time variable so I would get a well-exposed image, usually somewhere between 1 and 5 seconds. As mentioned earlier, I shot with my Canon EOS 1000D, the Tamron macro lens, and one or more extension tubes; for a few close-ups I used an additional screw-on collecting lens. I controlled the camera from my PC using Helicon Remote, which proved to be a very helpful tool for managing all these different settings. The only annoying thing was that the focus setting could not reliably be set to an absolute value; it would only accept increments and decrements as input. Since the step size was somehow not constant, I had to manually readjust the focus on the lens every few photos. If I ever fully automate the procedure, I will do some more research to fix that issue.

For every perspective I took 5 to 20 photos with different focus settings. These could easily be fed into Helicon Focus for batch processing. Helicon would reliably produce a stacked version of each set of images: it kept one image as a reference and pasted in sharper regions from the other images. As long as each region was in focus in at least one of the inputs, I would end up with amazingly detailed and sharp results.
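
To illustrate the idea, here is a minimal sketch of per-pixel focus stacking in Python with OpenCV and NumPy. This is my reconstruction of the general technique, not what Helicon Focus actually does internally, and the file names are hypothetical:

```python
# Minimal per-pixel focus stacking sketch (assumes all images share one size).
import cv2
import numpy as np

def focus_stack(paths):
    imgs, sharpness = [], []
    for p in paths:
        img = cv2.imread(p)
        imgs.append(img)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float64)
        # local sharpness: absolute Laplacian response, smoothed a bit
        lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F))
        sharpness.append(cv2.GaussianBlur(lap, (31, 31), 0))
    # for every pixel, pick the image with the strongest local sharpness
    best = np.argmax(np.stack(sharpness), axis=0)   # (H, W)
    stack = np.stack(imgs)                          # (N, H, W, 3)
    h, w = best.shape
    return stack[best, np.arange(h)[:, None], np.arange(w)[None, :]]

# usage: cv2.imwrite("stacked.png", focus_stack(["f01.jpg", "f02.jpg", "f03.jpg"]))
```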

Cockroach on its holder

Close-up of the lower head and upper limbs after focus stacking

One annoying accident happened when I placed the cockroach on the capture stage: I squeezed it too hard with the pliers, and some of its limbs broke off. Luckily, I could later replace the missing limbs with their right or left counterparts.

Photogrammetry (7h)

Since the focus stacking had quite an impact on the duration of the capture process, I first tried it out on a smaller set of images (around 40 images of the upper half of the cockroach) and got good results for the 3D reconstruction. Although the stacking procedure produced some artifacts around the edges of the cockroach, RealityCapture did a great job of discarding those regions and only reconstructing the actual foreground. Another benefit of RealityCapture is that it lets you annotate the scene with reference points (aka control points). Since I knew the 3D locations of some of the corners of my capture holder (I had made the blueprints myself earlier in Blender), I instantly had the scan aligned with my original coordinate system and could reconstruct the cockroach at the right scale and orientation (I used centimeters as my unit of choice).

If you’re interested in photogrammetry, just give it a try, it’s fun! RealityCapture has a demo license available. I didn’t follow any particular tutorial (since I already had quite some experience with it), but YouTube is full of beginner-friendly introductions. Just grab a rock from your backyard and give it a try. One good starting point that shows the complete workflow, from capturing the images to publishing on Sketchfab, is this one:

If you plan to get serious about photogrammetry, you should also read the photogrammetry guide from Unity. They also presented their workflow at SIGGRAPH 2017:

Although I had several thousand images of the cockroach, only 299 made it into the final processing stage. One reason, of course, was that I condensed multiple images into a single focus-stacked version. Also, about a third of the images were captured without the polarization filters, so I had to discard those. In the remaining set I discovered that I must have accidentally touched the cockroach halfway through the recordings: in one half of the polarized images, the rotation of the insect relative to the capture stage was slightly different, and one of its limbs had moved by a millimeter. I simply let RealityCapture run once on each half and continued with the image set that gave the better results. The final result looked pretty amazing but still had a few artifacts: parts with remaining specular highlights, transparency, insufficient coverage, or subsurface scattering just did not get reconstructed well. So I decided to include some manual cleanup in my workflow.

I scanned at multiple distances from the cockroach; especially for the head and upper torso I needed more detail

Close-up of the output of RealityCapture

Cleaning Up Geometry – Chopping Up, Filling Holes, Sculpting Details (30h)

The original scan imported from RealityCapture. The mesh showed great detail, but I wanted to go further and clean up the surface geometry later on in the process

First I decimated the mesh to 5 million vertices so it could be handled at a decent speed, using RealityCapture’s internal decimation tool. The next step was to chop the model up into its individual parts: the head, the antennae, the neck, the wings, the body, and the individual limbs (19 parts in total). I did this to make the rigging and posing easier later on (and back then I thought I might even animate it). I imported the mesh into Blender and used the Boolean modifier to dissect it into the different body parts, then used some simple sculpting operations to introduce some overlap between adjacent body parts.
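
As a rough illustration, here is how one such cut could look in Blender’s Python API. I did this interactively in the UI, and the object names here are hypothetical:

```python
# Sketch: splitting one body part off the scan with a Boolean (INTERSECT) modifier.
import bpy

scan = bpy.data.objects["cockroach_scan"]   # hypothetical name of the full scan
cutter = bpy.data.objects["cutter_head"]    # a closed volume enclosing the head

# work on a copy so the original scan stays intact
part = scan.copy()
part.data = scan.data.copy()
part.name = "part_head"
bpy.context.collection.objects.link(part)

# keep only the geometry inside the cutter volume
mod = part.modifiers.new(name="split", type='BOOLEAN')
mod.operation = 'INTERSECT'
mod.object = cutter

bpy.context.view_layer.objects.active = part
bpy.ops.object.modifier_apply(modifier=mod.name)
```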

The 19 parts of the cockroach (shown on the final, clean geometry). Missing parts were later added by cloning their left/right counterparts

Here’s a good tutorial on Boolean modifiers from Blender Guru:

For each individual limb I spent some time recreating details that were lost in the scan. Knowing that I would destroy the UV mapping anyway, I enabled Blender’s dyntopo sculpting to get the best results in all areas.

Geometry after fixing holes and sculpting out details. I used the original photos as reference to recreate surface details

To do that, I followed Zach’s tutorials on sculpting:

Retopology (26h)

The next step in getting clean geometry and UV maps was retopology. Again, Zach’s tutorials helped me out a lot:

I did the retopology at a quite coarse level and then used two levels of a Subdivision Surface modifier plus a Shrinkwrap modifier to create a clean, hi-res, tight mesh.

Cockroach limb after manual retopology with only 388 faces

After applying the two levels of subdivision and the Shrinkwrap modifier, some details of the sculpt were restored. The remaining surface details were later recreated as a normal map
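
For reference, that modifier stack can be recreated in a few lines of Blender Python; the object names here are hypothetical:

```python
# Sketch: two-level subdivision plus shrinkwrap onto the hi-res sculpt.
import bpy

retopo = bpy.data.objects["limb_retopo"]   # hypothetical: the 388-face retopo mesh
sculpt = bpy.data.objects["limb_sculpt"]   # hypothetical: the hi-res sculpt target

sub = retopo.modifiers.new("subdiv", 'SUBSURF')
sub.levels = 2            # the two levels of subdivision mentioned above
sub.render_levels = 2

wrap = retopo.modifiers.new("wrap", 'SHRINKWRAP')
wrap.target = sculpt      # snap the subdivided surface back onto the sculpt
wrap.wrap_method = 'NEAREST_SURFACEPOINT'
```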

If you’re interested in this topic, I also recommend this flippednormals episode:

They briefly mention automated retopology tools. I personally tested Instant Meshes and found it very helpful for certain objects; in this scene, I used it for the chili con carne in the aluminum tray.

UV Mapping and Baking Textures (12h)

This part of the process was actually the most frustrating one. I did some manual UV unwrapping (defining seams, using the unwrap tool, and rearranging UV islands). After that, I simply had to bake the color textures of the photogrammetry model onto the new, clean meshes. Since I had quite a lot of mesh parts to do this for (19 in total) and each took some compute time, I tried automating it with a script. The Python code was surprisingly hard to write, but in the end I somehow got it working on a toy example. When I let it run on the actual data set, it didn’t get me very far. I didn’t want to waste any more time on it, so I just did it all manually using the “bake selected to active” option in the Blender Internal renderer.
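
For what it’s worth, here is a minimal sketch of the kind of automation I attempted, written against the modern Cycles bake API rather than the Blender Internal one I used back then. Object and image names are hypothetical, and the target material needs an active image texture node for the bake to land in:

```python
# Sketch: batch "selected to active" diffuse baking (Cycles API, assumptions noted above).
import bpy

def bake_pair(source_name, target_name, image_name):
    """Bake the diffuse color of `source` onto the UVs of `target`."""
    source = bpy.data.objects[source_name]   # photogrammetry mesh
    target = bpy.data.objects[target_name]   # clean retopo mesh
    bpy.ops.object.select_all(action='DESELECT')
    source.select_set(True)
    target.select_set(True)
    bpy.context.view_layer.objects.active = target   # "selected to active"
    bake = bpy.context.scene.render.bake
    bake.use_selected_to_active = True
    bake.use_pass_direct = False     # color only, no lighting
    bake.use_pass_indirect = False
    bpy.ops.object.bake(type='DIFFUSE')
    bpy.data.images[image_name].save()

# hypothetical part names:
# for part in ["head", "body", "wing_l"]:
#     bake_pair(part + "_scan", part + "_retopo", part + "_bake")
```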

After baking the textures from the original scans onto the retopologized, manually UV-mapped models, the layout was much cleaner and allowed for 2D post-processing directly in GIMP. Here you see the original texture for the head on the left and the manually UV-mapped one on the right

Material Design (22h)

Now that I had the RGB texture from RealityCapture on a clean UV layout, I could make some adjustments directly in GIMP. There were some ghosting artifacts on the head (I could see certain features doubled), so I simply copy-pasted image parts from the original photographs. The same method also enhanced some blurry parts of the eyes. I removed the rest of the baked-in highlights and ended up with a decent albedo map. I then imported the different body parts of the cockroach into Substance Painter and set the base RGB texture as a fill layer. It was surprisingly hard to find a good tutorial on this. Here’s an outdated one; read through the YouTube comments to understand how to make it work with more recent versions of Substance:

Using RealityCapture’s high-res output model, I baked normal maps onto the new mesh. Looking closer at the photographs, I realized the cockroach has dozens of small white spots that seem to be dents in its surface. In GIMP I created a grayscale mask for these spots and used it to create an additional height map in Substance Painter. The last piece was the roughness map: I inspected how the shininess in the photographs related to the thickness and curvature of the surface and recreated that in the cockroach’s roughness texture. I also set the roughness of the dents to 1.0.
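
I did the mask work in GIMP and Substance Painter, but the roughness part of the idea boils down to something like this Pillow/NumPy sketch (file names hypothetical):

```python
# Sketch: forcing full roughness wherever the dent mask is white.
import numpy as np
from PIL import Image

rough = np.asarray(Image.open("head_roughness.png").convert("L"), dtype=np.float32)
dents = np.asarray(Image.open("head_dent_mask.png").convert("L"), dtype=np.float32) / 255.0

# where the dent mask is white, roughness goes to 1.0 (255 in 8-bit)
out = rough * (1.0 - dents) + 255.0 * dents
Image.fromarray(out.astype(np.uint8)).save("head_roughness_dented.png")
```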

Tangent normal map and roughness map of the head

Now I could export the albedo, roughness, normal, and ambient occlusion maps from Substance Painter; for the wings I also made a transparency map. In Blender I set up the materials with the PBR textures. There’s a good tutorial by Alex Treviño on how to do the Cycles node setups based on Substance Painter output.
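
If you prefer scripting, a minimal version of such a Cycles node setup might look like this; the texture file names are hypothetical, and Alex’s tutorial covers the full setup:

```python
# Sketch: wiring Substance Painter exports into a Principled BSDF.
import bpy

mat = bpy.data.materials.new("roach_head")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
bsdf = nodes["Principled BSDF"]

def tex(path, non_color=False):
    node = nodes.new('ShaderNodeTexImage')
    node.image = bpy.data.images.load(path)
    if non_color:  # data maps must not be color-managed
        node.image.colorspace_settings.name = 'Non-Color'
    return node

links.new(tex("head_albedo.png").outputs['Color'], bsdf.inputs['Base Color'])
links.new(tex("head_roughness.png", True).outputs['Color'], bsdf.inputs['Roughness'])

nmap = nodes.new('ShaderNodeNormalMap')
links.new(tex("head_normal.png", True).outputs['Color'], nmap.inputs['Color'])
links.new(nmap.outputs['Normal'], bsdf.inputs['Normal'])
```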

What I like a lot about Substance Painter is the ability to upload your model directly to Sketchfab. It does a great job of setting up all the materials; you just have to tweak a few settings once it’s uploaded (background, lighting, etc.) and you’re done. You can see the finished cockroach model here:

Iray render from Substance Painter at 1920×3000 px with depth of field

Armchair (20h)

I won’t go into too much detail about the other objects in the scene; they are mostly traditional 3D modeling, and there are tons of better tutorials on the internet. For the chair I constructed a rough base geometry. The actual leather surface was done with a cloth sim that makes it stretch nicely over the underlying geometry. After baking the cloth sim, I removed all geometry that’s not visible and added the metal buttons. For the cloth sim I mainly followed this tutorial and played around a bit with gravity, force fields, and the cloth sim properties. In the end I used the ‘denim’ preset.
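
As a rough sketch, that setup can be reproduced in Blender Python like this. The object names are hypothetical, and the stiffness values only approximate the ‘denim’ preset I picked in the UI:

```python
# Sketch: cloth draped over the rough chair geometry.
import bpy

leather = bpy.data.objects["seat_cloth"]   # hypothetical: subdivided plane over the chair
chair = bpy.data.objects["chair_base"]     # the rough base geometry

cloth = leather.modifiers.new("cloth", 'CLOTH')
cloth.settings.quality = 10
cloth.settings.tension_stiffness = 40.0    # denim-like stiffness (approximate)
cloth.settings.bending_stiffness = 10.0

# the base geometry has to push back against the cloth
chair.modifiers.new("collide", 'COLLISION')
```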

I added some final touches by sculpting out some creases. In the end I was quite happy with the result.

The next step was material design in Substance Painter. This time I could really go overboard applying and tweaking several of its smart materials; that part was really fun. Since Substance lets you work in low-res and later switch to or export in hi-res, it even runs on my laptop. Again, I uploaded the model to Sketchfab at a reduced texture resolution.

Posing of Cockroach (8h)

Now I could do some very simple rigging and posing of the cockroach. The limbs were an easy case, since they were cleanly separated from the body. Rigging the main body was a little harder, and I didn’t do a great job, but I only needed it for a single pose and could just clean up the few vertices that moved oddly. The great thing about posing a rigged character versus deforming a loose triangle soup is that you can easily change the proportions of individual body parts. This way I could blow up its head and upper torso a bit to match my concept sketch. I think there’s a lot of value in having an efficient workflow to clean up, rig, and pose your 3D scans. I spent a lot of time on it and feel there is a ton of room for improvement in the way I did it, but it’s definitely a necessary step when you want to pose or even animate characters and objects.

My attempt at rigging the cockroach was not good, but good enough to get it into the right pose for the render

Blockout (2h)

The blockout of the scene came at a pretty late stage in the project, mainly because I saw the 3D scan of the cockroach as the biggest challenge and risk and wanted to be done with it first. Next time I do such a project, I plan to do the blockout much earlier: it does not take long and gives you a lot of clarity about proportions, visibility, and lighting. I realized the lamp on the ceiling felt a little weird in the overall composition, so I took it out. I also realized the quirky poses of some of the objects didn’t translate that well to the 3D world, so I decided to give the TV and the table more realistic shapes.

The blockout revealed that the lamp hanging from the ceiling was too distracting

Playroach Magazine, TV, Folding Table, Tray (30h)

I’ll just go over the remaining objects very quickly. Making the Playroach magazine was actually a lot of fun: I tried to mimic the style of a traditional adult magazine and even wrote a small (rather nonsensical) article you can see on the back of the centerfold. The bug pictures I used are all CC0 images from pexels.com.

At 10k × 10k image resolution, the magazine text is actually readable

I used a cloth sim to get a realistic deformation of the centerfold hanging out of the magazine, and used Substance to give the cover the right amount of glossiness.

The TV set was modeled after some stock photography I found online. This piece was again great fun to texture in Substance Painter.

The folding table was modeled after pictures from an online catalog for patio furniture.

For the TV dinner I scanned an actual aluminum tray. To make it easier to scan, I applied some acrylic paint to give it more texture and fewer metallic reflections. The chili con carne was also a quick 3D reconstruction of my dinner that day.

Tapestry, Window (10h)

Modeling the background was quite straightforward. I designed the pattern in Blender using reference photos and added roughness and normals in Substance. The textures for the ceiling and wall come from TextureHaven.com. To create the shape of the window, I made a very quick 3D reconstruction of my bedroom window from only a few pictures. The reconstruction was rather ugly but served its purpose as a reference for the size and shape of the window.

Tapestry and window

Lighting (10h)

When I started on the lighting I was quite unhappy with the initial state of the scene: the different elements did not look like they belonged together in a single room. The most annoying part was the tapestry, which had a very distracting and visually dominant pattern. Since I wasn’t shooting for realistic lighting but rather wanted to capture the mood of my illustration, I freely experimented with different strong lighting variations. Halfway through I accidentally made a render with a black background that looked quite cool, but I managed to stay true to the original concept. I found a gold-lavender-green lighting mix that I liked: the golden light seemed to emanate from the Playroach magazine and also illuminated the TV and the table from the side, the lavender light was predominant on the rims of objects, and the green light was more of a boost for the green armchair’s color. None of this is really realistic, but it served its purpose.
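
A minimal Blender Python sketch of that three-light mix might look like this; the positions, energies, and exact colors are my rough assumptions, not the values from the scene file:

```python
# Sketch: gold key, lavender rim, and green fill as area lights.
import bpy

def add_area_light(name, color, energy, location):
    data = bpy.data.lights.new(name, type='AREA')
    data.color = color
    data.energy = energy
    obj = bpy.data.objects.new(name, data)
    obj.location = location
    bpy.context.collection.objects.link(obj)
    return obj

add_area_light("key_gold",     (1.0, 0.75, 0.35), 800, (0.5, -0.3, 0.6))  # from the magazine
add_area_light("rim_lavender", (0.75, 0.60, 1.0), 400, (-1.5, 1.0, 1.2))  # rim light
add_area_light("fill_green",   (0.50, 1.0, 0.55), 150, (1.2, 0.8, 0.4))   # armchair boost
```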

Lavender rimlight added some volume to seemingly flat areas and increased the general creepiness

Compositing (5h)

There were several elements I was still not happy with, and I decided to solve them in compositing. First, the lavender rim lights were not as prominent as I wanted them to be. I rendered them out in a separate render layer, converted it to grayscale, played around with some color curves, and used it as a mask for a dodge composite of the original foreground with a flat lavender color.

Node setup for compositing. I used three render layers: rim light (middle left, dodge blend), foreground (upper left, alpha-over blend), and background (lower left)
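
In Blender Python, the dodge part of that node setup boils down to something like this; the render layer names are hypothetical:

```python
# Sketch: rim light render drives a dodge blend of the foreground with flat lavender.
import bpy

scene = bpy.context.scene
scene.use_nodes = True
nodes, links = scene.node_tree.nodes, scene.node_tree.links

rim = nodes.new('CompositorNodeRLayers')
rim.layer = 'rimlight'           # the grayscale rim light render
fg = nodes.new('CompositorNodeRLayers')
fg.layer = 'foreground'

mix = nodes.new('CompositorNodeMixRGB')
mix.blend_type = 'DODGE'
mix.inputs[2].default_value = (0.7, 0.6, 0.9, 1.0)   # flat lavender
links.new(rim.outputs['Image'], mix.inputs[0])        # rim render as factor/mask
links.new(fg.outputs['Image'], mix.inputs[1])         # original foreground

comp = nodes.new('CompositorNodeComposite')
links.new(mix.outputs['Image'], comp.inputs['Image'])
```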

The background was still too dominant, especially in the upper part of the image. I rendered it out separately and used a gradient texture to make it fade to a light gray in the upper half of the image. I also added some contact shadows where the foreground touched the background and increased the influence of the AO pass with an additional multiply node. During all of this, I did nightly renders at 20% resolution to get a feeling for the final result. Since my tiny laptop could not really manage the scene at full texture resolution, I worked with textures downsampled by a factor of 8 and had to hope the final render at full resolution would look OK, too. For that purpose I kept folders with different texture resolutions next to my project files (“ultrahigh”, “high”, “medium”, “low”, each downsampled by a factor of 2). Before starting to work in Blender, I would simply rename one of them to “current”, and Blender would fetch the resolution I wanted to work in. I don’t know if there’s a cleaner way to do this within Blender (I assume there is), but with a small Python script it wasn’t that hard to do.
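
For the curious, here’s a reconstruction of that helper. The folder names match the ones above, but the script itself is a sketch rather than my original code:

```python
# Sketch: swap the active texture resolution by renaming folders.
import os
import sys

RESOLUTIONS = ["ultrahigh", "high", "medium", "low"]

def select_resolution(base_dir, wanted):
    current = os.path.join(base_dir, "current")
    if os.path.isdir(current):
        # whichever resolution folder is missing must be the one called "current"
        missing = [r for r in RESOLUTIONS
                   if not os.path.isdir(os.path.join(base_dir, r))]
        os.rename(current, os.path.join(base_dir, missing[0]))
    os.rename(os.path.join(base_dir, wanted), current)

# usage: python select_textures.py /path/to/textures medium
if __name__ == "__main__":
    select_resolution(sys.argv[1], sys.argv[2])
```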

Rendering (2h)

Since I realized my poor laptop would not handle a full 10000×10000 render with the high-resolution textures, I decided to give Blendergrid a try. At first it felt odd to spend money on something I was used to having “for free”, but in the end I had a great experience. Even though something initially went wrong with my render, I got immediate (and super friendly) support, and all issues were quickly resolved. I received my render on time and can only recommend Blendergrid for your own projects.

Words of Wisdom

I think I achieved what I wanted to prove: you can mix and match 3D models that were 3D scanned with models built by hand. It’s still not a straightforward workflow, though. Apart from the technical hurdles, I feel that a lot of people are still skeptical about using photogrammetry as a tool to produce 3D content. I guess they feel it’s some form of cheating, or that the results are sub-par and not as clean and usable as anything done by hand. From my perspective, photogrammetry is simply a new tool to create 3D assets or produce references for manual 3D modeling, and I think everyone involved in 3D modeling or texturing should give it a try! Vice versa, I can only encourage people from the photogrammetry community to think about how to produce cleaner and more usable assets. Some cool tools have been popping up for delighting, retopology, and so on; it would be great to have a more straightforward workflow that lets you focus on artistic decisions.

The whole project was also a great planning exercise for me. I kept track of the hours I put into it and learned a lot for my future projects. It was very tempting to invest more time in topics such as rigging, lighting, and compositing, but I kept reminding myself that those were not the main focus of the project, and I forced myself to advance to the next stage as soon as I had decent results. Otherwise, this project would never have been finished. I just added up the time I spent on Playroach and realized it’s over 200 hours; any hour more could easily have jeopardized the whole project (I thought more than once about abandoning it).

One thing I want to raise awareness for is software licenses. I made sure to buy all my licenses and not take any “shortcuts” to save money. I know it hurts financially, especially when it’s not a commercial project where you might earn that money back. I used to work at a smaller software company where every sold license was reflected in my salary. Especially in the creative digital industry, small software companies produce great tools for a very small market, so please make sure to keep those software developers well fed 😀

If you have comments or questions, please feel free to get in touch: if it’s about Blender, join the discussion in the Blenderartists thread; otherwise reach out through Facebook, LinkedIn, or your favorite internet communication channel.

-Christian

 

About the author

Christian Lipski

2D/3D/analog/digital art enthusiast working & sleeping in the SF Bay Area.


