For Part 1 of this tutorial, click here
Creating a 3D model from the photographs using Agisoft Photoscan:
So, we’ve just taken about 400 photographs of a building – how do we turn those into a 3D model that we can share on the web? In this tutorial we’ll use Agisoft’s Photoscan, an accessible, inexpensive, and well-supported piece of photogrammetry software. Photoscan is available for Mac, Windows, and Linux, and comes in two flavors: Standard and Pro. The Pro version adds features, such as marker support, georeferencing, and orthographic photo export, that will be particularly useful for land survey work. Agisoft also offers educational discounts on their software, which makes it particularly popular in university geography, archaeology, and anthropology departments!
We won’t delve into the variety of settings and combinations available within Photoscan. Instead I’ll outline a basic project workflow that includes the settings I’ve found to work best for aerial imagery.
The Photoscan interface
All of the steps needed to create a 3D model from photographs are organized in progression under Photoscan’s “Workflow” menu, seen below:
They are, in order, 1) Adding photographs to your project, 2) Aligning the photographs, 3) Building a dense point-cloud, 4) Creating a 3D surface mesh from the point-cloud, and 5) Creating a texture for your mesh. This process is, of course, greatly simplified: there are a number of actions you can take along the way to tweak the results and optimize your model.
Let’s go through the steps in more detail:
1) Adding the photographs: To add the photographs you’d like to use to make a 3D model, you can use the “Add Photos…” button in the workflow menu to bring up a file browser where you can select the photographs, or you can simply select the photos and drag them into the Photoscan “Workspace” pane on the far left. When you add photographs, Photoscan will create a “Chunk” and display the number of photos you’ve added.
2) Aligning the photographs: Selecting “Align Photos…” brings up a dialogue box with parameters for alignment. I always use “Highest” for the accuracy setting, and “Generic” pair preselection. Aerial imagery tends to have lots of usable features – photomodeling software likes photographs of grass, dirt, roads, and other “organic”, non-repetitive surfaces. For “Key point limit” and “Tie point limit” under the “Advanced” tab, I usually use 40,000 and 10,000 respectively.
When you click “OK” in the “Align Photos” dialogue, Photoscan will find similar points across all of your photographs, and reconstruct the camera positions in 3D space. It will also generate a “sparse point cloud” that will consist of 50,000 to 200,000 or so points, depending on your settings and number of photographs:
It might not look like much at first, but if you use the rotation “gizmo” (the transparent sphere with red, green, and blue lines on it) to rotate your view a bit, you can probably start to make sense of which way is up and down. If your UAV doesn’t tag the photographs with GPS data in the EXIF, the model will not have inherent orientation.
Now is a good time to set the bounding box, so that you don’t waste computing time on parts of the scene that you’re not interested in. This can be done using the “Resize Region” and “Rotate Region” buttons along the top row of the Photoscan interface. I typically set the “Region” to encompass the part of the point cloud that contains the object or area I want to make the model of. This saves on processing time, and reduces the amount of manual editing work you might have to do later.
3) Build Dense Cloud: This is possibly the most computationally intense portion of the Photoscan workflow. In the “Dense Cloud” dialog box, you can select options for quality and depth filtering. If you have under 32 GB of RAM in your computer, I would not recommend going above “Medium” quality. I find that Medium quality is typically plenty of resolution for most of my needs: High and Ultra quality take exponentially more time, sometimes as long as 2-3 days of processing for projects with hundreds of 36-megapixel photographs.
A word of warning: Once you click “OK” in this dialog box, Photoscan will use almost all of your computer’s RAM – making it very hard to work on anything else. I usually let the dense cloud build overnight, or when I know I won’t need to use the computer.
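To get a feel for why Medium is usually enough, it helps to know that (per Agisoft’s documentation) each quality step below “Ultra” downscales the source photos by a factor of 4 in pixel count (2× per side) before depth maps are computed. A rough sketch of what that means for 36-megapixel photos – the helper function here is just illustrative arithmetic, not part of Photoscan:

```python
# Rough sketch: effective image resolution processed at each dense-cloud
# quality setting. Each step below "Ultra" downscales the photos by a
# factor of 4 in pixel count (2x per side), per Agisoft's documentation.

def effective_megapixels(megapixels, quality):
    """Approximate megapixels actually processed at a given quality level."""
    steps = {"ultra": 0, "high": 1, "medium": 2, "low": 3, "lowest": 4}
    return megapixels / (4 ** steps[quality])

for q in ("ultra", "high", "medium", "low"):
    print(f"{q:>6}: {effective_megapixels(36, q):.2f} MP per 36 MP photo")
```

So at Medium quality, a 36-megapixel photo is effectively processed at about 2.25 megapixels – still plenty of detail for most aerial surveys, at a fraction of the time and RAM.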
4) Build Mesh: Once you have a dense point cloud, probably numbering in at least the millions of points, you can build a 3D surface mesh. I always make sure to have “interpolation enabled” checked in this dialog. Depending on the size of the dense cloud, the mesh can take anywhere from 10 minutes to a few hours to build. The result will be a 3D model with vertex coloring derived from the photographs. Before building the texture, you can use the selection tools to trim any unnecessary parts of the model away, use “Close Holes” to fill in any small gaps in your model, use the “Gradual Selection” dialog to remove small clumps of floating and extraneous geometry, and use the “Decimate Mesh” function to reduce the mesh to a manageable size for upload to Sketchfab.
These are each important steps that go a long way toward making your model presentable and easy to work with, so I’ll go over each one:
- Photoscan has a rectangular marquee and a lasso tool to select and delete parts of the mesh (or dense cloud, or sparse cloud). You can use this to give your model clean edges, or delete any areas you don’t need, using the “Delete” key.
- “Close Holes”, found under “Tools -> Mesh -> Close Holes…” Here you can use a slider to close small holes, medium holes, or all holes in the mesh. Rarely do I use the 100% closed option.
- You can remove small floating bits of extra geometry with the “Gradual Selection…” dialog under “Edit -> Gradual Selection…” Adjust the slider until the unconnected bits of geometry floating around your main mesh are all highlighted red, and then delete them.
- You can reduce the size of the mesh, for easy upload to Sketchfab, and easy loading of your model on mobile devices, by using the “Decimate Mesh” function. Here, you can reduce the number of polygons in your model from 2-3 million, down to 300-400,000 polygons. This can be found under “Tools -> Mesh -> Decimate Mesh…”
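To put the decimation step in perspective, here is a trivial sketch of the reduction described above – the function is a hypothetical helper for the arithmetic, not a Photoscan feature:

```python
# Hypothetical helper: what fraction of the polygons survives a given
# decimation, e.g. the 2-3 million -> 300-400k reduction described above.

def decimation_ratio(source_faces, target_faces):
    """Fraction of faces kept after decimation."""
    return target_faces / source_faces

ratio = decimation_ratio(2_500_000, 350_000)
print(f"Keeping {ratio:.0%} of the original faces")  # -> Keeping 14% of the original faces
```

Keeping only ~15% of the faces sounds drastic, but because the texture (next step) carries most of the visible detail, the decimated model usually looks nearly identical in the viewer.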
5) Build Texture: Now that we’ve cleaned up our model a bit, and reduced it in size, we’re ready to texture it. Textures are where heavily-reduced models get a lot of their detail. The textures are based on the photographs you took, so the resulting model will include any shadows that were present while you were photographing. Use the “Build Texture…” option under “Workflow” to access the dialog. Here you can set the size of the texture and the number of textures you want generated. I typically use a texture size of 2048, 4096, or 8192 pixels, and between 1 and 10 textures, depending on the number of photographs in the project.
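Those texture sizes add up quickly in the viewer’s memory. A back-of-the-envelope sketch, assuming uncompressed RGBA (4 bytes per texel) once the textures are decoded – actual GPU memory use depends on the viewer’s compression:

```python
# Back-of-the-envelope memory cost of a square texture once decoded,
# assuming uncompressed RGBA at 4 bytes per texel.

def texture_mib(size_px, count=1, bytes_per_texel=4):
    """Approximate decoded size in MiB of `count` square textures."""
    return size_px * size_px * bytes_per_texel * count / 2**20

print(texture_mib(4096))     # 64.0  -> one 4096x4096 texture is ~64 MiB
print(texture_mib(8192, 4))  # 1024.0 -> four 8192x8192 textures are ~1 GiB
```

This is one reason to be conservative with 8192-pixel textures on models you expect people to view on phones.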
Now we’re ready to export our model and upload it to Sketchfab! Photoscan has a one-click upload to Sketchfab option. It can be found under “File -> Upload Model…”. Alternatively, you can simply export the model as an OBJ, FBX, or other file format and upload that to Sketchfab.
To add some final polish and presentability to your model in the Sketchfab viewer, go into “Settings” -> “3D Settings” (see below) to adjust the orientation, materials, and lighting.
This brief tutorial has hopefully given you the knowledge you need to get started making 3D models with a UAV. Of course, before flying anywhere, check your local laws and regulations. If you’re interested in more detail on any aspect of this process, from aligning multiple models, to checking accuracy within Photoscan and outputting a geo-referenced orthographic photo or model, a great place to start is Agisoft’s website.
Finally, a couple more examples of UAV-generated 3D models on Sketchfab!
The exterior of ‘Tomb 105’ at Monte Alban, Mexico:
The exterior and interior of ‘Tomb 7’ at Monte Alban, Mexico:
Thank you, Matthew. Favorite drone scans? Any questions? Leave them below!
Can I take still images from video footage and use those as the photos for my 3D map?
Yes – You need to extract the individual frames and use those as the inputs in the photogrammetry software. Keep in mind that video frames won’t be as high-resolution as photographs and may include some motion blur – both of these factors will impact the accuracy and resolution of the resulting 3D model.
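Tools like ffmpeg can do the extraction; the main decision is how many frames to keep, since consecutive video frames are nearly identical. A quick, hypothetical calculation – pick the still-shooting rate you would have used for the same flight and keep every Nth frame:

```python
# Hypothetical frame-sampling calculation: given the video frame rate and
# the rate at which you'd have shot stills on the same flight, keep every
# Nth frame and discard the rest.

def frame_step(video_fps, stills_per_second):
    """Keep one frame out of every `frame_step` frames."""
    return max(1, round(video_fps / stills_per_second))

print(frame_step(30, 2))  # 15 -> keep every 15th frame of 30 fps footage
print(frame_step(60, 2))  # 30 -> keep every 30th frame of 60 fps footage
```

Feeding in every frame rarely helps: the extra images slow down alignment without adding new viewpoints.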
Thanks for that, Matthew.
I have uploaded a survey, but when viewing on the mobile or tablet versions of Sketchfab I am told that the model hasn’t been optimised for these platforms. Is this just related to the size of the model, or is there an optimisation function available in Sketchfab or Photoscan?
Hi, my name is Paulo and I’m new to Agisoft image processing.
For several days I’ve been running tests to get the best results processing my photos without markers, but my differences are around 1 meter in elevation. What am I doing wrong?
I looked very carefully at the workflow you use in this tutorial, and you didn’t perform “Camera Calibration”. When would we use this option?
When shooting at different heights and angles, is it preferred to add everything to one chunk, or to combine the chunks later? And is georeferencing with GCPs best done after merging?
I usually put all the photos in at once. If Photoscan cannot align all of the photographs, I will split the unaligned photos into a new chunk, align them there, and then merge the chunks.