Capturing 3D Photogrammetry Using Underwater Drones


About Blueye Robotics

Blueye Robotics was founded in Trondheim, Norway in 2015 to develop an underwater drone that would give more people access to the underwater world.

The Blueye team believes that by empowering people to view, explore, and learn about the ocean, people will connect with it and care for it too. This is why the Blueye founders set out on a mission to create and share the ultimate underwater experience through great product design. Our wish is to let anyone enter the ocean and take part in the element we all come from.

Blueye has its roots in the highly renowned Centre for Autonomous Marine Operations and Systems (AMOS) at NTNU in Norway.

Exceptional user experience and attractive design have been the overarching goals since the beginning. Together with world-leading marine engineers and award-winning industrial designers, we have in three years created what we strongly believe is the world’s best underwater drone, the Blueye Pioneer, which started shipping at the end of 2018.

Exploring The Wrecks Along The Coast Of Norway

The Blueye team all have a strong interest in exploring the underwater world. Throughout the development of the Blueye Pioneer, we have done extensive ocean testing in the fjord outside our offices in Trondheim. Hundreds if not thousands of hours have been spent underwater, testing the Blueye Pioneer and exploring and documenting the many ship and plane wrecks that lie on the seabed in the Trondheim region.

This well-preserved Short Sunderland lies at 70 m depth right outside our office:

In the beginning, we would mostly edit videos and share them on social media, which is a great way to show the capabilities of the drone as well as sharing some of the interesting history hidden beneath the surface. Later we discovered 3D photogrammetry and started testing how we could utilize the drone to capture footage for 3D models from both wrecks as well as geological structures or marine life such as corals or clam colonies.

3D models are a great way to get more out of the footage we capture and allow us to document the wrecks in even greater detail. On deep dives, the visibility is often limited and it can be hard to get the full overview of a wreck or even a part of a wreck using just video. Compiling a model of a larger section of a coral colony or a wreck gives you a much better overview than the video alone.

Workflow For Creating An Underwater Photogrammetry Model

Our workflow is, in short, the following:

  1. Record the whole scan as a video
  2. Manually extract the best evenly spaced pictures using VLC screen capture
  3. Reconstruct in Agisoft, removing outliers and loose parts
  4. Use Blender for any further clean-up or artistic work
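Step 2 could also be scripted instead of done by hand. As a minimal sketch (file name, duration, and frame count below are hypothetical; we actually grab frames manually via VLC screen capture), one can pick evenly spaced timestamps across the dive video and hand each one to ffmpeg:

```python
# Sketch: choose evenly spaced timestamps across a scan video and
# build one ffmpeg extraction command per timestamp.

def frame_timestamps(duration_s, n_frames):
    """Evenly spaced timestamps in seconds, avoiding the very start/end."""
    step = duration_s / (n_frames + 1)
    return [round(step * (i + 1), 2) for i in range(n_frames)]

def ffmpeg_commands(video, timestamps, out_prefix="frame"):
    """One 'extract a single frame at time t' command per timestamp."""
    return [
        f"ffmpeg -ss {t} -i {video} -frames:v 1 {out_prefix}_{i:04d}.png"
        for i, t in enumerate(timestamps)
    ]

# A hypothetical 10-minute wreck scan, thinned down to 5 stills:
times = frame_timestamps(duration_s=600, n_frames=5)
for cmd in ffmpeg_commands("wreck_scan.mp4", times):
    print(cmd)
```

In practice you would extract far more frames than this and still cull blurry ones by eye, since sharp, well-exposed stills matter more to the reconstruction than sheer quantity.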

But there is more to it than that when conducting photogrammetry in water. Working in a liquid adds a couple of interesting problems not experienced in air. Imagine capturing photos in dense fog and snowfall at night. The fog comes from the attenuation of light in water: light is absorbed as it passes through water, and red is absorbed more quickly than the other wavelengths, which renders everything around you bluish and eventually black if you don’t bring your own lights down.
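The color loss follows simple exponential decay: the fraction of light surviving a path of length d is exp(-k·d), with the absorption coefficient k much larger for red than for blue. The coefficients below are illustrative assumptions (they vary with water conditions), but the sketch shows why red disappears first:

```python
import math

# Illustrative per-channel absorption coefficients (1/m) for clear
# seawater; real values depend heavily on local water conditions.
K = {"red": 0.45, "green": 0.07, "blue": 0.03}

def remaining_fraction(channel, distance_m):
    """Fraction of light left after travelling distance_m through water."""
    return math.exp(-K[channel] * distance_m)

# After a 10 m light path, red is essentially gone while blue survives:
for ch in ("red", "green", "blue"):
    print(ch, round(remaining_fraction(ch, 10.0), 3))
```

Note that with onboard lights the path is there and back (drone to object to camera), so the usable stand-off distance is even shorter than the one-way numbers suggest.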


3000 lumens of LED light helps when going below the reach of natural light, and creates a nice backdrop.

How about that snow, then? Well, water in the wild isn’t like filtered tap water; it always contains some particles, be they algae, other microbes, or just plain old dirt. Down in the deep, with our own lights on, these particles reflect light surprisingly well, causing what is known as backscatter. The end result is that you have to choose between image quality and image coverage: go closer and there is less stuff degrading your image, or move away and see more.

Backscatter caused by reflecting particles; we usually call it the “Star Wars” effect when moving forward.

Photogrammetry does help with the above problems; we can go closer and stitch together a larger view of higher quality and with geometrical data that is easy to explore and present after a dive. Our ability to get very close to objects is perfect for capturing high quality footage without backscatter. But it also creates problems in the photogrammetry workflow, particularly for point matching and texturing.

A normal image capture for photogrammetry tries to see the whole object at once and then moves around it to capture it from all angles. Seeing the whole object, as well as a structured background or table surface, helps the matching and reconstruction algorithm a lot, especially when dealing with thin objects. The problem for us is that we normally don’t see the full object, since we want to minimize backscatter in the image; the background is generally also pure black or blue. The fact that rusting and overgrown underwater structures tend to have similar textures does not help this situation. This often causes the software, Agisoft PhotoScan in our case, to mismatch points, which sometimes produces warped and folded models as the end result.

We solve this by breaking the model down into chunks that align well internally, and then gluing them together again using a few overlapping pictures. This is the same as having a top/bottom or front/back scan, but with many more chunks to get it all to work together. Smaller, less complex models usually don’t experience this problem as much, and being smart about the scope of a scan when you have a 100 m long wreck or coral structure is maybe the most important part of capturing a successful underwater model!

Capturing details like this stern on the cargo ship Uranus makes it easy to get high quality output.

Texturing is another hard problem when doing deep-water scans and relying only on our artificial lights. The image quality, color, and exposure are strongly dependent on the distance to the object, as we carry both lights and camera with us. This sometimes shows in our models as darker and brighter areas, or parts that are much bluer than the rest. Manual clean-up of the textures is possible but time-consuming. We generally just try to pilot the drone as well as we can and live with the results.

So how do you pilot to get the best results? Go slow. That is our number one recommendation; we are even optimizing the control system of our underwater drone to be able to move slowly around objects! On a more practical note, we have better results going up and down an object than side-to-side, due to the 16:9 image format. A normal procedure for a scan is therefore to first get acquainted with the object of interest and note between which depths you want to scan it. Then go up to the noted depth, move about a third of a frame sideways, go down, and again move a third of a frame sideways, repeating until the full model is captured. Note that you may need to capture the object in overlapping subparts if it is complex or large.
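The vertical-pass pattern above can be sketched as a simple strip plan: alternate down and up passes, stepping sideways by about a third of the camera footprint so consecutive passes overlap by roughly two thirds. The object dimensions and footprint width below are hypothetical:

```python
import math

def strip_plan(width_m, top_depth_m, bottom_depth_m,
               footprint_w_m, overlap=2 / 3):
    """Plan vertical scan passes across an object.

    Returns (x_offset_m, start_depth_m, end_depth_m) per pass, alternating
    down and up passes; sideways step = (1 - overlap) * footprint width.
    """
    step = footprint_w_m * (1 - overlap)
    n = max(1, math.ceil(width_m / step) + 1)
    passes = []
    for i in range(n):
        x = min(i * step, width_m)
        if i % 2 == 0:  # even passes go down, odd passes come back up
            passes.append((round(x, 2), top_depth_m, bottom_depth_m))
        else:
            passes.append((round(x, 2), bottom_depth_m, top_depth_m))
    return passes

# A hypothetical 6 m wide hull section between 20 m and 24 m depth,
# with a ~1.5 m wide camera footprint at the chosen stand-off distance:
for p in strip_plan(6.0, 20.0, 24.0, footprint_w_m=1.5):
    print(p)
```

The same idea extends to the overlapping subparts mentioned above: plan each subpart as its own strip pattern, and make sure neighboring subparts share a few passes so the chunks can be glued together in reconstruction.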


We are still learning a lot about this fairly unexplored topic of underwater photogrammetry using underwater drones. Things aren’t as straightforward as in air, but the reward is sometimes much, much greater, as we are able to “bring up” objects from depths that we just can’t physically visit due to the enormous pressure down there. We believe that the main obstacle for underwater photogrammetry is access to the objects themselves: it’s typically either too deep or too expensive to send a diver, even if the actual capture can be done within a couple of minutes. This is where we want to help people out by offering an easy-to-use tool that can be deployed quickly from a dockside or small boat. We believe that the need for 3D photogrammetry models ranges from archaeology and biology to ships, underwater constructions, and other marine assets, and our goal is to make a tool that gives these areas an eye under the surface that can be used for photogrammetry and other types of documentation.

Boring for most of us but maybe an incredibly important model for others.

A small note on a more direct cultural heritage theme: we have mostly only explored wrecks around Trondheim, which is a tiny part of Norway, which in turn is a tiny part of the world. We take pride in documenting what we can but know that there are a lot more wrecks and artifacts waiting to be rediscovered and properly documented even just in our own little fjord, let alone the whole world.

Video of our latest exploration dive, MS Rauenthaler, which sank in 1943 killing 19. She now rests at 160 m depth and is inaccessible to divers.


Jonas Follesø is the Chief Technology Officer at Blueye Robotics, leading a team of 15 talented engineers. His background is in software development, with more than 12 years of experience in the software consulting business before joining Blueye in 2016. In his spare time, Jonas is an active scuba diver, exploring the ocean both in person and digitally using the Blueye Pioneer.


Jakob Andren works as a control system engineer on the software running on the Blueye Pioneer, ensuring optimal behavior in water and making the drone easy to operate. He holds a master’s degree in physics and has been with Blueye since 2017. Jakob learned Blender in his free time, is now enthusiastically exploring underwater 3D photogrammetry at Blueye, and has created most of the models we have published on Sketchfab.




Blueye Robotics / YouTube / Facebook / Instagram


About the author

Blueye Robotics

Your eyes below the surface
