My name is Alan Gordie; I’m a software engineer and artist, and I’ll be fifty years young next year. I was very fortunate to have grown up with computers in my life since the 1970s, when my father purchased our first TRS-80 Model I with 4K of RAM and a cassette drive. I taught myself to program in BASIC on that computer, which led to other computers and other languages over the years. Way back in the summer of 1979, I also played a very early flight simulator on that machine, and it primed my imagination for virtual flights over digitized worlds. A few years later, seeing the movie TRON, and grappling at the same time with the idea of digitizing real-world objects into the data realm, made a lasting impact on me. Of course, we now live in a time when we can access 3D scans of a neighboring planet from the comfort of home, or from a moving train, on a PC or mobile phone, exploring the real and the unreal at ever greater levels of fidelity.
I’m a self-taught C/C++/Python programmer by trade, and also an autodidact in computer graphics programming, visual art and fashion. Fortunately, I have had good luck here and there over the past twenty years and have been able to work in fields that overlap with graphics and data visualization. One of those projects, nearly twenty years ago, involved designing hardware-independent, cross-platform printing systems with WYSIWYG design tools and robust file formats to compete with PDF. Another fun project was being a principal in an early street-view startup (VisRE, 2004-2008, now defunct), where we captured 360-degree, 20 fps MJPEG video and derived depth maps from side-scan images, among other neat things.
Additionally, I was fortunate to work with a few commercial mapping vendors, building out open-source map tile rendering stacks, tile indexing and lots of other neat things that never saw the light of day, including indoor street-view-style capture experiments in large spaces. These were great learning experiences, they paid the bills and the clients were happy. Also, I’m generally available and always on the lookout for interesting opportunities. Please get in touch!
Enough about me, let’s get into these Mars HiRISE models.
“HiRISE (High Resolution Imaging Science Experiment) is the most powerful camera ever sent to another planet…” –USGS
First, a huge shoutout and thank you to everyone involved with HiRISE, but mostly NASA and the University of Arizona Lunar & Planetary Laboratory Principal Investigators for their efforts to capture Mars in such striking detail with their imaging instruments, and also for their efforts to share this data.
Special mention and thank you to the Mars Global Surveyor and MOLA experiments that operated from 1996 to 2001, which I credit with sparking my interest in early GIS and height data processing nearly two decades ago.
In short, HiRISE is imaging Mars at resolutions up to 30 cm per pixel; the team uses this data for high-fidelity stereo photogrammetry and shares it, raw and derived, in ever more robust ways. Find out more on NASA’s site.
These DTMs are generated by the HiRISE team using stereo photogrammetry to create triangulated meshes. Those meshes are baked to raster heightmaps and stored with map projection metadata for alignment with other NASA data products, one of which is the original stereo pair, which can then be used as a texture on those heightmaps.
A few gotchas, though: the images are HUGE, there are inconsistencies in the projections, and we want these on our phone and we want them now!! 🙂
I wanted to make relatively lightweight meshes for some of these datasets, something that would work on mobile via Sketchfab, which provides a somewhat ubiquitous platform for 3D viewing. An immediate benefit, then, is quick browser-based access to a 3D textured mesh with enough resolution to determine further areas of interest and to help drive where I devote future resources into making higher fidelity models of smaller regions within these overview areas.
However, one problem when making textured meshes from the DTM products is that the raw color images are typically 4x the resolution of the underlying heightmap rasters, so we’re looking at images with 60k-80k pixels on the long side. These files are also typically stored in the .JP2 (JPEG 2000) format, which many tools do not support. Thus, along the way, I created and stored lightweight, more widely usable versions of these images: first as GeoTIFF before the geometry processing, then as .PNG for publishing.
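As a sketch, that conversion pipeline can be expressed as gdal_translate command lines built in Python and run with subprocess. The flags are standard gdal_translate options; the file names are hypothetical examples, not the actual HiRISE product names.

```python
# Sketch of the image conversion pipeline: JP2 -> GeoTIFF -> PNG.
# Assumes GDAL's command-line tools are installed; file names are examples.

def jp2_to_geotiff_cmd(src_jp2, dst_tif):
    """gdal_translate arguments converting a JPEG 2000 image to GeoTIFF."""
    return ["gdal_translate", "-of", "GTiff", src_jp2, dst_tif]

def geotiff_to_png_cmd(src_tif, dst_png):
    """gdal_translate arguments converting a GeoTIFF to an 8-bit PNG."""
    return ["gdal_translate", "-of", "PNG", "-ot", "Byte", src_tif, dst_png]

print(" ".join(jp2_to_geotiff_cmd("color.jp2", "color.tif")))
print(" ".join(geotiff_to_png_cmd("color.tif", "color.png")))
```

Each returned list can be handed directly to subprocess.run, which avoids shell-quoting headaches with the long HiRISE file names.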
I decided to produce these 3D textured models with a constraint of 8k pixels on the long side of the output texture images, and to do so with minimal generational degradation during any “baking.” I also wanted the underlying reconstructed mesh to be relatively low polygon count, ideally in the ~200k polygon range.
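The 8k long-side constraint is just a uniform scale of the source dimensions. A minimal sketch, with made-up input dimensions standing in for a real HiRISE color strip:

```python
# Compute output texture dimensions so the long side fits an 8k budget,
# preserving aspect ratio. Pure arithmetic; the numbers below are examples.

def fit_long_side(width, height, max_long=8192):
    """Uniformly scale (width, height) so the longer side is at most max_long."""
    long_side = max(width, height)
    if long_side <= max_long:
        return width, height  # already within budget, no resample needed
    scale = max_long / long_side
    return round(width * scale), round(height * scale)

# e.g. a hypothetical 65,000 x 12,000 px color strip:
print(fit_long_side(65000, 12000))
```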
The Plan & The How
I now had a goal and wanted to attempt to make one of these meshed models matching the target constraints and, if successful, make more than one. That’s where Houdini enters the picture: it’s my current tool of choice for working with geometry, and SideFX is a great company with great tools.
First, I produce .tif and .vrt versions of the input heightmap and color imagery. (See Technical Challenges below for those particular gotchas.)
Next, I use QGIS to downsample the original imagery and store it as GeoTIFF, using half resolution for the height data. This can also be done via gdal_translate, or via Python with from osgeo import gdal, if that is your cup of tea.
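For the gdal_translate route, the half-resolution downsample can be sketched as follows; -outsize and -r are standard gdal_translate options, while the choice of average resampling and the file names are my own assumptions:

```python
# Sketch of the downsampling step via gdal_translate, assuming GDAL's
# command-line tools are installed. File names are hypothetical.

def downsample_cmd(src, dst, percent=50):
    """gdal_translate arguments shrinking an image to percent% per axis,
    using average resampling and writing GeoTIFF."""
    return ["gdal_translate", "-of", "GTiff",
            "-outsize", f"{percent}%", f"{percent}%",
            "-r", "average", src, dst]

print(" ".join(downsample_cmd("dtm_full.tif", "dtm_half.tif")))
```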
Then I fed these images into a SOP network (geometry nodes) and a COP network (image nodes) in Houdini, the latter for working with the intensity texture. The heightmap .tif was converted to a voxel heightfield, then to polygons, then progressively refined down to around the 200k polygon target. This yields a nice polygonal model with reasonable vertex normals, considering the small file sizes achieved, which I then saved as .obj for later import to Sketchfab.
I punted on re-baking initially and instead focused on high-quality resampling to get the original input image down to 8k with no padding and, thus, no UV changes. This means no generational degradation, but it also means no easy mipmaps. After some color correction and scaling in the COP network, I saved the result as .png for publishing.
Technical Challenges

Not many tools can handle the negative altitude values found in most Mars HiRISE DTMs, so I used gdal_translate with -scale to shift the negative-ranged values into a positive range while converting to GeoTIFF. It made for a simple pre-process.
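A sketch of that pre-process, shifting a signed elevation range so it starts at zero. In practice the min/max would come from gdalinfo on the actual DTM; the values and file names here are made up:

```python
# Sketch of rescaling negative DTM elevations into a positive range
# during IMG -> GeoTIFF conversion, via gdal_translate's -scale option.

def rescale_cmd(src_img, dst_tif, src_min, src_max):
    """gdal_translate arguments mapping [src_min, src_max] elevations
    onto the positive range [0, src_max - src_min], writing GeoTIFF."""
    return ["gdal_translate", "-of", "GTiff",
            "-scale", str(src_min), str(src_max),
            "0", str(src_max - src_min),
            src_img, dst_tif]

# Hypothetical DTM spanning -4500 m to -1200 m:
print(" ".join(rescale_cmd("DTEEC_example.IMG", "dtm.tif", -4500, -1200)))
```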
Also, many of the .JP2 files I encountered were not in alignment with the .IMG height data. To remedy this, I used gdalsrsinfo and gdal_translate to extract the WKT projection info from the DTM .IMG and used that to create VRT layers wrapping the original JP2 data with correct projection info.
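That repair boils down to two commands: dump the DTM’s projection as WKT to a file, then wrap the JP2 in a VRT that asserts that projection (gdal_translate’s -a_srs accepts a file containing WKT). A sketch, with hypothetical file names:

```python
# Sketch of the projection fix: extract WKT from the DTM, then wrap the
# JP2 in a VRT carrying that projection. Assumes GDAL's CLI tools.

def extract_wkt_cmd(dtm_img):
    """gdalsrsinfo arguments printing the DTM's projection as WKT
    (redirect stdout to a .wkt file when running it)."""
    return ["gdalsrsinfo", "-o", "wkt", dtm_img]

def wrap_vrt_cmd(src_jp2, dst_vrt, wkt_file):
    """gdal_translate arguments wrapping the JP2 in a VRT layer whose
    projection is overridden via -a_srs with the WKT file."""
    return ["gdal_translate", "-of", "VRT", "-a_srs", wkt_file,
            src_jp2, dst_vrt]

print(" ".join(extract_wkt_cmd("DTEEC_example.IMG")))
print(" ".join(wrap_vrt_cmd("color.jp2", "color.vrt", "dtm.wkt")))
```

The VRT wrapper is cheap: it never copies the JP2 pixels, it just layers the corrected georeferencing on top.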
A future revision of this work will include pow2 textures, high-quality baked normals from the original-resolution heightmap, and subdivision of the overview areas into optimal tilesets to allow for ~50 MB per tile browsing via Sketchfab. If anyone is interested in helping or would like to know more details, please get in touch; I’d love to collaborate on this with other people.
A growing collection of Mars HiRISE models can be found here.
This data is now being used as the basis for a small arcade style flight simulator in Unreal Engine 4.
Here are a few minutes of exploration in a work-in-progress project: