Read on to find out how the team at the Natural History Museum London built a custom 3D interactive using the Sketchfab API…
Introduction
I’m Dhia Shakiry, a creative web developer at the Natural History Museum, London. I have a generalist skillset, having worked on backend, frontend and gallery hardware integrations, but am now mainly focused on the front-end, building websites, mobile apps and interactives with JavaScript.
A few years ago we migrated from using Flash for our gallery interactives software to HTML5 where I’ve been leading on developing a range of templates and animations that can be re-used to produce gallery interactives quickly.
I’ve always had a passion for 3D graphics, having been an avid PC gamer since my early days. As browser support and performance for 3D rendering have improved over the past few years with the standardisation of WebGL, I’ve looked for opportunities to get involved in 3D content projects. The wealth of amazing natural history specimens really lends itself to the close inspection that 3D models allow, and 3D data is increasingly being used in the study, preservation and showcasing of our collections.
WebGL/ThreeJS interactives
The museum had commissioned one-off externally-developed 3D interactives before, but nothing in-house or with web technologies. So back in 2015, when the opportunity came up to develop some 3D content as part of two permanent exhibitions (the Stegosaurus display and the Human Evolution gallery), I jumped at the chance and developed two HTML5 touchscreen interactives using Three.js/WebGL (with AngularJS as a framework). These allowed the visitor to interact with the 3D subject matter and view interpreted content about a specific specimen feature.
Both interactives had unique challenges. The Stegosaurus model came to us as a high-resolution model in 256 separate meshes that had to be loaded individually into the WebGL scene (the OpenCTM 3D data format got around the browser memory issues I encountered here), while the Human Evolution interactive featured two WebGL scenes side by side, with mirrored touch camera controls.
The interpretation for both interactives required highlighting different parts of the specimen. This was achieved by loading the 3D model as separate meshes, each with its own material definition, so that material properties (i.e. opacity, colour) could be modified in a targeted fashion. Additionally, I used Three.js’s raycast intersection and camera view objects to determine when a particular child mesh/specimen feature was being clicked on or viewed, so that the appropriate interpretation content could be triggered.
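To illustrate the per-mesh material approach, here is a minimal sketch (not the museum’s actual code) of highlighting one named feature while fading the rest. The mesh and material shapes mirror Three.js conventions (`mesh.name`, `material.opacity`), but the mesh names are placeholders:

```javascript
// Highlight one specimen feature by making its mesh fully opaque
// and fading every other mesh, relying on each mesh having its own material.
function highlightFeature(meshes, featureName, fadedOpacity = 0.25) {
  meshes.forEach((mesh) => {
    mesh.material.transparent = true;
    mesh.material.opacity = mesh.name === featureName ? 1.0 : fadedOpacity;
  });
}

// Usage with plain objects standing in for THREE.Mesh instances:
const meshes = [
  { name: 'plate_01', material: { opacity: 1, transparent: false } },
  { name: 'tail_spike', material: { opacity: 1, transparent: false } },
];
highlightFeature(meshes, 'tail_spike');
```

In a real scene the same targeting works with a `THREE.Raycaster` hit result, since the intersected object carries its `name` and `material` directly.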
In both projects, my lack of experience with 3D tools was a real bottleneck in preparing the 3D assets for loading into the HTML5 application. This, along with the time consuming task of setting up the Three.js lighting, camera positions and getting a good balance between model resolution and performance meant both projects took longer than they should have, even though I was happy with the final result. The custom development also meant we were largely restricted to the kiosk platform they were developed for (Chrome kiosk mode, a specific screen resolution and PC with a dedicated GPU), as performance and responsiveness on other channels (web/mobile) were not part of the projects’ scope.
Sketchfab API interactive
Our first exposure to Sketchfab and its JavaScript API was for an interactive for a Dodo exhibition at our museum at Tring. The interpretation team had permission to use a 3D dodo skeleton model from the Durban Natural Science Museum and Leon Claessens, which was already on Sketchfab:
I was tasked with developing a 3D interactive for this in a tight time-frame, in collaboration with the museum’s Imaging department, who were already hosting content on Sketchfab. Together we came across the Sketchfab API and realised we could use it to quickly develop an interactive with the custom user interface that public-facing interactives require (larger buttons and text for touchscreens, relevant interpretation content, locked-down browser navigation to prevent the user leaving the application, etc.).
Having played with Sketchfab’s 3D editor, I was impressed by how quickly a scene could be set up, with model positioning, materials, camera positions and lighting easily editable through the interface rather than having to be coded, as in my previous Three.js interactives.
The interactive itself was a very simple single-page jQuery application, with a screensaver and a main view containing the embedded Sketchfab window generated by the API. The default Sketchfab controls (annotations, controls, settings, etc.) were disabled via the API’s property settings.
Large HTML buttons were added to trigger the relevant camera positions in the Sketchfab window via API calls, and finally some error handling was added so that the application would attempt to reload the externally hosted Sketchfab content if there was a network issue on the interactive’s PC.
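A hypothetical sketch of how such an interactive can be wired up with the Sketchfab Viewer API follows. The UI-suppression option names follow the API’s init settings, but the model UID, camera positions, button markup and retry delay are all placeholders, not the actual project values:

```javascript
// Build the viewer init options: autostart the model and hide the
// default Sketchfab UI so only the large kiosk buttons remain visible.
function buildViewerConfig(callbacks) {
  return Object.assign({
    autostart: 1,
    ui_controls: 0,
    ui_infos: 0,
    ui_annotations: 0,
    ui_watermark: 0,
  }, callbacks);
}

// Named camera views ([x, y, z] position and target) -- placeholder values.
const VIEWS = {
  skull: { position: [0, 2, 5], target: [0, 1, 0] },
  feet:  { position: [3, 0, 2], target: [0, -1, 0] },
};

function initViewer(iframe, modelUid) {
  const client = new Sketchfab(iframe); // global from the Viewer API script tag
  client.init(modelUid, buildViewerConfig({
    success: (api) => {
      api.start();
      // Wire each large HTML button to a camera move in the viewer.
      document.querySelectorAll('[data-view]').forEach((btn) => {
        btn.addEventListener('click', () => {
          const v = VIEWS[btn.dataset.view];
          api.setCameraLookAt(v.position, v.target, 1.5); // 1.5 s transition
        });
      });
    },
    // If the externally hosted content fails to load (e.g. a network blip
    // on the kiosk PC), retry after a short delay instead of going blank.
    error: () => setTimeout(() => initViewer(iframe, modelUid), 5000),
  }));
}
```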
The installation ran on a standard small form factor PC with a 24” touchscreen, using Chromium Portable (a standalone build of the Chrome browser) running in fullscreen kiosk mode and set to autostart on PC bootup.
The dodo interactive was a successful prototype that showed how much quicker and more efficiently 3D HTML5 content could be created when using the Sketchfab API. Once the requirement of needing a networked PC to load the 3D content was met, the benefits of the polished Sketchfab model viewer and editing functionality were obvious. The Sketchfab team were also very responsive to my questions when I first started exploring the API, which really sped development up.
I’ll now hand over to Ben Capewell, who built our next, more complex Sketchfab-powered interactive.
A rather shaky video of the subject of this article ~ Tom
Building a Sketchfab Gallery Interactive
Background
Having successfully used the Sketchfab viewer to showcase a 3D model in an earlier gallery interactive for NHM Tring, we chose Sketchfab again when we were asked to create a new interactive display for the Whales: Beneath the Surface exhibition. It was the obvious platform, especially as the scientist who created the 3D models from the CT scans of the whale specimens was already using Sketchfab to host and share her work with us for review as she developed it.
The new interactive was to be slightly more complex than the last: three models to display rather than one, with the user able to switch between them, view accompanying text for each, and highlight two areas of interest on each model.
The Sketchfab viewer API takes the form of a JavaScript script that embeds a Sketchfab-hosted model into a web page using an iframe, so we chose to build the interactive as a simple AngularJS web application consisting of four screens of two types: an introductory home screen and a specimen screen type used to display each of the three models. Each specimen was to have two points of interest that could be selected to display some explanatory text and orient the model in such a way as to highlight the feature.
Architecture
The requirement to host three different models in an application with a slick user experience presented a new challenge, because models take a few moments to load into memory, which is enough to be a barrier in an application where users may want to switch quickly back and forth between them. It was therefore not an option to reinitialise the Sketchfab API each time a specimen screen was loaded, and consequently the obvious approach of making a single instance of the Sketchfab viewer part of the specimen screen template was not suitable.
The solution we found to work well was to create one Sketchfab viewer for each model, all loaded and initialised on application load. These viewers live outside of the ephemeral home and specimen screen HTML containers, and therefore exist for the lifetime of the application, using CSS absolute positioning so that they appear in the right place within the design. They are then shown and hidden as required as the user switches screens.
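The persistent-viewers pattern can be sketched as a tiny manager that toggles the three absolutely-positioned viewer containers. This is an illustrative sketch rather than the project’s code; the container ids are invented:

```javascript
// One container element per model, created once at application start.
// Toggling `visibility` (rather than `display`) keeps each iframe's layout
// and WebGL context intact, so switching models is instant.
function createViewerManager(containers) {
  return {
    show(modelId) {
      Object.keys(containers).forEach((id) => {
        containers[id].style.visibility = id === modelId ? 'visible' : 'hidden';
      });
    },
  };
}
```

A screen controller then only needs `manager.show('whaleA')` on navigation, with no model reloads.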
Animation
Part of creating an engaging and slick user experience was to add simple transitions between screens, which was easy to do thanks to AngularJS’s ngAnimate module. However, because the three Sketchfab viewers were required to live outside of the main page elements and be shown and hidden independently, they were not affected by the animation of the HTML elements forming the interactive’s screens. Essentially this meant that the screen transition would animate behind the model, leaving the model in place until the animation completed, whereupon it would suddenly switch to a different model or disappear. In short: visually jarring.
The solution to this problem was to take advantage of the Sketchfab viewer API’s screenshot feature. By intercepting transitions into or out of specimen screens we were able to request a screenshot of the current or next model and then display that image data in an element which perfectly masks the Sketchfab model, but is part of the screen that is about to be animated in or out, thus giving the impression of the model itself being animated as part of the screen it appears on. When the animation completes then the static image is replaced by the real model.
The sequence was thus:
- User taps on link to Home screen or a different model screen
- Capture Sketchfab model viewer display
- Display captured image directly in front of model viewer
- Hide model viewer
- Trigger screen change to animate specimen screen transition in or out
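The masking steps above can be sketched as a single function. The element shapes and the screenshot callback signature here are assumptions for illustration; the Viewer API provides a screenshot method that returns image data, which is all the trick relies on:

```javascript
// Mask the live Sketchfab viewer with a static screenshot so the screen
// transition can animate the image as if it were the model itself.
function maskViewerForTransition(api, viewerEl, maskImg, onMasked) {
  // 1. Capture the current Sketchfab render as a static image.
  api.getScreenShot('image/png', (err, dataUrl) => {
    if (err) { onMasked(); return; } // fall back to an unmasked transition
    // 2. Display the captured frame directly in front of the live viewer...
    maskImg.src = dataUrl;
    maskImg.style.visibility = 'visible';
    // 3. ...then hide the viewer and let the screen animation run.
    viewerEl.style.visibility = 'hidden';
    onMasked(); // caller now triggers the screen change animation
  });
}
```

When the animation completes, the caller reverses the process: show the real viewer again and hide the mask image.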
Timeout
One further refinement we wanted to give the interactive was the ability to automatically reset to the introductory screen when nobody was using it, so that the next visitors would hopefully find the display ready at the entry point.
Without the use of some sophisticated sensors this was not really possible to do in a foolproof way, but we decided that observing a minimum period without any interaction with the specimen screen was a sufficiently good indicator. Detecting interaction via our on-page controls was trivial, but we also needed to detect when people were manipulating the Sketchfab model. The Sketchfab API currently doesn’t emit touch events or model orientation change events, but we were able to poll the current model view every few seconds using the getCameraLookAt API method. This returns an object describing the current position and orientation of the model, and by converting this to a JSON string it can be trivially stored and compared with later view information to detect whether it has changed.
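The polling approach can be sketched as follows. The `getCameraLookAt` callback shape (an error followed by a camera object with `position` and `target` arrays) follows the Viewer API, but the poll interval, idle timeout and field names used in the serialisation are placeholders:

```javascript
// Serialising the camera view makes change detection a string comparison.
function serializeView(camera) {
  return JSON.stringify({ position: camera.position, target: camera.target });
}

// Poll the viewer every few seconds; if the view hasn't changed for
// `idleMs`, assume the visitor has walked away and reset the display.
function createIdleWatcher(api, onIdle, { pollMs = 3000, idleMs = 60000 } = {}) {
  let lastView = null;
  let lastChange = Date.now();

  return setInterval(() => {
    api.getCameraLookAt((err, camera) => {
      if (err) return;
      const view = serializeView(camera);
      if (view !== lastView) {
        lastView = view;
        lastChange = Date.now();
      } else if (Date.now() - lastChange >= idleMs) {
        onIdle(); // e.g. navigate back to the introductory screen
      }
    });
  }, pollMs);
}
```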
Many thanks Dhia and Ben for sharing your work with the rest of the museum community. The Whales: Beneath the Surface exhibition runs until February 2018.