This model is best viewed by selecting first-person mode in the navigation settings with a slow speed (e.g. 0 or 1), and by navigating in the neighborhood of the small gray cones. It is computed from a pair of videos taken by a helmet-mounted 360 camera (Garmin Virb) while walking for 8 minutes. The cones indicate the locations and FoV of the camera.
See also a fully textured version (with higher resolution) for several VR headsets at https://maximelhuillier.fr
The main steps of the reconstruction method are described in my publications: self-calibration (3DV'15, CVIU'17), structure-from-motion (BMVC'07, IVC'09), surface reconstruction (CVIU'18, improved by IC3D'19), curve matching (PAMI'02) and integration of the curves into the surface, texturing (IC3D'20), and coordinate reset (IC3D'21).
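For readers who want a bird's-eye view of how these steps chain together, here is a minimal, hypothetical Python sketch of the pipeline order; the function names and stub bodies are placeholders for illustration only and do not correspond to any released code.

```python
# Hypothetical outline of the reconstruction pipeline described above.
# The stages and their order come from the cited papers; the function
# names and stub bodies are placeholders, not the author's code.

def self_calibrate(videos):                      # 3DV'15, CVIU'17
    return "camera intrinsics"

def structure_from_motion(videos, intrinsics):   # BMVC'07, IVC'09
    return "camera poses", "sparse 3D points"

def reconstruct_surface(points, poses):          # CVIU'18, improved by IC3D'19
    return "triangle mesh"

def match_and_integrate_curves(surface, videos): # PAMI'02
    return surface + " with curves"

def texture_surface(surface, videos, poses):     # IC3D'20
    return "textured model"

def reset_coordinates(model):                    # IC3D'21
    return model + " in final coordinates"

def reconstruct(videos):
    intrinsics = self_calibrate(videos)
    poses, points = structure_from_motion(videos, intrinsics)
    surface = reconstruct_surface(points, poses)
    surface = match_and_integrate_curves(surface, videos)
    model = texture_surface(surface, videos, poses)
    return reset_coordinates(model)

print(reconstruct(["video_front.mp4", "video_back.mp4"]))
```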
The resolution of the original textures is divided by 3 (by 4 for the sky). The surface is simplified using edge collapses. The multi-camera model is non-central and uses a global shutter.
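As an illustration of this kind of post-processing, here is a minimal sketch using off-the-shelf libraries (Pillow for texture downscaling, Open3D for quadric decimation, which collapses edges); the file names are hypothetical, and this is not the code used to produce the model.

```python
# Minimal sketch of comparable post-processing with off-the-shelf tools.
# File names are hypothetical; the model itself was not produced with this code.
from PIL import Image
import open3d as o3d

# Downscale a texture by 3 (the sky texture would use 4 instead).
tex = Image.open("texture_atlas.png")
tex_small = tex.resize((tex.width // 3, tex.height // 3), Image.LANCZOS)
tex_small.save("texture_atlas_small.png")

# Simplify the mesh; Open3D's quadric decimation works by edge collapses.
mesh = o3d.io.read_triangle_mesh("surface.ply")
simplified = mesh.simplify_quadric_decimation(
    target_number_of_triangles=len(mesh.triangles) // 10)
o3d.io.write_triangle_mesh("surface_simplified.ply", simplified)
```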
Here is an image from the input video.