A test of the viability of video (rather than discrete photos) as input for photogrammetry.
Cuddly toys were placed in a light tent and captured on video. The video was split into still images, which were then used as input to COLMAP to generate this sparse point cloud.
The toys sat on a turntable that I turned by hand while manually varying the height of the capture device. Under ideal conditions the camera would instead follow a preprogrammed, automated corkscrew path orbiting the subject.
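That idealised corkscrew trajectory can be sketched parametrically. A minimal Python example; the radius, number of turns, and climb height are illustrative assumptions, not measurements from this capture:

```python
import math

def corkscrew_path(n_points=300, radius=0.5, turns=3, height=0.6):
    """Generate (x, y, z) waypoints for a helical orbit around the subject.

    The camera circles the origin `turns` times while rising linearly
    from z=0 to z=height -- the idealised trajectory described above.
    All dimensions are assumed placeholder values.
    """
    points = []
    for i in range(n_points):
        t = i / (n_points - 1)           # 0..1 progress along the path
        angle = 2 * math.pi * turns * t  # angular position around the subject
        points.append((radius * math.cos(angle),
                       radius * math.sin(angle),
                       height * t))      # steady vertical climb
    return points

path = corkscrew_path()
```

Feeding waypoints like these to a motorised rig (or robot arm) would replace the manual turntable-plus-hand-height procedure used here.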
Blender’s Video Sequence Editor was used to split the video into discrete JPEGs. (Rendering to JPEG is significantly faster than rendering to PNG.)
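The same extraction step can be scripted with ffmpeg instead of Blender. A hedged sketch that only assembles the command line (the input filename, output directory, and quality setting are assumptions):

```python
def ffmpeg_extract_cmd(video="capture.mp4", out_dir="frames", quality=2):
    """Build an ffmpeg argv list that dumps every frame as a JPEG.

    -qscale:v 2 is near-maximum JPEG quality; JPEG output is much
    faster to write than PNG, matching the observation above.
    Filenames here are hypothetical placeholders.
    """
    return ["ffmpeg", "-i", video,
            "-qscale:v", str(quality),
            f"{out_dir}/frame_%05d.jpg"]

cmd = ffmpeg_extract_cmd()
# run with e.g.: subprocess.run(cmd, check=True)
```

Scripting this step makes the frame-extraction settings reproducible across captures.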
Video capture: Samsung Galaxy S7; 1920×1080 at 30 fps; the Open Camera app was used to lock exposure during capture (unlike the default camera app, which adjusts exposure dynamically); ~45 s, ~1400 frames.
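The frame count is consistent with the clip length; a quick sanity check using the numbers above:

```python
fps = 30
duration_s = 45            # approximate clip length from the capture notes
expected = fps * duration_s
print(expected)            # 1350 -- close to the ~1400 frames actually obtained
```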
COLMAP: feature extraction with default settings; feature matching with the 256K-word vocab tree (‘sequential’ matching was tried but gave results inferior to the vocab tree).
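The COLMAP steps above can also be driven from its command-line interface rather than the GUI. A sketch that assembles the commands for a default feature-extraction, vocab-tree-matching, and sparse-reconstruction run; all paths are placeholders, and the exact vocab-tree filename is an assumption (COLMAP distributes several sizes):

```python
def colmap_pipeline(images="frames", db="database.db",
                    vocab_tree="vocab_tree_flickr100K_words256K.bin",
                    out="sparse"):
    """Argv lists mirroring the settings above: default feature
    extraction, vocab-tree matching, then incremental mapping.
    Paths and the vocab-tree file are hypothetical placeholders."""
    return [
        ["colmap", "feature_extractor",       # SIFT features, default options
         "--database_path", db, "--image_path", images],
        ["colmap", "vocab_tree_matcher",      # matching via a 256K-word vocab tree
         "--database_path", db,
         "--VocabTreeMatching.vocab_tree_path", vocab_tree],
        ["colmap", "mapper",                  # incremental sparse reconstruction
         "--database_path", db, "--image_path", images,
         "--output_path", out],
    ]

cmds = colmap_pipeline()
# run each with e.g.: subprocess.run(cmd, check=True)
```

Scripting the pipeline this way would let the whole video-to-point-cloud test be rerun unattended on new captures.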
Source video: https://youtu.be/ktc_7uIamyI
Creative Commons Attribution (CC BY)