Build the Cities
Created in Feb. 2015
The entire music video consists of multiple stages. The basic structure of each stage is a dynamic, subdivided cubic cell that can multiply according to a designated distribution pattern.
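The subdivision step behind these cells is essentially one level of an octree split: a cube divides into eight equal sub-cubes. A minimal sketch of that idea in plain Java follows; the `Cell` class, its fields, and the corner-based layout are my own illustration, not the project's actual code.

```java
import java.util.ArrayList;
import java.util.List;

public class Cell {
    final float x, y, z;   // corner of the cubic cell
    final float size;      // edge length of the cube
    final List<Cell> children = new ArrayList<>();

    Cell(float x, float y, float z, float size) {
        this.x = x; this.y = y; this.z = z; this.size = size;
    }

    // Split this cube into 8 equal sub-cubes (one octree subdivision step).
    void subdivide() {
        float h = size / 2f;
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                for (int k = 0; k < 2; k++)
                    children.add(new Cell(x + i * h, y + j * h, z + k * h, h));
    }

    public static void main(String[] args) {
        Cell root = new Cell(0, 0, 0, 8);
        root.subdivide();
        System.out.println(root.children.size());      // 8
        System.out.println(root.children.get(7).x);    // 4.0 (far corner child)
    }
}
```

Applied recursively and driven by a distribution pattern, this kind of split is what lets a single cell multiply into the dense structures seen in the video.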
For generating the animated singing figure (Kerli), the pattern is computed from the depth sequence of the original footage. To capture that footage, a Kinect, a camera, and Depthkit were used to shoot the RGB and depth streams simultaneously. Since I was not using Depthkit's built-in visualizer, I later developed an additional program to post-sync the two recordings using the millisecond tags of the depth sequence.
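One plausible way to post-sync by millisecond tags is to pick, for each RGB frame time, the depth frame whose tag is nearest. The sketch below shows that matching step; the method name and the sample tag values are hypothetical, not taken from the project.

```java
public class DepthSync {
    // depthTags must be sorted ascending; returns the index of the
    // depth frame whose millisecond tag is closest to rgbMillis.
    static int nearestDepthFrame(long[] depthTags, long rgbMillis) {
        int lo = 0, hi = depthTags.length - 1;
        while (lo < hi) {                    // binary search: first tag >= rgbMillis
            int mid = (lo + hi) / 2;
            if (depthTags[mid] < rgbMillis) lo = mid + 1; else hi = mid;
        }
        // also check the previous tag, which may be nearer
        if (lo > 0 && rgbMillis - depthTags[lo - 1] <= depthTags[lo] - rgbMillis)
            return lo - 1;
        return lo;
    }

    public static void main(String[] args) {
        long[] tags = {0, 33, 66, 100, 133};            // stand-in depth tags (~30 fps)
        System.out.println(nearestDepthFrame(tags, 40)); // 1 (tag 33)
        System.out.println(nearestDepthFrame(tags, 50)); // 2 (tag 66)
    }
}
```

With a lookup like this, the RGB and depth sequences can drift or drop frames independently and still stay visually aligned.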
For generating the cityscapes, I programmed a separate generator that produces images of random aerial views of buildings, using brightness to indicate each block's altitude. These images were then imported and read by the system in the same way as Kerli's depth sequence. The mapping of the pattern is also affected by each host cubic cell's "gravitation mode", which changes the pattern's facing direction.
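The brightness-to-altitude mapping amounts to reading each pixel as a heightmap sample. A rough sketch, assuming a linear 0–255 brightness scale (the small stand-in grid and `maxHeight` parameter are illustrative, not from the project):

```java
public class HeightMap {
    // Map a 0-255 brightness value to a building height in world units.
    static float altitude(int brightness, float maxHeight) {
        return brightness / 255f * maxHeight;
    }

    public static void main(String[] args) {
        // Stand-in for the pixels of a generated aerial image.
        int[][] aerial = { {0, 128}, {255, 64} };
        float maxH = 10f;
        for (int[] row : aerial)
            for (int b : row)
                System.out.println(altitude(b, maxH));
    }
}
```

In the real pipeline the same read would run over the generated aerial images, so a white block extrudes to full height and a black one stays flat.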
The entire music video is programmed and generated in Processing, with a few subtle radial blur effects added in Premiere during compositing.
Demonstrations of multiple development stages:
Dynamic Octree structure subdivision + Free roam camera
Dynamic Bézier curve set subdivision (bound to an invisible octree structure) + Free roam camera
Synchronized animated RGB & depth sequence
Dynamic Bézier curve set subdivision (bound to an invisible octree structure) + Multiplication based on animated RGB & depth sequence