“About four years ago I began to investigate the OpenGL potential of Max/MSP/Jitter. What I found interesting was the way I could take data of any sort and use it to define three-dimensional co-ordinates. In the case of 16: sec the data I chose was video pixels. The easiest way to think about how they have been made is to imagine taking a flexible video screen and rolling it into a cylinder. Now the image is playing on a rounded plane. Then I take any given pixel colour and use its value to offset that pixel from the plane of the cylinder. So red might move 20mm outwards and black 10mm back. That gives you a relief-like image wrapped around a cylinder. But instead of using a single image I am accruing the average value of each pixel over the duration of the video. So what you end up with is a form that is representative of all the frames in the video. Sort of like seeing every frame at one moment.
This interested me because it meant we had to see the resulting form as a document of time. Not just the before-and-after time of the object’s life – i.e. what happened to this object before I saw it and what will happen to it next – but what is happening to it now. ”
James Charlton, from an interview with Deborah Lawler-Dormer
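The process Charlton describes – averaging each pixel over every frame of a video, then offsetting points radially from a cylinder according to that average – can be sketched in a few lines of code. The following is a minimal illustration only, not the artist's actual Max/MSP/Jitter patch; the function name, the base radius, and the 20mm-scale offset are hypothetical choices made to mirror the figures mentioned in the interview.

```python
import numpy as np

def cylinder_relief(frames, base_radius=100.0, offset_scale=20.0):
    """Average pixel values over all frames, then map each pixel to a
    point on a cylinder, offset radially by its time-averaged value.

    frames: array of shape (num_frames, height, width), grayscale 0-255.
    Returns an array of shape (height, width, 3) of x, y, z points.
    """
    # "Seeing every frame at one moment": the mean of each pixel
    # across the whole duration of the video.
    mean = np.mean(frames, axis=0)                     # (height, width)
    h, w = mean.shape

    # Roll the flat image into a cylinder: each column becomes an angle.
    theta = np.linspace(0.0, 2.0 * np.pi, w, endpoint=False)

    # Offset each point from the cylinder surface by its averaged value
    # (brighter pixels push further out, like red 20mm vs black 10mm).
    radius = base_radius + offset_scale * (mean / 255.0)

    x = radius * np.cos(theta)                         # broadcasts over rows
    z = radius * np.sin(theta)
    y = np.broadcast_to(np.arange(h)[:, None], (h, w)).astype(float)

    return np.stack([x, y, z], axis=-1)
```

Feeding this a stack of video frames yields a point cloud whose silhouette encodes the whole video at once – the "relief-like image wrapped around a cylinder" of the interview.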