This one goes out to all the camera nerds in the world. As I’m sure you’re all aware, DSLR cameras have started to dip their toes into the video world, starting with the Nikon D90 and currently reaching some tantalising specifications in models like the Canon 7D. Meanwhile, RED is promising a ‘DSLR killer’ video unit. Recent research by a team at Oxford University has raised a welcome question: do we need to fork out tens of thousands of dollars to achieve high-quality, simultaneous still and video capture?
By modifying the light capture pattern of current sensor technology, the team have enabled a single data stream to create both full-resolution stills and high-speed video. The technique is surprisingly straightforward, and may not only enable advanced research applications – especially in the field of health – but also benefit us, the consumers!
The key to the breakthrough is the use of a modal pixel-capture technique within a single still frame. Basically, the pixels on the camera’s sensor are divided into groups – say, groups of 16. Within each group, the pixels are instructed to capture light in a strictly controlled sequence, within the normal time frame of a single still exposure. Once the exposure is completed, the data can either be used together – to create a normal, full-resolution image – or teased apart into 16 sequential frames, using one pixel at a time from each pixel group. Thus there’s an inverse trade-off between movie frame rate and resolution, but that’s fairly normal.
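Not having the paper, here’s my rough mental model of the demultiplexing step, sketched in Python – the 4×4 grouping and the row-major time ordering within each group are my assumptions, not necessarily the authors’ design:

```python
def demux_subframes(sensor, g=4):
    """Split one exposure into g*g sequential sub-frames.

    `sensor` is a 2D list (H x W) of pixel values. Within each g x g
    group, pixel k (row-major) is assumed to have been exposed during
    time slice k, so sub-frame k collects that one pixel from every
    group, giving g*g frames at (H/g) x (W/g) resolution.
    """
    h, w = len(sensor), len(sensor[0])
    frames = []
    for k in range(g * g):
        dy, dx = divmod(k, g)  # position of pixel k inside its group
        frame = [[sensor[y + dy][x + dx]
                  for x in range(0, w, g)]
                 for y in range(0, h, g)]
        frames.append(frame)
    return frames

# Toy 8x8 'sensor' where each pixel simply records its own time slice:
sensor = [[(y % 4) * 4 + (x % 4) for x in range(8)] for y in range(8)]
frames = demux_subframes(sensor)
# -> 16 sub-frames, each 2x2; sub-frame k is uniformly k
```

Used together, the same pixel values reassemble into the full-resolution still; teased apart like this, they become the 16-frame movie burst.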
Some basic calculations make me think this could be very cool. The Canon 7D shoots 18 megapixel stills – dividing each still up into 18 high-speed frames would give a movie capture of 1Mp per frame, which is similar quality to 720p video. It can take 8 frames per second, which would add up to 144 fps of video overall. However, in reality, those frames aren’t seamless. There’s a time gap between the end of one frame capture and the start of the next. That means that stills shot at 1/500th of a second give tiny bursts of 9 000 fps video capture (18 frames inside 1/500th of a second).
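My back-of-envelope arithmetic, in Python for anyone who wants to poke at it (the 1/500th shutter speed is just an example figure; note that 18 frames in 1/500th of a second works out to 18 × 500 = 9 000 fps):

```python
sensor_px = 18_000_000   # Canon 7D still resolution, in pixels
groups = 18              # sub-frames teased out of each still
stills_per_sec = 8       # 7D continuous shooting rate
shutter_denom = 500      # example shutter speed: 1/500th of a second

frame_px = sensor_px // groups      # 1 000 000 px per sub-frame (720p is 921 600)
avg_fps = stills_per_sec * groups   # 144 sub-frames captured per second overall
burst_fps = groups * shutter_denom  # 9 000 fps within each short burst
```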
Hmmm. So a 7D gives me 8 separate bits of ultra slo-mo video capture per second, with nothing in between? That’s not practically useful to a consumer!
Unfortunately, I don’t have access to the full paper, so I can’t see if they’ve found a way around this. The high pixel count of DSLR sensors means that you could divide the data a long way before the video quality declined to the point of being unwatchable – you could pull 52 frames at 480p from a single 7D frame, which is 26 000 fps at a 1/500th shutter speed! However, because those bursts don’t link up seamlessly, there won’t be any mega-video firmware updates for our DSLRs any time soon.
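For the division game, a tiny sketch of the trade-off – I’m assuming DVD-style 480p (720 × 480 pixels), which is what reproduces the 52-frame figure:

```python
def subframes_per_still(sensor_px, frame_px):
    """How many sub-frames one still exposure can be split into."""
    return sensor_px // frame_px

def burst_fps(n_frames, shutter_denom):
    """Frame rate inside one burst, for a 1/shutter_denom second exposure."""
    return n_frames * shutter_denom

px_480p = 720 * 480                           # DVD-style 480p: 345 600 px
n = subframes_per_still(18_000_000, px_480p)  # 52 sub-frames per 7D still
rate = burst_fps(n, 500)                      # 26 000 fps within the burst
```

Swap in any target resolution to see how far the data stretches before the frames get too small to watch.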
The other issue I’m wondering about is the image quality obtained from the modal image capture method. If each pixel is only ‘switched on’ for 1/16th of the time that the normal still is being captured, then it is getting sixteen-fold less light than normal – a loss of four stops. To compensate, you could decrease the shutter speed (undesirable for fast motion capture), widen the aperture (possible, but tricky at high speeds where it’s probably wide open already) or push up the pixel sensitivity (ISO). The ISO is probably the key here – high-ISO noise performance in pro-level cameras is getting pretty incredible these days, so pushing the ISO 4 stops for the purposes of gaining this super-fast motion capture might be feasible.
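The light-loss sum is easy to check, assuming each pixel is active for 1/16th of the exposure (the ISO 100 base is just a hypothetical starting point):

```python
import math

light_fraction = 1 / 16                         # pixel active 1/16th of the exposure
stops_lost = math.log2(1 / light_fraction)      # 4.0 stops of light lost
base_iso = 100                                  # hypothetical base ISO
needed_iso = base_iso * 2 ** round(stops_lost)  # ISO 1600 to compensate
```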
I’m left a little wanting in technical knowledge at the end of this post, though – is this really a new idea, or does it simply take the concept of interlacing video and rework it to square groups rather than lines of pixels? Does ultra high speed imaging even matter for a consumer when it’s not continuous? Most importantly, when will I be able to see footage of a barrel at 9 000 frames per second, even if it is only for 18 frames?
For making it this far, you get a bonus video of a shorebreak pit at Main Beach a couple of weeks ago…