Lighting Research Results

Well, I can confirm this statement from my experience too (for one, I designed a real-time 3D camera which could operate reliably in dark tunnels, seeing dark-clothed pedestrians right beside bright car headlights, and in another life, I researched the human visual system extensively).

Specifically, there is no single camera known to me which can record, in raw or otherwise, the tremendous dynamic range a Kodachrome 40 film can display. This film stock was meant to be projected, and the human visual system can easily cope with the large dynamic range a projected Kodachrome 40 image has.

A camera chip with the 12-14 bits per color channel available today can barely cope with that (quantization errors and noise will kick in) - therefore, if you want to record all the information available in the film stock, you will have to take multiple exposures. This in itself is, by the way, not yet “HDR”.
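To put rough numbers on this (a quick sketch; the noise-floor figure is an assumed value for illustration, not a measured camera property): on an ideal linear sensor, each bit corresponds to roughly one stop (EV), and read noise eats the lowest bits.

```python
# Back-of-the-envelope sketch; the ~2-bit noise floor is an assumed
# value for illustration, not a measured camera property.
def usable_stops(bit_depth: int, noise_floor_bits: float = 2.0) -> float:
    """On an ideal linear sensor, one bit is roughly one stop (EV);
    read noise makes the lowest bits unusable."""
    return bit_depth - noise_floor_bits

for bits in (12, 14):
    print(f"{bits}-bit sensor: ~{usable_stops(bits):.0f} usable stops")
```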

HDR in the technical sense is the combination of several exposures into a single radiance image. This involves estimating the gain curves of the camera/image-processing pipeline used. Once you have these gain curves, you can recover the scene radiances from the set of differently exposed images. This HDR image is, however, just that: a map of the scene radiances, and thus usually an image you will not be pleased to view.
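To illustrate the recovery step, here is a minimal sketch (not my actual pipeline code), assuming the inverse gain curve has already been estimated, e.g. in the style of Debevec/Malik; the weighting choice and all names are just for illustration:

```python
import numpy as np

def recover_radiance(images, exposure_times, inv_response):
    """Merge bracketed exposures into a relative radiance map.

    images         : list of uint8 arrays (same shape; per channel)
    exposure_times : list of exposure times in seconds
    inv_response   : 256-entry array, inv_response[z] = ln(E * t) for
                     pixel value z - the estimated inverse gain curve
    """
    log_sum = np.zeros(images[0].shape, dtype=np.float64)
    weight_sum = np.zeros_like(log_sum)
    for img, t in zip(images, exposure_times):
        z = img.astype(np.int32)
        # hat-shaped weight: trust mid-tones, ignore clipped pixels
        w = np.minimum(z, 255 - z).astype(np.float64)
        log_sum += w * (inv_response[z] - np.log(t))
        weight_sum += w
    # pixels clipped in every exposure get weight 0; guard against /0
    return np.exp(log_sum / np.maximum(weight_sum, 1e-8))
```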

These images look rather dull. This is mainly because the images we are accustomed to viewing have an S-shaped transfer curve applied to them, which crushes shadows and highlights and enhances the contrast in the intermediate tones.

So an essential second step of HDR image processing is tone mapping, which transfers the raw HDR image into something we are used to viewing.
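A minimal global tone mapper along these lines could look as follows (again just a sketch, not my actual processing; the percentile clipping points are assumed values):

```python
import numpy as np

def simple_tonemap(radiance, low_pct=1.0, high_pct=99.0):
    """Global tone mapping: compress the log radiances into [0, 1],
    then apply an S-curve that crushes shadows and highlights the way
    a film print does. Percentile clipping points are assumed values.
    """
    log_r = np.log(radiance + 1e-8)
    lo, hi = np.percentile(log_r, [low_pct, high_pct])
    x = np.clip((log_r - lo) / (hi - lo), 0.0, 1.0)
    s = x * x * (3.0 - 2.0 * x)        # smoothstep as a simple S-curve
    return (s * 255.0 + 0.5).astype(np.uint8)
```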

This is basically also the case when you are working with camera raw images - only that raw images give you a much smaller dynamic range to start from than a real HDR.

However, the optimal tone-mapping is very much scene-dependent, and I am not aware of any single tone-mapping algorithm which would be broadly applicable.

The tone-mapping step with HDR/raw camera images is also necessary for another reason: our normal output devices have, at least at this point in time, only 8 bits of dynamic range per color channel (some even less, using dithering to hide it). Granted, real HDR displays have been available for some time, but how many people own such a thing? Even if these displays hit the general market a few years from now, I doubt they will be a match for the old Super-8 projector gathering dust in the corner.

Summarizing: I think that, for the time being, nobody will be able to recreate a “real Super-8 movie experience” by currently available electronic means - the hardware simply does not exist yet.

That’s why I developed my own approach: taking 5 different exposures of each frame, spaced a little more than one EV apart, and combining them via exposure fusion. You can see some results (and more information) here.
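For anyone who wants to experiment with the general idea: OpenCV ships a Mertens-style exposure fusion under createMergeMertens. This is not my pipeline, and the filenames below are placeholders, but it demonstrates the basic fusion step:

```python
import cv2
import numpy as np

# Placeholder filenames for the 5 bracketed captures of one frame,
# each a bit more than 1 EV apart (darkest to brightest).
frames = [cv2.imread(f"frame_exposure_{i}.jpg") for i in range(5)]

# Mertens-style exposure fusion: blends the exposures directly based
# on per-pixel contrast, saturation and well-exposedness - no radiance
# recovery, no exposure times, no manual tone mapping needed.
fusion = cv2.createMergeMertens()
fused = fusion.process(frames)                 # float32, roughly [0, 1]
out = np.clip(fused * 255.0, 0, 255).astype(np.uint8)
cv2.imwrite("frame_fused.jpg", out)
```

The appeal of exposure fusion is exactly that it skips the explicit radiance-recovery and tone-mapping steps and goes straight to a displayable image.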

The good part is that this process is fully automatic (no manual fine-tuning required) and that it recovers the full dynamic range of the Kodachrome 40 film stock.

However, in the process, the appearance of the Kodachrome 40 film changes. The final image does not look at all like a projected image. And I must confess it does not really look like a real Kodachrome 40 image either. But I think that is the price one has to pay when working with current technology.

Note that the final output of my pipeline is not an HDR in the strict sense - the bit depth of the output image is just the standard 8 bits per color channel. Since I store the original captures (which are JPEG files, not even camera raw files), I could in principle, once HDR displays and video formats become widely available, rerun the pipeline and create real HDR imagery from the captures. But I guess I will never do this, as it would again involve manual interaction during the tone-mapping step.

Well, in closing, I think I need to remark that professional movie material is usually a different beast. Here, the lighting is normally carefully controlled, with substantially less dynamic range in any frame than in the average Super-8 vacation movie. I can imagine that in this case high-quality cameras and a processing path based on raw camera data can yield good results - certainly if the scan is made from the original negative film stock.

But scanning hobby-grade color-reversal film with a single exposure per frame, recovering all the detail in the shadows without blowing out the highlights, when under-exposed sections directly follow over-exposed ones? I’d be interested if someone can show me how to do this. :wink:
