Dynamic Range and Digitizing

Hello everyone,

I have been working on digitizing some old Super 8 home movies for quite some time, and have gone through a number of hardware iterations with varying degrees of success. I am currently using a Bell and Howell projector that I have modified with a stepper drive and an LED lamp. I really like what I am seeing here with the design of the Kinograph. My question, however, is about capturing all the information in the film frame. I am as interested in preserving the film frame as I am in producing a digital version.

I have researched the characteristics of Super 8 film and found conflicting information about the resolution and dynamic range of the stock. I do know that a typical digital camera cannot capture the full range of a film frame in one shot. I am using a Point Grey Flea USB3 machine vision camera (3 megapixels, 12-bit ADC) with a macro tube extension and a 50mm Nikon lens (what I have on hand).

I have been experimenting with HDR to extract more information from the frame: taking up to 10 bracketed frames and combining them into an HDR file with LuminanceHDR. This has sometimes produced spectacular results and other times terrible images, depending on the particular reel I was processing. So I have recently changed my approach to capturing RAW bracketed images, with the intent of building a good processing pipeline to create the final digital frame. I intend to archive the digital frames in whatever format preserves the most information and use that to produce the digital video.
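For illustration, a minimal sketch of one way to archive 12-bit sensor data losslessly in a 16-bit container with OpenCV; the file name and frame size here are just placeholders, not part of any actual setup:

```python
import cv2
import numpy as np

def archive_frame(raw12, path):
    """Store a 12-bit frame losslessly in a 16-bit container.

    raw12 : numpy uint16 array holding 12-bit values (0..4095).
    path  : output file; .tif and .png are both lossless for uint16.
    """
    # Shift the 12 significant bits to the top of the 16-bit word so viewers
    # interpret the brightness sensibly (4095 becomes 65520).
    cv2.imwrite(path, raw12.astype(np.uint16) << 4)

# Example with a synthetic 12-bit test frame (real frames come from the camera).
test = np.random.randint(0, 4096, (1200, 1600), dtype=np.uint16)
archive_frame(test, "frame_000001.tif")
```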

Having said all that, has anyone else explored this area in terms of faithfully capturing all the image information in the frame? If so, any input would be greatly appreciated.

Thanks

Are you using the internal LUTs for the Flea camera? Certain settings reduce it to only 8-bit processing internally, and you will lose some of that range.

Professionally, for really dense film stocks we triple-flash each frame, varying the intensity of the light source for each flash, and the exposures are combined for the final result. However, it is not standard ‘HDR’ processing that is involved, as that often produces odd results.

The other technique is to use a pre-processing LUT at capture time; this allows for more bits where you need them, to better preserve the details you require.

The Flea does not have great dynamic range out of the box, though; upgrading to a better sensor would allow you to capture all of the detail, even from quite dense stocks, in a single pass.


Thank you for your response.

I am not using the LUT; I am capturing RAW data right from the sensor at 12 bits, but I will experiment with the LUT. You mentioned a method to combine the images that is not HDR. Did you mean exposure fusion or something else?

I’ve read that Super 8 film can have up to 15 stops of dynamic range. How much range would a camera need to capture that in one pass? In terms of the Point Grey cameras, I think the most I saw was around 70 dB.
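As a rough conversion (one stop being a doubling of signal, i.e. about 6 dB), 15 stops works out to roughly 90 dB, while a 70 dB sensor covers about 11.6 stops:

```python
import math

def stops_to_db(stops):
    # one stop = a doubling of signal = 20 * log10(2) ≈ 6.02 dB
    return stops * 20 * math.log10(2)

def db_to_stops(db):
    return db / (20 * math.log10(2))

print(stops_to_db(15))   # ≈ 90.3 dB needed to cover 15 stops
print(db_to_stops(70))   # ≈ 11.6 stops for a 70 dB sensor
```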

While I’m not against upgrading my camera, I would like to try to optimize what I have now before buying a new one (a fair-sized expense).

A non-linear LUT will most likely allow you to get a far better result in a single pass. The tested range of the Blackfly is around 73 dB, but it outperforms many cameras that have higher listed DR figures; most manufacturers seem to totally BS their figures, whereas PGR seem to be conservative with their measurements.

Using a log LUT to shift more bits down into the shadow areas should allow you to grade the capture and maintain better shadow details.
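If it helps, here is a minimal sketch of the kind of non-linear LUT being described, assuming a 12-bit linear input mapped onto a 10-bit output with a simple log curve; the exact curve your camera firmware expects will differ, this only shows how a log mapping spends more output codes on the shadows:

```python
import numpy as np

def log_lut(in_bits=12, out_bits=10):
    """Map linear sensor codes to output codes with a simple log curve,
    so more of the output range is spent on the shadows."""
    in_max = 2 ** in_bits - 1
    out_max = 2 ** out_bits - 1
    x = np.arange(in_max + 1, dtype=np.float64)
    y = np.log1p(x) / np.log1p(in_max)     # log1p keeps code value 0 well behaved
    return np.round(y * out_max).astype(np.uint16)

lut = log_lut()
print(lut[0], lut[64], lut[4095])   # the bottom ~1.6% of input gets roughly half the output codes
```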

Peter, you touched on something that made me think a bit: capturing different exposures by varying the intensity of the lighting source. @dio449, is that how you were capturing your bracketed images, or were you just taking them at different EV values, exposure times, etc.?

I would think that different film stocks would allow different amounts of light through. Therefore, increasing EV or exposure time wouldn’t necessarily have any effect if the light isn’t getting through the film in the dark areas.

Additionally, I’m not sure what effect increasing the light source brightness would have on its color temperature. I know this is sometimes an issue with lower-end studio flashes in traditional photography: as you lower or raise the flash level, the color temperature changes, throwing off your white balance from shot to shot. In this application, I’d think that to really get a “good” HDR image, you’d first need to ensure the white balance of the various exposures match each other; then, once you have the combined frame, you’d need to ensure it matches from frame to frame. I’m not sure light from the sprocket hole could be used as that reference point, though, since it’s not actually passing through the film in a white area.
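To make the idea concrete, here is a rough sketch of one way to match colour balance across bracketed exposures before fusing them; this is only an illustration (scale R and B so the channel ratios match a reference exposure), not anything from an actual pipeline:

```python
import numpy as np

def match_white_balance(exposures, ref_index=0):
    """Scale R and B of each bracketed exposure so its channel ratios match
    a chosen reference exposure. Only the colour balance changes; the overall
    brightness differences that make up the bracket are left alone."""
    ref_means = exposures[ref_index].reshape(-1, 3).mean(axis=0)
    ref_rg = ref_means[0] / ref_means[1]           # reference R/G ratio
    ref_bg = ref_means[2] / ref_means[1]           # reference B/G ratio

    matched = []
    for img in exposures:                          # each img is H x W x 3, float
        m = img.reshape(-1, 3).mean(axis=0)
        gains = np.array([ref_rg / (m[0] / m[1]),  # bring R/G to the reference
                          1.0,                     # leave G (and brightness) alone
                          ref_bg / (m[2] / m[1])]) # bring B/G to the reference
        matched.append(img * gains)
    return matched
```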

I am using an external trigger on the camera, which keeps the shutter open for the duration of the trigger pulse. So I am changing the exposure time for each bracketed frame. The light source remains on the whole time so the color temperature is constant for all exposures.
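For anyone curious about the timing side, here is a rough sketch of generating that kind of bracketed trigger pulse from a Raspberry Pi GPIO pin; the pin number, exposure times and delays are placeholders, and the camera has to be configured separately for pulse-width (bulb) triggering:

```python
import time
import RPi.GPIO as GPIO   # assuming the trigger line is driven from a Pi GPIO pin

TRIGGER_PIN = 18                       # placeholder, depends on your wiring
EXPOSURES_S = [0.004, 0.016, 0.064]    # example bracket, two stops apart

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIGGER_PIN, GPIO.OUT, initial=GPIO.LOW)

try:
    # One trigger pulse per exposure; in pulse-width ("bulb") trigger mode the
    # shutter stays open for as long as the pulse is high.
    for t in EXPOSURES_S:
        GPIO.output(TRIGGER_PIN, GPIO.HIGH)
        time.sleep(t)                  # pulse width = exposure time (coarse, software timed)
        GPIO.output(TRIGGER_PIN, GPIO.LOW)
        time.sleep(0.05)               # give the previous frame time to read out
finally:
    GPIO.cleanup()
```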

I did some testing with the LUT and it has resulted in a significant improvement in overall image quality. I am now experimenting with different curves to try and quantify the changes so I can optimize.

Good stuff; I thought a LUT might help you make the most of the range the sensor has.
Digitap, yes, that is the problem with using standard HDR algorithms: they aren’t great for moving images, and you get a lot of inconsistency in colour and luminance from frame to frame.
There are algorithms that work well. I can’t talk about the ones we developed, as they are part of the company’s IP, but there are research papers out there, if you go looking, that contain some very useful math that applies directly to the problem.

For multi-exposure, we have the LEDs at maximum brightness all the time, enough to expose the densest film areas (after setting the brightness mix of R, G & B to suit the particular film stock), and then adjust the duration of the flash to adjust the exposure. The camera shutter is open for exactly the same time for each exposure; it is the duration of the light that changes. With our lighting system this gives consistent colour results, but also ensures that the densest areas of the film still have some photons passing through them that make it to the sensor.
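As a rough sketch of that timing scheme (fixed shutter, only the flash duration changes), with purely illustrative numbers:

```python
def flash_bracket(base_ms, stops=(-2, 0, +2), shutter_ms=80.0):
    """Flash durations for one bracketed frame: the shutter time stays fixed,
    only the LED on-time changes, so the colour of the light never shifts."""
    durations = [base_ms * 2 ** s for s in stops]
    if max(durations) > shutter_ms:
        raise ValueError("longest flash must fit inside the shutter window")
    return durations

print(flash_bracket(base_ms=10.0))   # [2.5, 10.0, 40.0] ms inside an 80 ms shutter
```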
The geeks in our photometry division worked it all out and tested it to verify that we get consistent colour throughout. It did mean a particular mix of LEDs, including two different red LEDs, to get full spectral coverage and consistency at different light levels. We had to bin LEDs from various manufacturers as well, as individual LED performance was often way off what was quoted.

Also, a quick note: high-CRI white LEDs often have colour issues when it comes to sensor response. The CRI methodology is annoyingly flawed, and CRI is particularly inaccurate in the deep reds. I wish the industry would switch to CQS, which is far more accurate, but it probably won’t happen.

For more reading, http://apps1.eere.energy.gov/buildings/publications/pdfs/ssl/cqs_rationale_06-10.pdf


Thanks for the input.

I have been mostly focusing on extracting the information from the frame, but your RGB flash has got me thinking. Originally I planned to do some color correction after the fact, but I would have to assume that adjusting the color of the light is a superior approach. I have a reasonable selection of LEDs and am going to build up a basic flash. I also have colors other than RGB; would there be any benefit to adding them to the mix to even out the spectrum, given that the sensor only responds to relatively narrow portions of red, green, and blue?


I’m late to the conversation here, but I thought I’d share my experience.
Dio, I’m also using a B&H projector and a stepper motor, but I’m using the Raspberry Pi’s camera, which has poor dynamic range. In my first setup, I was able to capture much more of the range by bracketing with different exposure times (easy using the Pi camera’s Python API) and combining the exposures using the ‘enfuse’ utility. It is not ‘true’ HDR - the final result is still 8 bits per channel - but the results often look much better, and are very consistent from frame to frame.
I recently discovered that the same algorithm is also available within OpenCV v3 as “merge_mertens” (see the “High Dynamic Range Imaging” page of the OpenCV 3.0.0-dev documentation).
That’s what I’m using in my system now, doing the processing in RAM. I find that a triple exposure usually achieves great results, and I vary the ‘spread’ of my bracketing based on the individual film.
In case you’re interested, here’s my code.
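For anyone who wants to try the same approach, here is a minimal sketch of the merge_mertens step in OpenCV (this is not the linked code, and the file names are placeholders):

```python
import cv2
import numpy as np

# Placeholder file names for one bracketed set of the same film frame.
paths = ["frame_under.png", "frame_normal.png", "frame_over.png"]
exposures = [cv2.imread(p) for p in paths]

# Mertens exposure fusion weights each pixel by contrast, saturation and
# well-exposedness; no camera response curve or tone mapping is needed.
fused = cv2.createMergeMertens().process(exposures)   # float32, roughly 0..1

# Clip and convert back to 8-bit for viewing or encoding to video.
out = np.clip(fused * 255, 0, 255).astype(np.uint8)
cv2.imwrite("frame_fused.png", out)
```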
