Strategies and methods for working with color negative in OpenCV

We have a plan for how we’ll be handling color negative in our Sasquatch 70mm scanner. We won’t be implementing any of that code for a little while though, as our initial scanning will be for color reversal/print film and for black and white. We have a couple jobs slated to start as soon as we have the machine running to our satisfaction, that will keep us busy for a while.

In the mean time I’m starting to think about coding up the color neg handling. Our goal is to do all the color processing work in OpenCV, where we’re also doing stuff like perf detection and frame registration.

I’m curious to see what OpenCV formulas others have come up with to handle scanning color neg to the Cineon standard. (there’s a pretty good reference here, by the way. A bit dated, but the basic info is sound)

Admittedly I haven’t looked into your project, and on top of that it has been a long time since I messed around with negatives and OpenCV. The last time I did something like that (although I can’t find the code I used then), the basic approach was to scroll through each pixel and flip the colors.

As a basic example if I remember correctly:
for y in range(height):
    for x in range(width):
        inverted[y, x] = 255 - img[y, x]   # 255, not 256: 8-bit values run 0-255

or something like that.
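For what it’s worth, in OpenCV/NumPy you wouldn’t need the per-pixel loop at all; the inversion vectorizes to a single subtraction. A minimal sketch (the sample pixel values are made up):

```python
import numpy as np

# The same inversion, vectorized: for uint8 images this is just 255 - img.
# (OpenCV's cv2.bitwise_not(img) gives an identical result on uint8 data.)
img = np.array([[[10, 100, 250]]], dtype=np.uint8)   # one BGR pixel
inverted = 255 - img                                 # each channel becomes 245, 155, 5
```

This only flips the values, of course; it does nothing about the orange mask or the negative’s gamma.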

… the documents referenced in this post should be also quite helpful, especially the fourth one.

So we have someone working on this now. The general strategy from the beginning on our scanner (which takes sequential images with R, G and B light and then merges them into a composite) has been to match the LEDs’ wavelengths as closely as possible to those of the film, and to do that for different types of film. But it’s nearly impossible to do this without some overlap between channels, so a matrix will be applied to deal with that.
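The channel-overlap correction described above can be sketched as a per-pixel 3×3 matrix multiply. The matrix values below are purely illustrative (the real ones would come from the test-film measurements), but the mechanics are the same:

```python
import numpy as np

# Hypothetical 3x3 crosstalk matrix: row i says how much of the true R, G, B
# signal bleeds into captured channel i. These numbers are made up for the demo.
M = np.array([[0.92, 0.06, 0.02],
              [0.05, 0.90, 0.05],
              [0.02, 0.08, 0.90]])

M_inv = np.linalg.inv(M)  # inverting the matrix undoes the mixing

def unmix(img):
    """Apply the inverse crosstalk matrix to an HxWx3 float image.
    (cv2.transform(img, M_inv) performs the same per-pixel multiply.)"""
    return img @ M_inv.T

# A pure-red patch that leaked into the other channels comes back clean:
captured = np.array([[[0.92, 0.05, 0.02]]])   # one pixel, as captured
recovered = unmix(captured)                   # ~[1, 0, 0]
```

In practice you’d derive `M` by photographing known patches on the test film and solving for the mixing, but that part is specific to each scanner’s LEDs and sensor.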

We are in the early stages of coming up with some test images that we will have recorded to film at Cinelab, nearby in New Bedford, MA. These are kind of specific to our setup. We can use these test films to measure how the camera and light interact, against a known set of values. From the test scans, he’ll develop the matrix. Of course, it’ll be useless to everyone else, but maybe I can post the non-proprietary parts of this process here when we’re done with it. it’s going to be a little while though.

I have done photo negatives only, not moving film. From my limited perspective (electronics/video guy), the capture goal is always to match the sensor’s range to the usable content. There are a couple of reasons to do so, but the less apparent one is to get a good signal-to-noise ratio for each channel.
The challenge with color negative is that the base is part of the light the sensor receives. If the levels are set to preserve the base, the range left for the content primaries is reduced. The lower levels can be addressed with the matrix, but not the S/N loss (fewer bits for the content), which translates to different noise levels for different colors.
That leaves two schools of thought: use the sensor’s range to capture the whole negative, at a cost to S/N, or clip the negative’s base to give the content maximum range. Deciding which (for a commercial project) is above my pay grade, but if it were up to me I would go for the latter.
One way I can think of to do the latter is to take a piece of blank (unexposed) negative and calibrate that blank as the sensor maximum for each color.
The challenge I had with photography was that the base varies from roll to roll, because of development and aging, and also between film brands.
Again, this is an electronics/video perspective; film experts probably have a different take.
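The blank-base calibration idea above amounts to a per-channel gain. A minimal sketch, with made-up base readings (the orange mask is why the three channels differ):

```python
import numpy as np

# Hypothetical per-channel readings from a blank (unexposed, developed) piece
# of the negative, as linear 0..1 sensor values. Numbers are illustrative only.
base = np.array([0.82, 0.55, 0.30])   # R, G, B

# Per-channel gains that place the base at full scale for each color,
# so every channel uses the sensor's whole range (best S/N per channel):
gain = 1.0 / base

def calibrate(img):
    """Scale an HxWx3 linear image so the film base reads 1.0 in each channel."""
    return np.clip(img * gain, 0.0, 1.0)

# The blank base itself now reads as balanced "clear" (~1.0 in every channel):
white = calibrate(base.reshape(1, 1, 3))
```

Since the base varies roll to roll, the `base` vector would need to be re-measured per roll rather than hard-coded like this.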

When you scan color negative on a lasergraphics scanner, you do a “base calibration” first. This does a couple things, as far as we can tell (Lasergraphics is a bit opaque about the specifics).

  1. The ScanStation uses a white light source, mixed from RGB LEDs. So it looks at unexposed film (between perforations, I believe), and calculates the color change needed to offset the orange mask. That is, it does with LEDs what some scanners do with a physical filter. The light is visibly more blue after this calibration.

  2. It sets the code value of this unexposed film to 95 (roughly 10%). The highlights are allowed to fall where they fall, even if they clip (this is all per the DPX industry standard, though you can override this with other settings, for film that is more dense - like some intermediates). You can also override this (and we do in cases where there’s blowout) using the basic grading tools. We just bring the overall gain down so that the max highlight is set to about 95% of the 10bit histogram scale in the user interface.

Because this is all done on the film that’s currently in the machine (and in some cases, like when there’s keykode on the film it can be automated to use a library of calibrations for specific stocks), you don’t have to worry too much about differences in the film bases from one stock to another.
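For reference, the “base at code value 95” convention maps onto the commonly cited 10-bit Cineon/DPX log encoding (this is the published Kodak formulation, not necessarily Lasergraphics’ exact math, which isn’t public): reference white sits at code 685, and each code value is a 0.002 density step through a 0.6 negative gamma.

```python
import numpy as np

REF_WHITE = 685          # 10-bit code value for reference white
DENSITY_PER_CODE = 0.002  # density step per code value
NEG_GAMMA = 0.6           # negative gamma in the classic Cineon formulation

def lin_to_cineon(lin):
    """Linear light (1.0 = reference white) -> 10-bit Cineon code value."""
    lin = np.maximum(lin, 1e-10)   # avoid log10(0)
    code = REF_WHITE + np.log10(lin) * NEG_GAMMA / DENSITY_PER_CODE
    return np.clip(np.round(code), 0, 1023).astype(int)

# The linear level that lands exactly on code 95 (the unexposed base):
base_lin = 10 ** ((95 - REF_WHITE) * DENSITY_PER_CODE / NEG_GAMMA)
print(lin_to_cineon(base_lin), lin_to_cineon(1.0))   # 95 685
```

So the base at 95 corresponds to a linear value of roughly 1% of reference white, which is why highlights are left free to fall (or clip) above it.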

By calibrating the light source to make the base a “balanced clear” with a maximum of 95%, the method appears consistent with using the sensor’s maximum range for each color, which would also yield the best S/N per channel — what I referred to above as the maximum range of the content.
The one case where I think that would fail is when the negative is too dense (overexposed, I believe), in which case, as you describe, overriding it to clip the base and bring the “content base” to 95 percent would again yield the best S/N. There are cases where that override balance is not the same in the content as in the base, for example when the overexposed source light has a different color temperature than the correctly exposed portion; so it would be good to be able to override while maintaining the content balance, and to provide an override gain on top of the content balance for those scenarios.
In another thread someone mentioned automatically balancing using the light. If the system provides a feature to select the base area to balance against, that can be accomplished: by default it could use areas similar to what you described, with an override to select a specific region to calibrate from when needed.
In summary: keep the content in the best 0-95% range of each sensor, giving similar quantization and S/N for each channel.

Unrelated, but worth mentioning, is another special case: old content with faded dyes. From what I’ve seen, fading changes the curve for that color; the faded dye shifts the blacks (this is for reversal), but since the whites stay white, I don’t know whether the issue can be improved by adjusting the light… I have to think about it. In this case the noise would suffer too.
My guess is that if it is non-reversal, adjusting for the content would improve it… but I have to think about that too.

I think the way this is done is a little different - the base is still brought to 95 code values, because that’s the DPX standard, but the gamma is different for denser film.
