The Backlight

@npiegdon - what amazing work! (I actually also own a few Wolfgang Faust calibration targets and at one point was planning to go the same route as you did. I never found the impetus to attempt it…)

The three ICC profiles you derived from your measurements reflect a transition from simple to more complex color transformations. And yes, my tests above were done with a 3x3 matrix only. (And: in bashing LUTs in my post above, I was referring to LUTs created manually, according to taste - not LUTs coming out of a calibration procedure.)
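For readers unfamiliar with the simplest of these transformations: a 3x3 matrix correction is just a per-pixel matrix multiply on linear-light RGB. A minimal sketch (the matrix values are purely illustrative, not measured; each row sums to 1 so neutral grays are preserved):

```python
import numpy as np

# Hypothetical 3x3 color-correction matrix (illustrative values only);
# each row sums to 1, so a neutral gray maps back to the same gray.
M = np.array([
    [ 1.6, -0.4, -0.2],
    [-0.3,  1.5, -0.2],
    [ 0.0, -0.5,  1.5],
])

def apply_matrix(raw_rgb):
    """Apply a 3x3 matrix to an (H, W, 3) linear-light image."""
    out = raw_rgb @ M.T            # per-pixel matrix multiply
    return np.clip(out, 0.0, 1.0)

gray = np.full((2, 2, 3), 0.5)     # flat mid-gray test patch
corrected = apply_matrix(gray)     # stays at 0.5 everywhere
```

The matrix form is global: one linear map for the whole color space, which is exactly why it is so robust to fit from a handful of patches.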

Of course, with a more complex transformation the optimization becomes more challenging, and with bad measurements it can turn numerically unstable. So things can easily go wrong. The results of your color error table indicate that your measurements were spot on - with all the experimental challenges you noted… wow.

I did not go down that path for three reasons. First, it is experimentally challenging - you mentioned how long it took you to take the calibration images. Second, for my favorite film stock, Kodachrome, I could not find a suitable target. Third, I have quite a lot of other film stock to scan (various Agfa stocks, Fuji, etc.), and it is unclear how well a calibration obtained with one film stock would transfer to another type of film.

Coming back to the current question - is 3-channel narrowband scanning better or worse than scanning with three broadband filters? It was claimed in the Distor paper that by selecting the narrow bands carefully, you can increase color separation in the case of severely faded film stock. I think that is without dispute: you certainly can improve the signal quality with respect to color separation that way. But for this approach to work optimally, it requires tailoring to the specific film stock you are scanning.
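One way to make "color separation" concrete is the condition number of the channel-mixing matrix (how strongly each dye layer shows up in each sensor channel): the better conditioned the matrix, the less noise is amplified when inverting it to recover per-dye densities. A sketch with entirely made-up, illustrative matrices:

```python
import numpy as np

# Made-up mixing matrices, for illustration only: rows are sensor
# channels, columns are the film's dye layers. Entry (i, j) is how
# strongly dye j contributes to channel i.
broadband = np.array([
    [0.8, 0.4, 0.2],
    [0.3, 0.7, 0.3],
    [0.1, 0.3, 0.8],
])
narrowband = np.array([
    [0.90, 0.10, 0.05],
    [0.10, 0.90, 0.10],
    [0.05, 0.10, 0.90],
])

# A lower condition number means less crosstalk between channels, so
# separating the dyes amplifies measurement noise less.
cond_broad = np.linalg.cond(broadband)
cond_narrow = np.linalg.cond(narrowband)
```

With heavily faded stock the off-diagonal terms grow, the broadband matrix becomes nearly singular, and narrowband illumination tuned to the dye peaks can buy back conditioning.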

I think you did not select your set of narrowband LEDs in a way that optimizes signal separation in the raw data. In any case, the influence of an optimal set is probably rather minor, considering the usually broad spectral dye densities of film layers (cf. for example the bottom-left part of fig. 2 of the paper cited above).

Furthermore, by utilizing a 3D LUT as the primary transformation tool, you can, if necessary, increase color separation even locally, in specific areas of your color space - a much easier approach than optimizing LED sets (there will be a larger quantization error with LUT-corrected data, but that should not be noticeable).
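For context on why the quantization error stays small: a 3D LUT is evaluated by trilinear interpolation between lattice points, so values between the sampled grid entries are blended smoothly rather than snapped. A minimal sketch of the mechanism (the identity LUT here is only to check the machinery; a calibrated LUT would hold measured corrections instead):

```python
import numpy as np

def apply_3d_lut(img, lut):
    """Apply an (n, n, n, 3) LUT to an (H, W, 3) image with values
    in [0, 1], using trilinear interpolation between lattice points."""
    n = lut.shape[0]
    idx = np.clip(img, 0.0, 1.0) * (n - 1)
    lo = np.floor(idx).astype(int)          # lower lattice corner
    hi = np.minimum(lo + 1, n - 1)          # upper lattice corner
    f = idx - lo                            # fractional position in the cell
    out = np.zeros_like(img, dtype=float)
    for dr in (0, 1):                       # blend the 8 cell corners
        for dg in (0, 1):
            for db in (0, 1):
                r = hi[..., 0] if dr else lo[..., 0]
                g = hi[..., 1] if dg else lo[..., 1]
                b = hi[..., 2] if db else lo[..., 2]
                w = ((f[..., 0] if dr else 1 - f[..., 0])
                     * (f[..., 1] if dg else 1 - f[..., 1])
                     * (f[..., 2] if db else 1 - f[..., 2]))
                out += w[..., None] * lut[r, g, b]
    return out

# Identity LUT on a 17^3 lattice: applying it leaves pixels unchanged.
n = 17
axis = np.linspace(0.0, 1.0, n)
identity = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)

img = np.array([[[0.20, 0.55, 0.80]]])      # a single test pixel
result = apply_3d_lut(img, identity)
```

Increasing separation "locally" then just means editing the lattice entries in one region of the cube (say, the faded cyan region) while leaving the rest untouched - something no global matrix or LED choice can do.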

I think the question of narrowband vs. broadband is still unresolved. It probably depends heavily on the context in which you are asking the question. Certainly, for a specific film stock, you can potentially calibrate quite a good LUT from a set of carefully taken calibration shots. But calibration is tedious work and can fail for a lot of reasons (mostly: small errors when taking the calibration images). On the other hand, we have the broadband approach, which uses the generic filters/transformations the camera manufacturer came up with (or, in my case with the HQ camera sensor, the ones I came up with :innocent:). While this will always be inferior to a film-specific calibrated approach, it might just work fine for many of the other film stocks one encounters.

For my use case, namely digitizing a fairly diverse selection of different film materials, I’ll stick with broadband and a single raw shot per image - simply for convenience.
