The Backlight

Well, so do I. And I do not yet have a good answer.

If you are working with standard film stock, there are, for example, IT8 targets and software available which do a decent job of making sure that the colors in your film stock are properly transferred into the digital domain. But even then, most digital displays fall rather short of the dynamic range that classical film stock paired with an old projection device is able to deliver. But that's another story.

But what do you do if you have another film stock to scan, quite possibly one out of production for a long time, like Agfa Moviechrome? Your IT8 calibration with currently available film stock might get you close, but you do not know how close. For sure, there is no way to manufacture an IT8 calibration target for that old film stock.

Furthermore, you might not even want to stay close to the original footage. I'll give you two examples supporting that claim from my experience working with Super-8 color reversal footage. The first example shows little ducks swimming in a white bathtub. The lighting was a combination of incandescent bulbs and fluorescent tubes, which resulted in a dramatic color shift in the footage. Here's a rather faithful scan (color-wise) of a single frame:

This is how the film stock would appear to you when placed on a perfectly white background (which gives your brain the “white” reference).

However, when the original footage is projected in a dark room, things change dramatically. Missing the white background reference, your brain is going to adjust your color perception. The projected image as perceived by the viewers would look more like this:

The issue is especially pronounced in Super-8 film stock, as the cameras had only two color temperatures to work with: the film stock was fixed at tungsten 3400 K Type A (indoor use), and you normally engaged the built-in "daylight" filter outdoors. In many situations, however (like the one described above), the illumination was more complex, creating color casts. These are easily noticeable in the digitized versions, but were much less noticeable in any projection setting.

The second example is connected to the short length of a typical Super-8 reel, featuring only 15 m of film stock (a few minutes of film). So you typically worked with several different rolls in a single movie project.

Occasionally, the individual rolls got developed differently, so color shifts are noticeable between them. Here's an example. This frame with decent colors

is followed a few seconds later in the movie with footage that looks like this:

The color difference is due to the fact that this footage comes from another roll of film, obviously developed somewhat differently than the previous one.

Note again: the greenish tint in the later footage would probably not be noticed by the viewer in a darkened projection environment; the brain would "white-balance" the difference out. But the difference is there, and noticeable in the digital version.

So the question arises: what should be the goal of archival scanning? Keep every color variation present, as seen by the scanner? Or even out these color variations, in order to approximate more closely the viewer's experience during projection?

My current approach is the following: I am aiming for two different output channels. The first one is the original scan, preferably with a high dynamic range, so that color variations can be counteracted for the second output channel. For that second channel, I aim at the best viewing experience, both in color definition and in image sharpness. Clearly, there is a lot of artistic license (and manual work) involved in the second case…

Coming back to the other topic: what's better, narrow-band LEDs, "white-light" LEDs or broad-band illumination? Or even a mixture of all? I do not yet have a definite answer, but let's start with the obvious.

As the film emulsions of the old days were tuned to be viewed during projection by human visual systems, the broad-band illumination supplied by a halogen lamp of the appropriate color temperature is probably the optimal illumination. Such a lamp comes close to a black-body radiator of the appropriate color temperature. Also, the human observer's eye and brain can be approximated rather closely with present-day digital hard- and software. So that combination might be the optimal one.
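To make the black-body comparison concrete, here is a minimal sketch (plain Python, standard library only) evaluating Planck's law. The 3200 K figure is my assumption for a typical halogen projection lamp, not a value from this thread; the point is simply that such a lamp emits far more red than blue, while a daylight-like temperature shifts the balance toward blue:

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
K = 1.380649e-23     # Boltzmann constant, J/K

def planck(wavelength_nm, temp_k):
    """Spectral radiance of a black body at the given temperature (Planck's law)."""
    lam = wavelength_nm * 1e-9
    return (2.0 * H * C**2 / lam**5) / math.expm1(H * C / (lam * K * temp_k))

# A ~3200 K halogen lamp is much stronger in the red than in the blue,
# whereas a 6500 K (daylight-like) radiator has a far higher blue/red ratio.
halogen_ratio = planck(450, 3200) / planck(650, 3200)
daylight_ratio = planck(450, 6500) / planck(650, 6500)
```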

As remarked above, it is close to impossible to obtain color calibration targets for old film stock, so there’s no way to improve this further. This is probably as close as you can get in terms of color fidelity - at least in my use case of home movies.

One step down is probably a white-light LED. Nowadays, the quality of top-level LEDs has increased remarkably. But if you look at the spectral distribution of the Kinetta white-light LED, as shown in the paper above,


you notice two prominent peaks. The narrow one, at the blue end of the spectrum, is the peak of the blue LED driving these white-light LEDs; the other, much broader peak is due to fluorescent dyes. Together, they trick the human visual system into seeing that light as "white". But the interaction between such a spectral distribution and the camera's filter characteristics can be strange. Nonetheless, newer white-light LEDs have a better spectral curve, so this is probably a feasible option.

Yet another step away from the reference case (human observer in a darkened room) is a light source composed of separate LEDs for red, green and blue. Such a combination might have advantages in terms of color separation, but it can yield odd color rendering. In my own scanner I use this combination, but I had to change the peak wavelength of one LED because of such false colors occurring. Also, I needed to modify the camera's filtering (replacing the IR filter) in order to improve the color definition in the yellow-to-red sector. But I am still working on these issues, with no good solution yet.

Finally, we come to what is, in my opinion, the worst lighting option: the combination of separate color LEDs with a white-light LED. You'll likely end up with a spectrum that mimics a fluorescent tube (which is not a good idea, even though some flatbed scanners use such light sources). Of course, this is a "fluorescent tube" you can adjust to your personal taste by varying the current through the individual LEDs. If your goal is simply to achieve a great viewing experience (as discussed above), such a combination could still have some merit.

In concluding this post, I will describe my current setup: the illumination uses separate LEDs for red, green and blue. The amplitudes of these LEDs are adjusted so that in the raw images all four color channels have a similar, high amplitude. In this way, the number of exposures needed to digitize the high dynamic range of color-reversal film is minimized. The white balance of the camera is fixed in such a way that white image parts stay white. The camera itself was modified with a better IR filter, which improved the color definition of yellow-red tones (skin tones are in this range). It remains to be seen whether that was a good idea; that is a point I am working on. The final "archival" scan is composed of five different exposures which are exposure-fused and stored as 16-bit-per-channel data. In a second processing step, the "archival" copy is color-graded and resolution-enhanced; this is the final "viewing" version of the scan.
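To illustrate the exposure-fusion step, here is a minimal numpy sketch of merging several linear raw exposures into one high-dynamic-range result. The Gaussian "well-exposedness" weighting is my own simplification for illustration, not the actual pipeline described above:

```python
import numpy as np

def fuse_exposures(images, times, sigma=0.2):
    """Merge linear raw exposures (pixel values in 0..1) into one radiance map.

    Each image is divided by its exposure time to get a radiance estimate;
    the estimates are then averaged with weights favouring mid-range pixels,
    so clipped or underexposed samples contribute very little."""
    num = np.zeros_like(np.asarray(images[0], dtype=np.float64))
    den = np.zeros_like(num)
    for im, t in zip(images, times):
        im = np.asarray(im, dtype=np.float64)
        w = np.exp(-((im - 0.5) ** 2) / (2.0 * sigma ** 2))  # well-exposedness
        num += w * (im / t)
        den += w
    return num / np.maximum(den, 1e-12)
```

With two synthetic exposures (one of them clipping the bright pixel), the fused result recovers the underlying radiance to within a few percent.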


For clarity, when I said:

for a single sensor, I meant a monochrome sensor.

I think everyone has a slightly different take, as illustrated by the paper above.

Here is my half-baked thought process, which may explain the "why" in the white-red-blue triple-exposure thinking, and which will likely change as experience comes along.

The film contains information, including defects. I quoted this mission statement before, taken from @friolator:

to digitally reproduce the film as closely as we can, not so much the picture the film contains


It is clear we may not be able to capture everything, and the choice of light and sensor is a fundamental decision on what is captured and what is not.
Moreover, stacking provides a great workaround to extend the dynamic range of the sensor.

I think the goal of a scanner is not a ready-to-play version of the film. If we treat the film as information, the goal is to extract as much information as possible within the limitations of time, quality, and cost. And cost/time is a critical factor for many who have a lot of film.

If we treat the film as signal and noise, the digital representation of a frame should have the best resolution (that one's money can afford) and the best signal-to-noise ratio and dynamic range that the capture chain can provide.

If the image on the film is the signal, a tri-band narrow-band LED is a filter, and I do not yet know what is lost in making the choice of discarding everything outside those three narrow bands. Something is lost; I do not know if it is relevant, but it is lost.
Color sensors also make a choice of what gets captured and what does not, given by the sensor's color sensitivity, even when using a white illuminant. But the bands of the sensor's color sensitivity are not narrow, so the limiting factor is the LED's narrow band.

So if the film is information, the monochrome sensor is the wide-band receiver, which requires a wide-band antenna (the white illuminant). That's great, but the problem is the separation of colors. The blue and red narrow-band exposures would then provide the additional information needed to render a color frame.

It is the same as is done in video recording: white = luminance, while the red and blue captures provide the ability to create B-Y and R-Y.

Is the above illuminant setup better than R-G-B? Depends on the goal.

If the goal is to have a pleasing image, then by the very definition of the narrow-band RGB filters (LEDs), they may be best.

If the goal is to capture the most information from the film, then a wide-band capture (Y/white/monochrome) will definitely keep more of it, and, granted, more noise too. The narrow-band blue and red LEDs are also not ideal, but they may be a cost-effective trade-off.

In the white-red-blue exposure scenario it is also worth mentioning that if there is no stacking, the levels for the R and B illuminants should be adjusted for the best use of the dynamic range of the sensor/bit depth chosen for storage, which would provide better results when color-correcting for finishing. And yes, why not consider separate per-channel stacking.

The same consideration applies if using R-G-B LEDs and a color sensor with RAW files: go for the best use of the bit depth for each channel, not for the best color rendition.

The above is my thought process, and I am not aware of any scanner using this method.

Well, "digitally reproducing the film as closely as we can" might not actually be the best thing to do. First of all, film stock was, and still is, used in a well-defined transmission chain, with separate stages all working together to deliver (in the best case) a convincing color experience to the viewer.

It is important to understand that the film's response is just one of several stages in the transmission chain ending in the viewer's eyes and brain.

Let's keep things simple and focus on color-reversal film used in a home cinema setting (in more professional settings, many more intermediate steps occur).

As remarked above, home cinema cameras used only two settings for color temperature: indoor and outdoor. That is far away from today's fine-tuned AWB algorithms employed by your digital camera. Thus, the original film stock already has color casts baked in. Why did the engineers back then get away with that? Because the viewing situation during projection covered it up: the dark room did not provide your brain with enough reference points to discover the color cast in the original material, so the viewer's brain adapted to the colors seen on the screen and corrected them. Why are we not getting away with the same trick with our digital copies? Because the image displayed on the screen is usually framed by bright enough surroundings, supplying a reference that lets our brain notice the color cast.

Coming back to the transmission chain introduced above: once the film is returned from the lab, cut, and combined onto a movie reel, it is projected, in a consumer setting usually with illumination supplied by a halogen lamp. The spectrum of such a lamp is actually quite close to the spectrum of a black-body radiator.

Knowing the spectral transmission values of the film stock as well as the spectral power of the lamp, it is easy to calculate the spectral distribution seen on the screen.

Note that up until this point, the full information of every single image spot of a frame is composed of hundreds of spectral values. (For a nice, much more detailed introduction to all this in the case of digital image formation, I can recommend a series of articles by Jack Hogan.)

The spectral distribution on the screen is now watched by the viewer. In this process, the full spectral information is condensed from hundreds of values into just three color values per image spot. (Some animals have more color channels than human beings, by the way. They perceive as different some colors which look the same to us.)

Technically, it is possible to infer the colors the viewer would experience, for example as CIE XYZ values (within limits). This is the same dimensional reduction as in the viewer's visual system, again reducing the hundreds of data points of the spectrum seen on the screen to just three color values.

This dramatic dimensional reduction taking place in our visual system, from a spectrum with hundreds of values to a color perception characterized by only three values (with some non-linearities introduced along the way), is the core challenge of faithful color reproduction. In a professional setting this is usually solved by recording pre-specified color patches at the beginning of the transmission chain and making sure that these colors stay (within limits) the same all along the way, until the viewer is exposed to them.

Just a side note: in the good old analog days, I would occasionally select a certain film stock because I knew that the colors of the scene would be affected in a favourable way. Kodachrome gave you fantastic blues, for example, while Fuji would render various shades of green much better. Agfachrome was a good choice if rendering of brown tones was the goal.

Picking up the line of argument again: it is important to note that the RGB values recorded by any scanner (known to me) are, very much like in the human visual system, the result of combining the transmission spectrum of the film, the illumination spectrum of the light source, and the filter response curves of the camera used, plus some other unpleasantries like CRA color variations. The film scanner already performs the dimensional reduction from hundreds of values per pixel (that is, a hyperspectral image in modern language) to only three values per pixel. It is at that point that a massive information reduction happens, and this is also the reason for the appearance of color metamerism. (Again, the reduction in the eye of the viewer from the full spectrum to only three color variables is equivalent.)
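This dimensional reduction can be sketched numerically. Everything below is a toy stand-in (Gaussian filter curves, a flat backlight, an invented transmission spectrum); the point is only to show how roughly 320 spectral samples per image spot collapse into three RGB numbers:

```python
import numpy as np

wl = np.arange(380.0, 701.0, 1.0)   # wavelength grid in nm, ~320 samples

def gauss(center, width):
    """Toy Gaussian spectral curve centered at `center` nm."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Invented stand-ins for one image spot of the transmission chain
transmission = 0.3 + 0.5 * gauss(560, 80)   # film transmission spectrum
illuminant = np.ones_like(wl)               # idealized flat backlight
# Broad, overlapping camera filter curves (B, G, R), loosely human-like
filters = np.stack([gauss(460, 30), gauss(530, 40), gauss(600, 35)])

# The scanner's reduction: RGB_c = sum over lambda of T * L * S_c
rgb = filters @ (transmission * illuminant)
```

Hundreds of numbers go in, three come out; everything the filters integrate away is what makes metamerism possible.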

The classical goal of every scanner is to get its RGB values (in whatever color space they live in) as close as possible to the CIE XYZ values (or whatever other device-independent color space is deemed appropriate). Normally, this is achieved by color calibration, using for example images of IT8 calibration targets. Only that you won't have such calibration targets available for historical film material…
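The calibration step can be sketched as a least-squares fit of a 3x3 correction matrix from target-patch measurements. The patch values and the "true" matrix below are synthetic, only meant to show the mechanics of an IT8-style calibration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical measurements: camera RGB and reference XYZ for N target patches
n_patches = 24
camera_rgb = rng.uniform(0.05, 0.95, size=(n_patches, 3))
true_matrix = np.array([[0.49, 0.31, 0.20],
                        [0.18, 0.81, 0.01],
                        [0.00, 0.01, 0.99]])   # synthetic ground truth
reference_xyz = camera_rgb @ true_matrix.T

# Least-squares fit of the 3x3 matrix M so that XYZ ~= M @ RGB per patch
M, *_ = np.linalg.lstsq(camera_rgb, reference_xyz, rcond=None)
M = M.T
```

With real measurements the fit is only approximate (sensor filters are not an exact linear transform of the observer curves), which is exactly why a missing target for historical stock hurts.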

Hmm - if I understand this correctly, you want to scan each frame in three passes: a full-spectrum ("white") capture, plus a red and a blue one? And then combine these into appropriate color channels? Interesting idea. In principle, that should be equivalent to scanning red, green and blue, at least as long as you are staying in the linear domain (i.e., raw images). I am not sure about noise levels and quantization issues (dynamic range). But frankly, I do not see any immediate advantage of such an approach, as at that level you are working in linear space and can transform this new approach into the standard RGB-filter approach by a simple transformation. As you mentioned, nearly every camera does that sort of transformation (RGB->YUV). Just for fun, here's how one of the frames posted above looks in YUV:

It is clear that the dynamic ranges of the U and V channels are lower than that of the Y channel, which is more or less equivalent to the brightness of the image (that is one of the reasons why YUV is used in video applications - it saves space). Just to throw another thought into the discussion: the film's layers are actually cyan, magenta and yellow, so we are really trying to digitize something like this here:

Presumably, one could sample that stuff directly with appropriately placed LED wavelengths and a monochrome camera? This is indeed discussed in the article linked above (p. 14 ff.), but the unsolved question remains: how do you get from such an optimized color separation process to a faithful color rendering process?
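For reference, the RGB->YUV transform mentioned above is just a fixed linear map. A minimal sketch using the BT.601 coefficients (one common choice among several):

```python
import numpy as np

def rgb_to_yuv(rgb):
    """BT.601 RGB -> YUV: luma plus two color-difference channels."""
    m = np.array([[ 0.299,  0.587,  0.114],   # Y  (luma)
                  [-0.147, -0.289,  0.436],   # U ~ scaled (B - Y)
                  [ 0.615, -0.515, -0.100]])  # V ~ scaled (R - Y)
    return np.asarray(rgb) @ m.T
```

For any neutral gray pixel, U and V are zero, which is why those channels carry so little energy for typical footage.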




The capture with a white LED will contain all the spectrum that is being filtered/discarded with narrow RGB LEDs.

Y is an arithmetic combination of RGB, and Yw - the full, unfiltered spectrum - is captured.

The trade-off is the capture of blue and red, which would be narrowed by the color LEDs' bands.

For nomenclature: w is wide (and white), n is narrow.

Yw - Captured with White LED or Incandescent.
Bn - Captured with narrow Blue LED.
Rn - Captured with narrow Red LED.

Gw = ( Yw - (0.30 Rn) - (0.11 Bn) ) / 0.59
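As a quick sanity check of this arithmetic (BT.601 luma weights): if Yw, Rn and Bn were ideal captures of the same pixel, the formula recovers the green channel exactly. A minimal sketch:

```python
def g_from_luma(yw, rn, bn):
    """Recover green from a luma capture plus red and blue captures,
    using the BT.601 luma weights Y = 0.30 R + 0.59 G + 0.11 B."""
    return (yw - 0.30 * rn - 0.11 * bn) / 0.59

# Round trip with an arbitrary pixel (r, g, b): build Y, then recover G
r, g, b = 0.2, 0.6, 0.4
y = 0.30 * r + 0.59 * g + 0.11 * b
```

In practice Yw is a broad-band capture, not this exact weighted sum - which is the whole point of the hypothesis. The round trip only shows the arithmetic is self-consistent.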

My hypothesis is that the resulting Gw, arithmetically calculated from the wide-spectrum Yw, would yield additional information when compared to a G captured with a narrow-band LED.

At this time I don't have color LEDs to do a quick test of the theory; to date my scanning setup uses white LEDs.

My thinking is about inserting a step: to extract and store the most information possible when quantifying the color/light information on the film. In other words: maximum spectrum, maximum dynamic range, maximum resolution. After that is stored, postprocessing is the path to faithful, and better, color reproduction within the confines of the color space chosen for distribution/display.

I know this is out-of-the-box - the color box (triangle) - thinking.
To illustrate: when quantizing blue into bits, it would be better for the noise ratio to use the full dynamic range of the sensor for that color. At that stage, the level may not be correct for faithful color reproduction. But after processing/correction, the blue signal captured with a better range (bits) will render a less noisy (sensor and quantization noise) output than the normal level setting for best color rendition at scan time.
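The quantization argument can be made concrete. The sketch below compares quantizing a low-level blue channel directly against applying 4x gain before quantization (and undoing it afterwards); the uniform signal and the 8-bit depth are arbitrary choices for illustration:

```python
import numpy as np

def quantize(x, bits):
    """Uniform quantization of values in 0..1 to the given bit depth."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

rng = np.random.default_rng(1)
# A "blue" channel that only ever fills a quarter of the sensor range
signal = rng.uniform(0.0, 0.25, size=10000)

# Direct 8-bit capture vs. 4x gain before quantization, undone afterwards
err_direct = np.sqrt(np.mean((quantize(signal, 8) - signal) ** 2))
err_scaled = np.sqrt(np.mean((quantize(signal * 4.0, 8) / 4.0 - signal) ** 2))
```

Scaling the channel to fill the available range before quantization reduces the RMS quantization error by roughly the gain factor, which is exactly the benefit claimed above.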

Well, it's fun to think and progress outside well-known paths. Let's continue on some points.

It would certainly yield different information in the Gw channel. The problem I see: that additional information is (per image spot) only a single number. And while the actual filter curves in digital cameras are designed to closely mimic what your visual system does (I again want to suggest the writings of Jack Hogan on that subject), thus ensuring that intermediate colors are recorded close to their actual position in perceptual space, your setup would not immediately yield such a feature. And I am unsure how to obtain an appropriate color transformation. Note that you do not really have "additional information", as you are still working with only three color channels. But it is certainly an interesting idea, and one should try it out.

That brings me to another quote I want to comment on:

Well, let's consider first the maximum possible information in terms of color. That is actually obtainable, namely by using a spectrometer - which would yield hundreds of numbers for each image spot. A little below that quality are hyperspectral cameras, which at least yield data in several different wavelength bands. Citing again the paper by Flückiger et al.,


Actually, such approaches have been used in the past to scan important artworks, and the multi-spectral data is usually sufficient to "illuminate" the scan with arbitrary illuminants, for example in VR applications. For film scanning, the time required to acquire a full spectrum is prohibitive. And it is not even necessary, as the spectral information is condensed in the human visual system into three numbers per image spot anyway.

Next is maximum dynamic range - which is challenging if you are working with color-reversal stock. From my experience, even the 14-bit raw images current-day cameras can supply are not able to fully cover the density variations of a Kodachrome or Moviechrome film stock. They come close though, and combined with some way of auto-exposure they will work ok. Besides, the information in the darkest parts of a tiny Super-8 image is mostly spatio-temporal noise. That has to do with how these film stocks work: they have layers with small crystals that are not very light-sensitive, combined with medium and large crystal layers - the latter being the most light-sensitive ones, and the only ones exposed in a dark environment. Which brings me to the last entry of your list: maximum resolution.

Maximum resolution is an interesting subject. In well-lit, well-exposed image areas, the actual resolution you get from a tiny Super-8 frame is limited more by the optics of the camera used when taking the footage than by the performance of the film stock. This is quite different for darker areas of the film. Here, the prominent contribution to image corruption is actually film grain - spatio-temporal noise which obscures the actual signal you are after, namely the image intensity stemming from the scene you were recording.

This spatio-temporal noise "built into" film technology is implicitly handled by the viewer's visual system, which is quite good at averaging out such noise.

I think anyone who has ever worked as a projectionist knows the quality change in image definition when you project a still image for a moment and then get the projector running.

So is it necessary to capture the form and variation of every single film grain? I would argue against it, as the information you are digitizing in this case is more or less noise, unrelated to the scene which was photographed. I am personally more interested in restoring the captured scene to optimal fidelity than in the specific properties of the film stock used to capture it. I have the same approach with respect to color management (so I end up grading every single scene in postproduction, due to the variations described in a post above). I know that there are other, quite valid opinions on this subject, and I respect them.

That approach could actually yield an interesting opportunity - namely a better recovery of faded Ektachrome film stock. You actually did post a nice example of such an endeavour here!

I beg to differ on this one. The result is three numbers, but it matters what those numbers capture. In this case the resulting Gw - I speculate - will be influenced by the broader-spectrum component Yw, since arithmetically it is the primary component of Y. My hypothesis is that the additional spectrum (information) captured as Yw with the broad illuminant (white in this case) will produce visible changes in the resulting G channel.

To make an analogy: when one digitizes music, each sample is only one number. But if the sound is filtered before capture (which is what the LEDs are doing), the information in the channel (the music's components) is not preserved.

What is better, a narrow G or a wide G?
I do not have enough understanding to make that choice, except that from a signal perspective, there is more signal in the Gw (and also more noise).

If time and storage were not an issue, one could achieve something similar for film with a monochrome sensor and multiple exposures with more narrow-band LEDs. But to the human eye, all that would still become three numbers, as you point out.
If that were done, one would be able to blend these exposures to effectively widen the resulting bandwidth of each RGB channel, by blending only those that match the wavelengths for the channel. I am not sure it is worth the time and storage.
Other valid use cases are UV and IR, to detect other physical aspects.

Agree. Here the practical workaround is exposure stacking.


Agree. My thought process here is that if the grain is better defined, the digital cleaning process would also be more precise.


The Ektachrome post is a 12-bit raw scan with a white LED.

The film is no longer capable of faithfully reproducing the scene's colors. When increasing channel gain in color correction, one should try to have a source with sufficient range, to avoid digital artifacts/noise resulting from a limited-range capture.

In these cases, would white, or a tri-pass of Y-B-R, be a better source image than narrow-band RGB? I think the answer is yes. And by source image I mean one that requires processing/correction to render a faithful scene.

I will certainly read these, although I will probably not understand everything. I think the key to the narrow-band vs. wide-band RGB question is this article:

The number that represents each channel is capturing the power in a given spectral band - the cascaded bands of illuminant, film, lens, sensor, processing, correction, and display.

There is a single G number, which represents the power that will be emitted by the display, and it is derived from the power captured by the sensor.

How is the G number influenced/changed when the power counted by the sensor is received from a chain with a wide or a narrow illuminant?
It will still be one number, but I argue it will not be the same number if there was adjacent power at neighboring wavelengths - adjacent signals (what I called information above).
The number may be different than when that adjacent power is filtered out by the narrow illuminant.

I think that narrow blue and narrow red are an acceptable trade-off, given that the eye's ability to see detail at those wavelengths is lower (compared to green). The gain is having a wide-band green, where most of the detail is perceived by the eye.

Thanks for taking the time to exchange these perspectives. Certainly there is no one-size-fits-all in the world of film scanning, and it is good to have a better understanding of the choices one has when assembling a scanner.

I have been watching this presentation as a way to better familiarize myself with color science in film; it is what created the itch for the wider-band illuminant.

The “Color Science Basics For Filmmakers” YouTube video just posted by @PM490 is EXCELLENT! I recommend it to anyone who is attempting to understand this thread and failing (me)…
Also, thanks to the titanic contributors here for the healthy debate on the true essence and constituents of what is being scanned vs our use and perception of that material.


Just for the record: that is not really an analogy, as audio is (and stays, during filtering) a temporal signal, so it is and remains intrinsically multidimensional (lots of audio samples spaced at evenly distributed points in time). The reduction of a multichannel spectrum into a single number is different from that.

Well, somewhat related: there have been similar developments in RGBW camera sensors, actually quite early on (the link is from 2012). While some people see advantages in such an approach, I think these types of sensors have not really seen a market breakthrough. It might be an interesting thing to try out for film scanning anyway. Especially since your idea, while based on similar filtering, uses a monochrome camera, so I expect the challenges/results to be somewhat different.

Thank you @cpixip, but I did not follow what you meant.
Granted, light consists of particles too, but both light and sound are waves. Moreover, a single-pixel sensor is also sampling at evenly distributed points in time.

Thank you for the references to RGBW sensors.

Another consideration for WBR or WGBR in the film scanning application is that using a monochrome sensor reduces the hardware cost dramatically, as it is not subject to Bayer filtering. It is, however, more time- and processing-intensive.

Again, thanks for taking the time to exchange perspectives. At this moment I do not have the components to test; maybe after I get the W-color-sensor scanner working, I will find some funds to get a monochrome sensor and do the WRB experiments.

PS. Launch of an RGBW sensor/phone


I found that there are some sensors using an approach similar to what I described above as a combination of a WRB illuminant and a monochrome sensor. The color filter array for those is abbreviated as RCCB (C = clear).
This TI paper explains the use of an image pipe for alternate formats such as RCCB.

– just in case anybody is interested in more color science:

Follow the links in this list into the rabbit hole. :upside_down_face:

… and, adding another link here, a .pdf about “Cinematic Color”.

While we’re at it: here’s the first “SMPTE Essential Technology Concepts” webcast, by David Long, about color, contrast, and motion in cinema.

Very interesting discussion about what for me is the hardest part to pin down: the backlight. I have yet to find a good white LED with a reasonably flat spectrum. How flat is "flat enough" for you other guys?

I had the same thought as @dawsmart on the backlight: using a car incandescent bulb as the light source. Since we're chasing a light source that is close to the spectrum of a projection bulb, it seems the simplest way to go. The obvious downside would be the heat production, but maybe it can be managed? Has anyone done any serious tests with spectral analysis, temperature and lumen output on small bulbs?

Also, how many lumens do your setups need to make a good image in general?

I came across this sensor (it was mentioned in an Adafruit video), and it is an interesting tool for measuring light quality.
Thought it would be of interest to you.


:smile: :+1: absolutely! In fact, I have an AS7341 sitting on my desk right now. The AS7341 features three sensor channels fewer than the sensor mentioned in Adafruit's blog, but it is currently available as an affordable breakout board. To my knowledge, the new sensor's breakout board is only available via ams.

Both sensors need proper calibration in order to be used as measuring devices, very similar to the color calibration of cameras and sensors - therefore, I am currently refreshing my memory on color science. It's been 20 years since I last worked in that field…


Color Science, Cameras and the Backlight

I want to make a certain point about the backlight of a film scanner I think is important.

Diving again, after more than 20 years, into the many things related to the way humans visually perceive their world, I am now pretty sure that there is one thing you should not use as the backlight of a film scanner: a mixture of narrow-band LEDs.

Rather, use the best white-light LED you can get. At least if you are scanning color reversal stock.

Why? Only a camera closely mimicking the way our human visual system works will "see" the colors of a color-reversal film similarly to a human observer. In technical terms: the camera should mimic the "CIE 1931 2° standard observer". As the name indicates, this standard observer was introduced in 1931 (on the basis of initially 17 observers) and has seen additions (like the 10° observer) and modifications over time - but the model and its associated color values (X, Y and Z) are still in use today.

A perfect camera (i.e., one that sees the world as humans do) should produce, for any given color patch, X, Y and Z values identical to those the standard observer would "see". Well, high-end cameras come close to this, but due to various technical reasons they are not perfect and probably never will be.

Now comes the important point: the filter response curves of humans, which are somewhat hidden in the standard observer, are broad and overlapping. And so are the filter curves of any high-end camera on the market.

The filters need to be broad and somewhat overlapping because otherwise, the camera would occasionally see color differences where a human observer would not. An extreme example in this regard are hyperspectral cameras, which see much more of a scene than any human being. Typically, these cameras use a set of very narrow filters across the full spectral range (or some other, equivalent construction).

However, to recap: if you want a camera to record colors as similarly as possible to the human visual system, you need three broadly tuned filters whose responses you can transform to the X, Y and Z values of the standard observer. This transformation actually happens in any software processing raw image files, as well as in any camera directly outputting JPGs or the like.
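To make that transformation concrete, here is a minimal sketch: mapping linear camera RGB to XYZ is just a 3×3 matrix multiplication. The matrix values below are made up for illustration - they are not the IMX477’s actual calibration data.

```python
import numpy as np

# Hypothetical 3x3 color correction matrix (CCM) mapping linear
# camera RGB to CIE XYZ; real values come from camera calibration.
CCM = np.array([
    [0.59, 0.25, 0.12],
    [0.30, 0.64, 0.06],
    [0.02, 0.11, 0.93],
])

def camera_rgb_to_xyz(rgb):
    """Apply the 3x3 matrix to a linear camera RGB triple."""
    return CCM @ np.asarray(rgb, dtype=float)

# A neutral gray patch: equal camera responses map to whatever
# white point is baked into the matrix rows.
print(camera_rgb_to_xyz([0.5, 0.5, 0.5]))
```

Raw converters and in-camera JPG pipelines apply exactly this kind of matrix (interpolated for the current color temperature) after demosaicing and white balancing.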

To illustrate how broad the filter curves are, here are the filter curves of the IMX477 sensor used in the Raspberry Pi HQ camera:

They show exactly what I was describing before: the filter curves of the sensor are broad and overlapping. In fact, any other camera shows similar curves - they need to.

Now, if we use as illumination source a combination of different narrow-band LEDs, we actually kill the performance of any camera! Why? Have a look at the following plot, which shows the spectral distribution of daylight around noon (D65), a light used in projectors (Kinoton 75p) and my current backlight setup utilizing three narrow-band LEDs operating at wavelengths 465 nm, 512 nm and 634 nm. All three light sources would produce the impression of being a “white” light when shining for example onto a gray card.

Now imagine for a moment having a film frame with a slight color variation in the dyes of the film, around 600 nm. When using daylight illumination or the projector light, this color variation will be noticeable by any camera as well as by human observers.

But: nothing will be noticed in the case of my narrow-band LED setup! This is because there is simply no light available at that specific wavelength to sample the variation. No camera, no human being will notice this color variation with my LED-source.
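A small simulation illustrates the trap. All spectra below are made-up Gaussians, not measured data: two film patches differing only in a dye variation around 600 nm are viewed through a single broad camera channel, once under a broadband illuminant and once under the three narrow-band LEDs:

```python
import numpy as np

wl = np.arange(400, 701, 1)  # wavelengths in nm

def gaussian(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Two film patches: identical except for a dye variation around 600 nm.
patch_a = 0.6 * np.ones_like(wl, dtype=float)
patch_b = patch_a - 0.2 * gaussian(600, 10)   # extra absorption near 600 nm

# Broadband (daylight-like) vs. three narrow-band LEDs (465/512/634 nm).
broadband = np.ones_like(wl, dtype=float)
narrow_leds = gaussian(465, 8) + gaussian(512, 8) + gaussian(634, 8)

# Simplified broad camera "red" channel sensitivity.
red_channel = gaussian(600, 50)

def sensor_signal(illuminant, transmittance, sensitivity):
    # Camera response = integral of illuminant * film * filter.
    return np.sum(illuminant * transmittance * sensitivity)

for name, light in [("broadband", broadband), ("narrow LEDs", narrow_leds)]:
    sa = sensor_signal(light, patch_a, red_channel)
    sb = sensor_signal(light, patch_b, red_channel)
    print(f"{name}: relative difference = {abs(sa - sb) / sa:.3f}")
```

Running this, the broadband illuminant produces a clearly measurable channel difference between the two patches, while under the narrow-band LEDs the difference nearly vanishes - the dye variation falls between the LED peaks and is simply never sampled.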

Note that the same issue also applies to scanners employing narrow-band filters in front of the camera. While these are good at picking up detailed spectral information in a narrow band, they fail miserably when it comes to good color reproduction. And since the missing color information cannot be recovered afterwards, there is no way out of this trap.

So, for a good color reproduction, you will need a light source with a broad spectrum, ideally very close to average daylight illumination.

In conclusion, do not use backlight produced by a set of narrow-band LEDs for scanning color reversal film - there is no chance of getting your film’s color right.


Negatives aren’t projected, they’re printed to positives which are then projected.

Exactly right. The tinting will cognitively disappear in a darkened cinema. You’ll only notice it if it’s sandwiched between scenes tinted normally (or differently) in which case the colour timing looks messy.

Absolutely true. That’s why I restricted my post to color-reversal film. I probably didn’t make that point very clear. Good that you pointed that out.

Having learned a little bit more about color science, I would actually drop my comment “stay close to an illumination spectrum similar to projection lamps”. From what I am seeing, it seems that the color fidelity of camera sensors drops with warmer light sources like tungsten at 3200 K. I still need to do some more research on this matter.


True, for color negatives - and as @cpixip mentioned, his comments (and all the charts I posted to illustrate them) are for color reversal film. And while B&W negatives are also not projected, there is probably no such issue for those.

@filmkeeper, you bring up an interesting topic.
Given the findings for color reversal, what is the best approach to negatives?

If the offsetting of the light is done by adding a narrow-band blue LED (a spike in the illuminant spectrum) on top of a white LED, that creates issues for the sensor similar to those being discussed.

I don’t have broad experience with negatives for moving images, but I do have some experience with color photo negatives.

The negative-to-sensor challenges of dynamic range and signal-to-noise in the resulting inverted/corrected image are another subject. The blue LED does improve the signal-to-noise ratio of the blue channel of the sensor, but only within the narrow band the LED provides.

Intuitively, I would think it is better to use a notch filter to attenuate only the orange color band of the negative’s mask, instead of the practice of adding blue light.

Another intuitive alternative, but only applicable to stop-motion scanners, is to do three captures:

  • One with the best exposure for the red channel.
  • One with the best exposure for the green channel.
  • One with the best exposure for the blue channel.

Then take only the best-exposed channel of each capture into a merged RGB result.

This alternative would provide better results especially for blue, and improvements for green, when compared to adding blue light, although the challenges of red dynamic range remain.
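A minimal sketch of the merge step of this three-capture idea (assuming hypothetical input arrays that are already normalized for their respective exposure times):

```python
import numpy as np

def merge_best_channels(red_exp, green_exp, blue_exp):
    """Combine three differently exposed captures of the same frame,
    taking each color channel from the capture exposed for it.

    Each input is an HxWx3 array in linear light, already normalized
    for its own exposure time, so channels can be mixed directly.
    """
    merged = np.empty_like(red_exp)
    merged[..., 0] = red_exp[..., 0]    # red from the red-optimized capture
    merged[..., 1] = green_exp[..., 1]  # green from the green-optimized one
    merged[..., 2] = blue_exp[..., 2]   # blue from the blue-optimized one
    return merged
```

The exposure normalization matters: if the three captures use different shutter times, each must be divided by its own exposure before merging, otherwise the channel balance is destroyed.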

The method above is also applicable to faded reversal films, where fading of the color dyes may result in a very dim channel at the sensor, and consequently a lower signal-to-noise ratio for that channel when all sensor channels are exposed the same.

Important to caveat that when I refer to signal-to-noise, I am referring to the electronic noise - the noise of the sensor - and not the “noise” of the source image, the grain, which would not be improved for faded dyes.

I believe this approach would be better than skewing the illuminant with a narrow-band LED to compensate for the fading, given that the resulting improvement only covers a narrow band.

It is an approach that may also be combined with other multi-exposure techniques (Mertens) for a better representation of the dynamic range of the film.

When looking at LEDs, how large of a blue spike is acceptable for you people out there? I haven’t yet found an LED whose spectrum I’m happy with. I found a lamp that had a pretty flat spectrum, but of course a rather large blue spike. Since I’m a novice when it comes to colour science, I’m having a hard time interpreting how big of a difference a spike like this makes. Does anyone have any tips?

That sounds interesting. In what way have you noticed a worse performance on 3200K compared to the 6500K one?

Well, here are the details. As I already noted, what I am going to present are just preliminary results, and I have not yet thoroughly checked my approach for feasibility or correctness. Nevertheless, here’s a graph showing the color fidelity one can expect from a Raspberry Pi HQ camera with IMX477 sensor, viewing a standard color checker illuminated with light of varying correlated color temperature (cct):

As you can see, the color error has a minimum for all three curves (I will explain these curves in detail later) around 4000 K and increases for warmer (lower cct) as well as cooler (higher cct, less steeply) color temperatures. A color error larger than one in the above diagram would be just barely noticeable in direct comparison.

These curves were not obtained by direct measurements, but by actually simulating a virtual light source shining on a virtual color checker, seen by a virtual camera sensor. The simulation is based on the spectral distribution of tungsten-type light sources, the spectral distributions of the classical color checker patches, as well as the filter responses of the IMX477 sensor and the IR-block filter in the camera. It’s quite a complex piece of software and I haven’t tested it thoroughly (or even the feasibility of such an approach). So take the following discussion with a few grains of salt.
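To sketch what the core of such a simulation might look like (with made-up Gaussian sensor curves standing in for the real IMX477 data), one can generate a black-body spectrum for a given cct and integrate illuminant × patch × filter per channel:

```python
import numpy as np

wl = np.arange(400, 701, 5)  # wavelengths in nm

def blackbody_spd(cct):
    """Planck black-body spectrum at temperature cct, peak-normalized."""
    lam = wl * 1e-9
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    spd = 1.0 / (lam ** 5 * (np.exp(h * c / (lam * k * cct)) - 1.0))
    return spd / spd.max()

def gauss(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Hypothetical broad, overlapping sensor curves (R, G, B).
sensor_curves = [gauss(600, 50), gauss(530, 45), gauss(460, 40)]

def camera_raw(spd, reflectance):
    """Per-channel response: integral of illuminant * patch * filter."""
    return np.array([np.sum(spd * reflectance * s) for s in sensor_curves])

gray_patch = 0.18 * np.ones_like(wl, dtype=float)
for cct in (3200, 4800, 6500):
    r, g, b = camera_raw(blackbody_spd(cct), gray_patch)
    print(f"{cct} K: R/B ratio = {r / b:.2f}")  # warmer light -> larger R/B
```

The real simulation would then apply white balancing and a ccm to the raw triples and compute the color error against the standard observer’s XYZ values for each patch; the sketch above only covers the raw-capture stage.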

The green curve displays how close libcamera would actually come with its own processing pipeline. This processing is governed by the data in the tuning file, and I did not fully include all processing stages. Specifically, I did not include the ALSC (automatic lens shading correction) and gamma curve (rpi.contrast) modules in the simulation - both would make libcamera perform worse than the green curve in the above diagram.

The jaggedness of the green curve is actually due to the set of compromise color matrices (ccm) in the tuning file. It seems that very different light sources were used in obtaining the set of calibration images the ccms in the tuning file are calculated from, possibly mixing fluorescent and not so perfect LED-based sources with Tungsten or other lamps. Well, just a guess. But the sequence of ccms in the tuning file does not vary smoothly with cct, and it shows up in the color error as well.

Note that if you base your raw processing on the camera matrix embedded in the .DNG-file, you will end up with results similar to the green line in the above diagram. In any case, the JPG/preview images are produced that way.

The cyan line in the above diagram displays the result when using the simplest DCP input profile for the IMX477 sensor, created by Jack Hogan. The color error is lower than the libcamera reference, and it varies much more smoothly with cct. If you look closely, there are still two tiny bumps in the curve - that’s where the two reference matrices of the DCP input profile are located, slightly below 3000 K for illuminant stdA and around 6000 K for illuminant D65. In any case, according to the above diagram, you should get better results if you use Jack Hogan’s DCPs instead of the “camera matrix” in a raw converter.

In a film scanning application where the light source (and its cct) stays fixed (which excludes color-mixing LED setups with dedicated groups of LEDs for the primaries), it is possible to calculate an optimal ccm for that fixed illumination. The resulting color error with such a direct, fixed matrix is displayed as the red curve above. Note that the calculation of the direct ccm is not yet fully optimized - I am still working on this part.
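One straightforward way to compute such a direct matrix - a sketch, not necessarily the exact method used here - is a linear least-squares fit mapping the raw RGB responses of the color checker patches to their reference XYZ values:

```python
import numpy as np

def fit_ccm(raw_rgb, target_xyz):
    """Least-squares 3x3 matrix M minimizing ||raw_rgb @ M.T - target_xyz||.

    raw_rgb:    Nx3 linear camera responses for N calibration patches
    target_xyz: Nx3 reference tristimulus values for the same patches
    """
    M_T, *_ = np.linalg.lstsq(raw_rgb, target_xyz, rcond=None)
    return M_T.T  # rows map raw RGB -> X, Y, Z

# Synthetic example: if raw and target are related by a known matrix,
# the fit recovers it exactly.
true_M = np.array([[0.7, 0.2, 0.1],
                   [0.3, 0.6, 0.1],
                   [0.1, 0.1, 0.8]])
raw = np.random.default_rng(0).uniform(0.05, 1.0, size=(24, 3))
xyz = raw @ true_M.T
print(np.allclose(fit_ccm(raw, xyz), true_M))  # → True
```

In practice one would weight the patches (e.g. emphasize neutrals or skin tones) and possibly minimize a perceptual error like ΔE instead of the plain XYZ distance, which turns this into a nonlinear optimization.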

All the simulations used in the above diagram were obtained by using a tungsten-like illumination source (specifically, a “black-body radiator”). I am still in the process of enlarging my simulation to include arbitrary light sources (for example with the spectrum you displayed in your post). And of course, these simulations need to be verified whether they relate to reality, by actually taking calibration images with a real IMX477 sensor. So there is still work to do.

I must confess that these things are probably not really that important, as different film stock and even different reels of the same film stock show much stronger variations in color rendition than the color errors we are discussing here. To show you what we are talking about, here’s the actual color chart rendering of the simulations above, at a cct of 4800 K:

If you look closely, every patch features a central field - that is the color the patch should actually have. Around each central field, the other segments show the results obtained with the different processing options discussed: on the left side, Jack Hogan’s input DCPs were used; bottom-right, the result of libcamera’s ccms is displayed; and top-right, the ccm calculated directly for the cct (“Direct”).

Again - these color variations would not be noticeable if not viewed side-by-side, and they are definitely smaller than the color variations I see in different reels of color-reversal stock. So all of the above discussion is probably slightly academic…
