… you are absolutely right, this effect exists, and as I remarked above, if you are using white-light LEDs there is no way to counteract it. However, I am using separate red, green and blue LEDs, and the five different illumination settings are independently calibrated to the same color temperature.
There is actually another reason one has to go through such an individual calibration of every illumination setting: the current-intensity curves of the LEDs used are very different, so doubling the current of, say, the red LED is not equivalent to doubling the current of the green or blue LED. The imbalance this causes is actually much stronger than the shift of an LED's main spectral peak with driving current.
To give an example - the darkest illumination level I am using has the calibrated settings red = 113, green = 126 and blue = 52 (the full range spans from 0 to 4095). The brightest illumination level has the settings red = 2146, green = 4095 and blue = 889. So while the red and green components have nearly equal values at the lowest illumination level, the red component is only about half of the green component at the highest level. This is mainly caused by the different current-intensity curves of the LEDs, which in turn stem from the different semiconductor materials used for the different wavelengths.
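To illustrate the idea, here is a minimal sketch of how such a per-channel calibration could look in Python: each channel's measured current-intensity curve is inverted to find the DAC setting that delivers the intensity the channel needs for the target white point. The sample points below are made up for illustration; I have only anchored them so that the darkest calibrated level quoted above comes out.

```python
# Hypothetical per-channel calibration sketch. Because the current-intensity
# curves differ between the red, green and blue LEDs, each channel gets its
# own inverse lookup from target intensity to DAC value.
import numpy as np

# measured (DAC value, relative intensity) pairs per channel -- assumed data,
# not actual measurements
measurements = {
    "red":   ([0, 113, 600, 2146, 4095], [0.0, 0.05, 0.30, 1.00, 1.60]),
    "green": ([0, 126, 900, 4095],       [0.0, 0.05, 0.35, 1.00]),
    "blue":  ([0, 52, 300, 889, 4095],   [0.0, 0.05, 0.35, 1.00, 3.10]),
}

def dac_for_intensity(channel, target):
    """Invert the measured intensity curve to find the DAC setting."""
    dac, intensity = measurements[channel]
    return int(round(np.interp(target, intensity, dac)))

# one calibrated illumination level: the same relative intensity requested
# from every channel maps to very different DAC values
level = {ch: dac_for_intensity(ch, 0.05) for ch in measurements}
print(level)  # -> {'red': 113, 'green': 126, 'blue': 52}
```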
I actually did tests where the color balance of some of the illumination levels was slightly off - depending on the settings of the exposure fusion algorithm, this was barely noticeable. The reason is that exposure fusion tends to average, for each pixel, over several exposures. This averaging reduces any per-exposure deviation, including color shifts and image noise.
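For anyone who wants to try this: the Mertens exposure fusion algorithm is available in OpenCV, and the following short sketch shows the averaging effect over an exposure stack. This is not necessarily the exact implementation or the parameters I use in my pipeline; the file names are placeholders.

```python
# Exposure fusion sketch with OpenCV's Mertens algorithm. The per-pixel
# weighting over the whole stack tends to average out small color shifts
# and noise present in any single exposure.
import cv2

# the five differently illuminated captures of one frame (paths assumed)
paths = ["exposure_0.jpg", "exposure_1.jpg", "exposure_2.jpg",
         "exposure_3.jpg", "exposure_4.jpg"]
stack = [cv2.imread(p) for p in paths]

# default contrast/saturation/exposure weights; these are the "settings"
# that influence how visible a color deviation in one exposure remains
merger = cv2.createMergeMertens()
fused = merger.process(stack)          # float32 result, roughly in [0, 1]

cv2.imwrite("fused.png", (fused * 255).clip(0, 255).astype("uint8"))
```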
The current image capture pipeline is tuned to capturing standard Super-8 material. As I know from experiments, the Raspberry HQCam assumes a "daylight" setting in its processing pipeline (independent of whether you use the old Broadcom stack or the new libcamera stack). So the illumination is set to deliver just that: light which mimics daylight as closely as possible.
This also ensures that the raw camera channels utilize the maximal dynamic range possible, which was one of the goals of the design.
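For reference, here is a short sketch of how a fixed whitebalance can be locked on the libcamera stack via picamera2. My actual capture software differs, and the gain values below are placeholders, not my calibrated settings.

```python
# Sketch: locking the whitebalance so the pipeline always applies the same
# gains (and, in turn, the same ccm-matrix) to every illumination level.
from picamera2 import Picamera2

picam2 = Picamera2()
picam2.configure(picam2.create_still_configuration())
picam2.start()

# setting explicit colour gains (red_gain, blue_gain) switches the AWB
# algorithm off; the values here are placeholders for a daylight balance
picam2.set_controls({"ColourGains": (2.0, 1.6)})

picam2.capture_file("frame.jpg")
```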
Setting a manual whitebalance also results in the image processing pipeline choosing a fixed ccm-matrix while creating the MJPEG images I later use for exposure fusion. At this point in time, I do not know whether other steps in the image pipeline change colors as well (the automatic lens shading algorithm might be a candidate), but from tests I would judge such an influence to be minor, if present at all.
I am still working on a good way to scan Super-8 material with a severe color cast, for example film which was exposed without setting the daylight filter of the camera correctly. It is possible to scan such material by changing the whitebalance settings of the camera (keeping all other settings fixed), but this approach lowers the amplitudes available in the raw camera channels, lowering in turn the dynamical range available for processing. It would be better to keep the whitebalance settings at “daylight” and adjust the light source to the material at hand - I am currently looking into whether this is possible.
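A back-of-the-envelope sketch of the dynamic range argument, with purely hypothetical numbers: assume a cast which lets the blue raw channel reach only 40% of full scale while red and green reach nearly 100%.

```python
# Hypothetical numbers illustrating why correcting a color cast via the
# whitebalance wastes raw dynamic range, while adapting the light does not.
full_scale = 4095                      # 12-bit raw channel
blue_peak = 0.40 * full_scale          # assumed: cast attenuates blue to 40%

# correcting in the whitebalance: blue is scaled up by ~2.5x after capture,
# but the raw channel itself only ever used ~1638 of its 4095 levels
wb_gain = full_scale / blue_peak
print(f"blue gain {wb_gain:.2f}, raw levels used: {blue_peak:.0f} of {full_scale}")

# adapting the light source instead: ~2.5x more blue illumination lets the
# raw blue channel span all 4095 levels before any processing happens
print(f"raw levels used with adapted light: {full_scale} of {full_scale}")
```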