Hello @dgalland - thanks for the feedback! I am familiar with your project, and it indeed gave me some important guidance in shaping my own design approach!
That approach seems to be a very common option for realizing a film scanner these days: pairing an Arduino for the low-level stuff with a Raspberry Pi + camera (preferably the HQ camera) for image acquisition, plus a LAN-based transfer to a high-power PC for the rest. Joe Herman’s system, yours, and mine all work along these lines. Currently, I am able to capture and store five different LDR exposures (as JPEGs with a size of 2016×1512 px) in about 2 seconds. Post-processing actually takes longer: I need about 2.5 seconds per HDR frame!
Concerning the autoexposure of Raspberry Pi cameras - yes, they need a few frames to adapt to new light settings. I have never measured it, because I am not using it. But since this is an adaptive algorithm, chances are that the time to settle on a new value depends on how large the step between illumination levels is. From what I understand, the algorithm should normally lock onto the new exposure level within a few frames. By the way, since the Raspberry Pi Foundation introduced the libcamera stack, it is potentially possible to roll your own algorithm. However, so far I have not managed to install the libcamera stack properly, so I cannot report on this further.
Concerning the light source I am using - that is a long story. I actually started with a white-light LED and a diffusor, but I simply could not get the diffusor to work well. Also, I discovered that I had some weird film stock which would need dramatic color compensation in order to digitize correctly: for example, sequences filmed directly from the screen of a color-TV set in the 70s, or film stock where I forgot to properly set the daylight filter of the camera. All of this pushed me into designing an integrating sphere with different LEDs for the primary colors. I based my initial choice of LEDs on some research, taking into account the different sensitivities of my camera and the characteristic density of film stocks, only to find out that this initial choice did not deliver what I was hoping for. So I actually tried white-light LEDs and obtained reasonable results. After this, I again tried several different combinations of LED wavelengths in an RGB setup and finally arrived at a workable combination of RGB wavelengths.
In hindsight, it may be that the broader spectrum of usual white-light LEDs is better suited for sampling film stock. If you are unlucky, the narrow spectral peaks of an RGB-LED combination might just by chance pick up a dip in the transmission spectrum of the film dyes, leading to false colors. With a broad-spectrum white-light LED, this risk is much lower.
Anyway, with different LEDs for red, green, and blue, I am able to compensate for the small color shifts introduced by driving the LEDs at different intensities. Here’s an example, namely the intensities of the different colors corresponding to pure white at five different exposure levels (max. intensity is 4095, corresponding to 12 bit):
```c
int RedLED[]   = { 113, 243, 525, 1094, 2146, 0 };
int GreenLED[] = { 126, 303, 725, 1725, 4095, 0 };
int BlueLED[]  = {  52, 111, 235,  470,  889, 0 };
```
As you can see, the green LED is driven with the highest current in each setting and the blue LED with the lowest one. Also, the relative amplitudes change in order to keep the white balance constant for the camera. This reflects the different responses of the LEDs to different driving currents.
Using a programmable f-stop is an interesting idea. That might be faster and easier to implement than a programmable light source. However, each lens usually has a well-defined f-stop range where it delivers optimal performance. Opening up too much increases aberration blur, and closing the f-stop too much leads to diffraction blur. That might be a challenge with this approach.