Well, I actually started out by varying the camera’s exposure setting for the different exposures.
However, at least with the old Broadcom stack, all Raspberry Pi cameras (v1/v2/HQ) took too long to settle to the requested exposure value. (Here (second image in that post) is an example of how long it can take for the exposure to stabilize. That example is not actually from a Raspberry Pi camera, but the behavior is very typical of those as well.)
In the end I switched to just letting the camera run continuously, without changing any settings at all, switching off as many of the “auto” functions as possible.
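For illustration, here is roughly the set of controls one would lock down to get this fixed-settings behavior. Note this sketch uses control names from the newer libcamera/Picamera2 stack, not the old Broadcom stack my setup actually used, and all values are hypothetical:

```python
# Illustrative only: controls to disable the "auto" functions and run the
# camera with constant settings. Names follow the Picamera2/libcamera API;
# the numeric values here are hypothetical placeholders.
locked_controls = {
    "AeEnable": False,       # disable auto exposure
    "AwbEnable": False,      # disable auto white balance
    "ExposureTime": 20000,   # fixed shutter time in microseconds (hypothetical)
    "AnalogueGain": 1.0,     # fixed analogue gain, no auto adjustment
}
# With Picamera2 this would be applied once, e.g. picam2.set_controls(locked_controls),
# after which every frame comes out with identical exposure parameters.
```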
Well, I get a raw frame rate of 10 fps using mode 3, which is actually the maximum frame rate achievable in this mode. I am working at a resolution of 2016 x 1512, slightly less than the one you quoted. All these images are transferred in the live preview to a Win10 PC, requiring a LAN bandwidth of 130 Mbit/s.
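The 130 Mbit/s figure can be sanity-checked with a little arithmetic, assuming compressed frames of about 1.6 MB each (the per-frame size is my assumption; only the 10 fps and 130 Mbit/s numbers are from my setup):

```python
# Back-of-the-envelope check of the live-preview bandwidth.
fps = 10                 # raw frame rate in mode 3
frame_megabytes = 1.63   # assumed average compressed frame size (hypothetical)
mbit_per_sec = fps * frame_megabytes * 8  # MB/s -> Mbit/s
# mbit_per_sec comes out at roughly 130 Mbit/s
```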
The actual capture and transfer of the five different exposures takes about 2.20 to 2.30 s in total. This is because in capture mode, not all frames delivered by the camera are transferred to the Win10 PC. First, because of the (bad) mechanical design of my scanner, I have to wait 0.55 s after a frame advance for things to settle down mechanically. Furthermore, because I have not yet synchronized the tunable LED source with the frame capture, I simply wait an additional 0.25 s after switching the illumination amplitude before capturing the next exposure and sending it to the client. This reduces the required LAN bandwidth to about 25 Mbit/s.
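Adding up these waits gives a timing budget consistent with the observed 2.2–2.3 s per frame. The exact breakdown below is my reading of the numbers (one mechanical settle, then an LED wait plus one sensor frame time per exposure):

```python
# Per-film-frame timing budget for a five-exposure capture (sketch).
frame_time = 1 / 10   # one sensor frame at 10 fps
settle = 0.55         # mechanical settling after the frame advance
led_wait = 0.25       # wait after switching the LED amplitude
n_exposures = 5

total = settle + n_exposures * (led_wait + frame_time)
# total lands right at the upper end of the quoted 2.20-2.30 s range
```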
In principle it would be possible to connect a frame trigger signal from the HQ camera (available on a secondary contact pad) and speed up the scanning process by synchronizing LED and camera switching - so far, I have not bothered to try out this idea.
The full capture of the available dynamic range of Kodachrome Super-8 film stock is difficult to achieve with only three exposures - in this case you will indeed need some automatic exposure compensation to get most of the data. And you will need to space the different exposures farther apart than the spacing of about 1 EV I am using in order to capture highlights and deep shadows. With five exposures, I am able to run the scan with constant settings, no matter what the material throws at the scanner. One further advantage of closely spaced exposures is the camera noise reduction that happens during exposure fusion. Here are some more examples of five exposures used for exposure fusion. Some of them might be really difficult to capture with a three-exposure setup and auto exposure.
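A 1 EV spacing simply means each exposure gathers twice the light of the previous one. As a sketch (the base value is hypothetical; in my setup it is the LED amplitude rather than the shutter time that changes):

```python
# Five exposures spaced 1 EV apart: each step doubles the gathered light.
# base_exposure_us is a hypothetical starting shutter time in microseconds;
# the same ladder applies to LED amplitudes instead of shutter times.
base_exposure_us = 2400
ladder = [base_exposure_us * 2 ** i for i in range(5)]
# ladder spans 4 EV in total, i.e. a 16x range from darkest to brightest exposure
```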
Well, that might be. On the other hand, dirt and the like are imaged quite sharply and are well-defined on the scans. I have uploaded the original prime scan as well as the exposure fusion and the restoration result of the frame depicted above.