Well, the short answer is that it is an automatic process.
But here are some more details about my approach. Some background information first: my first image processing algorithms were coded in FORTRAN 77, I later switched to “C” (and occasionally C++/Java/Matlab/VHDL) for a couple of decades, but nowadays I end up doing most of my work in Python. Shows my age, I guess…
Initially, I actually implemented a real HDR capture pipeline based on the Raspberry Pi v1-camera. That is, appropriate gain curves were estimated, and a true HDR radiance image was reconstructed from several different exposures of a single frame. However, my hope of finding an appropriate, generic tone-mapping algorithm for the very different material I had at hand never materialized.
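For illustration, here is a minimal sketch of what such a radiance-based pipeline looks like when written with OpenCV's Debevec calibration and merge. This is not my original code; the file names and exposure times are made-up placeholders, and the Reinhard tone-mapper is only shown as one example of the kind of operator I was hunting for:

```python
# Minimal HDR radiance pipeline sketch (OpenCV); file names and exposure
# times below are made-up placeholders, not my actual capture data.
import cv2
import numpy as np

# five exposures of the same film frame, shortest to longest
files = ["frame_ev0.png", "frame_ev1.png", "frame_ev2.png",
         "frame_ev3.png", "frame_ev4.png"]
images = [cv2.imread(f) for f in files]                  # 8-bit BGR images
times = np.array([1/512, 1/256, 1/128, 1/64, 1/32], dtype=np.float32)

# estimate the camera response curve, then merge into a radiance map
response = cv2.createCalibrateDebevec().process(images, times)
hdr = cv2.createMergeDebevec().process(images, times, response)   # float32 radiances

# the problematic step: mapping the radiances back to a displayable 8-bit image
# (Reinhard is just an example operator here)
ldr = cv2.createTonemapReinhard(gamma=2.2).process(hdr)
cv2.imwrite("frame_tonemapped.png", np.clip(ldr * 255, 0, 255).astype(np.uint8))
```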
Having also spent some time researching the human visual system, it occurred to me, around the time Mertens and colleagues published their exposure fusion approach, that this technique has a lot in common with the way our visual system views the world.
Here’s what I wrote in another thread about this:
The above mentioned OpenCV implementation is quite fast and usable, but I opted to write my own version. It is a little bit slower, but it offers more ways to modify and tweak the basic algorithm. I am still in the process of optimizing this, but I am fine with the current results - especially because the process is fully automatic. No manual adjustments needed!
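For reference, here is a minimal sketch of that OpenCV variant (Mertens et al. exposure fusion); the file names are placeholders, and my own version differs mainly in that the weighting can be tweaked:

```python
# Minimal exposure fusion sketch using OpenCV's MergeMertens.
# File names are placeholders for the five captured exposures.
import cv2
import numpy as np

files = ["frame_ev0.png", "frame_ev1.png", "frame_ev2.png",
         "frame_ev3.png", "frame_ev4.png"]
images = [cv2.imread(f) for f in files]

# weights each pixel by contrast, saturation and well-exposedness,
# then blends the exposures in a multi-resolution pyramid
fused = cv2.createMergeMertens().process(images)          # float32, roughly 0..1
cv2.imwrite("frame_fused.png",
            np.clip(fused * 255, 0, 255).astype(np.uint8))
```

Note that, unlike the HDR route above, no exposure times or response curves are needed - the fusion works directly on the captured images.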
Here is my current workflow, with approximate timings included:
- Insert the film into the scanner, fix camera exposure to 1/32 sec, set the resolution to 2028 x 1520, set red and blue gain to a fixed value.
- Check that the sprocket area is pure white for all 5 exposure values, and set the analog and digital gains so that at the lowest exposure setting the bright sprocket area reads about 10 units below maximum brightness. This ensures that any bright image areas of the film frame are recorded without clipping.
- Start scanning. Scanning consists of advancing the film one frame, waiting a fraction of a second for the film and mechanics to settle, and then taking 5 different exposures in short succession by dimming the light source over a range of 5 exposure stops. This takes about 1.5 - 2 seconds for the 5 exposures (a rough sketch of this capture loop follows below the list).
- The images are transferred from the HQCam to a Pi4 which acts as a server on my LAN. Depending on the speed of the LAN, the transfer can take an additional 2 - 5 secs per film frame.
- The images are received by a client software which stores them to disk (sketched below). Currently, the client is also responsible for triggering the next frame capture, which introduces an additional delay. All in all, I usually calculate that for every second of Super 8 stock (at 18 fps) I need about 40 seconds to a minute of scanning time. So this approach is quite slow on the capture side.
- Once the capture is done, each set of five images is combined into a single output frame (see the fusion sketch below). The exposure fusion algorithm which does that is usually run not at the source resolution of 2028 x 1520 px, but at a smaller 1200 x 900 px resolution. At that resolution, it takes my PC between 800 - 1100 msec to process a single frame. At the full resolution of 2028 x 1520, the computing time increases to 1900 - 2100 msec/frame. Each fused 1200 x 900 px frame is stored as a 16bpp RGB image on disk, with an average size of about 5 MB.
- A final post-processing program takes the 16bpp images and crops them to 80% to get a final output image size of 960 x 720 px. At this stage, sprocket detection and alignment as well as color correction and grading are done; a slight sharpening step is included here as well (see the post-processing sketch below). This pipeline needs less than a second per frame (but I have not really timed it). It outputs the final frame as an 8bpp RGB image with a size of 960 x 720 px.
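To make the capture loop above a bit more concrete, here is a rough sketch written with the Picamera2 library. My actual capture software differs in the details; the film transport and the dimmable light source are hardware-specific, so advance_one_frame() and set_light_level() are just placeholders, and the gain values are only examples:

```python
# Capture sketch: fixed exposure, fixed gains, five exposures per frame
# realized by dimming the light source. Hardware control functions are
# placeholders; gain values are examples, not my calibrated settings.
import time
from picamera2 import Picamera2

def advance_one_frame():
    pass          # placeholder: step the film transport by one frame

def set_light_level(level):
    pass          # placeholder: dim the light source to the given fraction

picam2 = Picamera2()
picam2.configure(picam2.create_still_configuration(main={"size": (2028, 1520)}))
picam2.set_controls({
    "AeEnable": False, "AwbEnable": False,
    "ExposureTime": 31250,          # 1/32 s, in microseconds
    "AnalogueGain": 1.0,            # example: keeps the sprocket area just below clipping
    "ColourGains": (1.8, 1.6),      # example fixed red/blue gains
})
picam2.start()

LIGHT_LEVELS = [1.0, 0.5, 0.25, 0.125, 0.0625]    # 5 exposure stops via the light source

def capture_frame():
    advance_one_frame()
    time.sleep(0.3)                 # let film and mechanics settle
    exposures = []
    for level in LIGHT_LEVELS:
        set_light_level(level)
        exposures.append(picam2.capture_array())
    return exposures                # list of 5 exposures for this frame
```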
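On the receiving end, the client is conceptually something like the sketch below. The wire protocol (a plain length-prefixed blob per image) and the host/port are made up for illustration; my real setup differs:

```python
# Client sketch: ask the Pi4 server for the next frame, receive the five
# exposures, and store them to disk. Protocol and addresses are hypothetical.
import socket
import struct

SERVER = ("raspberrypi.local", 5000)   # assumed host and port

def recv_exact(sock, n):
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("server closed the connection")
        buf += chunk
    return buf

def fetch_frame(frame_no):
    with socket.create_connection(SERVER) as sock:
        sock.sendall(b"NEXT")                           # trigger capture of the next frame
        for i in range(5):                              # five exposures per frame
            (size,) = struct.unpack("!I", recv_exact(sock, 4))
            data = recv_exact(sock, size)
            with open(f"frame{frame_no:05d}_exp{i}.png", "wb") as f:
                f.write(data)
```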
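The fusion-and-store step then boils down to something like this, again with OpenCV's MergeMertens standing in for my own implementation:

```python
# Fuse five exposures at reduced resolution and store a 16-bit RGB PNG.
import cv2
import numpy as np

def fuse_and_store(exposure_files, out_path):
    # downscale the captures from 2028x1520 to 1200x900 before fusing
    images = [cv2.resize(cv2.imread(f), (1200, 900), interpolation=cv2.INTER_AREA)
              for f in exposure_files]
    fused = cv2.createMergeMertens().process(images)     # float32, roughly 0..1
    img16 = np.clip(fused * 65535, 0, 65535).astype(np.uint16)
    cv2.imwrite(out_path, img16)                         # PNG keeps 16 bits per channel
```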
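And the post-processing stage, reduced here to just the crop, the slight sharpening and the conversion to 8 bpp (sprocket alignment and color grading omitted; the unsharp mask is just one possible way to do the sharpening), looks roughly like this:

```python
# Post-processing sketch: crop the 1200x900 16-bit frame to 80% (960x720),
# apply a mild unsharp mask, and write an 8-bit result.
import cv2
import numpy as np

def postprocess(in_path, out_path):
    img = cv2.imread(in_path, cv2.IMREAD_UNCHANGED)      # 16-bit RGB frame
    h, w = img.shape[:2]                                 # 900 x 1200
    ch, cw = int(h * 0.8), int(w * 0.8)                  # 720 x 960
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = img[y0:y0 + ch, x0:x0 + cw]

    # slight sharpening via a simple unsharp mask
    blurred = cv2.GaussianBlur(crop, (0, 0), sigmaX=1.0)
    sharp = cv2.addWeighted(crop, 1.3, blurred, -0.3, 0)

    out8 = (sharp.astype(np.float32) / 65535 * 255).clip(0, 255).astype(np.uint8)
    cv2.imwrite(out_path, out8)
```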
Now, for me, 960 x 720 px is a reasonable output format for my purposes (my old “Revue” camera combined with Agfa stock is far from reaching even that resolution… )
As the capturing and processing are slow, lowering the resolution at appropriate points in the pipeline gives me some gains in processing speed. Since capturing and post-processing run unattended (a whole film is processed with the same settings), I usually do the processing overnight and just check the results in the morning.
Anyway, I am still in the process of optimizing the whole thing. As this is at least the fourth version of my film scanner, I might end up with something totally different in a few months' time…
Hope this wasn't too long a response to read. Good luck with your light source - it will be interesting to see what raw captures the HQCam can achieve!