Thank you so much @cpixip, the details and insight are much appreciated.
I was thinking of doing some sprocket registration at the RPi, so I could use it as feedback to minimize the accumulation of errors in the transport. At this time there is no mechanical or optical sprocket link in the transport controller.
My thought was to detect the sprocket hole in one of the exposures (probably the one with the least light), maybe create a sidecar file with the info, and feed the detected position error to the transport controller (Pico).
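Something like this OpenCV sketch is what I have in mind (the ROI, threshold, and file names are placeholders to tune for the actual gate and resolution; the Pico link is left as a stub):

```python
# Rough sketch: find the sprocket hole in the darkest exposure and report
# its vertical offset from where it should sit.
import cv2
import numpy as np

def sprocket_offset(gray, roi_x=(0, 200), expected_y=1520, thresh=200):
    """Return the sprocket-hole vertical error in pixels, or None if not found."""
    edge = gray[:, roi_x[0]:roi_x[1]]          # strip along the perforation edge
    _, mask = cv2.threshold(edge, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hole = max(contours, key=cv2.contourArea)  # largest bright blob = the hole
    m = cv2.moments(hole)
    if m["m00"] == 0:
        return None
    return m["m01"] / m["m00"] - expected_y    # positive = frame sitting low

gray = cv2.imread("frame_0001_dark.png", cv2.IMREAD_GRAYSCALE)
err = sprocket_offset(gray)
if err is not None:
    np.savetxt("frame_0001.sprocket.txt", [err])   # sidecar file with the error
    # send err to the Pico over serial (e.g. pyserial) here
```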
I think that VirtualDub2 uses 64-bit processing and, depending on the codec, converts to 16 or 8 bit for output.
Thank you for this insight, I missed that fusion requires gamma-encoded source files. Interesting twist.
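For what it's worth, a minimal sketch of what I understand that to mean, assuming the bracketed frames start out linear (RAW-derived); 2.2 is just the usual assumption, not a requirement:

```python
# Gamma-encode a linear float frame before handing it to exposure fusion.
import numpy as np

def gamma_encode(linear, gamma=2.2):
    """Map a linear float image in [0, 1] to a display-referred encoding."""
    return np.power(np.clip(linear, 0.0, 1.0), 1.0 / gamma)
```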
In the interest of the best dynamic range for all channels, I was thinking that for color negative film it would make sense to figure out a way to separate the exposures to capture a better range. In other words, to offset the negative base by exposing certain channels (particularly blue) in a different range.
I don't have 8mm movie negatives, so I would probably only do this as a side project for some 35mm negative stills.
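Purely as a hypothetical sketch of the idea (the base levels and target are made-up numbers): measure the orange mask once, then scale each channel's exposure so the base lands at the same fraction of full scale.

```python
# Hypothetical numbers: per-channel exposure scaling to line up the negative base.
import math

base_rgb = {"R": 0.85, "G": 0.55, "B": 0.30}   # measured base level, fraction of clipping
target = 0.90                                  # where the base should land

for ch, level in base_rgb.items():
    scale = target / level
    print(f"{ch}: expose {scale:.2f}x ({math.log2(scale):+.1f} stops) vs the reference shot")
# R: 1.06x (+0.1 stops), G: 1.64x (+0.7 stops), B: 3.00x (+1.6 stops)
```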
That was my thinking also. Additionally, when used with light intensity control, it allows a 12-bit sensor to capture an equivalent dynamic range larger than that of a 16-bit sensor.
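Back-of-the-envelope arithmetic behind that claim (the exposure count and spacing are just an example):

```python
# One 12-bit capture covers ~12 stops of quantization; each extra bracket
# (via light intensity control) adds its spacing on top.
sensor_stops = 12
n_exposures = 5          # example bracket count
stop_spacing = 2         # stops between brackets
combined = sensor_stops + (n_exposures - 1) * stop_spacing
print(combined)          # 20 stops, vs ~16 stops of quantization in a single 16-bit capture
```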
Thank you for that insight, I have to take a closer look at Mertens vs Debevec.
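For reference, here is how the two differ in practice with OpenCV (file names and exposure times are placeholders for the bracketed captures): Mertens fuses the brackets directly, while Debevec calibrates the response and merges to a linear radiance map that still needs tone mapping or grading.

```python
import cv2
import numpy as np

imgs = [cv2.imread(f) for f in ("exp_low.png", "exp_mid.png", "exp_high.png")]
times = np.array([1/2000, 1/500, 1/125], dtype=np.float32)   # only Debevec needs these

# Mertens: straight exposure fusion, no exposure times or response curve;
# the result is roughly display-referred in [0, 1]
fused = cv2.createMergeMertens().process(imgs)

# Debevec: recover the camera response, then merge to a linear HDR radiance
# map that still needs tone mapping (or grading) before it can be viewed
response = cv2.createCalibrateDebevec().process(imgs, times)
hdr = cv2.createMergeDebevec().process(imgs, times, response)
```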
A couple of additional points for consideration.
This is an interesting article on the subject.
Working with DaVinci Resolve on an underpowered machine, my experience was that the processing time for RAW source files was significantly longer than when using TIFF source files. In my case I was comparing NEF, which is compressed, against uncompressed TIFF; that alone may be the taxing factor.
I was actually thinking that an intermediate would be set up for the best quantization of each channel.
My exploration started with the frustration of scanning reversal Ektachrome film with a faded dye (12-bit raw source). When pushing the levels in Resolve, the quantization steps for the faded component are quite visible in the Waveform monitor.
A fusion technique applied at the level of each color component would allow using a different exposure range for the components affected by the faded dye.
Y = 0.3 R + 0.59 G + 0.11 B. If the exposures are selected for the maximum range of Y (especially on a film with fading), it is certain that the R and B channels would not be using the full 16-bit quantization.
All of the above is in the context of controlling the light to expand the capture range of each component (white light in my case).
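To illustrate the point with made-up numbers: even when Y uses most of the range, a faded channel barely touches its code values.

```python
# Made-up values for a frame with a faded blue-sensitive layer.
R, G, B = 0.95, 0.92, 0.15        # per-channel peaks, fraction of full scale
Y = 0.3 * R + 0.59 * G + 0.11 * B
print(Y)                          # ~0.84: Y uses most of the range...
print(round(B * 65535))           # ...while B peaks near code value 9830 of 65535
```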
Like RAW, the full-quantization intermediate (FQI) would require development prior to visualization.
And why would one go through all that trouble?
- Capturing films subject to color fading without pushing up the noise in post-processing.
- Better signal-to-noise ratio per color component.
- Better dynamic range per color component.
- Future-proofing the resulting scan for HDR displays.
Sorry if this is becoming another extended topic (like the illuminant discussion). @PixelPerfect, apologies for polluting your topic.
This may be helpful too: DCPtool.
I've used a similar workflow, and the missing link (for me) is higher dynamic range. As mentioned above, I have some films with faded dyes.
In my first scanner the camera was a 12-bit DSLR with 12-bit NEF output. The RPi HQ is also 12 bit. Using fusion with a higher bit-depth intermediate to bring into Resolve is an order-of-magnitude improvement.
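The intermediate step I mean is roughly this, assuming the fused frame is a float image in [0, 1] such as the Mertens output above:

```python
# Sketch: scale a fused float frame to the full 16-bit range and write a
# TIFF intermediate for Resolve.
import cv2
import numpy as np

def write_intermediate(fused, path):
    out = np.clip(fused, 0.0, 1.0) * 65535.0
    cv2.imwrite(path, out.astype(np.uint16))   # OpenCV writes 16-bit TIFFs directly

# e.g. write_intermediate(fused, "frame_0001_16bit.tif")
```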
If the film is in good shape and well exposed, a raw HQ source (12 bit) will look great and may be all you need. When things are not ideal (film not exposed correctly, scenes with very high contrast, or faded dye), more dynamic range is needed.