I am currently investigating a third option: debayer the DNGs and save them as a linear PNG or TIF. The reason is that the really time-consuming part of processing a DNG is the debayer step; grading is nowadays a real-time operation.
1. Direct DNG import: you get the full dynamic range of the capture. However, the files are large, and your disk/USB cable might not be able to sustain the high transfer rates required for real-time playback. Using proxies and caching appropriately helps here.
2. Intermediate Image Format, developed in a raw converter: that is, the DNG is debayered, graded, and then exported as a PNG or TIF. Advantage here: the time-consuming debayering is done only once (not every time anything in the composition changes, as would be the case with 1.). In principle, you could use 16-bit PNGs or TIFs as the intermediate format in order to avoid excessive quantization; 16 bit has enough headroom to ensure that grading in any video editor is not limited by numerical precision (think of banding in sky images). More natural in this approach would be an 8-bit output format, as you have already done the basic grading work when developing the raw. That would also reduce the file sizes considerably, lessening the demands on the processing hardware. However, it also drastically reduces the headroom available for further grading. One additional drawback @jankaiser already mentioned: you are grading in an image processor, not a video editor, so it is not easy to find a grade that works well enough for the whole scan.
3. Intermediate Image Format, not yet developed: this is my third suggestion on the topic. I actually need to follow this path, as I need to do some image processing beforehand that is not available, or is cumbersome, in the video editor of your choice (specifically, film registration via sprocket detection). Here, the DNG file is debayered in software, and the full 16-bit linear values obtained are written into a linear PNG or TIF (I am still experimenting with which format, compressed or uncompressed, works best with my hardware). These files will not work if imported directly into daVinci; they need a LUT assigned, specifically “LUT > VFX IO > linear to Rec 709”. Once that is done, the image is indistinguishable from the image the built-in daVinci raw converter outputs. But the time-consuming debayer step is done only once, externally. Therefore, at 2K resolution (specifically 2048 x 1520 px) I can obtain real-time performance with, as already mentioned, the full dynamic range of the original DNG.
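To illustrate the idea, here is a minimal sketch of such an external debayer step, assuming the rawpy and OpenCV Python packages (the file names are placeholders; this is just one way to produce a 16-bit linear PNG from a DNG, not necessarily the exact pipeline used here):

```python
import cv2
import rawpy

def dng_to_linear_png(dng_path: str, png_path: str) -> None:
    with rawpy.imread(dng_path) as raw:
        # gamma=(1, 1) keeps the data linear; no_auto_bright avoids any
        # automatic exposure change, so the full DNG range is preserved.
        rgb16 = raw.postprocess(
            gamma=(1, 1),
            no_auto_bright=True,
            use_camera_wb=True,
            output_bps=16,
        )
    # OpenCV expects BGR channel order; PNG stores 16 bit per channel.
    cv2.imwrite(png_path, cv2.cvtColor(rgb16, cv2.COLOR_RGB2BGR))

dng_to_linear_png("frame_000123.dng", "frame_000123.png")
```

The resulting file is linear, which is why the “linear to Rec 709” LUT mentioned above is needed on import.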
Finally, @jankaiser’s suggestion to render to an intermediate video file format is also a viable way to speed up post-production. Proxies and cached intermediate results (which you can switch on in daVinci in a very fine-grained way) do something similar behind the scenes. Of course there is a loss associated with this - you do not get a reduction in file size without compromises. But it does make sense from a storage point of view, as well as for processing speed.
Thanks @jankaiser, very interesting video from Scott Schiller. I will also look into your workflow. With jpg capture, using Compressor to create the .mov, and loading that into FCPX, I already get a much better result compared to my workflow from about a year ago. Your previous recommendation to stick to the original frame rate also helps. I will skip Adobe Lightroom and dig into Davinci Resolve.
Thanks @cpixip. I will follow your suggestions and try to capture it in a small instruction video. DNG is new to me and I have to learn more about it. To be continued… First results: https://youtu.be/q3acbGi7888?si=89J2thMXQ65lN7hE
I’m curious about how you handle the source cropping for images processed with Mertens merge.
Do you guys include those bright white sprocket holes in your sources, or do you just go with the neatly cropped final shots? I’ve noticed that including a full bright area significantly alters the appearance and brightness of the merged result. Does the Mertens merge algorithm play nicer with one method over the other for you?
I would not say “significantly”, but yes, the intrinsic processing of the exposure fusion algorithm leads to some bleeding of very bright areas into dark areas. This is especially noticeable around the high-contrast borders of the sprocket hole.
One can counteract this somewhat by using fewer levels in the Laplacian pyramid (if you have access to this parameter) and other tweaks. In my experiments, I did not find this bleeding effect annoying enough to bother improving the situation.
In fact, I recently switched to using only single raw captures for my scans, as the colors seem to be more consistent than with exposure fusion. Some hard-to-correct local color shifts might be caused by the same underlying effect: the spread of intense colors or luminance values into neighbouring image areas. Again, this is caused by the way the final image of exposure fusion is calculated via a Laplacian pyramid. Using only a single raw image has its own challenges, namely too much noise in really dark parts of the image, but that issue can be solved by appropriate noise reduction.
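To give an idea of what such a noise reduction could look like, here is a minimal sketch assuming OpenCV and an already-developed 8-bit frame; the shadow weighting and parameter values are purely illustrative, not tuned settings from this workflow:

```python
import cv2
import numpy as np

frame = cv2.imread("frame_000123.png")            # developed 8-bit BGR frame
denoised = cv2.fastNlMeansDenoisingColored(frame, None, 6, 6, 7, 21)

# Blend denoised and original based on luminance, so mainly the dark,
# noisy regions are affected while brighter detail stays untouched.
luma = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
weight = np.clip(1.0 - 2.0 * luma, 0.0, 1.0)[..., None]   # 1 in shadows, 0 above mid-grey
result = (weight * denoised + (1.0 - weight) * frame).astype(np.uint8)
cv2.imwrite("frame_000123_denoised.png", result)
```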
In my case I have always sent the cropped images that only include the desired area of the frame to the Mertens algorithm.
In my opinion the results are very acceptable.
It has never happened to me, but I am aware of users who have fed the algorithm images that include unwanted areas, such as the extremely bright sprocket hole along with very dark areas around the frame, and sure enough, the Mertens algorithm does not give good results in that case; mainly, fluctuations in the brightness of the images from one frame to the next can be seen.
I do not know the internal workings of the algorithm; what I have described is the result of my own experience and that of other users with whom I have exchanged notes.
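For reference, cropping before the merge can be as simple as the following minimal sketch, assuming OpenCV’s MergeMertens; the file names and crop coordinates are placeholders:

```python
import cv2
import numpy as np

# The same crop (frame area only, no sprocket hole) applied to every exposure.
y0, y1, x0, x1 = 40, 1480, 120, 2040
exposures = [cv2.imread(p)[y0:y1, x0:x1]
             for p in ("exp_short.png", "exp_mid.png", "exp_long.png")]

merge = cv2.createMergeMertens()
fused = merge.process(exposures)                 # float32 result, roughly 0..1
out = np.clip(fused * 255.0, 0, 255).astype(np.uint8)
cv2.imwrite("frame_merged.png", out)
```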
I’ve started experimenting with CapCut Pro. Here you can find a short comparison with Davinci Resolve Studio. Unfortunately, the denoise function has only two stages, weak and strong. I used weak in this video: https://youtu.be/o3SpcFS9cBM?si=LTFfNCuuX7s9GVwE The input for CapCut was an unedited .mov from Davinci Resolve Studio (from TIFF image sequences).
There is currently an interesting development happening with the Raspberry Pi HQ sensor. It seems that there is headroom with respect to the speed of the two CSI lanes the sensor is using, so there is a chance to increase fps above the meager 10 fps at 4K resolution - at least with the RP5. Specifically, 27 fps @ 4056 x 3040 px with 10-bit raw and 23 fps @ 4056 x 3040 px with 12-bit raw seem possible.
Whether that will make it into the official kernel will probably depend on things like EMC/EMI specs, which might be exceeded at the higher frequencies the CSI lanes would be running at. The speed used also exceeds the specs of the RP5’s I/O chip (nicknamed “RP1”): 1.8 Gbit/s (new speed) vs. 1.5 Gbit/s (spec of the RP1 chip). But it’s an interesting development.
I tried what is explained in the forum post, and the gain is spectacular. At full resolution, the request loop without any processing exceeds 20 fps, compared with 10 fps before. The gain is also seen in JPEG encoding (OpenCV) at 20 fps. On the other hand, DNG encoding remains below 10 fps, probably because DNG encoding in Python is not very efficient.
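For reference, the kind of bare request loop timed above might look roughly like this with picamera2 (a sketch; the configuration details are illustrative and may differ from the setup described in the forum post):

```python
import time
from picamera2 import Picamera2

picam2 = Picamera2()
config = picam2.create_video_configuration(
    raw={"size": picam2.sensor_resolution})      # full-resolution raw stream
picam2.configure(config)
picam2.start()

n_frames = 100
start = time.monotonic()
for _ in range(n_frames):
    request = picam2.capture_request()           # blocks until a frame arrives
    request.release()                            # no processing, just measure the loop
elapsed = time.monotonic() - start
picam2.stop()
print(f"{n_frames / elapsed:.1f} fps")
```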
I am still nostalgic for the Mertens merge, which makes it possible to obtain a satisfactory result for a whole reel automatically, without having to search for and fix an exposure, which is not necessarily easy. The disadvantage is the reduced frame rate (even with the “HDR trick”). In addition, there is a problem with libcamera, which is very slow to fix an exposure, and even slower if you want automatic exposure.
There is, however, a new possibility: the AGC/AE algorithm can do its calculations on different channels, which makes it possible to obtain a “short” image and a “long” image, the parameters being adjustable (but not easy to understand) in the configuration file.
See my discussion with examples on the picamera2 GitHub.
You write in the linked post “The result of the merge is spectacular but it is a “black box” the treatments are poorly documented” - well, that’s probably something to consider…
Yes, for the merge, but on the other hand the possibility of obtaining the two images, including as DNG, is interesting. The parameters of the configuration file make it possible to finely adjust the constraints for the desired short and long exposures. But it is an auto exposure that will vary over the reel, so there are pros and cons.
Are there any references for the DNG merge?
In principle, you could try any of the standard “merge” procedures. From a technical point of view, the Debevec algorithm should work best with .dng, the gain curves being just linear functions. I am not sure whether existing code will handle the necessary larger data range, from 0 to 65535; most implementations operate only within the 0 to 255 range. This might prevent the use of available software for this task.
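Because the data are linear, a Debevec-style merge essentially collapses to dividing each frame by its exposure time and averaging the non-saturated pixels, so 16-bit data can be handled directly with NumPy instead of relying on 8-bit HDR code. A minimal sketch (exposure times and the saturation threshold are placeholders):

```python
import numpy as np

def merge_linear(frames, exposure_times, saturation=0.98 * 65535):
    """frames: linear 16-bit images (H, W, 3); returns a relative-radiance map."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    weight = np.zeros(frames[0].shape, dtype=np.float64)
    for img, t in zip(frames, exposure_times):
        img = img.astype(np.float64)
        valid = (img < saturation).astype(np.float64)   # ignore clipped pixels
        acc += valid * img / t                          # per-frame radiance estimate
        weight += valid
    return acc / np.maximum(weight, 1.0)                # average over valid frames

# e.g. three debayered linear frames with relative exposures 1x / 4x / 16x:
# hdr = merge_linear([short, mid, long], [1.0, 4.0, 16.0])
```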
As I mentioned, I tried some time ago to merge up to 3 raw files manually, with rather mixed results. I did not continue that route so far.
If we read the documentation in more detail, we see that this processing is performed by the PiSP under the control of libcamera.
In the PiSP reference guide the merge is called the “stitch”, and we have the following description:
Then the optional Tonemapping stage seems to be a parameterizable gain adjustment.
Ok, what is described seems to be rather straightforward. And the results from your experiment look promising. So maybe this is yet another option to consider for scanning.