I am currently investigating a third option: debayer the DNGs and save them as linear PNGs or TIFs. The reason is that the really time-consuming part of processing a DNG is the debayer step; grading is nowadays a real-time operation.
1. Direct DNG import: you get the full dynamic range of the capture. However, the files are large, and your disk/USB cable might not be able to sustain the high transfer rates required for real-time playback. Proxies and appropriate caching help here.
2. Intermediate image format, developed in a raw converter: that is, the DNG is debayered and subsequently graded into a PNG or TIF. The advantage here: the time-consuming debayering is done only once (not every time anything in the composition is changed, as would be the case with 1.). In principle, you could use 16-bit PNGs or TIFs as the intermediate format in order to avoid excessive quantization. 16 bit has enough headroom to ensure that grading in any video editor is not limited by digital precision (think of banding in sky images - a small numeric sketch below this list illustrates the point). A more natural choice in this approach would be an 8-bit output format, since you have already done the basic grading work when developing the raw. That would also reduce the file sizes considerably, lessening the demand on the processing hardware. However, it also drastically reduces the headroom available for further grading. One additional drawback @jankaiser already mentioned: you are grading in an image processor, not a video editor - so it is not easy to find a grade that works well enough for the whole scan.
3. Intermediate image format, not yet developed: this is my third suggestion on the topic. I actually need to follow this path, as I have to do some image processing beforehand that is unavailable or cumbersome in the video editor of your choice (specifically, film registration via sprocket detection). Here, the DNG file is debayered in software, and the full 16-bit linear values obtained are written into a linear PNG or TIF (I am still experimenting with which format (compressed/uncompressed) works best with my hardware). These files will not work if imported directly into daVinci; they need a LUT to be assigned, specifically “LUT>VFX IO>linear to Rec 709”. Once that is done, the image is indistinguishable from the one the built-in daVinci raw converter outputs. But the time-consuming debayer step is done only once, externally. Therefore, at 2k resolution (specifically 2048 x 1520 px) I can obtain real-time performance with, as already mentioned, the full dynamic range of the original DNG.
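A minimal sketch of that debayer-to-linear step, assuming rawpy (Python bindings to LibRaw) and OpenCV - the original post does not name the tools used, so treat this only as one possible way to get linear 16-bit frames out of a DNG:

```python
# Debayer a DNG and store the result as a linear 16-bit PNG.
# rawpy/LibRaw and OpenCV are assumptions here, not necessarily
# the tools used in the workflow described above.
import cv2
import rawpy

def dng_to_linear_png(dng_path, out_path):
    with rawpy.imread(dng_path) as raw:
        # gamma=(1, 1) and no_auto_bright keep the output linear;
        # output_bps=16 preserves the full bit depth of the capture
        rgb = raw.postprocess(gamma=(1, 1),
                              no_auto_bright=True,
                              output_bps=16,
                              use_camera_wb=True)
    # OpenCV expects BGR channel order; PNG (and TIF) handle 16 bit fine
    cv2.imwrite(out_path, cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR))

dng_to_linear_png("frame_0001.dng", "frame_0001.png")
```

In daVinci, such files then need the “linear to Rec 709” LUT mentioned above assigned to them.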
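And to make the quantization argument in 2. concrete, here is a small numeric demonstration (my own illustration, with made-up numbers) counting the distinct levels that survive an aggressive gain when the intermediate was stored in 8 bit versus 16 bit:

```python
# Banding demonstration: a smooth, dark gradient is stored at 8 or 16 bit,
# then pushed with a strong gain, as a grade might do to a sky.
import numpy as np

gradient = np.linspace(0.0, 0.25, 1920)   # dark, sky-like ramp
gain = 4.0                                # aggressive grade

g8  = np.round(gradient * 255) / 255      # quantized to 8 bit first
g16 = np.round(gradient * 65535) / 65535  # quantized to 16 bit first

# distinct output levels after the grade (more levels = smoother ramp)
print(len(np.unique(np.round(g8  * gain * 255))))   # roughly 65 -> visible banding
print(len(np.unique(np.round(g16 * gain * 255))))   # roughly 256 -> smooth
```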
Finally, @jankaiser’s suggestion to render to an intermediate video file format is also a viable way to speed up post-production. Proxies and cached intermediate results (which you can switch on in daVinci in a very fine-grained way) do something similar behind the scenes. Of course, there is a loss associated with this - you do not get a reduction in file size without compromises. But it does make sense from a storage point of view, as well as for processing speed.
Thanks @jankaiser, very interesting video from Scott Schiller. I will also look into your workflow. With jpg capture, using Compressor to create the .mov, and loading into FCPX, I already have a much better result compared to my workflow of about a year ago. Your previous recommendation to stick to the original frame rate also helps. I will skip Adobe Lightroom and dig into DaVinci Resolve.
Thanks @cpixip. I will follow your suggestions and try to capture it in a small instruction video. DNG is new to me and I have to learn more about it. To be continued… First results: https://youtu.be/q3acbGi7888?si=89J2thMXQ65lN7hE
I’m curious about how you handle the source cropping for images processed with Mertens merge.
Do you guys include those bright white sprocket holes in your sources, or do you just go with the neatly cropped final shots? I’ve noticed that including a full bright area significantly alters the appearance and brightness of the merged result. Does the Mertens merge algorithm play nicer with one method over the other for you?
I would not say “significantly”, but yes, the intrinsic processing of the exposure fusion algorithm leads to some bleeding of very bright areas into dark areas. This is especially noticeable around the high-contrast borders of the sprocket hole.
One can counteract this somewhat by using fewer levels in the Laplacian pyramid (if you have access to this parameter) and other tweaks. In my experiments, I did not find this bleeding effect annoying enough to bother improving the situation.
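For those who want to experiment with the pyramid depth: OpenCV’s cv2.createMergeMertens exposes the three weighting terms but, as far as I know, not the number of pyramid levels. The sketch below (my own illustration, with a made-up function name) rebuilds the fusion by hand with an adjustable levels parameter:

```python
# A hand-rolled sketch of Mertens-style exposure fusion with an
# adjustable number of pyramid levels.
import cv2
import numpy as np

def fuse_exposures(images, levels=4):
    """Fuse a list of float32 BGR images in [0, 1], Mertens-style."""
    # per-image weights: contrast * saturation * well-exposedness
    weights = []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        contrast = np.abs(cv2.Laplacian(gray, cv2.CV_32F)) + 1e-6
        saturation = img.std(axis=2) + 1e-6
        well_exposed = np.prod(np.exp(-((img - 0.5) ** 2) / 0.08), axis=2) + 1e-6
        weights.append(contrast * saturation * well_exposed)
    total = np.sum(weights, axis=0)
    weights = [w / total for w in weights]

    fused = None
    for img, w in zip(images, weights):
        # Gaussian pyramid of the weight map
        gp_w = [w]
        for _ in range(levels - 1):
            gp_w.append(cv2.pyrDown(gp_w[-1]))
        # Laplacian pyramid of the image
        gp_i = [img]
        for _ in range(levels - 1):
            gp_i.append(cv2.pyrDown(gp_i[-1]))
        lp_i = []
        for i in range(levels - 1):
            size = (gp_i[i].shape[1], gp_i[i].shape[0])
            lp_i.append(gp_i[i] - cv2.pyrUp(gp_i[i + 1], dstsize=size))
        lp_i.append(gp_i[-1])
        # weight each pyramid level and accumulate across exposures
        contrib = [lap * wl[..., None] for lap, wl in zip(lp_i, gp_w)]
        fused = contrib if fused is None else [f + c for f, c in zip(fused, contrib)]

    # collapse the fused pyramid back into a single image
    out = fused[-1]
    for i in range(levels - 2, -1, -1):
        size = (fused[i].shape[1], fused[i].shape[0])
        out = cv2.pyrUp(out, dstsize=size) + fused[i]
    return np.clip(out, 0.0, 1.0)
```

Fewer levels keeps the blending more local, which is what limits the bleeding across the sprocket-hole border - at the cost of potentially more visible transitions between the exposures.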
In fact, I recently switched to using only single raw captures for my scans, as the colors seem to be more consistent than with exposure fusion. Some hard-to-correct local color shifts might be caused by the same underlying effect - the spread of intense colors or luminance values into neighbouring image areas. Again, this is caused by the way the final image of exposure fusion is calculated via a Laplacian pyramid. Using only a single raw image has its own challenges, namely too much noise in really dark parts of the image, but that issue can be solved by an appropriate noise reduction (one possible approach is sketched below).
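The post does not say which noise reduction is meant; as one possibility, OpenCV’s non-local means filter works reasonably on 8-bit frames:

```python
# One possible noise-reduction step for single-capture scans; this is
# just an illustration, not the method used in the post above
# (cv2.fastNlMeansDenoisingColored expects 8-bit input).
import cv2

frame = cv2.imread("frame_0001.png")  # 8-bit BGR frame from the scan

# h / hColor control filter strength: larger values clean up the dark
# areas more, but also soften fine film grain everywhere
denoised = cv2.fastNlMeansDenoisingColored(frame, None,
                                           h=5, hColor=5,
                                           templateWindowSize=7,
                                           searchWindowSize=21)
cv2.imwrite("frame_0001_denoised.png", denoised)
```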
In my case, I have always sent the Mertens algorithm cropped images that include only the desired area of the frame (see the sketch at the end of this post).
In my opinion the results are very acceptable.
It has never happened to me, but I am aware of users who have sent the algorithm images that include unwanted areas, such as the extremely bright sprocket hole along with very dark areas around the frame, and sure enough, the Mertens algorithm does not give good results - mainly, a fluctuation in the brightness of the images from one frame to the next.
I do not know the internal workings of the algorithm; what I have described is the result of my own experience and that of other users with whom I have exchanged notes.
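To make that cropping step concrete, here is a minimal sketch using OpenCV’s Mertens implementation; the file names and crop coordinates are placeholders, not values from anyone’s actual setup:

```python
# Crop each exposure to the frame area (excluding the bright sprocket
# hole) before exposure fusion.
import cv2
import numpy as np

# differently exposed captures of the same frame (placeholder names)
paths = ["exp_dark.png", "exp_mid.png", "exp_bright.png"]
y0, y1, x0, x1 = 40, 1480, 120, 2040   # crop that excludes the sprocket area

exposures = [cv2.imread(p)[y0:y1, x0:x1] for p in paths]

merged = cv2.createMergeMertens().process(exposures)  # float32, roughly [0, 1]
cv2.imwrite("merged.png", np.clip(merged * 255, 0, 255).astype("uint8"))
```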