My Super8 film scanner

I’m talking about amateur cinema with amateur equipment.
We can mention many factors that influence the projection, but they will not change the quality of the film itself. Believing that the art of projection will work wonders all the time does not match my experience. And I come back to the fact that the scans are very often of better quality than the projections.

Do you have a screen or a video projector that allows you to reproduce all these colors?

I think for comparison, you could start by lowering the gamma on the image on the right.
You will obtain a less contrasted image which will allow you to better see the details in the dark parts without burning out the whites. Then I would play with the exposure and maybe with the black level and the contrast. The scan will give a somewhat bland image, but I would restore the balance as much as possible in post-processing. I’m not saying I could do as well as you, but not having your film with me for testing, we won’t know :wink:
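For anyone who wants to try that digitally rather than in an editing tool, here is a minimal gamma-adjustment sketch in Python/OpenCV (the file names and the gamma value are just placeholders):

import cv2
import numpy as np

img = cv2.imread("scan_frame.png")

# out = 255 * (in/255)**gamma; an exponent below 1.0 lifts the shadows and
# lowers the overall contrast without clipping the highlights
gamma = 0.8
lut = np.clip(255.0 * (np.arange(256) / 255.0) ** gamma, 0, 255).astype(np.uint8)
cv2.imwrite("scan_frame_flat.png", cv2.LUT(img, lut))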

But again, I find it great to test these options, well done. But as long as it takes 3 hours of processing for a 15 m reel, it’s not practical for me right now.
But later, when cameras do this directly in their hardware, it will be great.

@Roland: Well, we all have different opinions, emphases and experiences. I worked with Super 8 in the 70’s and 80’s of the last century and high quality cameras and projection equipment were used then - admittedly not by everyone.

Now to the issue of scanning. As certain types of film stock are fading away over time, one of my goals is to preserve what’s left - 100%, today. Ideally, such an archival scan should be able to digitize the content of the film in question in full glory. That is not possible directly with current hardware. As most camera sensors max out at about 14 bit/color channel, you will need to work with multiple exposures - or at the very least with precisely chosen exposure settings, metered for every frame individually, while utilizing the full dynamic range of a 14 bit camera. Anything less will leave parts of frames either blown out or pitch-black, with the image information in these areas missing. Or quantization effects become noticeable (see below).

Any of the more elaborate approaches mentioned above (plus a few more) is able to deliver an archival scan. That is, the available information of the scan is sufficient to reconstruct the full HDR data contained in the movie frame (there are other conditions on such a setup, like scan resolution and so on, that I won’t discuss here).

In any case, the ultimate goal is to get a true HDR (unlimited dynamic range) for every movie frame.

For me, this is important: that my archival scan is future-proof. I do not plan to rescan a film sometime in the future. It might already have faded away.

Note that the archival scan is not intended to be used for presentation. This is the purpose of a secondary output, let’s call this the presentation scan. The presentation scan might have colors corrected, unsteady camera motion compensated, film grain added or removed, and so on.

From a technical point of view, the presentation scan needs to adapt to the limits of current display technology, transmission and distribution channels. Today’s standard is 8 bit/color channel displays, which are slowly being replaced by displays and other technology approaching the contrast ratios you need for true HDR. But we are not there yet.

So generally, one of today’s goals is to map the archival HDR (or a pre-representation of that) to the limited dynamic range (usually 8-10 bit) available in today’s transmission formats and display hardware. This goal is known as the magical area of “tone-mapping” within the HDR context. A lot of effort has been spent on developing appropriate algorithms for that task (i.e., adapting true HDR to be displayed on limited-contrast displays), and I guess I studied (= read the original papers) and tested most of these algorithms.
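Just to make that concrete: OpenCV ships a few of the classic global tone-mapping operators. A minimal sketch, assuming a float32 radiance map is already available (the file names are placeholders):

import cv2
import numpy as np

# a float32 radiance map, e.g. a Radiance .hdr file
hdr = cv2.imread("frame.hdr", cv2.IMREAD_UNCHANGED).astype(np.float32)

# Drago is one of the classic global tone-mapping operators
tonemap = cv2.createTonemapDrago(gamma=2.2, saturation=1.0)
ldr = tonemap.process(hdr)                      # float output, roughly [0,1]
cv2.imwrite("frame_tonemapped.jpg", np.clip(ldr * 255, 0, 255).astype(np.uint8))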

With respect to non-manual operation, the Mertens algorithm outperforms every other algorithm known to me. This can be directly traced back to the structure of this algorithm, which mimics processes similar to those of the human visual system (I worked for over 10 years researching neural algorithms of the human visual system). However, my favorite point is again: exposure fusion is fully automatic, no manual tuning required.

Of course, once the technological threshold of full HDR display and transmission has been passed, tone-mapping will no longer be necessary. Since a true HDR can actually be computed from the archival scan described above (a stack of multiple exposures for each frame), this process is as future-proof as it can be.

In closing, thanks for your nice description of manually tuning the original scan towards the automatic exposure fusion result. I do have a program which does exactly this (it matches a source histogram to a reference histogram) automatically:

The single exposure scan is now on the left, the exposure fusion reference is on the right. While the intensities of both images are now aligned, you may notice quantization errors in the left image, most notably on the jacket of the man speaking to the guide. Also, the texture of the sand in the shadow is barely visible in the single scan result. Granted, the quality on the left can easily be achieved with real-time scanning (i.e. 3 min for a 15 m roll), but for the quality on the right you need to throw in more machine time (a little less than 3 hours on my system). I am choosing the right side, mainly because a single scan per frame is not future-proof and the results are visually more pleasing without manual interaction.
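For anyone who wants to experiment with this kind of histogram matching themselves, here is a minimal sketch using scikit-image (this is not the program used above, and the file names are placeholders):

import cv2
from skimage.exposure import match_histograms

source = cv2.imread("single_scan.png")          # single-exposure scan
reference = cv2.imread("exposure_fusion.png")   # exposure-fusion result

# remap the source intensities so its per-channel histogram matches the reference
matched = match_histograms(source, reference, channel_axis=-1)
cv2.imwrite("single_scan_matched.png", matched.astype("uint8"))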


For a fair comparison, the same after effects should be applied to both images, like the degrain and sharpening you mentioned.

I’ve been trying all evening to get mertens merge to produce a better looking image than a raw image. For this comparison I’ve used the following code:

import cv2
import numpy as np

from picamera2 import Picamera2
from libcamera import ColorSpace
from libcamera import Transform

# Camera + tuning
# - can we add a color profile here?
tuning = Picamera2.load_tuning_file("/tmp/imx477_cpixip.json")
picam2 = Picamera2(tuning=tuning)

# Capture configuration
capture_config = picam2.create_still_configuration(main={"size": (2028, 1520), "format": "RGB888"}, raw={"size": picam2.sensor_resolution}, transform=Transform(hflip=1))
picam2.configure(capture_config)

picam2.controls.AnalogueGain        = 1.0
picam2.controls.FrameDurationLimits = (10, 2000000)

# Auto-exposure image
picam2.start()
request = picam2.capture_request()
metadata = request.get_metadata()
request.save_dng("output/01_auto.dng")
request.save("main", "output/01_auto.jpg")
print(metadata)
baseExposure = int(metadata["ExposureTime"] / 2)
request.release()
picam2.stop()

# 5 exposures: 1 underexposed, 1 normal, 3 overexposed
images = []
for i in range(5):
	exposure = baseExposure*2**i
	print(i, exposure)
	picam2.controls.ExposureTime = exposure
	picam2.start()
	request = picam2.capture_request()
	metadata = request.get_metadata()
	images.append(request.make_array("main"))
	request.save("main", "output/02_fixexp_" + str(i) + ".jpg")
	print(request.get_metadata())
	request.release()
	picam2.stop()

# Mertens merge the 5 images
merge = cv2.createMergeMertens()
merged = merge.process(images)
merged = np.clip(merged * 255, 0, 255).astype(np.uint8)
cv2.imwrite("output/03_merged.jpg", merged)

Then I opened the resulting images in Adobe Lightroom and pressed the auto lighting button. The reason I did this is to get comparable images, with exposure, shadows and highlights automatically restored in the same way.

Here’s the 5x-exposure Mertens jpg versus the auto-exposed jpeg:


To me the mertens version looks slightly better, but most shadows and highlights seem equally recovered by lightroom.

Doing the same compare using the raw/dng image, the difference is even smaller:


Note that I’m ‘cheating’ a little because the raw image has 4x the resolution of the jpeg image. Which of the images do you think looks best?

Am I doing something wrong with the mertens exposure times perhaps (see code)? I’m using the autoexposed image as the base exposure time, and then:

  • 50% exposure time
  • 100% exposure time
  • 200% exposure time
  • 400% exposure time
  • 800% exposure time

Or perhaps my film is not hdr ;). I don’t know what film stock it is.

@PixelPerfect, welcome to the forum. I have been around the forum for a long time, but am finally getting into the camera section of my project; much to learn for me.

I understand there are different points of view, and I certainly respect yours @Roland.

I would point out that even professional 35mm film was meant to be projected, yet it is now scanned for presentation on HDR displays. I am delighted to watch decades-old movies on my TV, in much better quality than they were ever presented at the movie theater!

In regards to resolution and dynamic range, in my opinion more is more, so go with whatever one can afford. By afford I mean the cost of the lens, sensor, illuminant and storage, and, more importantly, the time to capture and process.

After scanning all my films at 24MP RAW 12bit, I am building a new scanner only for the purpose of capturing higher dynamic range. Would every film require it… certainly not, but for those few precious ones, it is worth every second and every bit (I am going for low $).

Again, I find your research great and I appreciate your point of view, but I am not convinced by your demonstration.
I imagine the details in the man’s shadow come out the same in both images.
Only the right image has had sharpening applied (and grain treatment too, of course), but not the left image.
For all films in general, it is difficult to tell the difference between the details of an image and the grain of the film.
It’s often a mix of the two, especially in dark areas.

Totally agree

I’m often amazed at the ability to light up dark areas with post-processing programs

Agreed. It’s a slightly unfair comparison… :innocent:

Well, certainly the specific frame you use for testing does not pose too great a challenge. It seems that it was taken on a sunny day with an overcast sky. Besides, you are using the modified tuning file for the IMX477 for capture - which was developed in order to improve these things.

For some examples of the difference between single scans and exposure-fused footage, have a look at this post. These scans were taken in the early days with the standard tuning file of the IMX477. Since then, things have changed; I now use only 4 different exposures while scanning.

I did a comparison here between the results of exposure fusion and raw development. In summary: if you get your exposure right, the raw capture is mostly equivalent to an exposure fusion approach. However, while the latter automatically creates a visually pleasing image, the former requires some kind of raw development - at the time of the test, the color science for developing raw images from the IMX477 was not available (this has changed in the meantime), so it was no option back then.

I think if you compare your scans closely, you will still notice some tiny differences, especially in the dark shadows of the tree. For such areas, the camera signal from an exposure-fused stack simply has a finer quantization level than the raw image file. Granted, usually you will not need that precision, as these areas are rather dark anyway - but sometimes you might want to brighten these areas in post, and then you will notice a difference.

With respect to your code - nicely done and I guess a great example for the forum. A few remarks: it takes some frames until an exposure value requested is actually realized by the camera. And sometimes, a shutter speed of 9600 is realized with a shutter speed of 4800 and a digital gain of 2. So you might want to check the metadata returned with the frame both for the correct shutter speed and digital gain value. Or: just wait for about 15 frames between the setting of the shutter speed and the actual capture. Another issue is hidden in the line

merged = np.clip(merged * 255, 0, 255).astype(np.uint8)

This tacitly assumes that the merge process delivers a float image in the range [0.0, 1.0]. At least in older versions of the OpenCV software, this was not the case. So you might try instead something like

minimum = merged.min()
maximum = merged.max()
merged = np.clip(255 * (merged - minimum) / (maximum - minimum), 0, 255).astype(np.uint8)
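Coming back to the metadata remark above, here is a rough sketch of a capture helper that simply retries until the requested shutter speed has actually been applied and the digital gain has settled close to 1.0 (the tolerance and retry count are arbitrary choices, not values from picamera2):

# rough sketch - picam2 must already be configured and started
def capture_with_exposure(picam2, exposure, tolerance=0.05, max_tries=20):
    picam2.controls.ExposureTime = exposure
    for _ in range(max_tries):
        request = picam2.capture_request()
        metadata = request.get_metadata()
        ok = (abs(metadata["ExposureTime"] - exposure) < tolerance * exposure
              and abs(metadata["DigitalGain"] - 1.0) < tolerance)
        if ok:
            return request                  # caller is responsible for release()
        request.release()                   # settings not applied yet - try the next frame
    raise RuntimeError("camera did not settle on the requested exposure")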

Agreed - that was a bad choice for an example. Maybe these old examples here tell a better story.

And you are right - once you scan your footage perfectly, you discover how much film grain is present in dark areas. It has a much higher amplitude and size compared to normally exposed areas. Here again you can find two positions: some say “grain is the film” and leave it in the footage, some try to get rid of the film grain in order to recover small image details which would not be noticeable otherwise (I’m in this camp).

@PixelPerfect this is interesting. I recently bought a Reflecta scanner (same as the Wolverine) and compared it to my T-Scann 8 and T192/DS8 scanners. I was surprised by the ease of use of the Reflecta scanner and the sharpness of the images. The negatives are the Auto White Balance (it cannot be switched off) and the poor mp4 compression; no single images can be stored. The most worrying part is the mechanical movement of the film. Reflecta has limited the warranty to only 400,000 frames. So this might not be a good base for a film scanner? But still a very interesting approach. I’m a beginner in the field of 8mm digitizing and still need to learn a lot. These kinds of projects help to gain knowledge. Good luck with the project! Best regards, Hans

Ah yes, I’ve seen that topic before. The dynamic range on some of those images is huge.

For every different exposure, I’m doing:
picam2.start()
picam2.stop()
This is a little slow, but it does give the expected results. That being: the digital gain is close to 1.0, but unfortunately it is never exactly 1.0, and I haven’t been able to fix this.

I am manually checking the metadata. The entire metadata is printed to screen for each image captured with:
print(metadata)

Thanks!

Do you know if there is a way to put these into the DNG file? Preferably in python before writing the file to disk. But if there’s a command line tool that can do this that would be great too.

My box does not say Wolverine actually, I just call it a Wolverine because that’s the name everyone seems to know. For the fun of it, here’s the exact same frame, in full resolution, captured by the Wolverine 2 years ago:

I didn’t know that. But if it can digitize my family films, then I’m done, and happy. The only thing I’m not happy about is capturing the wrong side of the film, because I see no easy solution for filming the right side of the film with this scanner as a base.


Well, I do not think that there is a straightforward way to do so. The .dng-files which libcamera/picamera2 create are not perfectly following the standard, as far as I know. That’s why at some point, I stopped following this development. You would need to investigate by yourself, maybe ask on the Raspberry Pi Camera forum about the current state of affairs.

What a difference! Wow.

To my surprise, I see in this thread that there are fellow forum members who, regularly and normally, use the Picamera2 library.

I say to my surprise because every time I search for information about the current state of the Picamera2 library, the first thing I find is warnings that the library is still under development and subject to modifications.

For this reason, I continue to use the venerable Picamera library in my software.

In short, my question is: Has the time come to use the new Picamera2 library permanently?

Thank you in advance for your answers.

@Manuel_Angel - because the Raspberry Pi foundation supports only libcamera/picamera2, in the long run you just do not have any choice. While I still think libcamera is an awful software design, picamera2 abstracts it away somewhat. Lately, the API changes have been minor. So in a certain sense, picamera2 is usable.

I came up with my own tuning file for the HQ sensor, and this is giving at least me and a few other people better colors, more aligned with the old way. I reduced my scan to four exposures per frame (it was five). As elaborated above, scanning a single frame at the full resolution of the sensor takes 2.4 seconds, with 1 second needed for the film transport of each frame.

Organizing the pieces of the HQ-sensor-to-file puzzle. In regards to the output file resulting from the 4 exposures and Mertens, what file type/bit depth do you use?

Rewind the film onto another reel with the emulsion (glossy) side to the outside. Then scan in reverse and reverse the mp4 in post. Success, Hans

Just normal jpgs (8 bit per channel), transferred from the RPi to a client running under Win11. The client stores the files on the hard disk.

Exposure fusion is done later, after the whole scan is done, with a separate program. Another program is used for an initial color correction. Yet another program handles temporal denoising. After that, the files are loaded into DaVinci.

Thanks Rolf. I meant the file generated by the exposure fusion (between exposure fusion and initial color correction).

Thinking in terms of best workflow, from the perspective of best dynamic-range = largest bit-depth

My thought was that it would make sense to blend the 8-bit jpegs (or 12 bit raw) into a higher bit-depth intermediary. Something like 16bit TIFF.

That intermediary can be the source for initial color correction, or for Resolve (or VirtualDub2), and allow the process to ultimately deliver a video file with higher dynamic range.

In my first workflow, I started with a single exposure of 12 bit raw (NEF). In Resolve I cropped, adjusted levels, and color corrected to a 16 bit TIFF sequence intermediary. Then I used NeatVideo with VirtualDub2 to denoise and encode into 10 bit H.265 (my Samsung TV was happy with it).

I have not used Mertens, and probably do not understand it enough, but my thought is:
The fusion bit depth should be of a higher bit-depth than the sources.

  • Start with X exposure source files of 8 bit (jpeg) or 12 bit (DNG) (illustrating only 3 for simplicity).
  • Fuse the exposures into a 16 bit-depth intermediate (see the sketch after this list).
  • Correct/process with a workflow that keeps higher than 8-bit depth (for example, some plug-ins in VirtualDub2 result in 8bit outputs).
  • Encode with 10 bit output file for viewing (HDR in the future?)
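A rough sketch of that fusion-to-16-bit step, assuming the usual OpenCV Mertens fusion of 8 bit sources (file names are placeholders):

import cv2
import numpy as np

# placeholder names for one frame's differently exposed 8-bit captures
exposures = [cv2.imread(f) for f in ("exp_under.jpg", "exp_normal.jpg", "exp_over.jpg")]

merged = cv2.createMergeMertens().process(exposures)    # float32 result, roughly [0,1]
merged = np.clip(merged, 0.0, 1.0)

# keep the extra precision by writing a 16 bit TIFF intermediate
cv2.imwrite("frame_fused_16bit.tiff", (merged * 65535).astype(np.uint16))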

Thoughts?

@PM490 - you’re right on the spot with your scheme.

Here are more details about my current processing pipeline. It is still evolving, by no means final; one important feature is the progression of bit-depths and image sizes towards the end product. So here we go into the details:

  1. Scan of each single movie frame with 4 exposures (nicknamed ‘Highlight’, ‘Prime’, ‘Shadow1’ and ‘Shadow2’). Sensor resolution (4056,3040)@12bit. Shutter speeds: 2400, 4800, 9600, 19200. Conversion via libcamera and alternative tuning file into (4056,3040)@8bit rec709-encoded .jpgs.
  2. Transfer via LAN-link to PC-client; display and storage of received images.
  3. After the full scan of the footage, exposure fusion. The (4056,3040) source images are converted to floating-point images and scaled down to (1800,1350) resolution for further processing. This includes sprocket-registration and sub-pixel alignment of each frame stack. The intermediate image is written out at (1800,1350)@16bit. Own software.
  4. An initial color correction on the fused result is done. Either using own software (floating point precision), DaVinci Resolve (presumably also floating-point) or VirtualDub (16 bit processing). Output is written to hard disk (1800,1350)@16 bit.
  5. Restoration step. Basically temporal degraining and sharpening. This is done via an avisynth-script and VirtualDub. Output and processing at (1800,1350)@16bit.
  6. Final edit in DaVinci Resolve. Timeline resolution (1440,1080)@18fps. Color-correction and crop to image frame. Output is whatever is required.

Hope the scheme above is detailed enough. Occasionally, intermediate sizes are chosen somewhat larger than the stated ones in order to get some headroom for stabilization of unsteady takes. Also, tests I have done running the whole pipeline in floating-point format did indicate that 16 bit intermediates are sufficient for my requirements.

In another twist, I tested scanning up to three differently exposed raw files (.dng) and combining them directly into a raw file with 16bit depth per color channel. I did not notice much difference in the results compared to the above described exposure fusion process.

Note that raw files are linear in image intensities whereas .jpgs have a gamma-curve applied. So the combination of raw files has to be done differently from the exposure fusion done on the .jpg-images. It’s actually much less complicated. Using the standard exposure fusion algorithms for combining data from raw files is probably not going to work.

Continuing the comparison: exposure fused footage has a slight advantage over a 16 bit raw. That is because exposure fusion has a built-in optimization step, adjusting image intensities in a way that the output image looks nice. The 16 bit raw obtained by taking several differently exposed raw images and combining them has much more information - but the image is raw. That is, I need to “develop” the image in a second step. That is mostly a manual process. I do this occasionally for important photographs I have taken with my DSLR, but for me, it’s no fun. I certainly do not want to do this for a 20 min Super-8 reel with hundreds of differently lit scenes. However, I am still tinkering around with raw captures.

Last thought: while raw captures at 16 bit are obviously future-proof in terms of switching sometime to an HDR workflow (when hardware and standards permit it), this is also the case with scanning each film frame as an exposure stack of jpgs. The Debevec algorithm from 1998 covers that nicely - going from a stack of jpgs to a real HDR. I have used this algorithm (in my own implementation) for years, with different cameras/scanners. Very stable algorithm.
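For reference, OpenCV also ships the Debevec calibration and merge steps, so the jpg-stack-to-HDR path can be sketched in a few lines (this uses OpenCV's stock implementation, not my own; file names are placeholders, and the shutter speeds are the ones from the scan described above):

import cv2
import numpy as np

files = ["exp_2400.jpg", "exp_4800.jpg", "exp_9600.jpg", "exp_19200.jpg"]
images = [cv2.imread(f) for f in files]
times = np.array([2400, 4800, 9600, 19200], dtype=np.float32) * 1e-6   # microseconds -> seconds

# step 1: estimate the camera response curve from the stack,
# step 2: merge the stack into a float32 radiance map (a true HDR)
response = cv2.createCalibrateDebevec().process(images, times)
hdr = cv2.createMergeDebevec().process(images, times, response)
cv2.imwrite("frame.hdr", hdr)        # Radiance .hdr preserves the floating-point data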

I’ve done some investigation and found that the exif information from the .DCP file can be copied into the .DNG file using exiftool like this:

sudo apt install exiftool
exiftool -tagsfromfile PyDNG_profile.dcp -UniqueCameraModel -ColorMatrix1 -ColorMatrix2 -CalibrationIlluminant1 -CalibrationIlluminant2 -ProfileName -ProfileToneCurve -ProfileEmbedPolicy -ProfileCopyright -ForwardMatrix1 -ForwardMatrix2 -DefaultBlackRender -overwrite_original image.dng

What light source are you using? I need like 10x these exposure times for a decent image.

After reading somewhere that HDR should actually be done in RAW, and not in RGB, I was looking into ways to do this. So this sounds very interesting! Would you be willing to share your code?

With the ability of Davinci to use the DNG files directly as a raw 18fps input film, the raw DNG route looks the most interesting to me:

  • RPI: Capture RAW frames
  • RPI: Make the DNG files as good as possible (future proof): rec709 / rec2020, color profile, HDR multiple frames?
  • PC: Pass DNG files directly into Davinci without using other tools to create final output

Having the DNG files directly in Davinci will leave all options open for 4K, HDR, 10bit or whatever would be nice to have in the future (or now even).

I’ve manually created a 1 second (18 frame) clip this way, and it looks so much better than the wolverine version. Can’t wait to receive my stepper motor driver from aliexpress and start actually capturing film.

Well, I think one of the profiles of Jack Hogan is just ASCII-encoded and you might be able to extract the appropriate color matrices and forward matrices, as well as the corresponding calibration illuminants. This might get you going; but maybe the software creating the .dng-files in the first place does use Jack Hogan’s data anyway - as I mentioned, I do not know the current state of affairs here, but there have been developments the last time I checked.

Well, it is a 3D-printed integrating sphere. It used to be equipped with red, green and blue LEDs; nowadays I am instead using three Osram white-light LEDs, specifically the Osram Oslon SSL80. The latter give a better color definition. Some more information can be found here.

At this point in time, there is not really something I could share. I am still in the process of investigating the best way to do it.

Basically, any raw capture is already an HDR - only the range of intensities is limited to the bit range the camera can deliver. This has two consequences. For starters, bright lights might saturate. Of course, you would adjust the shutter speed in such a way that this is not the case. However, once you have done this, you will get quantization noise in the low-illumination parts of your image, simply because the camera is working with evenly spaced intensity levels. You can however capture a second raw with a longer shutter speed, pushing these dark areas into a better range. Of course, in this second capture, all brighter image areas will be blown out into white. Nevertheless, the shadows will be much better defined in your second capture.

For combining these two raws, you need to go from integer numbers to floating point variables. Then you need to multiply the second, brighter exposure with an appropriate scaler which gives you (nearly) identical intensities for image areas present in both images. Once you have this rescaled second exposure available, there are various ways (hard/soft threshold, for example) to combine both raws into a new image which should have reduced camera and quantization noise in the darker areas of the image.

The information in the darker areas of the image will come from your second, brighter exposure, the information in the brighter image areas from your initial base exposure. That’s it, basically. The challenges in this approach are hidden in the way the two exposures are combined; this will have an effect on how much the end result will be affected by image and quantization noise. I am still doing some research here.
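Just to make the idea concrete, here is a stripped-down sketch of the hard-threshold variant (the arrays are assumed to be black-level-corrected raw data; the saturation threshold is an arbitrary choice):

import numpy as np

def combine_raws(base, bright, exposure_ratio, max_value=4095, sat_threshold=0.95):
    base = base.astype(np.float64)
    bright = bright.astype(np.float64)
    # bring the longer exposure down to the intensity scale of the base exposure
    bright_rescaled = bright / exposure_ratio
    # wherever the longer exposure is not clipped, its rescaled values carry less
    # quantization and camera noise - use them; elsewhere fall back to the base
    use_bright = bright < sat_threshold * max_value
    return np.where(use_bright, bright_rescaled, base)

# e.g. for a second exposure taken with 4x the shutter speed of the base:
# combined = combine_raws(raw_base, raw_bright, exposure_ratio=4.0)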

Technically, a true HDR records just the radiance of the scene. So there will never be any rec709 or any other profile hidden in an HDR. Same is true for raw images: a gamma-/contrast-curve is not applicable for raw data (usually - newer sensors kind of deviate). Raw values are linear values (again: mostly).

Let’s get a little bit more into the details:

  • Raspberry Pi software captures (or at least did capture) raw sensor data into various non-standard formats onto the hard disk. Some of these raw file formats were proprietary, some are close to Adobe’s .dng-spec. How close at the moment I do not know.
  • Raw data is just that: the data the sensor is actually sensing. There is no gamma-curve applied to this data, this is actually one of the last steps done during the development of the raw data. Other steps in the development of the raw include white-balancing and application of the appropriate color matrices. And this is indeed the processing libcamera is applying to the raw data to get from that raw sensor data to the jpg or png which it usually outputs.
  • Raw data is in a certain sense already a HDR-signal. However, it is quantized to a certain range (10/12/14 bit) and exhibits therefore overflow and quantization noise. That spoils the fun occasionally.
  • a real HDR is a floating point representation of the radiance of a certain scene. Specifically, its values are unbound. A real HDR is not equivalent to what is called a “HDR” on many internet sites or in any sales material.
  • exposure fusion via Mertens does not create a real HDR. The goal of exposure fusion is to transform a stack of low dynamic range images (LDR, usually 8 bit) into another image which can be viewed on a normal display (that is, actually another LDR image). That is, the output image captures the spirit of all the data contained in the LDR stack. To achieve this, the dynamic of the original LDR stack is reduced to something which can be nicely displayed on any standard display (8 bit/channel).
  • exposure fusion is similar to the normal HDR process, but does everything in one pass. The normal HDR process consists of two independent steps: 1) estimating a real HDR from the scene data and 2) tone-mapping the HDR into a displayable LDR image. Note again the huge difference: this is a two-step procedure, first create an HDR, then tone-map it into a displayable LDR. Again, the result of a normal HDR process is different from the result of exposure fusion.
  • As noted above, a jpg image output by libcamera has, as one of the final steps, a gamma-curve applied to its values (this gamma-curve could be rec709. In the standard tuning file of the IMX477, the gamma-curve is something someone thought “looks nice”). In any case, the gamma-curve applied as well as other image processing options performed in a camera make the estimation of a HDR image from jpgs non-trivial. Debevec came up with a solution in 1998. He first uses the stack of LDRs to estimate the gain-curve of the camera. Once that gain-curve has been calculated, one has a tool to transform the jpg-image intensities back to their original raw values. Combining these recovered raw values into a single image file gives you finally the HDR you are after.
  • A true HDR looks rather dull, similar to a non-developed raw. The reason: scene radiances cover normally such a broad range of values that it is impossible to display this on a standard LDR display. Only after a secondary step, the tone-mapping, the HDR content becomes viewable on our average displays. I am not aware of any good tone-mapping algorithm for HDRs. (Just for the record: while a HDR should have the correct colors, the raw file does not. A raw file needs color science to be applied.)
  • Specifically, developing a raw into something viewable consists of two steps as well: A) get the colors right (that step depends on the camera’s Bayer filter) and B) get the intensities right (that usually amounts to remapping some intensity ranges which are either too bright or too dark and finally applying an appropriate gamma-curve). The latter step is very similar to the tone-mapping of the HDR discussed above.

One idea I have not yet tried in this context is the following: the new libcamera/picamera2 approach does not only give us access to the raw data, but also to the metadata of every single image. Now, we know that the 12 bit of the HQ camera is close to sufficient for most color-reversal material, provided that the exposure of each frame is optimized. So, what if we capture the film with autoexposure doing its fine work (optimizing the exposure of the frame to the working range of the sensor) and record with every frame the values the automatic came up with? Specifically, we should take note of the shutter speed, digital and analog gain of each frame. With this information, one should be able to transform the .dng-files (with varying exposure settings due to the autoexposure working) to a common reference exposure setting, which we need for converting the frame sequence into film footage. In the end, such an approach would be something like a poor man’s HDR capture, with limited (12 bit), but adaptable (via the autoexposure algo) dynamic range. Might be something to experiment with, as only a single raw capture needs to be done for each frame…
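A back-of-the-envelope sketch of that normalization step, assuming black-level-corrected raw data and the per-frame metadata mentioned above (the reference settings are arbitrary placeholders):

import numpy as np

def normalize_to_reference(raw, metadata, ref_exposure=4800, ref_gain=1.0):
    # total exposure actually applied by the autoexposure algorithm
    actual = metadata["ExposureTime"] * metadata["AnalogueGain"] * metadata["DigitalGain"]
    # linear raw data scales (approximately) with exposure time and gain,
    # so a simple ratio maps every frame to the common reference exposure
    return raw.astype(np.float64) * (ref_exposure * ref_gain / actual)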


Thank you so much @cpixip, the details and insight are much appreciated.

I was thinking of doing some sprocket-registration at the RPi, so I could use it as feedback to minimize the accumulation of errors in the transport. At this time there is no mechanical or optical link to sprocket in the transport controller.
My thought was to detect the sprocket at one of the exposures (probably the one with less light), maybe create a sidecar file with the info, and feed the detected position error to the transport controller (Pico).
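For what it’s worth, here is a crude sketch of what such a sprocket detection could look like (the strip coordinates, threshold and nominal position are made-up numbers that would need tuning for the actual gate):

import numpy as np

def sprocket_offset(gray_frame, x0=0, x1=120, nominal_center=760, threshold=200):
    # look at a vertical strip on the sprocket side and threshold the bright hole
    strip = gray_frame[:, x0:x1]
    hole_rows = np.where((strip > threshold).sum(axis=1) > 0.5 * (x1 - x0))[0]
    if hole_rows.size == 0:
        return None                        # no sprocket hole found in this strip
    center = 0.5 * (hole_rows[0] + hole_rows[-1])
    return center - nominal_center         # error to feed back to the transport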

I think that VirtualDub2 uses 64 bit processing and depending on the coder converts to 16 or 8 for output.

Thank you for this insight, I missed that fusion required gamma-curve source files. Interesting twist.
In the interest of best dynamic range for all channels, I was thinking that for negative color film it would make sense to figure a way to separate the exposures to capture a better range. In other words, to offset the negative base by exposing certain channels (particularly blue) in a different range.
I don’t have 8mm movie negatives, so would probably only do this as a side project for some 35mm negative stills.

That was my thinking also. Additionally, when used with light intensity control, it allows to use a 12bit sensor to capture an equivalent dynamic range larger than a 16bit sensor.

Thank you for that insight, I have to take a closer look at Mertens vs Debevec.

A couple of additional points for consideration.
This is an interesting article on the subject.

Working with DaVinci on an underpowered machine, my experience was that the processing time for RAW source files was significantly longer than when using TIFF source files. In my case I was comparing compressed NEF against uncompressed TIFF; that alone may be the taxing factor.

I was actually thinking that an intermediate would be set for best quantization of each channel.

My exploration started with the frustration of scanning reversal Ektachrome film with a faded dye (12 bit raw source). When pushing the levels in Resolve, the quantization stairs in the waveform monitor for the faded component are quite visible.

A fusion technique for each color component would allow using a different exposure range for the components affected by the faded dye.

Y = 0.3 R + 0.59 G + 0.11 B. If the exposures are selected for the maximum range of Y (especially on a film with fading), it is certain that the R and B channels would not be using the full 16 bit quantization.

All the above in the context of controlling light for expanding the capture range of each component (white light for me).

Like with RAW, the full-quantization intermediate (FQI) would require development prior to visualization.

And why would one go through all that trouble?

  1. Capturing of films subject to color fading without pushing up the noise in post-processing.
  2. Better signal to noise per color component.
  3. Better dynamic range per color component.
  4. Future-proof the resulting scan for future HDR displays.

Sorry if this is becoming another extended topic (like the illuminant discussion). @PixelPerfect apologies for the pollution to your topic :slightly_smiling_face:

This may be helpful too. DCPtool.

I’ve used a similar workflow, and the missing link (for me) is to have higher dynamic range. As mentioned above, I have some films with faded dyes.

In my first scanner the camera was a 12 bit DSLR, outputting 12 bit NEF. The RPi HQ is also 12 bit. Using fusion with a higher bit-depth intermediate to bring into Resolve is an order of magnitude improvement.

If the film is in good shape, and well exposed, a raw HQ source (12 bit) will look great, and may be all you need. When things are not ideal (film not exposed correctly / scenes with very high contrast or faded dye), more dynamic range is needed.