Pi HQ Camera vs DSLR image fidelity

… not in my setup (and use case). First, the graphs you linked to mention that the data refers to “Testing done at 1.36:1 magnification”. Due to the sensor size/setup I am using, the Schneider is operated at 1:1 magnification. Second, my scanner is rather bad at keeping the film gate and sensor parallel, so a smaller aperture (= larger depth of field) gives me a little more headroom against this misalignment.

Thanks for this information! I know other footage from the Bauer camera, and it matches your scans. This was a very good camera in its day.

There has been a lot of discussion here on the forum and elsewhere about the HQ sensor. Its noise behavior is a little strange (more noise on one side of the sensor, for example). In my scans, the film grain typically overwhelms the sensor’s noise level by far. A good exposure can help here. For simplicity, I usually set the exposure for raw capture in such a way that the light level of the empty gate (or the sprocket hole) just touches the highest bit values the raw format can encode. Better would be to use the brightest spots in your footage for that adjustment - that would give you about 5-10% more headroom. The drawback is that you need to do this for every film type you encounter. As I have footage where different film types are intermixed, I usually do not bother to optimize exposure.
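As a rough illustration of this exposure check (not my actual scanner software): given a raw frame of the empty gate, you can verify that its peak just touches the 12-bit white level without significant clipping. The threshold values and the helper name here are my own choices for the sketch.

```python
import numpy as np

WHITE_LEVEL = 4095  # 12-bit raw maximum of the IMX477


def gate_exposure_ok(raw, margin=0.995):
    """Check whether the brightest pixels of the empty film gate
    (or sprocket hole) just touch the raw white level.

    raw    -- 2D numpy array of raw sensor values (12-bit)
    margin -- fraction of the white level the peak should reach
    """
    peak = raw.max()
    clipped = np.count_nonzero(raw >= WHITE_LEVEL)
    # The peak should sit close to the white level, but only a tiny
    # fraction of pixels should actually clip.
    return peak >= margin * WHITE_LEVEL and clipped < raw.size * 0.001


# Example: a gate image peaking at 4090 counts passes the check.
frame = np.full((100, 100), 3000, dtype=np.uint16)
frame[0, 0] = 4090
print(gate_exposure_ok(frame))  # True
```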

Incidentally, the post linked above also contains a link to a more challenging .dng-file from my scanner (HQ/Schneider combo). You might want to compare this scan, taken with my setup, with your own scans.

Currently, I am employing spatio-temporal processing techniques which remove the film grain and, with it, the digital noise. That’s another option to get rid of camera noise.
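To make the idea concrete, here is a heavily simplified temporal sketch (this is not my actual pipeline, which also works spatially and with motion compensation): because grain and sensor noise change from frame to frame, a per-pixel median over a short window of already-aligned frames suppresses both.

```python
import numpy as np


def temporal_median(frames):
    """Per-pixel median over a short window of (already motion-aligned)
    frames. Film grain and sensor noise are uncorrelated between frames,
    so the median suppresses both; a real pipeline would add motion
    compensation and spatial filtering on top of this.
    """
    stack = np.stack(frames, axis=0)
    return np.median(stack, axis=0)


# Five noisy versions of a constant gray frame:
rng = np.random.default_rng(0)
frames = [100 + rng.normal(0, 10, (64, 64)) for _ in range(5)]
out = temporal_median(frames)
print(out.std() < frames[0].std())  # True - the noise is reduced
```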

Another is of course averaging multiple exposures, as @npiegdon suggested. It is also feasible to take multiple exposures with sufficient overlap in the image intensities and combine them appropriately. In either case, you will end up with an image exceeding the native 12-bit dynamic range of the HQ sensor (IMX477). That is: you will get an image with noticeably less noise, at the expense of a longer scanning time.
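The plain-averaging variant can be sketched in a few lines (simulated data; real input would be raw captures of the same static film frame). The key point is to accumulate in float, so the averaged result keeps more precision than a single 12-bit frame:

```python
import numpy as np


def average_raws(raw_frames):
    """Average N raw captures of the same (static) film frame.
    Read noise averages down roughly with sqrt(N), so the result
    carries more usable dynamic range than one 12-bit frame;
    accumulating in float preserves the extra precision.
    """
    acc = np.zeros(raw_frames[0].shape, dtype=np.float64)
    for f in raw_frames:
        acc += f
    return acc / len(raw_frames)


# Simulated: 16 noisy captures of a uniform 1000-count patch.
rng = np.random.default_rng(1)
truth = np.full((32, 32), 1000.0)
frames = [truth + rng.normal(0, 8, truth.shape) for _ in range(16)]
avg = average_raws(frames)
# Noise should drop by about sqrt(16) = 4x.
print(avg.std() < frames[0].std() / 2)  # True
```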

Speaking of dynamic range: your Canon should be operating at 14 bit instead of the 12 bit the HQ sensor features natively. Obviously, there is a certain price difference between the two camera setups which should show up somewhere, I guess.

Continuing with the topic of dynamic range: you did not specify in detail how you captured your footage. Now, if you are using the picamera2 library for capturing raw files (.dng), you have to be very careful which format you request from the sensor. The reason is hilarious: if you just request the highest resolution from the sensor, the pipeline works with a compressed raw format (at least on the RP5 I am using), and the data is converted back internally to an uncompressed raw for saving as a .dng-file. More details here. The use of such a compressed format (and the recoding to uncompressed) could introduce additional digital noise, but I have not bothered to check this.
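The workaround is to request an uncompressed Bayer format explicitly instead of letting the pipeline pick a default. A configuration sketch along these lines (hardware-dependent, so treat it as a starting point - the exact format string available depends on your Pi model and sensor):

```python
# Hardware-dependent configuration sketch (Raspberry Pi + picamera2);
# "SRGGB12" is the uncompressed 12-bit Bayer format of the HQ sensor,
# but verify the formats your own pipeline actually offers.
from picamera2 import Picamera2

picam2 = Picamera2()
config = picam2.create_still_configuration(
    # Explicitly request uncompressed 12-bit raw at full resolution,
    # rather than a possibly compressed pipeline default:
    raw={"format": "SRGGB12", "size": picam2.sensor_resolution}
)
picam2.configure(config)
picam2.start()
picam2.capture_file("frame.dng", name="raw")
```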

Well, what works best for you will depend heavily on your use case. I summarized my journey so far in this thread. My journey actually started with Sony cameras, which were replaced by the much cheaper v1/v2 sensors from the Raspberry Pi people, only to be substituted with a USB3-based machine vision camera, which was in turn exchanged for the HQ sensor (IMX477). I am still using this sensor, mostly because over time I ended up with nearly total control over how the image is captured (the “imx477_scientific.json” tuning file was created by me). In this respect, be aware that most raw converters actually use the color matrices embedded in the .dng-file for development - which, in the case of the HQ sensor, are set indirectly by the tuning file you are using. So the tuning file is important even if you only capture raw. (That is, in my view, somewhat counter-intuitive…)

I can (and do) pipe the .dng-files created by my picamera2 software directly into DaVinci Resolve. Canon natively creates .cr3-files. I do not know what color science is actually used when developing this raw data with something other than Canon’s software package (probably some approximation Adobe came up with?). To my knowledge, there is no way to use .cr3-files directly in DaVinci (yet).

So…, what works best for you will depend on a lot of factors, including the compromises you are willing to accept along the way.
