Pi HQ Camera vs DSLR image fidelity

Hello everyone,

I’ve been scanning a few films, DIY-ing this approach https://www.film-digital.com/ using my 5DII and a cheap microscope lens. The individual frames turned out beautifully, but oftentimes the film would become unsteady and create double exposures or other artifacts that couldn’t be fixed in post, so I decided to build my own frame-by-frame scanner.

Now I’m sitting here with a Pi 5, HQ camera, and a Schneider 50/2.8, and am trying to figure out what the PiCamera’s limits are, because with every configuration I’ve tested, I can’t get the same fidelity as with the 5DII.


Top: Canon, taken with an old 35/2 lens and macro extension rings. It’s the lens I had attached to the PiCamera before getting the Schneider, thinking it didn’t perform well enough. Mounted on the Canon it’s slightly softer than the microscope lens, but still really nice.

Bottom: Pi with the Schneider @5.6, raw image at half native resolution (resized full resolution is maybe getting a tad closer, but not “quarter the storage space” closer).

Both crops are zoomed to 100%.

Neither image was taken under ideal conditions - everything’s still a bit wobbly (I’m focusing with a helicoid adapter and still constantly swapping things around) - but I haven’t yet managed to take a single PiCamera image that comes close to the natural grainy crispness of the DSLR.

I’m still hoping that it’s not the sensor but the config I’ve set for the camera. I’m using the scientific tuning file at a fixed gain and exposure, and have turned off noise reduction - if that even applies to raw images. Sometimes I think my preview looks a little crisper than the resulting image, but that’s probably because it’s resized to fit my monitor.

Please help me figure out if it’s possible to get the same result from the PiCamera :slight_smile:.

…Meanwhile I’ll be looking into getting a zoomed in preview window for better focusing…

2 Likes

just a quick remark: do not use the “half native resolution”, as it is noticeably worse than a scaled-down “full resolution” image. This has been the subject of several discussions here on the forum, back when the supporting software was still based on picamera, not picamera2. With the old picamera lib, “mode 2” was much inferior to a rescaled “mode 3”.

Here’s an old comparison between these modes

from this discussion. You definitely need to view the above image at full resolution to see the differences between the half (“mode 2”) and full (“mode 3”) resolution modes of the HQ sensor in their full glory.

Still, looking at the dirt particles in your example, there seems to be quite some softness with the HQ/Schneider combination, compared to the DSLR. From my experience, setting focus with these small formats is quite challenging.

Your DSLR capture seems to have a slight magenta tint…

1 Like

Thanks for the reply.
The images I posted are screengrabs from the actual raw images that I copy-pasted into the editor. Looks like they got badly compressed during upload.

Is there a better way to supply images? I’ve set up a preview window with a zoom feature to make focusing easier, and it makes a bit of a difference.

The top-most image is from the Canon @100% zoom. The others are from the PiCamera at full and half raw resolution, resized for comparison in PS. The difference isn’t as big as in the first two images, admittedly (ignore the slight differences in color - I just quickly raw-edited the colors to make the images look at least similar).

hmmm. Didn’t realize that you are working with raw captures.

While an f-stop of 5.6 seems to be a sweet spot with the Schneider - have you tried slightly higher f-stops? I think I use a setting between f/5.6 and f/8.

Remember that the tens of thousands of captures you are going to take when digitizing S8 footage will not treat your DSLR shutter nicely.

In any case, an interesting comparison. The S8 image (whatever the capture method) looks very good - do you know by any chance which S8 camera/lens was used?

Image quality also depends on the type of light source you are using. I assume the light source is the same in all of your captures. But generally, directed light enhances micro-contrast (= “sharper images”) compared to more diffuse light setups. I am using an integrating sphere, the most diffuse light source you can have.

The difference in image quality might not really matter once you are progressing toward a final product, which tends to have at max a resolution of 1440 x 1080 or so for S8 material. And even that resolution is hard to achieve with certain camera/film stock combinations. Of course, the better the source material, the better the end result, usually…

1 Like

I thought the sweet spot for both sharpness and resolution was f/4.7?

I know of two cameras my father used, a Nizo S48-2 and a Bauer C5 XL Makro. So it has to be one of those.

The light source is the same, yes. Some dimmable cool-white COB LED with a diffusing sheet between light and film. I think I’m satisfied with it (used it before inside the projector).

I’ll give different f-stops a try. I just got the lens two days ago. But it does look like my issue is with the sensor. The noise on the HQ cam looks kind of “digital” to me, while the noise from the Canon feels more analog/organic and I can’t unsee it… :upside_down_face: If this is something I might be able to tweak, let me know!

You can use Magic Lantern’s silent mode to take shutterless photos, so I’m not worried that much, but I wanted to have the DSLR available for regular photography (it also only reasonably manages a resolution of about 1800x1200 in that mode, which does scale up nicely if needed, but still…). Unfortunately, at the moment I not only like the Canon images better, they’re also way more storage efficient at about 1 MB per frame/dng after converting. Guess I need a little more thinking…

I thought the sweet spot for both sharpness and resolution was f/4.7?

Maybe it behaves differently depending on sensor size. I think the lens is usually reviewed on full-frame sensors.
I wanted to use the above-mentioned microscope lens on the HQ camera, but the image was absolutely useless at 1x magnification (instead of about 4x with the Canon). Maybe the long tube worked like stopping down or something. With the Pi camera, the image is only sharp at the center and has terrible glare at the edges.

One nice workaround for digital noise that isn’t too onerous - assuming you’re holding the film steady enough - is to take a few exposures and simply average them together. A film scanner has an advantage you never get with live subjects: the things we’re taking pictures of sit perfectly still!

This post along with my next one in that same topic show a little data for the method.

The graph in the second post was taken back before my film transport could hold things under tension. I kind of want to repeat the experiment now; I suspect it’d be a much smoother curve. Even with that noisy data, it’s hard to argue against 3-5 averaged frames for squeezing another bit or two of depth out of the signal.

In my own results it completely eliminated that feeling of digital noise in the image. Now the only discernible noise is from the original grain in the film.
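
For anyone who wants to try this, here’s a minimal sketch of the idea using picamera2 and numpy (illustrative only, not the exact code from my scanner): capture a handful of identical exposures of the unmoving frame and average them.

import numpy as np
from picamera2 import Picamera2

picam2 = Picamera2()
picam2.configure(picam2.create_still_configuration())
picam2.start()

N = 4                                              # 3-5 frames already help noticeably
frames = [picam2.capture_array("main").astype(np.float32) for _ in range(N)]
averaged = np.mean(frames, axis=0)                 # keep the result in float to retain the extra depth
picam2.stop()

The same idea works on the raw Bayer data before demosaicing, which is where the extra bit of depth really pays off.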

… not in my setup (and use case). First, the graphs you linked to mention that the data refers to “Testing done at 1.36:1 magnification”. Due to the sensor size/setup I am using, the Schneider is operated at 1:1 magnification. Secondly, my scanner is rather bad at keeping film gate and sensor parallel, so a smaller aperture (= larger depth of field) gives me a little bit more headroom with respect to this misalignment.

Thanks for this information! I know other footage from the Bauer camera, and it matches your scans. This was a very good camera in its day.

There have been a lot of discussions here on the forum and elsewhere about the HQ sensor. The noise behavior is a little bit strange (more noise on one side of the sensor, for example). In my scans, the film grain typically overwhelms the sensor’s noise level by far. A good exposure can help here. For simplicity, I usually set the exposure for raw capture in such a way that the light level of the empty gate (or the sprocket hole) just touches the highest bit values the raw can encode. Better would be to use the brightest spots in your footage for that adjustment - that would give you about 5-10% more headroom. The drawback is that you need to do this for every film type you encounter. As I have footage where different film types are intermixed, I usually do not bother to optimize exposure.
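
If you want to check numerically how close a capture comes to that point, here is a quick sketch using the rawpy library (the filename is only a placeholder):

import numpy as np
import rawpy

with rawpy.imread("frame_0001.dng") as raw:        # placeholder filename
    data  = raw.raw_image_visible
    white = raw.white_level                        # 4095 for a 12 bit HQ capture
    print("max raw value:", data.max())
    print("fraction of samples at the white level:", float(np.mean(data >= white)))

If even the sprocket hole stays well below the white level, there is exposure headroom left.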

Incidentally, in the linked post there is also a link to a more challenging .dng file from my scanner (HQ/Schneider combo). You might want to compare this scan, taken with my setup, with your own scans.

Currently, I am employing spatio-temporal processing techniques which remove film grain and, along with it, digital noise. That’s another option to get rid of camera noise.

Another option is, of course, averaging multiple exposures, as @npiegdon suggested. It is also feasible to take multiple exposures with sufficient overlap in the image intensities and combine them appropriately. In either case, you will end up with an image exceeding the initial dynamic range of 12 bit for the HQ sensor (IMX477). That is: you will get an image with noticeably less noise, at the expense of a longer scanning time.
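
As a rough illustration of that second idea (my own sketch, not the exact processing I use): scale each linear raw exposure to a common reference and average only the unclipped samples.

import numpy as np

def merge_exposure_stack(frames, exposure_times, white_level=4095.0):
    # frames: linear raw arrays (black level already subtracted), exposure_times in matching order
    t_ref  = max(exposure_times)
    acc    = np.zeros_like(frames[0], dtype=np.float64)
    weight = np.zeros_like(acc)
    for img, t in zip(frames, exposure_times):
        valid   = img < 0.98 * white_level         # skip (nearly) clipped samples
        acc    += np.where(valid, img * (t_ref / t), 0.0)
        weight += valid
    return acc / np.maximum(weight, 1)             # linear image exceeding the 12 bit input range

Highlights that clip in the long exposure are then filled in from the shorter ones.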

Speaking of dynamic range: your Canon should be operating at 14 bit instead of the 12 bit the HQ sensor natively features. Obviously, there is a small price difference between the two camera setups which should show up somewhere, I guess.

Continuing with the topic “dynamic range”: you did not specify in detail how you captured your footage. Now, if you are using the picamera2 lib for capturing raw files (.dng), you have to be very careful which format you are requesting from the sensor. The reason is hilarious: if you just request the highest resolution from the sensor, the pipeline works in a compressed format (at least with the RP5 I am using), and the data is converted back internally to an uncompressed raw for saving as a .dng file. More details here. The use of such a compressed format (and the recoding to uncompressed) could introduce additional digital noise, but I have not bothered to check this.

Well, what works best for you will depend heavily on your use case. I summarized my journey so far in this thread. Actually, my journey started with Sony cameras, which were replaced by the much cheaper v1/v2 sensors from the Raspberry Pi people, only to be substituted by a USB3-based machine vision camera, which was in turn exchanged later for the HQ sensor (IMX477). I am still using this sensor, mostly because over time I ended up with nearly total control over how the image is captured on this sensor (the “imx477_scientific.json” tuning file was created by me). In this respect, be aware that most raw converters actually use the color matrices embedded in the .dng file for development - which in the case of the HQ sensor are set indirectly by the tuning file you are using. So the tuning file is important, even if you only capture raw. (That is, in my view, somewhat counter-intuitive…)
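
For completeness, selecting the tuning file explicitly with picamera2 looks like this (a minimal sketch; the scientific tuning file ships with recent libcamera/picamera2 releases):

from picamera2 import Picamera2

# load the scientific tuning file so that its color matrices end up in the .dng metadata
tuning = Picamera2.load_tuning_file("imx477_scientific.json")
picam2 = Picamera2(tuning=tuning)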

I can (and do) pipe the .dng files created by my picamera2 software directly into DaVinci Resolve. Canon natively creates .cr3 files. I do not know what color science is actually used when developing this raw data with something other than the Canon software package (probably some approximation Adobe came up with?). To my knowledge, there is no way to use .cr3 files directly in DaVinci (yet).

So…, what works best for you will depend on a lot of factors, including the compromises you are willing to accept along the way.

1 Like

Okay, that’s a lot to digest. I have no idea if I’m doing things right.

I have tested out the opencv_mertens_merge.py from the documentation and the first results aren’t that great (e.g. blown out highlights, no visible change in noise…). Also, the script takes about 10 seconds to merge three images at full resolution. Is that just how long it takes, or is it just a bad example?
Which settings do I need to look out for to get a good result?
I’ve been reading through the linked posts, but I’m not yet at a point at which I understand what everything means.

Also, I’ve noticed this line in said script, and remember reading about DigitalGain elsewhere in the forums:

gain = metadata["AnalogueGain"] * metadata["DigitalGain"]

What’s the relationship between digital and analog gain? There seems to be a way to adjust ExposureTime and AnalogueGain so that DigitalGain is 1.0, but I couldn’t manage to figure it out. From what I understood, anything above 1.0 might introduce some digital noise into the image?

How can I tell whether my raw file is compressed or not? I’m also using a Pi 5 with the picamera2 lib. I tested setting the raw format to SRGGB12 as suggested, but couldn’t tell if there was a difference… There’s no difference in capture time or file size at least… but maybe that’s because I’m using a microSD card that can’t handle those write speeds?

Also, I’m extremely confused about the various camera configurations. From what I understand it’s irrelevant which one I choose, as they’re just templates for typical use, right? Or is there more behind it?

About my Canon workflow: I recorded directly “into” the projector using the Magic Lantern firmware to take raw-format video at specific fps and exposure times. Then I converted that video format to individual dng files for cutting and grading in DaVinci. The 5DII is old and the CF cards’ write speed limits the size of the recording, so I could only reliably record at 10-bit color depth and 18 or 24 fps at most. There were a few other drawbacks, like not capturing the entire frame. And whenever the perforation was bad or the film had a cut, I would get double exposures or motion blur, because the film would get unsteady for a bit. But the overall look and feel of those frames is/was very pleasant. By now I’m thinking I should’ve put the money I spent on scanner parts into a more recent DSLR :joy:.

I’ve spent way too much time today toying around with different settings and getting more and more confused by the results. At the moment I’m not even sure the Schneider is better than the 35/2 Canon, which it should be… If you wanted the most “raw” result from the camera, which settings would you choose, and how can I tell whether they were actually applied (i.e., is there a “wrong” moment for printing out metadata)?

To me, the lack of sharpness in the HQ camera image looks a lot like the f-stop problem I’ve had with my own build. Basically, I found the Schneider lens to only be truly sharp at f/4.7 (as mentioned by @npiegdon). All other apertures appear slightly blurry, at least in my particular setup.

At f/4.7 I find the combination of the Schneider and the HQ camera delivers pretty nice images. I can see the slightly “digital”-feeling noise as well, but I find it is pretty much gone with the default denoising settings in Adobe Lightroom.

1 Like

Can someone post an example of what that “digital” noise would look like? Most noise in image sensors is of the analog type. Granted, the “raw” image presented to the user as raw sensor data is nowadays also processed digitally. For example, the HQ sensor (IMX477) features a switchable defect pixel correction (DPC):

There was a time when this feature had four choices to select from, but nowadays you can only switch it on or off.

But I doubt that this is the “digital noise” @verlakasalt and @jankaiser are talking about.

In any case, nowadays basically every sensor chip does some image processing, and the “raw” image is not as raw as one might expect.

Which brings me to this topic:

Well, all .dng files created by the picamera2 software and written to disk are certainly uncompressed. The issue is that the RP guys introduced “visually lossless” image formats when the RP5 surfaced, and these are the default formats libcamera/picamera2 use when capturing images. The very weird thing happening under the hood when you request a .dng file from picamera2 is the following: the “visually lossless” compressed image is decoded into a full uncompressed image, which is then stored as a .dng raw. I discovered that weird design choice by parsing the picamera2 code, trying to find out why the .dng file creation was taking about 1 sec on the RP5. As far as I know it’s not explicitly stated in any of the RP documentation. @dgalland confirmed my suspicions in this thread here.

That was some time ago and the picamera2 lib has seen some updates. So I am not sure whether this is still an issue.

In any case: the important point is that you want the frontend of the libcamera software to operate with real, uncompressed raw data. Here’s a quick (partial) snippet of my initialization code which achieves this goal:

import libcamera

# image orientation on startup
hFlip = 0
vFlip = 0

# raw modes for the HQ camera (IMX477)
rawModes   = [{"size": (1332,  990), "format": "SRGGB10"},
              {"size": (2028, 1080), "format": "SRGGB12"},
              {"size": (2028, 1520), "format": "SRGGB12"},
              {"size": (4056, 3040), "format": "SRGGB12"}]

# default modes
mode       = 3                  # full resolution, 12 bit raw
noiseMode  = 0                  # noise reduction off

# ... later, inside a class wrapping Picamera2:
    logStatus('Creating still configuration')
    config = self.create_still_configuration()

    config['buffer_count'] = 4
    config['queue']        = True
    config['raw']          = self.rawModes[self.mode]
    config['main']         = {"size": self.rawModes[self.mode]['size'], 'format': 'RGB888'}
    config['transform']    = libcamera.Transform(hflip=self.hFlip, vflip=self.vFlip)
    config['controls']['NoiseReductionMode']  = self.noiseMode
    config['controls']['FrameDurationLimits'] = (100, 32000000)
As you can see, the startup mode selected is 4056 x 3040 px with format SRGGB12. You can check that this mode is in fact selected by monitoring the debug messages of libcamera - one of the last lines of the message block issued right after camera initialization will tell you what mode libcamera actually selected. The format selection is a complicated process and might fail, depending on the libcamera version. In my software, the relevant debug messages from libcamera/picamera2 look like this:

[0:02:32.329027649] [2233]  INFO Camera camera.cpp:1183 configuring streams: (0) 4056x3040-RGB888 (1) 4056x3040-SGRBG16
[0:02:32.329162538] [2264]  INFO RPI pisp.cpp:1450 Sensor: /base/axi/pcie@120000/rp1/i2c@88000/imx477@1a - Selected sensor format: 4056x3040-SGRBG12_1X12 - Selected CFE format: 4056x3040-GR16

It’s the 4056x3040-SGRBG12_1X12 sensor format which you want.
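
If you do not want to dig through the libcamera log every time, a quick sanity check (my suggestion, assuming a reasonably recent picamera2) is to print what was actually configured:

# after picam2.configure(config)
cfg = picam2.camera_configuration()
print("raw stream :", cfg["raw"])                  # format/size of the raw stream handed to the application
print("sensor mode:", cfg.get("sensor"))           # selected sensor mode (size/bit depth), if your version reports it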

Well, you cannot switch the DPC on or off within the picamera2 context; you need to issue a command line as discussed above. Note that the above code snippet deactivates any noise processing via the line

config['controls']['NoiseReductionMode']  = self.noiseMode

as self.noiseMode is equal to 0.

Citing myself here…

Well, you probably want to update the link to the script, here’s the correct one.

That script is probably not something you really want to use in your film scanner project. It just gives you an idea of how to achieve something, but it is far from optimal. There is actually a lot to say about this question. I will try to be brief…

First, the example script you are using does not take into account that the opencv implementation of exposure fusion will create intensity values less than zero and larger than one. That is, the normalizing step is wrong. There is a discussion about that here on the forum, but I don’t have that topic at hand right now.
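
One simple option (a sketch of my own, not the script’s actual code) is to clip the fused result instead of rescaling it to its min/max:

import cv2
import numpy as np

def fuse_exposures(images):
    # images: list of 8-bit exposures of the same frame, shortest to longest
    merge = cv2.createMergeMertens()
    fused = merge.process(images)                  # float32, roughly - but not exactly - within [0, 1]
    fused = np.clip(fused, 0.0, 1.0)               # clip instead of min/max rescaling, which shifts overall contrast
    return (fused * 65535 + 0.5).astype(np.uint16) # 16 bit intermediate for further processing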

Taking a step back, there have been a lot of discussions here on the forum about which way of capturing S8 footage is the best. The issue here: typical S8 footage requires about 14 bits of dynamic range (if exposure is chosen carefully), and that is more than a single raw capture with the HQ sensor can supply. Your Canon, by the way, could do that.

That was the reason I initially scanned my footage with a stack of carefully chosen different exposures and combined them with a script implementing the Mertens exposure fusion algorithm myself (not relying on the opencv implementation). My script is a little bit faster than the opencv implementation and yields different results (the implementation is different, and it also subpixel-aligns the different exposures). However, I lately switched to taking just a single raw image of each frame instead of an exposure stack. Again, browse the forum for the pros and cons. In short: for most footage, the 12 bits of the HQ camera are more than sufficient. In the extreme case of high-contrast images, you will keep the shadows dim anyway, reducing the noise print there. If you employ a noise reduction scheme, the advantage of capturing multiple exposures is only minor. On the other hand, you gain substantially in scanning speed. So: I started in the HDR camp and ended up in the single raw capture camp.

There are some other areas where the exposure fusion algorithm performs better, as it intrinsically performs a local contrast equalisation. That helps with image detail and with bad exposures by the S8 camera used in recording, but I decided that this does not outweigh the scanning speed improvement (if you closely analyse the example video I posted here, you should notice the difference between exposure fusion and single raw capture).

There is “more behind it”. For starters, a still configuration selects different defaults than a video configuration. If you look at my code snippet above, I select a still configuration to start with and then modify it. Most importantly, I increase the number of buffers to 4, among other things. Only if you increase the buffer count can you be sure that there will always be a buffer available when requested. That is actually the default value for video configurations, and it increases your capture speed.

As you can see, the HQ sensor is quite a complicated beast under the hood. Some of the options available with the sensor are not exposed by the libcamera/RP guys. But some stuff, for example the whole color science, is for the first time ever accessible to the user. On the other hand, what your Canon does in terms of image improvements is probably unknown to the world. And certainly not at your command. However, if you are more pleased with the Canon results, why bother with the HQ sensor?

From my experience, neither the sharpness of the input into your processing pipeline nor the color quality of your initial scan actually matters too much. At least not if your main goal is to resurrect historical imagery for today’s audience. You will always heavily postprocess your footage, deviating substantially from the quality/appearance of your initial scan.

Of course, if your goal is to archive fading S8 footage as faithfully as possible (but: the most-used Kodachrome film stock does not really fade…) in order to have the possibility of revisiting your interpretation of the footage a few years from now (with presumably better post-processing features available), you might put more weight on the best raw capture you can achieve.

2 Likes

I’ll reply to everything else later (very grateful to all of you taking the time :slight_smile: ), but two quick questions:

@jankaiser: Do you currently have the Schneider reverse-mounted? Did it make a difference in image quality?

I’m suspecting that I just can’t manage to get a genuinely crisp image from the PiCamera. I don’t have a proper test print, so I just put something with fine print in front of it (angled badly…). schneider-SRGGB12-S-1725106348.5086498.dng - Google Drive
Now, maybe a very silly question: Do I have to adjust the Pi HQ camera’s back focus ring in any way if I’m using a helicoid adapter to focus?

@cpixip What I perceive as “digital” noise is kind of blotchy, colored and a little like jpeg artifacts, while the Canon noise doesn’t have any color, it’s more like fine sand. Maybe I can find a good example.

I think you have a lot of answers in the posts above. I’d just like to stress the importance of good focus. In my opinion, the helical adapter is not suitable for adjusting the focus; rather, it determines the magnification ratio, which must remain fixed (it’s almost equal to 1 for the HQ sensor and S8). The back focus ring is useless; what matters for determining the magnification is the distance between the center of the lens and the sensor. After that, focus is achieved by moving the camera, and a micrometric table seems essential. As for the Schneider, it seems that with a magnification close to 1, there’s no difference between direct and reverse mounting.

3 Likes

I think @dgalland is correct here. Lenses are calculated and optimized for typical use cases. A normal photographic lens is usually optimized in such a way that the lens is positioned approximately one focal length away from the image sensor, imaging objects several meters away from the camera. So the front section of the lens expects objects several meters away, while the back section of a typical photographic lens assumes that it is operating at a much smaller distance, approximately one focal length. (I am simplifying here a little bit.)

If you mis-use such a photographic lens as a macro lens, the whole lens optimization obviously is no longer correct. Actually, the value ranges of front and back distances are kind of swapped in such a macro setup.

To remedy this a little bit, it is common practice to reverse-mount the lens. In this way, the sensor side of the lens operates closer to what it was designed for (near one focal length). The object side of the lens also operates closer to the design spec - usually at a distance of one to two focal lengths, not the several meters the lens was optimized for, but still better than the other way around. That’s the whole story of reverse-mounting standard lenses mis-used as macro lenses.

Now, the Schneider is an enlarger lens. That is, the front side of the lens is not optimized for an imaging distance of several meters (more in the range of 50 cm or so). And the optimal back-side distance is already longer than the 50 mm focal length of the Schneider (at 50 mm distance, the enlarger’s image would be sharp only at a few meters distance, clearly not the usual use case of an enlarger lens). So reverse-mounting that lens is probably rather unnecessary.

That image looks ok to me, sharpness-wise. One can spot some chromatic aberration in the corners of the image, but otherwise…

Here’s a recent scan of S8 footage with my HQ/Schneider combo for you, developed in RawTherapee without any manual modification. The default “Capture Sharpening” was turned off

(you will need to download the image to view it at the original resolution).

Continuing, the following shows a cutout of your image, developed in RawTherapee with deactivated “Capture Sharpening”, each original pixel enlarged 4x and pushed one EV upwards:

Well, there are color blobs noticeable in the white paper regions - is that the “digital” noise you are referring to?

I suspect this color noise is caused by the rather tiny size of the HQ sensor - your Canon instead has a full-frame sensor. That is, a sensor of size 7.6 x 5.5 mm competes here with a sensor of size 36.0 x 24.0 mm.

In your image, especially the red color channel seems to have a rather large noise level. I found red noise > blue noise > green noise. That is somewhat expected, as there are two green channels which are more or less averaged to yield the final green channel.

You might get better performance with the HQ sensor if you increase overall exposure; your “schneider” example can be pushed 1.5 EV upwards without clipping. And you might want to look into your light source as well. Maybe you are using one which does not have enough spectral content in the reds.

A hint would be the red and blue gains when running the HQ sensor. In my scanner, the default gains are red = 2.89 and blue = 2.1. The image of the blank leader tape above was taken with these gains. It would be interesting to compare this with your gains.
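
With a running Picamera2 instance you can read them straight from the capture metadata (small sketch):

metadata = picam2.capture_metadata()
red_gain, blue_gain = metadata["ColourGains"]      # the AWB gains actually applied to this capture
print(f"red gain: {red_gain:.2f}, blue gain: {blue_gain:.2f}")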

1 Like

@verlakasalt My Schneider is mounted forward.

@cpixip I don’t have an image ready at the moment that shows the noise, but the one you posted kind of shows it. I would describe it as somewhat blocky, mostly colour noise that is visibly smaller than the film grain … like the kind of look you get from an old phone camera.

1 Like

The HQ sensor (IMX477) is a “consumer camcorder” chip… :innocent:

the HQ sensor (IMX477) features a switchable defect pixel correction (DPC) :…
But I doubt that this is the “digital noise” @verlakasalt and @jankaiser are talking about.

No, but that’s interesting to know as well.

Citing myself here…

Hmm. There was another thread that made me think I could calculate a “natural” HQ cam exposure time close to my desired exposure time so that digital gain becomes 1.0, but I couldn’t figure it out. I guess I’d need to run auto-exposure first and then set that value as my fixed exposure?

A hint would be the red and blue gains when running the HQ sensor. In my scanner, the default gains are red = 2.89 and blue = 2.1. The image of the blank leader tape above was taken with these gains. It would be interesting to compare this with your gains.

Ah! Currently most of my adapters are attached to the Canon, so I can’t get the exact value from the Pi, but that image was captured at 6500 K, which was the color temperature I used for recording from the projector, and I think it translates to gains of roughly 3.4 and 1.7 - so definitely on the red side.
So much stuff to adjust… I’ll definitely take another look at my light source.

I’ve already spent a lot of time in your other threads regarding image quality and HDR :-).

However, if you are more pleased with the Canon results, why bother with the HQ sensor?

Good question. Mostly I wanted to free up the Canon for other things, and I had hoped to be able to work at a higher resolution than what my old Canon can do. Shutterless photos only work well at the available movie resolutions, and (with the 5D2) that limits the pixel count to something around 1080p.

I think if the full-size raw images from the Pi Camera weren’t 24MB each, I wouldn’t be overthinking this so much.

I think you have a lot of answers in the posts above. I’d just like to stress the importance of a good focus. In my opinion, the helical adapter is not suitable for adjusting the focus, rather it determines the magnification ratio, which must remain fixed (it’s almost equal to 1 for the HQ and S8).

I’ve discovered that when I mount the camera in a way that the lens stays at a fixed distance and only the camera moves back and forth, I’m actually changing focus, not magnification. Or… at least it looks and feels that way. If the camera stays fixed and the lens moves back and forth, it’s clearly the other way around. The only issue is, with the helical adapter I don’t have any reference point. I’m just twisting that thing until the image looks sharpest…

That image looks ok for me, sharpness wise.

Thanks for the assessment!

There are color blobs noticeable - is that the “digital” noise you are referring to? Otherwise, the sharpness of the image looks ok to me given the hardware you used for capturing this image.

Yes. I’d say that’s it. Also, in the images I posted earlier, the sky in the first one is just “grainy”, while the second one is more “swirly-blotchy”. Very scientific :-).

Well, maybe your Canon pulls some image processing out of its sleeve that happens before the raw data is output. Another issue is certainly the sensor size: the Canon (I think this is a full-frame sensor?) must have a lower noise level than the rather small HQ sensor.

I checked and, indeed, Magic Lantern does 3x3 pixel binning for their raw video format. I thought they only used part of the sensor. However, the image I posted above is a 100% cutout from a much larger resolution image, so I figured it could be comparable.

@jankaiser I took a look at “your build” and you mention your raw files take up only 18 MB of storage. Mine are always 24 MB. What’s the difference? Also, if you’re editing in Lightroom - what happens in Apple Compressor? Is it able to read the changes you made to the raw files?

Anyway. I’ve edited four clips just to get a better feel for things myself. Hasn’t helped yet, but they might show… something, I guess…

https://drive.google.com/drive/folders/1iT-17EFZl-d7H4SXIUENX-OIf7-ZEpev?usp=sharing

01 is a live Canon recording (10bit, 1856x1238 raw at the time). You can see the double exposures I mentioned. Also, at one point you can see how I’m live adjusting focus…
02 is 01 but with a bit of noise reduction and sharpening.
03 is the same scene from my scanner (HQ cam 1/2 raw) - first time scanning a full reel with a Canon 35/2 lens. Focus turned out to be not perfect.
04 is 03 but with a bit of noise reduction and sharpening. Doesn’t look half bad either.

That particular film (Agfa, from 1982) was btw. littered with this stuff: Filmrestauration des Agfa Moviechrome Super 8 - Schmalfilm - Filmvorführer.de. I “cleaned” it with distilled water, which made the white bloom invisible (you can still see the craters it left), but the environment was not the best, so now there’s dust everywhere…

This discussion has kind of gone all over the place, but it’s been really helpful for me in figuring out which camera I want to go ahead with. I’m absolutely annoyed at the fact that I can compare the Pi captures to the Canon captures :joy:. I think if that weren’t the case I’d be perfectly pleased with what the PiCamera delivers. But as it is, I can either have PiCamera convenience with 1/2 raw at “slightly worse quality” taking up about three times the storage space, or “comparable, possibly better quality” with 1/1 raw taking up about 12 times the storage space (each after running through Adobe DNG Converter). …Or I could just go back to the DSLR… Argh :slight_smile:

which has been discussed here on the forum as well :wink:

Which brings me to your example captures - comparing only 01 Canon vs 03 HQ (both without too much processing).

First: did you change your illumination source between captures? In the 01 Canon capture you can see faint image structures in the sky (what you call “craters”). These are actually structures caused by the mold developing on this old Agfachrome film stock. These structures are no longer visible in the 03 HQ video. If you were using an integrating sphere (= completely diffuse illumination), these structures would not be visible at all.

Secondly, your 03 HQ captures are certainly underexposed. You can see this in the sprocket hole intensities, where the RGB values are 252/254/254. The sprocket hole should have exposure values of 100% in the raw file, nothing less. In fact, every film material reduces light intensity, so the brightest spot you will encounter in your scan will be a featureless highlight. And it will be captured faithfully if your raw exposure maps this image intensity to 100%. Of course, the sprocket hole will burn out, but who cares?

From my experience, this gives you an additional exposure increase of 5-10%. And increasing your exposure time will reduce the noise level of your captures.

To recap my approach: I first set the gain to 1.0. I never use auto-exposure; I set the exposure time to a fixed value. Then I increase my light source intensity up to the point where either the sprocket hole is imaged at 100% raw intensity (when I am lazy, which is probably the standard), or I select an image area that is burned out in the source image and set its raw image intensity to something close to 98-99% (which I only do when I capture single 15 m rolls, which I rarely do). That should give you brighter images than the ones in your examples.

If you do not have an adjustable illumination source, you can adjust the image intensities by simply changing your exposure time appropriately. With digital sensors, always expose to the highlights, nothing else. And always check that - after waiting a few frames - the digital gain stays at 1.0. You do not want any other value.
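
A small sketch of that check with picamera2 (my own example; the exposure time is just a placeholder value):

from picamera2 import Picamera2

picam2 = Picamera2()
picam2.configure(picam2.create_still_configuration())
picam2.set_controls({"AeEnable": False, "AnalogueGain": 1.0, "ExposureTime": 8000})   # fixed gain and exposure (8 ms here)
picam2.start()

for _ in range(5):                                 # let the pipeline settle for a few frames
    md = picam2.capture_metadata()
print("AnalogueGain:", md["AnalogueGain"], "DigitalGain:", md["DigitalGain"])
# DigitalGain should read 1.0 (or very close); if not, nudge ExposureTime until it does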

Please note that the .dng files the HQ/Schneider combo would produce can be read directly into DaVinci Resolve. You have all the important options available in the Raw module on DaVinci’s Color page, including shadow and highlight, as well as highlight recovery (check this one). Also, if that fits into your workflow, you can change color temperature and tint. Of course, exposure correction is also available. Well, you will need a very fast disk for that approach to be fast enough to be fun to work with. However, proxies can save your day here.

1 Like

This is not a discovery, but the result of optical calculations!
The distance between lens and sensor determines the magnification.
Super8 frame: 5.79 x 4.01 mm (projector window removed)
IMX477 sensor: 6.29 x 4.70 mm
Magnification: 1.0858
Focal length: 50 mm
The optical calculation gives
Lens (optical center) to image distance: 50 × 1.0858 + 50 = 104.29 mm
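
The same numbers, cross-checked with the thin-lens relation (just my arithmetic, written out in Python):

f = 50.0                           # focal length in mm
m = 1.0858                         # magnification (sensor width / S8 frame width)

image_distance  = f * (1 + m)      # lens (optical center) to sensor: about 104.3 mm
object_distance = f * (1 + 1 / m)  # lens (optical center) to film frame: about 96 mm
print(round(image_distance, 2), round(object_distance, 2))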

1 Like