Strange encounter in the red channel/RPi HQ Camera

Yep, it was a pain for sure. I wish there were somewhere you could get 8mm IT8 targets (though the grain noise at that scale would hurt its overall usefulness)…

With as much color correction as I’m applying after the fact (even to the Kodachrome footage), it’s a bit of a moot point. The calibration gets everything into the right ballpark and I edit from there. Again, if there’s slight metamerism going on between the film stocks, it’s imperceptible to me.

Here’s an example of nicely exposed Kodachrome with no fading that I’m still adjusting to taste for things like white balance. The left is straight off the scanner using the IT8 calibration. The right is closer to the final image I’d like my audience to see. (The original seemed a little too blue/cool.)


nice scan! :+1: :sunglasses:

I’ve looked through your posts as well as the code you’ve linked and I think I still need a few pointers on what to look for exactly.

How did you determine the above values? Which values/levels do I need to observe in order to avoid clipping? At which point do I retrieve those values? From the dng? Or from the array before the dng? Can I find them among the data the linked code puts out? How do I put all this information to use? :slight_smile:

Edit: To frame this a bit: what I understand is that I need to find a level of illumination/exposure that avoids values below a certain level (e.g. 250 within a 12-bit range). Right now I only know how to create a histogram based on an 8-bit PNG (in Python). I'm only now grasping the meaning of "linear DNG" - if my DNG is linear (is it?), that would mean I couldn't go below a value of about 16 in the black/dark areas when represented as an 8-bit image. But I've seen frames with "blacks" at sRGB values between 0 and 3 without that particular type of noise banding. What? I'm constantly in a state of thinking that I've understood what's going on, but then I feel like I haven't understood anything at all.

Edit 2: I've separated the channels for my 8-bit histogram, and it appears that large areas with red values below 5 within the image area result in the above case of noise banding. (A few speckles don't seem to be relevant.) I'm going to test further whether this is consistent for my setup. It probably changes when gain values change…
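A minimal sketch of that per-channel check - assuming an 8-bit PNG and Pillow/numpy; the file name and the threshold of 5 are just placeholders taken from the observation above:

    import numpy as np
    from PIL import Image

    # Hypothetical frame exported as an 8-bit PNG; the path is a placeholder.
    img = np.asarray(Image.open("frame_0001.png").convert("RGB"))

    threshold = 5  # the red-channel level mentioned above; adjust for your setup
    for i, name in enumerate(("red", "green", "blue")):
        channel = img[..., i]
        hist, _ = np.histogram(channel, bins=256, range=(0, 256))  # full 8-bit histogram, if needed
        fraction_below = (channel < threshold).mean()
        print(f"{name:5s}: min {channel.min():3d}   "
              f"pixels below {threshold}: {fraction_below:.2%}")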

If you run for example the script I posted above on one of the raw files I posted as well, you will get the following output:

Set raw file to analyse ['I:\xp24_full.dng'] > 

Image:  I:\xp24_full.dng

Camera Levels
_______________
             Black Level   :  256.0
             White Level   :  4095.0

Full Frame Data
_______________
             Minimum red   :  77.0
             Maximum red   :  4095.0

             Minimum green :  167.5
             Maximum green :  4095.0

             Minimum blue  :  162.0
             Maximum blue  :  4095.0

Center Data
_______________
             Minimum red   :  208.0
             Maximum red   :  1367.0

             Minimum green :  260.0
             Maximum green :  4057.5

             Minimum blue  :  230.0
             Maximum blue  :  2889.0

Now, the first two lines report the blacklevel and whitelevel of the data, in this case 256 and 4095. These numbers are not fixed; in the case of the HQ sensor, they depend mainly on the type of Raspberry Pi you are using. A whitelevel of 4095 indicates that something smaller than an RP5 was used.

In any case: the light intensities the sensor saw are encoded in a linear fashion in the raw data. Absolute black corresponds to the reported blacklevel, and the whitest white corresponds to the reported whitelevel. You should never find pixel intensities larger than the whitelevel. With the blacklevel, that is a different story.

The actual minimal and maximal values found in your raw file are reported in the following sections. Indeed, the largest value found in the full-frame data is listed as 4095 in all three color channels, which is identical to the reported whitelevel. So there's a good chance that there are some areas in the image which are burned out. And indeed, this is the case with this specific image.

Now, looking at the minimum values found, they all turn out to be lower than the blacklevel reported. What is happening here? Are these pixels blacker than black? In a way they are - but most probably just useless noise.

Looking at the following center-third section ("Center Data"), the situation is slightly better. All bright pixels found stay below the maximum value (the reported whitelevel); the largest intensity is in the green channel, 4057 < 4095. The darkest pixels stay above the blacklevel at least in the green channel - however, the red and blue channels are still lower than what would actually be expected. Again, that is a characteristic of the HQ sensor and the associated software, and there is nothing one can do about it - maybe treat these values as "super-black"? Your choice… :sunglasses:
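For anyone who wants to reproduce these numbers themselves, here is a minimal sketch of how they can be pulled out of a DNG with the rawpy library (the file name and the center-third crop are placeholders; the script quoted above does this more completely):

    import numpy as np
    import rawpy

    # Placeholder file name; any of the DNGs discussed above will do.
    with rawpy.imread("xp24_full.dng") as raw:
        data   = raw.raw_image_visible.astype(np.float32)
        colors = raw.raw_colors_visible          # 0=R, 1=G, 2=B, 3=G2 per Bayer site
        print("Black level:", raw.black_level_per_channel,
              " White level:", raw.white_level)

        h, w = data.shape
        center = (slice(h // 3, 2 * h // 3), slice(w // 3, 2 * w // 3))

        for idx, name in ((0, "red"), (1, "green"), (2, "blue")):
            mask = (colors == idx)
            if idx == 1:
                mask |= (colors == 3)            # include the second green Bayer site
            full = data[mask]
            ctr  = data[center][mask[center]]
            print(f"{name:5s}  full: min {full.min():7.1f} max {full.max():7.1f}"
                  f"   center: min {ctr.min():7.1f} max {ctr.max():7.1f}")

        # Normalizing to [0, 1] for further processing would then be roughly:
        # black, white = raw.black_level_per_channel[0], raw.white_level
        # norm = np.clip((data - black) / (white - black), 0.0, 1.0)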

Well, in fact, there have been a total of 5 different approaches suggested to reduce these noise stripes found in the HQ sensor. And there is of course even a sixth: get a better sensor.

For further information on how the raw data is transformed into a viewable image, I would suggest checking again the script posted here on the forum, which does a complete Python-based raw development. It contains all the secrets one wants to know.

Oh :person_facepalming: I was so fixated on @PM490's findings, I forgot about your script. I even copied and tried it… Thanks for being patient.

All the other ways to mitigate the noise require too much of a change with my setup, so I’m picking the only one I can implement with what I have.


There is more to it, as @cpixip already pointed out. The image is the leftover that the dyes didn’t eat.

The disagreement is about what the best leftover is…
On one side, there is the dye crosstalk that the film process inherently has, which is undoubtedly reflected in reversal film datasheets, including Kodachrome's. A white LED cannot reduce it.

On the other side, narrowband RGB illumination does not match the sensitivity and color response of a color camera/sensor (HQ sensor depicted below in gray), yet it allows for a reduction of dye crosstalk.
(Image: HQ Sensor response with RGB LEDs, 2024-11-11)

Yes, both approaches end in three values (additive RGB colors) that become light in some form of display so we can see the image.

I beg to disagree that there is no difference with unfaded material, since the crosstalk is part of the color image the film was designed to render. But in the end it boils down to taste and resources.

Results of faded dyes with white light? Here is an example previously posted. Note that the scene was shot with a mixture of natural light (window) and a tungsten reflector, so a lot of color correction was needed.

On a side note, the DAC PCB in my scanner was built to accept up to 7 additional LED drivers for narrow-band LEDs, for a V2 of the scanner. But first I need to redesign the sphere PCB (or PCBs) to accommodate them. In other words, I am in no way dismissing some use cases for narrow/multi-spectrum LEDs; my opinion is only that it is not necessarily the default-best path. When (if I ever get the time) I put the White+Narrow illuminant together, it will be a way to confirm my hypothesis that a WRB scanning process would have some benefits over RGB, and to compare results of White vs RGB too.

Very cool results.

@verlakasalt When capturing the DNG, make every effort to have the active image area sit slightly above black. Then, in Resolve, use Lift first to bring it back down to a darker or zero black.

@cpixip in Resolve, because of the raw-raw TIFF, I am setting the timeline to linear to match the source files. In that context lift and offset work a bit differently, and offset appears to be a straight subtraction of the channel offset. When not in linear, these two appear to do something similar, probably due to the gamma curve already in the DNG.
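Not Resolve's exact internals, but the usual CDL-style definitions illustrate the difference on linear data - offset shifts every value by the same amount, while a typical lift pivots around white:

    import numpy as np

    x = np.linspace(0.0, 1.0, 5)    # linear pixel values, black .. white

    offset = -0.05                  # straight subtraction/addition applied to every value
    print("offset:", x + offset)

    lift = -0.05                    # a common lift definition: moves black, leaves white at 1.0
    print("lift:  ", x * (1.0 - lift) + lift)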


While we were having this discussion a few days ago, I was searching around for some info and bumped into a post on some other forum that mentioned an alternative supplier of IT8 targets that I’d never heard of before: greywhitebalancecolourcard.co.uk.

They make 35mm Ektachrome slides of an IT8 target with the accompanying data files for a price similar to Wolf’s. (Their worldwide shipping is also quite reasonable!)

I knew it was a long shot–I’d already tried contacting Wolf about this in the past and never received a reply–but I used their contact form to ask whether they could produce a custom version of the 35mm slide where the calibration target was printed smaller in the center, say at 16mm or 8mm size. I admitted that at those sizes you’d be fighting an uphill battle for accuracy against the noise in the grain, but that I’d still be interested anyway.

Four days later and I just got a reply: “I can do this but it might take around 10 working days. Is that okay?”

This would reduce the burden of calibrating a sensor from hours of carefully repositioning a target to just the time it takes to get a single nice capture (and type one Argyll command line)!

I haven’t replied yet. I was wondering if I should mention that–if possible–they should make a few extra in the smaller size because I know some other people that might also be interested in the same thing? Would anyone else be interested? Or should I be the test subject first to see how useful it turns out and report my findings here?


@npiegdon Thanks for sharing this info.

My know-how on developing a color profile or using Argyll is zero. I would certainly be interested in something to test/calibrate the experimental methods.

Something more practical - at least for me - would be a target that can use the color match feature of DaVinci Resolve.

Before your posting I came across this one, but it is certainly outside my budget.

My thinking is to scan with whatever method one prefers for the source circumstances, even raw-raw (raw values without color science info), even linear values, and let Resolve do the calibration.

To that effect, I was considering creating a simulated target frame capture. Basically, to normalize the capturing process, create a TIFF with the values an ideal color target would have at the sensor (keep in mind that in my case there is an offset and more green).
While it will not be color accurate, nor will it calibrate the light, it will set up Resolve's color correction pipeline correctly for - at the minimum - offset, gamma, and gain, and then one can do the lesser adjustments for light/color balance, which one would do anyway for visual appeal.
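A rough sketch of what such a synthetic "ideal target" frame could look like - the patch values, the green bias and the black offset are made-up placeholders, and tifffile is just one way to write a 16-bit TIFF:

    import numpy as np
    import tifffile

    # Made-up linear values for a few target patches (white, grey, black, R, G, B).
    patches = np.array([
        [0.90, 0.90, 0.90],
        [0.18, 0.18, 0.18],
        [0.02, 0.02, 0.02],
        [0.45, 0.10, 0.10],
        [0.10, 0.45, 0.10],
        [0.10, 0.10, 0.45],
    ])

    offset = 256 / 65535                 # black pedestal, mimicking the sensor offset
    gain   = np.array([0.5, 1.0, 0.5])   # red/blue at half of green, mimicking the green bias

    patch_px = 128
    frame = np.zeros((patch_px, patch_px * len(patches), 3), dtype=np.float32)
    for i, p in enumerate(patches):
        frame[:, i * patch_px:(i + 1) * patch_px, :] = offset + p * gain * (1.0 - offset)

    tifffile.imwrite("simulated_target.tif", (frame * 65535).astype(np.uint16))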

The automatic function of Resolve expects all colors to be in the same range. When binning with all channels captured at once with the same light, it does not compensate for the double green (the tint is off towards green). When each channel is captured for maximum range, the function works flawlessly.


Well, a nice discovery. However, as you already remarked yourself,

And it's not only the color corrections: we could throw into this the fixed color temperature of the film material, unmatched to the light actually contained in a scene, or the varying characteristics of the different film stocks at hand, or even variations during the development of different 15 m film rolls (I have one Kodachrome example showing this bluntly). Not to mention the different fading characteristics of film stock, heavily dependent on time and storage conditions.

So any color processing will probably at best end up with a viewable image, but certainly not the “right” colors.

Nevertheless, everybody of course wants to be as close as possible to "true" colors for the final version of a scanned movie. That is, the scanning camera/illumination combination should be as true as possible. If you do not have a camera with color science to start with - like in your case, with a monochrome camera paired with narrowband LEDs - a calibration target with the appropriate software surely makes sense.

The normal use case would be a camera combined with a broadband illumination, not too far off from a normal white-light spectrum. Here you'd expect that the camera manufacturer's color science is doing a decent job in capturing the colors of your film. I am not so certain about this if the camera in question is a machine vision camera, and I am certain that the manufacturer's default color science for the HQ sensor is off a bit - that was the reason I came up with the scientific tuning file.

The latter was actually created without any reference or calibration image ever being taken. The approach was to simulate the complete optical pathway in software, based on the known filter responses and illumination spectra in question. Imagery with color checkers was only used after the color science was computed, to check the performance. This is also one preferred way according to the ACES documents. If (and that's a big "if") the spectral curves are available, there is no need to use calibration targets - which have their own challenges. You can directly calculate the appropriate color science. This worked well with the HQ sensor, a little bit less well for the RP global shutter camera, and I have not tried more than this at the moment.
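A minimal sketch of that idea - all spectral data are hypothetical numpy arrays sampled on a common wavelength grid, and a plain least-squares fit stands in for the more elaborate ∆E-based optimization a real pipeline would use:

    import numpy as np

    def compute_ccm(illum, cam, cmf, refl):
        """illum: (N,) illuminant spectrum, cam: (N, 3) camera sensitivities,
        cmf: (N, 3) CIE observer functions, refl: (M, N) patch spectra."""
        cam_rgb = (refl * illum) @ cam      # (M, 3): what the sensor records (linear)
        ref_xyz = (refl * illum) @ cmf      # (M, 3): what the standard observer sees
        cam_rgb = cam_rgb / cam_rgb.max()   # crude exposure/white normalization
        ref_xyz = ref_xyz / ref_xyz.max()
        # 3x3 matrix minimizing ||cam_rgb @ ccm - ref_xyz|| in the least-squares sense
        ccm, *_ = np.linalg.lstsq(cam_rgb, ref_xyz, rcond=None)
        return ccm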

In the end, the differences in the approaches will probably be overshadowed anyway by the variations encountered in the source material.


Check the “TIFF and reference data” link over in this post in the Backlight thread. The “argyll steps.txt” file has everything you need. One command to read/compare your scan of the calibration target with the calibration data supplied by the manufacturer. Then, another command to generate an ICC profile from it.
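For the curious, the two steps look roughly like this - file names are placeholders, and the exact .cht recognition template and options depend on the target you bought (the "argyll steps.txt" file has the authoritative version):

    # Match the scan of the target against the vendor's reference data -> .ti3 file
    scanin -v my_target_scan.tif It8.cht vendor_reference.txt

    # Build an ICC profile from that measurement file
    colprof -v -D "My film scanner" my_target_scan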

With an ICC profile in hand you can do lots of things. It’s usable directly in places like Photoshop where you can simply “assign” the image to use that profile. But it can also be used to generate the kind of LUT that Resolve likes. So your first node would be to use that LUT and you’d be starting with the nice, “real” calibrated results for all your footage.

That thing is wild! Anytime you add “NIST-traceable” to a product description, the price gets another zero or two. But having 98 spectral response data points at 5nm increments for each color patch is awesome. I don’t know what I’d do with it… but it’s awesome. :rofl:

I’ve noticed the same throughline across all my hobbies: metrology. For whatever reason, I seem to love precise measurements and quantifying anything I can get my hands on. :sweat_smile:

There’s something to be said here for why we calibrate monitors, scanners, and printers in the first place. Each one is a little different. And they drift over time. I generally distrust every “out of the box” state my devices arrive in. (This probably dovetails into my statement about metrology just now. I can’t turn it off in my brain! :laughing: )

Agreed.

Alright, I’ll point the greywhitebalancecolourcard people at this thread and mention that there’s at least a little interest in them maybe making more than one at the smaller size.


Having already mentioned ACES, I am going to play a little bit “devil’s advocate” here…

I am pretty sure that this thread on the ACES forum touches on the original question - namely negative intensity values due to out-of-gamut colors*. There are even some comments/suggestions in the linked thread which directly discuss insufficient blacklevel handling similar to what was already discussed here in this thread (suggesting to misuse the "Lift" parameter of the "Raw Camera" tab to solve it).

With respect to input profiles and LUTs, I'd suggest viewing this video by Daniele Siragusano, referenced in the cited ACES thread as well, about additive mixture challenges and why they are important (skip to 07:20 if you are in a hurry). In short, you probably want to avoid non-linear operations in your input profile (that is: non-linear operations performing gamut adaptations. LUTs are usually non-linear mappings). Those things should probably come later, in the color grading step? (This video is talking about cameras looking at a real scene, where light mixes linearly in the scene and in the camera - however, our scanner cameras are looking at a film positive, which is basically a non-linear medium from the start. This might require a bit of extra thinking.)

(* these negative "out-of-gamut" colors are not really a "curse" if your image processing pipeline can handle arbitrary negative and positive values, like DaVinci with its 32-bit floating point format. But other software might clip negative values to zero and large positive values to whatever the maximum is - in the case of 8-bit data that would be 255, in the case of 12-bit data 4095, and with 16-bit data, 65535.)
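A two-line illustration of that difference, with made-up linear values:

    import numpy as np

    rgb = np.array([-0.08, 0.2, 1.3], dtype=np.float32)   # out-of-gamut linear values
    print(rgb)                                             # a float pipeline keeps them as-is
    print(np.clip(rgb * 255, 0, 255).astype(np.uint8))    # an 8-bit pipeline clips to [0, 255]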

(EDIT: funny that we discussed some of these things nearly 6 years ago. There has been some progress I think - we all agree that all approaches will yield a basic scan from where the artistic interpretation towards the final product can start (that is, “color grading”).)


@cpixip You mentioned using a 10mm x 10mm form factor. I was searching in the forum but only found the reference to the LED. Do you have a mechanical drawing/spec of the LED PCB?
I'm brainstorming about the next round of LED PCBs, and once I figure out what the light for my V2 design will look like, I would like to also use the same PCB order to make some 10mm x 10mm form-factor boards for the white Bridgelux and facilitate a lower cost for 98 CRI LEDs for scanning.
EDIT: Reason? My PCB was built for the projector modification, and I made it work for the sphere. I have everything in KiCad already, so making a 10x10 along with other PCBs will not be trouble.

Well, I bought the LEDs already mounted on these tiny PCBs. Here’s an old image showing some setups I tried out:

The right one shows the LEDs I am currently using. It's simply a 10x10 mm PCB with a heat sink attached. At one time, this was one of the standard mounting options for those tiny LEDs.

These LEDs are paired with my 3D-printed integrating sphere, described here (including .stl-files). I am considering upgrading the LED illumination, but for this, I have to take apart the whole film scanner.

And: before I start this project, I first want to understand the colorimetry (color science) of various illumination setups - research done years ago which somehow went out of focus over time. One result of these previous simulations was this here

already giving some insight into the situation. The simulation was done the following way: I assumed a color checker illuminated by various light sources. The color checker was recorded by a standard HQ camera with a standard IR block filter. This is not exactly the situation we have in a film scanner (reflective vs. transparent target, print dyes vs. film dyes, for a start), but I thought it would give at least some hint of what one could expect.

For each light setup, the color matrix was individually optimized to achieve the best overall result. The center of each patch displays the color a human observer would perceive when looking at the color checker in normal daylight.

The bottom-left subpatch shows how the HQ sensor would "see" the colors under the same illumination. Clearly, there are some differences, but the mean ∆E error is remarkably low, only 1.08. (It would be slightly larger if the color matrix were taken directly from the tuning file.)

The other patches show the best results with different narrowband LED illuminations. “Swiss” shows the result with the LEDs quoted in the Diastor-paper, “ReelSlow8” the results with the LED-setup given by @npiegdon quite some time ago in this forum. “Aligned” shows the results of another LED configuration, somewhat hand-optimized for the task at hand.

While this simulation certainly shows the trends one would have expected, it is not really equivalent to our film scanner setup, as mentioned above. I want to get some more insight here before I start considering changing/optimizing my scanner's illumination source. That is: before I fully dismantle/redesign my scanner.
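For orientation, a ∆E between a reference color and its reproduction is just a distance in CIELAB; the simple CIE76 version looks like this (the mean ∆E quoted above may well use a more elaborate metric such as CIEDE2000, and the Lab values below are made up):

    import numpy as np

    def delta_e_cie76(lab_ref, lab_test):
        # Euclidean distance in CIELAB; inputs have shape (M, 3).
        return np.linalg.norm(np.asarray(lab_ref) - np.asarray(lab_test), axis=-1)

    # Made-up reference vs. reproduced Lab values for two patches:
    ref  = [[55.0, 10.0, -20.0], [80.0, 0.0, 0.0]]
    test = [[54.2, 11.0, -19.0], [79.5, 0.5, 0.3]]
    print("mean dE:", delta_e_cie76(ref, test).mean())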

That's enough information for what I was looking to do.

Sounds like a plan.

I am close to having the first version of Snailscan-Too actually be good enough for initial scanning, and the plan is to then redesign the LEDs to better fit the sphere (which in my case is a bit bigger, for 16 mm scanning).

Understood. There is some alignment, since I would like to test/experiment with film. At this time my approach is to capture raw-raw (linear values direct from the sensor - same light intensity for all three channels, or individual light for each) and let DaVinci Resolve do the color pipeline.

The goal of redesigning a V2 sphere is to have multiband and white illumination, and to be able to experiment with different setups under the same film setup.

Would it be helpful to have mechanical drop-in replacements (10mm x 10mm)? or would you prefer to wait until the redesign?

The primary reason to do the ultra-small 10 x 10 form factor would be to make it a mechanical drop-in replacement for your project. In general, I think a slightly larger PCB would make it easier to dissipate the 1 W power of the LED.
The LEDs that I plan to put on a PCB for the multiband (I already have 7 bands on hand) are this series.

I think the collective will benefit from a high-quality, high-intensity, small-form-factor LED, and having the know-how and capability to make a run (when doing other PCBs) would be a way to give back to the collective for all the valuable information, especially your insight into the HQ. The scientific tuning file was a game changer for the ability to use the HQ. Thanks again!

Edit: perhaps we should start a topic for Color Science / Illuminant Experiments.