Strange encounter in the red channel/RPi HQ Camera

Two (probably premature) questions regarding this fascinating experiment:

(How) is the raw-raw-binned result different from setting the picamera2 red/blue gains to 1.0? (I.e. is the resulting tiff only neutral/grey, because of your light source? Or did you leave the lens cap on?) Was the DNG captured at 1.0 gains?

How come your DNG is only 18 MB? Mine are always 24 MB. I think I’ve asked this before, but can’t find the post again because the search term is too short…

Note that the raw data in the .dng-file is also not affected by the color gains. It’s just that raw development software takes into account various tags embedded in the .dng-file. Specifically,

  • Black Level - these are set to a constant value, which could be improved upon: quite often, the pixel values are lower than the reported black level. Sometimes, specially blackened pixel groups are used to come up with a better black level. Not within the HQ context.
  • White Level - again, fixed. Does not matter too much.
  • As Shot Neutral - these are the inverse values of the red and blue color gains the camera was operating at. Any raw converter will apply these values to the raw data of the .dng-file.
  • Color Matrix 1 - this is the color matrix libcamera came up with, guided by the red and blue color gains which are used to estimate a correlated color temperature (cct). This cct is used to interpolate a color matrix from the examples contained in the tuning file. Normally, this matrix is again used by any raw converter to come up with a decent color image.
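These tags can be read back directly from the .dng, for example with rawpy (a minimal sketch; the file name is a placeholder, and the mapping of the last two attributes to the DNG tags is my assumption of how LibRaw exposes them):

import rawpy

rawfile = rawpy.imread('capture.dng')                       # placeholder file name
print('Black level per channel :', rawfile.black_level_per_channel)
print('White level             :', rawfile.white_level)
# as-shot multipliers - for a picamera2 .dng these should be the reciprocals of
# As Shot Neutral, i.e. the red/blue colour gains the camera was operating at
print('As-shot multipliers     :', rawfile.camera_whitebalance)
# camera -> XYZ matrix, derived from the embedded Color Matrix 1
print('Camera to XYZ matrix    :', rawfile.rgb_xyz_matrix)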

If I understand your approach correctly, your “raw-raw-binned” should be basically identical to the raw data contained in the .dng-file (apart from your adding the two green channels together).

So I think what we are seeing here is rather the effect of different software designs for processing the raw data. If you immediately convert from the integer format into floating point (DaVinci does that, as does my own software), you will be able to push negative pixel values back into the positive range. If you continue to work with unsigned integer mathematics (for speed reasons, for example), you are going to hard-clip these negative pixel values. Which would enhance the noise…

Recalled there was an answer… here it is
I updated the system weeks ago… not sure why my install would be using an older library, will have to check into that. Perhaps the selected raw format affects the DNG size? not sure.

I did not change anything from the previous experiment, in order to replicate the banding. The red/blue gains are:

"ColourGains": (2.8, 1.9), # (R,B)

No, all are taken with very low light and uncapped.

The raw-raw-binned is done in 16-bit unsigned. I shift the 12/13 bits to be the most significant bits of the variable, and then it is saved as a 16-bit TIFF. As you point out, I do so for saving speed. Other than adding the greens and shifting the bits, I am not doing any other processing.
Since the output of the sensor has an offset, there is no clipping (black clipping/offset).

Passing the 16bit TIFF directly into Resolve (using a linear timeline setting) takes care of it. Again, in my case, there is no color science and there will be no negative values. One has to remove the offset using the color tab.

Up to here, we are on the same page.

Conflicting Results

My expectation was also that -other than the G1+G2- there would be no difference in the raw information in the DNG or in the raw-raw-binned TIFF, and certainly there should not be any difference in the red channel.

The results however are disproving the expectation that the DNG raw is unprocessed.

Working with a flat image at low levels makes it hard to make out what is going on, so I captured the same image at the 0x0800 LED level both as DNG and as raw-raw-binned (also uploaded at the previous link).

Bringing these two captures (DNG and TIFF) into Resolve shows significant differences. The DNG's near-black levels appear compressed/clipped and the overall noise is significantly higher.

Again, this was unexpected, my expectation was that the raw content on the DNG would not be significantly different than the raw-raw-binned TIFF.

Including Resolve screenshots to share the color tab values of gain, lift, and offset.
Resolve Development Settings for the DNG

Edit: I got the files crossed when renaming. Apologies for the mix-up.

Resolve DNG
Edit: the file name incorrectly says TIFF


Resolve TIFF
Edit: the file name incorrectly says DNG


Resolve DNG (left side of sprocket) TIFF (right side of sprocket)

On the image the noise difference is not dramatic, but it is visible.

I have to think about how it is possible that the DNG raw values are being compressed, but it appears that is the case.

Edit/Adding
I had the idea of using the color settings of the image (shot at LED L0800) as a reference, to then better visualize the differences in the flat captures (shot at LED 0200).

To that effect, I added a gain node of 3, and then fine-tuned it to keep the black levels and light levels the same on the RGB channels.


It is already apparent that the noise profile of the DNG is different from that of the raw-raw-binned.
The settings for the DNG and TIFF were saved, and applied to the flat shot. To make the waveform and image more visible, another node of gain 8 was added.

DNG Left half / TIFF Right half


Notice that in both images, banding is present. Again, it was established that the banding is apparently inherent to the IMX477 HQ sensor.

Grass underclip
The waveform monitor illustrates what appears to be clipping of the underside of the noise on all channels!

Kind of a visual representation of the prior grass analogy…

Except that the clipping appears present on all channels, not just the red as I hypothesized earlier.

Summary Findings

  • The banding was found in all capture methods, which appears to confirm it is inherent to the HQ IMX477 sensor.
  • The processing pipeline of the DNG file appears to under-clip the raw values on all channels.
  • The under-clipping may be a contributing factor in making the banding more visible.
  • Captures using pihqcam.capture_array("raw") do not appear to have the same under-clipping as the DNG.

Well, that could be a reason for the obvious difference. Remember that we once talked about not using any of the new compressed raw formats? Just a short recap: these compressed formats were introduced as “visually lossless” - which is just marketing crap. Of course they compress with data loss.

Now, if you’re working with such a compressed raw format - and it’s the default on the RP5 - something absolutely crazy is happening: the compressed raw (remember: not the original sensor data, because it is compressed) is decompressed before it is written as simulated uncompressed raw data into your .dng. So it seems to the user that he captured the raw sensor data into a .dng, while in truth he captured noisier “compressed raw” data. While you think you have the raw sensor data, in truth you don’t. Great scheme, in my humble opinion. That’s why it is important to check the actual mode the sensor is operating with (the libcamera debug output should state the format used).

Anyway, to make my point again (which I obviously did not get across in my post above): if you make sure that you are working with uncompressed data (easy on RP3/4, not the default on RP5), the data in the .dng is just the raw data your sensor has recorded. Including the Bayer-pattern. Including an offset in the intensities. That is, a really unexposed pixel will sit at a level similar to the reported black level. You can check the real values of your data with a little script I published recently. @verlakasalt found out that there are pixels reported below the value of the black level. That should not come as a surprise to anybody - as the black level written into the .dng is not measured by the HQ camera (which would be the appropriate way) but only programmatically set to a certain constant value by picamera2.

So, the raw data is there, and it’s similar to your “raw-raw-binned” - only that you have three separate channels, while the raw image of the .dng has four channels, interleaved into a single monochrome image. What is that relationship?

Well, any raw developer is supposed to subtract the black level from the raw pixel values in order to turn them into values linearly related to the light intensities of the scene. This is the first step of a whole processing pipeline, but it is an important step, because the following steps assume this special relationship between pixel intensities and scene light intensities: a linear relationship with absolute darkness corresponding to 0.0.

Well, the next step a raw developer needs to apply is of course the deBayering of the image. Here, the monochrome .dng-data (with the hidden 4-channel data) turns into a 3-channel image. By the way, that image is not yet a real RGB-image, because the characteristics of the illumination have not been cancelled out yet.

Now, this intermediate should be more or less identical to your .tif-image, the one you call “raw-raw-binned” - if I understand your process correctly. Maybe you did skip the subtraction of the black level, but then I assume you employed only a rather simple deBayering algorithm (I am, for example, using only simple subsampling). In this case, the missing black level subtraction won’t matter.

(The canceling out of the scene illumination follows in the next step, which applies the red and blue gains chosen either manually or by any automatic algorithm (green is kept at 1.0) - this whitebalancing step turns a typical greenish image into a real RGB image, albeit with a rather low saturation (and other twists as well).)

My point is the following: if you pair bad black levels with unsigned integer math, you are going to get into trouble with pixel values below the black level when developing the raw. At best, these values will be clipped. This results in a strong non-linearity which will increase the noise level at low intensity values. I think we are seeing something like this happening in the noisestripes.
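A small illustration of that point (just a sketch, not anybody's actual pipeline), using the black level of 4096 and a sub-black sample value of 3920 that show up later in this thread:

import numpy as np

black_level = 4096
pixel = np.array([3920], dtype=np.uint16)     # a dark pixel below the reported black level

# floating-point path: the negative value survives and can be pushed back later
float_result = pixel.astype(np.float32) - black_level                                     # -> [-176.]

# unsigned-integer path: the result has to be clamped (otherwise it would wrap around),
# so everything below the black level collapses to 0 - a hard non-linearity
uint_result  = np.clip(pixel.astype(np.int32) - black_level, 0, None).astype(np.uint16)   # -> [0]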

While the stripes are most probably created by the sensor’s analog read-out mechanism, they might be amplified by certain raw developer software.

And again, I would be surprised if your results above - a difference in noise levels at medium values between the .dng and your “raw-raw-binned” .tif - are real. That would be an important finding…

@cpixip, apologies since I was adding to the above while you were posting.

I am using an RPi 4; the camera setup is the same for capturing the DNG or the raw.
I am a newbie with the picamera2 library but do not recall an option to compress the DNG… is there an option to make it uncompressed?

Uh. This is getting interesting. The compressed raw is a feature of the RP5 only, as far as I know. It was not available on RP4s. That could have changed over time. So where does this difference come from? It will take a few days to run some tests myself, with an updated RP4 as testbed…


According to exiftool, the DNGs on the experiment are uncompressed.

For clarity, I was referring to the level compression visible in the waveform, not to image compression. And I do agree that some image compression methods would flatten the dark areas.

It appears that somewhere in the path to the DNG or DNG development, the black is clipped to a number.

If one takes no other action than creating a DNG with picamera2 and opening it in Resolve, that appears to be the case… although as you said, that does not prove that the actual raw values are clipped, only that the typical user path would result in clipped values.

Here is an interesting exchange about the subject…

[user] Since it’s 16 bit i subtract 4096 from each value? What about the negatives?
[davidplowman] That’s right. And you need to clamp anything that would be negative to zero.

I would speculate that the picamera2 (or development software) does the same, somewhere.

This statement in the post you cited is relevant:

The sensor has some un-illuminated pixels surrounding the image, and uses them to offset the black level to drive it to the nominal value. This is to correct for variations across the sensor, temperature changes, etc. It probably also does some “hot pixel” correction. All this happens before the Raspberry Pi receives the data. So you can’t get the “true” (uncorrected) pixel values, at least not with the sensor register settings in the standard Linux driver.

I tried to describe this already above. The black level of a raw capture is normally not a constant, but varies with temperature and other things. But libcamera/picamera2 instead uses a fixed value for the black level, 4095 in the case of the RP5. That leads to clipping of noise in very dark image areas.

No, that is not quite true. That was actually exactly my point above. Let me try again. If you do not explicitly request (with a RP5) an uncompressed sensor format like 'SBGGR12', the front end of the RP5 attached to the camera will operate by default in a lossy “compressed raw” format. This format is somewhat similar to the log-encoding higher-end video cameras employ and speeds up frame rates. It’s actually an “ok” choice if you are interested in saving the image in .png- or .jpg-format.

But, if you request to save the raw sensor data into a .dng, something very strange is happening. The “compressed raw” format (lossy) is decoded back into a normal raw image (which is not identical to the original raw sensor data, due to the lossy encoding of the source image) and that simulated raw image is stored in the .dng-file - uncompressed, as exiftool told you correctly. So even though exiftool reports “uncompressed” data, you cannot be sure that it was not “compressed” at an intermediate step. Specifically, on the RP5 this happens between the CSI receiver connected to the camera and the rest of the RP5 processing pipeline. I hope this time my explanation makes a little more sense.
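For reference, explicitly requesting an uncompressed raw stream with picamera2 looks roughly like the sketch below. The stream size and the use of a still configuration are assumptions; 'SRGGB12' is the format @PM490 reports using further down (I wrote 'SBGGR12' above, the exact string depends on the sensor/mode):

from picamera2 import Picamera2

picam2 = Picamera2()
config = picam2.create_still_configuration(
    # ask for an uncompressed 12-bit Bayer stream at the HQ camera's full resolution
    raw={'format': 'SRGGB12', 'size': (4056, 3040)}
)
picam2.configure(config)
picam2.start()

request = picam2.capture_request()
request.save_dng('capture.dng')     # the .dng then contains the uncompressed sensor data
request.release()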

No, the black level subtraction is not happening in picamera2, but in whatever raw developer you are using.

That was my point all the time. That the black level is being subtracted (it has to!) by the raw developer - whatever that software is.

The raw data picamera2 writes into the .dng does not have the black level subtracted, for sure. This is only happening in the development software you are using to read the .dng. The only thing you can blame picamera2 for in this case is that the reported black level is just a fixed number - which it should not be. Again, a clear indication is the occurrence of pixels having values noticeably lower than the reported black level. Even the maintainer of picamera2, David Plowman, got (and overlooked) values below 4095 in his example, like 4048, 3872, …
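A quick way to check this for yourself (a sketch in the spirit of the script mentioned above; the file name is a placeholder):

import numpy as np
import rawpy

rawfile = rawpy.imread('capture.dng')                 # placeholder file name
black   = np.average(rawfile.black_level_per_channel)
bayer   = rawfile.raw_image_visible
below   = np.count_nonzero(bayer < black)
print(below, 'of', bayer.size, 'samples lie below the reported black level of', black)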

Again, I am using the RPi4.
I am also explicitly using format “SRGGB12”.

Davinci Resolve

One more experiment
A quick experiment to confirm that the DaVinci Resolve DNG developer is enforcing the clipping set by the DNG BlackLevel would be to extract the raw values from the DNG and process them the same way as the raw-raw-binned into a TIFF.

I remixed @cpixip's script above, adding the SRGGB12 quick debayer and extracting the DNG raw image.

import numpy as np

import rawpy
import tifffile

path = r'tpx_L0800_church.dng'

inputFile = input("Set raw file to analyse ['%s'] > "%path)
inputFile = inputFile or path
outputFile = inputFile+".tiff"    
# opening the raw image file
rawfile = rawpy.imread(inputFile)    
print ('Rawfile Pattern',rawfile.raw_pattern[0][0])
print ('Rawfile shape', rawfile.raw_image_visible.shape)

# get the raw bayer 
bayer_raw = rawfile.raw_image_visible
bayer_raw_16 = bayer_raw.astype(np.uint16)
rgb_frame = np.zeros((bayer_raw_16.shape[0]//2, bayer_raw_16.shape[1]//2, 3), dtype=np.uint16)   # half-resolution output frame (1520 x 2028 for the HQ sensor)

# quick-and-dirty debayer
if rawfile.raw_pattern[0][0]==2:

    # this is for the HQ camera
    red    =  bayer_raw_16[1::2, 1::2]                                 # Red
    green1 =  bayer_raw_16[0::2, 1::2]                                 # Gr/Green1
    green2 =  bayer_raw_16[1::2, 0::2]                                 # Gb/Green2
    blue   =  bayer_raw_16[0::2, 0::2]                                 # Blue

elif rawfile.raw_pattern[0][0]==0:

    # ... and this one for the Canon 70D, IXUS 110 IS, Canon EOS 1100D, Nikon D850
    red    =  bayer_raw_16[0::2, 0::2]                                 # Red
    green1 =  bayer_raw_16[0::2, 1::2]                                 # Gr/Green1
    green2 =  bayer_raw_16[1::2, 0::2]                                 # Gb/Green2
    blue   =  bayer_raw_16[1::2, 1::2]                                 # Blue

elif rawfile.raw_pattern[0][0]==1:

    # ... and this one for the Sony
    red    =  bayer_raw_16[0::2, 1::2]                                 # red
    green1 =  bayer_raw_16[0::2, 0::2]                                 # Gr/Green1
    green2 =  bayer_raw_16[1::2, 1::2]                                 # Gb/Green2
    blue   =  bayer_raw_16[1::2, 0::2] 

elif rawfile.raw_pattern[0][0]==3: #HQ SRGGB12

    red    =  bayer_raw_16[1::2, 0::2]                                 # red
    green1 =  bayer_raw_16[0::2, 0::2]                                 # Gr/Green1
    green2 =  bayer_raw_16[1::2, 1::2]                                 # Gb/Green2
    blue   =  bayer_raw_16[0::2, 1::2] 

else: 
    print('Unknown filter array encountered!!')
    raise SystemExit(1)   # stop here - the channel variables below would be undefined

# shift the 12-bit values (13 bits for the summed greens) into the most significant bits of the 16-bit TIFF
rgb_frame[:,:,0] = red << 4
rgb_frame[:,:,1] = (green1+green2) << 3
rgb_frame[:,:,2] = blue << 4

tifffile.imwrite(outputFile, rgb_frame, compression=None)

# creating the raw RGB
#camera_raw_RGB = np.dstack( [red,(green1+green2)/2,blue] )
camera_raw_RGB = np.dstack( [red,(green1+green2),blue] )

# getting the black- and whitelevels
blacklevel   = np.average(rawfile.black_level_per_channel)
whitelevel   = float(rawfile.white_level)

# info
print()
print('Image: ',inputFile)
print()

print('Camera Levels')
print('_______________')
print('             Black Level   : ',blacklevel)
print('             White Level   : ',whitelevel)

print()
print('Full Frame Data')
print('_______________')
print('             Minimum red   : ',camera_raw_RGB[:,:,0].min())
print('             Maximum red   : ',camera_raw_RGB[:,:,0].max())
print()
print('             Minimum green : ',camera_raw_RGB[:,:,1].min())
print('             Maximum green : ',camera_raw_RGB[:,:,1].max())
print()
print('             Minimum blue  : ',camera_raw_RGB[:,:,2].min())
print('             Maximum blue  : ',camera_raw_RGB[:,:,2].max())

dy,dx,dz = camera_raw_RGB.shape

dx //=3
dy //=3

print()
print('Center Data')
print('_______________')
print('             Minimum red   : ',camera_raw_RGB[dy:2*dy,dx:2*dx,0].min())
print('             Maximum red   : ',camera_raw_RGB[dy:2*dy,dx:2*dx,0].max())
print()
print('             Minimum green : ',camera_raw_RGB[dy:2*dy,dx:2*dx,1].min())
print('             Maximum green : ',camera_raw_RGB[dy:2*dy,dx:2*dx,1].max())
print()
print('             Minimum blue  : ',camera_raw_RGB[dy:2*dy,dx:2*dx,2].min())
print('             Maximum blue  : ',camera_raw_RGB[dy:2*dy,dx:2*dx,2].max())

Findings
What I would have expected is for both TIFFs to have the same values, including the black levels.

The results add another question to the mix.
The experiment confirmed that the black clipping issue comes from the DNG developer enforcing the BlackLevel tag that the picamera2 library fixes in the DNG, as @cpixip indicated (many times) above.
However, while the black level of the DNG is identical to that of the raw capture, the gain is not.
I did a retake to confirm that there was no error in the LED or exposure settings of the prior image.
Here are the screenshots of the new image. DNG is the left half of the sprocket and the right side of the frame. Raw is the right side of the sprocket and center image.



Hi Pablo (@PM490 ), thanks for your interesting tests. I think something weird is happening here. Using your tpx_L0220.dng file, I recovered the noisestripes as usual:

I did find similar horizontal noisestripes in your tpx_L0220.tif image:

However, the noise characteristics do not match - contrary to what I would have expected. Your tif-image features a much finer grain than the dng. Very weird. It still might be an issue of the quite different processing, in this case “Gimp” vs. “RawTherapee”.

I think that in your tif-file, there is no concept of either black level or white level. Within raw sensor data, there is a concept of these entities. That’s the reason why they are embedded in the .dng-file.

The raw converter derives an appropriate scaler from these two values: the black level is mapped to 0.0, the white level to 1.0. That’s the norm. This is not happening (as far as I understand it) with your .tif-format.
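In other words, the scaler is simply (a sketch; the same normalisation shows up in code further down this thread):

normalized = (raw_value - black_level) / (white_level - black_level)    # black level -> 0.0, white level -> 1.0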

Note that black and white levels also play a role in the highlight recovery process.

So I am not too surprised that the intensity values do not match.

What worries me is the different noise characteristics of your .tif and the .dng. Maybe one should develop the .dng not with RawTherapee (as I did above), but with pure Python code. I will look into that in more detail in the following days. At this point in time, I find it highly unlikely that the raw data you pipe into your tif-file would be different from the data piped into the .dng-file. That would be crazy - but I have encountered enough crazy things within the RP-context to expect a few surprises now and then… :sunglasses:

Similar to what I depicted above in Resolve.

The TIFF will open in RawTherapee, but Resolve handles color at low levels better.

For DNG files, RawTherapee also provides a “Sensor with Bayer Matrix” section, where the Raw Black Points can be adjusted. I have not found something similar in Resolve.
Playing with those sliders illustrates how the color science interacts with the resulting levels. In the screenshot, lifting the levels apparently addresses the black level compression/clipping.

Same as the sensor.

Side comment… what in the DNG scope is called Black Level is what, in a former life, was called pedestal level (analog tube/CCD cameras). Values below the pedestal would be clipped; usually there was a knee before the hard clipping. The hard clip at the black level would usually be near zero (except for video format requirements - for example, NTSC had an explicit pedestal of 7.5 IRE).

Thank you for pointing that out. It is a possible explanation for the slight difference in gain between the DNG raw values and the raw-capture raw values. If I understand it correctly, in the 12-bit scope 0.0 would then represent 256 (the black level of the picamera2 library) and 1.0 would be 4095. That would increase the gain a bit, but the observed gain difference appears to be a bit more.
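Back-of-the-envelope (assuming the 12-bit scale): mapping [256, 4095] onto [0.0, 1.0] scales values by 4095 / (4095 - 256) ≈ 1.067, so only about 7% of extra gain - which supports the impression that this alone does not explain the whole difference.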

That is also observed in Resolve, when using it to develop the DNG vs the TIFF.
When extracting the DNG data (the most recent experiment) and bringing it into Resolve as a TIFF, the noise pattern is closer, but the raw-capture shows a bit higher and finer noise than the DNG-capture.
When using the DNG development in Resolve, the difference is likely a byproduct of the developer. But when extracting the raw values from the DNG with rawpy and bringing them through the same processing path to a TIFF, I do not yet know why the noise would be dissimilar.

I uploaded the church source files (DNG and raw-capture), and also the TIFF extracted from the DNG with the script above.

Thanks for all your feedback. Certainly, the experiments illustrate the hidden perils of the DNG development, even with trusted-proven software like Resolve.


Don't be afraid of HQ
I think it is also important to keep the big picture in perspective, and not scare anyone away from using the HQ.

The experiments above are designed to make the issues as visible as possible.

To end on a lighter note, here is another take of the church frame.

The capture used raw-capture, binned into a 2030x1520 frame and exported to a 16-bit TIFF (13-bit G, 12-bit RB). The uncompressed TIFF size is 18.5 MB, and it is captured and saved (to a NAS) in less than 1 second (on an RPi4).
Imported and color graded in DaVinci Resolve, then converted to a JPEG for posting.


Happy scanning!


I remixed @cpixip's script above, adding the SRGGB12 quick debayer and extracting the DNG raw image.

I’ve read through the lines of code, and am trying to figure out how to replicate what you did…

raw_frame_16[:]= (self.pihqcam.capture_array("raw").view(np.uint16)) #LS12BIT

This is to capture the frame.
And this to save the file.

tifffile.imwrite(outputFile, rgb_frame, compression=None)

What’s in between :smiley: ?

Also, I think I’ve picked one of the worst frames to test the banding on. I took another film with a couple of really dark frames, and it was much harder to get the banding to appear (within the frame area). How dark the borders are might depend on the film stock. Or rather on the film’s white balance, perhaps. The other film’s borders seem to have a blue tint, possibly balanced for tungsten, while the beach is surely balanced for sunlight? Does that make sense or am I just realizing the obvious :sweat_smile: ?

I created a separate posting to make it easier to find the information. As you will see, to process the data there is not a lot in between… to make it run fast, there is a lot of code, some of it still evolving. But I am sharing the overview to provide everyone with new tools.
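For completeness, here is roughly what the in-between steps look like, stripped of the speed-oriented parts (a sketch, not my actual numba-accelerated code; it assumes the bayer argument holds the 3040 x 4056 uint16 samples from capture_array("raw").view(np.uint16), and uses the same SRGGB12 channel positions as the script above):

import numpy as np
import tifffile

def raw_raw_binned(bayer, path):
    # subsample the four Bayer channels (SRGGB12 positions, as in the script above)
    red    = bayer[1::2, 0::2]
    green1 = bayer[0::2, 0::2]
    green2 = bayer[1::2, 1::2]
    blue   = bayer[0::2, 1::2]

    rgb = np.empty((bayer.shape[0] // 2, bayer.shape[1] // 2, 3), dtype=np.uint16)
    rgb[:, :, 0] = red << 4                   # 12-bit values shifted into the MSBs of 16 bit
    rgb[:, :, 1] = (green1 + green2) << 3     # summed greens give 13 bit, so shift by 3
    rgb[:, :, 2] = blue << 4

    tifffile.imwrite(path, rgb, compression=None)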

@cpixip, I suggest that you check out numba; it was a game changer for working with the raw info while capturing in real time.

It basically depends on the level received by the sensor being near zero for any particular channel. Whether that is the result of the dim light or dense film or the gate frame, the sensor doesn’t care. If the dark areas are a bit above near zero, it should go away.


I think it’s about time to close this thread. A lot of interesting discussions were triggered and I want to summarize what I can remember of these discussions, distributed among various threads and over quite some time.

To recap: initially, the question was where the dark band in the red channel of a peculiar frame capture was coming from (the dark band in the sky):


It turned into a discussion about the origin of annoying noisestripes in captures done with the HQ sensor (which was also used in the capture example above), among other things.

While both issues are related, the causes are different. I will discuss that in detail in what follows. In short, the dark red band is caused by out-of-gamut colors - the HQ sensor is able to “see” colors which cannot be represented in rec709 (or sRGB) color gamuts. The noisestripes however are an intrinsic property of the IMX477, the “camcorder” chip used in the HQ camera. But they are amplified by the color transformations necessary to arrive at correct colors.

Work-arounds in case of the out-of-gamut issue include:

  • reduce saturation during capture, increase again in postproduction.
  • just pretend that your raw format works in a larger color space (helps with DaVinci, for example). Easy, but colors are slightly wrong.
  • maybe in the future: creating a tuning file natively working in a larger color space than rec709.

Work-arounds in the case of the noisestripes include so far:

  • ignore them. These noisestripes are only appearing in very dark areas of the image, and these should stay normally dark anyway.
  • employ averaging (noise reduction), either temporal or spatial, or both.
  • employ multiple different exposures, appropriately combined.
  • for overall dark scenes: increase exposure time for these problematic ones (obviously, that does not work for high contrast scenes, only overall dark scenes).
  • get a new camera with at least 14 bits of dynamic range (here hides the main reason for this issue: the dynamic range of color-reversal film occasionally requires 14 bits or more, while the HQ sensor delivers at most 12 bits).

In what follows, I will try to show how a raw capture is turned into a viewable film image, and how the two issues, out-of-gamut colors and noisestripes, show up in the raw development process.

So let’s start with the raw Bayer image of the sensor. This looks like this:


This is in fact a monochrome image with 4 different color channels intermixed. A zoomed-in cutout of the center

shows this interleaving.

Note in the above full view of the capture the appearance of cyan areas on the right side of the raw Bayer image. In all of the displays in this post, very low values will be marked by a cyan color, very high values with a red color. The cyan areas at the right side of the Bayer image are already an indication of the noise floor of the sensor - of course, drastically enhanced. Under normal circumstances, this would not be visible.

The histogram of the Bayer image looks like this:


The data ranges cover nearly all of the available range between blacklevel and whitelevel. Specifically, we find

Black:      4096.0
White:      65535
Min Bayer:  3920
Max Bayer:  65520

So - the maximal pixel value stays below the whitelevel - therefore, no red markers in the above image, but the minimal pixel values go below the blacklevel - check the cyan areas on the left.

Of course, that should not really be happening - but in contrast to more advanced approaches, libcamera/picamera2 does not measure the current blacklevel of the camera, but just inserts a fixed number into the .dng. The blacklevel of a sensor is however a function of a variety of things, including sensor temperature etc. Well, …

The histogram of the Bayer image is not so easy to interpret, especially because it mixes the histogram of 4 different channels into a single histogram. So the next step is to create three color channels, here termed “Raw RGB” out of the Bayer image.

Usually, this is done by rather elaborate demosaicing/debayering algorithms. In this demonstration, I stick with the simplest one: just sub-sampling the different channels into four separate channels designated red, green1, green2 and blue, and finally combining them into a single RGB image like so:

camera_raw_RGB = np.dstack( [red,(green1+green2)/2,blue] )

If we look now at the histogram of camera_raw_RGB, we obtain this:


That looks nice. The red and blue histograms are similar in shape, the green one is stretched out a little bit too far, so it’s missing (cutting off) the bump visible at 30000 in the red and 40000 in the blue channel. These bumps correspond in fact to the intensities of the sprocket hole, the brightest area of the image.

Generally, looking at the green curve, the real image content of the frame is distributed from slightly below 4095 up to around 50000. The part of the green channel's histogram between about 50000 and the cut-off actually comes from the border of the sprocket area. Since the red and blue channels are lower in intensity, the bright sprocket hole shows up in their histograms as a broad bump; this bump is missing in the green channel, where it is cut off.

In fact, all histograms are similar in shape, only stretched out differently. This is a direct result of the chosen illumination source - in our case, a whitelight LED with a cct of about 3200 K. If I had used a different light source, the stretch factors between the different color channels would be different.

Now, the characteristics of the light source obviously have to be accounted for. This is usually done in a rather simplistic way, by specifying a correlated color temperature for the light source, or, in our libcamera/picamera2 context, by specifying the red and blue channel gains. The inverses of the gains are encoded in the As Shot Neutral tag of the .dng.
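A small worked example with the colour gains of this capture (the numbers are the ones quoted a few lines further down):

colour_gains    = (2.8900001, 2.09990001)                         # (red, blue) gains at capture time
as_shot_neutral = (1.0 / colour_gains[0], 1.0, 1.0 / colour_gains[1])
# -> roughly (0.346, 1.0, 0.476), which is what ends up in the As Shot Neutral tag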

But before we account for the characteristics of our light source, another operation is asked for at this point in the processing scheme. The reason is the following: if we continued with this image data directly, we would end up with a noticeable magenta tint in the highlights of the final image (I skip the details here, see here for a discussion).

We might call this step our “highlight recovery” step. In our case, it’s simply a clipping of the red and blue color channels using the white point data of the light source available in the .dng.

With Whitepoint = [ 2.8900001, 1.0, 2.09990001] (actually the color gains at the time of capture), the actual code doing this looks like this:

camera_raw_RGB_normalized = np.clip( ( camera_raw_RGB - blacklevel ) / (whitelevel-blacklevel), 0.0, 1.0/whitePoint )

Note that a rescaling of the image intensities into the [0.0, 1.0] range is also performed in this processing step. The histogram now changes to this:


The data corresponding to the sprocket hole has now been clipped in all channels. All histograms now look similar, only the stretch factors are not yet handled. We will do this soon.

The image resulting from this clipping/rescaling process looks now like this:


Again, pixel values near the noisefloor are marked by cyan, pixel values near the maximal intensity are marked in red.

The noise floor in the above image is now more visible. That is mainly caused by my primitive debayer-algorithm (only sub-sampling). Now the sprocket hole is marked as critically close to the maximum brightness (red colors), and that is ok. We do not want to use the data in the sprocket hole at all.

This raw image has the typical green tint of any raw image on the planet, and it is now time to convert this into something closer to reality. That is, we must now counteract the illumination still present in the data. As already noted, this is achieved by multiplying the red and blue color channels with the appropriate color gains. The histogram changes drastically:


And through that miracle, the curves of all channels magically align. That is just what we are after!
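In code, this whitebalancing step amounts to a per-channel multiplication with the whitePoint introduced above (a sketch; green stays at 1.0):

scene = camera_raw_RGB_normalized * whitePoint     # red and blue are scaled up, green stays as is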

Let’s look at the resulting image:


That looks quite natural, but strictly speaking, the colors are all wrong. Most obviously, the saturation is way too low. To arrive at correct colors, we need to do yet a final step: apply a compromise color matrix to this data.

As per definition, your .dng-file comes with exactly that matrix. Basically, the final computation is something like

 img = scene @ camRGB_to_sRGB.T

where the camRGB_to_sRGB depends on the matrix encoded in the .dng-file.

This matrix itself was computed at the time of capture by libcamera/picamera2, based on the red and blue color gains set either manually or automatically. This matrix depends on the correlated color temperature of your light source and shifts the colors finally into their right place.

Let’s first check the histogram resulting from that operation:


One thing that can be noticed immediately: there are obviously quite a few pixels in the red channel which are negative. Actually, if you look closely at any of the above histograms, negative pixel values were present all along our way - basically right from the moment when the (not so correct) blacklevel was subtracted from the raw data. Now, however, there are many more of these pixels. What are they? Well, the matrix camRGB_to_sRGB looks (in this example case) like so:

[[ 1.88077433 -0.83782669 -0.04294764]
 [-0.25912832  1.6064659  -0.34733757]
 [ 0.09087893 -0.74820937  1.65733044]]

There are strong negative components, and they easily shift pixels with originally low values in one channel (say: the red channel of a blue sky) into the negative range if the colors in the other channels (again: the blue sky) are saturated enough. Our camera is able to capture and encode these colors, but our destination color space (in this case rec709) is not able to handle them. The negative pixel values are an indication of out-of-gamut colors (in this case, in the blue sky area).
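Finding these pixels is straightforward (a sketch, using the img from the formula above):

import numpy as np

out_of_gamut = (img < 0.0).any(axis=-1)            # True wherever any channel went negative
print('out-of-gamut pixels:', np.count_nonzero(out_of_gamut))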

So, let’s look at the final image:


Note the cyan band now appearing in the sky? It is identical to the black band visible in the red channel above. These pixels have colors which cannot be represented within our current color space. Besides that, note that the low-level noise (the noisestripes) has increased as well - the reason here is that the above matrix generally amplifies all color channels a little bit. This makes the noise a little bit stronger.

One way to circumvent the appearance of negative pixel values is to instruct the camera to work with a lowered saturation. This would not change the raw data, nor the red and blue gains, but the camRGB_to_sRGB-matrix written into the .dng-file. Clearly, you could ramp up saturation again in your editing program.

Another, better approach would be to map from the start not into sRGB but another, wider color gamut. This would however require a recalculation of the ccm’s contained in the tuning file. Not sure that I am going to go this way.

Yet another option discussed above is pretending that a much wider color gamut was used while reading the files into DaVinci. This again solves the banding issue, but again requires an adjustment of the saturation in post processing. And the primaries you are working with are a little bit wrong, so your colors might be off somewhat.

In closing, here’s the above final image without the out-of-limit markings:


I think that is quite usable for a final color grading.


Great summary Rolf @cpixip, thank you for taking the time and the effort.

The histogram scale is logarithmic; since gamma was not mentioned, I assume the images in the post are also gamma-corrected (not linear).

Any suggestion for a good value of the “Saturation” control to keep the capture within gamut?

Does the math allow for using the current scientific tuning file as a reference and calculating a tuning file for a wider color space? Understandably it may not be 100% accurate, yet in the context of film scanning - which will most certainly be color graded to individual taste - that would help circumvent the present out-of-gamut issues at capture.

Is working with the DaVinci Resolve project Color Space settings also an alternative?
I am not sure if, when setting the timeline to linear and selecting the output color space as 709 (or a wider one if desired), the Resolve color science would eliminate the out-of-gamut colors in the output.

Again, thanks for sharing your experience and know-how distilling this extensive topic.


Good catch! The images displayed were all subjected to a rec709 gamma-curve for display purposes. Otherwise the shadows would have been too dark, as usual with linear images. The image data itself stays linear throughout the whole processing.


At the moment, I have no good answers to all of that.

Decreasing saturation during capturing is, on second thought, actually nonsense. That setting will affect neither the raw data contained in the .dng nor the ccm in the .dng - which is the culprit in our case.

With respect to a new tuning file: the format has changed; I would need to understand the new format and rewrite my software. The usual color spectra I have do not really cover any larger color gamut (by the way: the cyan patch of a standard color checker is already out of the sRGB color gamut!). Otherwise, it should not be too challenging a problem - as DaVinci offers besides sRGB only P3 D60 (probably not worth the effort, as it is not so much wider than sRGB (besides marketing talk)) and Blackmagic Design, I would probably opt for the latter. I would need to find the specs of that color space and do the calculations… - we will see.

With respect to choosing alternative color settings in DaVinci: choosing P3 D60 or Blackmagic Design as ‘Color Space’ with footage captured for sRGB (the current tuning file yields this) gives, in my opinion, rather bad colors - the color science just does not match. Granted, you could color grade back to something viewable, but that’s not too attractive from my point of view - and you will never get close to the real colors of the film.

So, a lot of options, but no real solution yet…
