Pi HQ Camera vs DSLR image fidelity

Found a tool and a helpful video. This is a screenshot from the VESA test tool:

Turns out my office notebook supports HDR. The images on that wide gamut test page look quite nice with it, too. My desktop display only covers about 60% of DCI-P3, which is more than I expected, but not enough by current standards.

This video/workflow might be exactly what I was looking for with my limited setup:

From what I understand, this color space transformation makes sure that he’s working in the largest gamut the RAWs offer, but what’s visible in the end is Rec709.

3 Likes

Hmmm… this will be an interesting research topic. Here’s a raw file developed in my own software, which I designed to be as faithful as possible to the raw data. It’s from one of my test films, Kodachrome stock. Not too bright colors, but at least the skirt of the young lady in the background provides some solid saturation:

Now, looking at a CIE color chart of this image, one notices that the rec709 colors (the red triangle) cover most of the color gamut of this frame.

Here’s another example. This time an Agfachrome film:

and here we actually encounter out-of-gamut colors, in the deep blues of the sky:

Some readers might find this frame familiar, as it was already the subject of another discussion about color clipping.

Well, I am still looking into this, but it seems that most S8 footage features a color gamut that fits within rec709.
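To make that checkable numerically, one can project the pixel values of a developed frame to CIE xy chromaticities and count how many fall outside the rec709 triangle. Here’s a rough sketch of that idea (just an illustration, not the code of my analysis software; scene_XYZ is assumed to hold the frame in CIE XYZ):

    import numpy as np

    def fraction_outside_rec709(scene_XYZ):
        # scene_XYZ: float array of shape (H, W, 3) holding the frame in CIE XYZ (assumption)
        XYZ = scene_XYZ.reshape(-1, 3)
        xy  = XYZ[:, :2] / np.maximum(XYZ.sum(axis=1, keepdims=True), 1e-9)

        # rec709/sRGB primaries (red, green, blue) in CIE xy
        tri = np.array([[0.64, 0.33], [0.30, 0.60], [0.15, 0.06]])

        # barycentric coordinates of each chromaticity with respect to that triangle
        T   = np.array([tri[0] - tri[2], tri[1] - tri[2]]).T
        l12 = (xy - tri[2]) @ np.linalg.inv(T).T
        l3  = 1.0 - l12.sum(axis=1)

        inside = (l12[:, 0] >= 0) & (l12[:, 1] >= 0) & (l3 >= 0)
        return 1.0 - inside.mean()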

Nevertheless, it seems that with highly saturated footage you need to be careful with your DaVinci setup; otherwise clipping will happen there as well. I am still trying out various approaches, with no conclusive result yet.

Just to show you what the HQ sensor is capable of, here’s a real image

and the corresponding CIE-diagram:

Clearly, the color gamut of the HQ sensor exceeds the rec709/sRGB color gamut by far. (Note: at this point in time, I have no idea what causes the clipping noticeable along the red-magenta-blue line. This needs further investigation.)

Of course, a wider color gamut would yield more intense colors, better mimicking the actual scene photographed. The problem is: you might not be able to display that on the monitor you have at hand.

So if you have such a wide color gamut from the camera, but only rec709 on your display, you have various options:

  • reduce overall saturation, so that even extreme colors end up within your color gamut. All colors will move toward the white point (marked with a tiny red dot in the diagrams), but overall, the image will look washed out (at least by today’s oversaturated “standard”).
  • just let the colors outside of the display gamut clip. Most interestingly, it’s hard to notice (most of the time). Overall color impression stays vivid.
  • use some non-linear squashing function for the saturation level, shifting only the extreme colors slightly toward the white point. Very similar to the second option, but less noticeable in extreme cases. Can be realized in DaVinci with a combination of “Saturation” and “Col(or) Boost” settings (see below); a minimal sketch of such a squashing function follows right after this list.
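Here is a minimal sketch of what such a squashing function could look like in Python/OpenCV - just an illustration of the idea, not what DaVinci does internally; the knee value is an arbitrary choice:

    import cv2
    import numpy as np

    def squash_saturation(rgb, knee=0.8):
        # rgb: float32 image, display-referred, values 0..1
        hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV)   # float input: H in 0..360, S/V in 0..1
        h, s, v = cv2.split(hsv)
        over = np.maximum(s - knee, 0.0)
        # identity below the knee, smooth tanh roll-off above it (never quite reaches 1.0)
        s_new = np.where(s <= knee,
                         s,
                         knee + (1.0 - knee) * np.tanh(over / (1.0 - knee))).astype(s.dtype)
        return cv2.cvtColor(cv2.merge([h, s_new, v]), cv2.COLOR_HSV2RGB)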

If your footage is already within the color space of your display (like in my very first example), you can even increase the saturation a bit to meet today’s imaging standards. In fact, I routinely use a combination of “Saturation” and “Col(or) Boost” to get an image I am pleased with - of course, that no longer has anything to do with the real colors of the footage. The “Col Boost” control is more valuable here than the overall “Saturation”, by the way.

2 Likes

In all of my footage it has consistently been the reds and oranges that seem to be able to go quite a bit farther once you break out of the sRGB box. On a wide-gamut monitor you can toggle back and forth between color spaces and see the rather dramatic difference. It’s usually in clothing like bright red jackets or–my family had a tendency to drag the S8 camera out on Christmas day–bright red wrapping paper and other holiday decorations.

It’s kind of hard to describe the subjective visual experience. I’ve been looking at computer monitors my whole life but when these newer displays show you a color that traditionally hasn’t been representable on a screen it looks pretty cool. :sweat_smile:

My impression has also been that properly color-managed apps (again, Chrome’s video player) will pick up the slack here and do the clipping for you. So if your video is mastered for something wider than sRGB (and tagged correctly in the video’s metadata), the people with more capable displays will be able to enjoy the wider colors and the people without will see the best they can, clipped automatically.

(The same can’t be said for video players that have poor color management capabilities. There, you’d probably end up with the dreaded under/over-saturated view.)
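If you want to double-check how a delivered file is actually tagged, ffprobe can report the color metadata; a quick way to call it from Python (my own suggestion, and the file name is a placeholder):

    import subprocess

    result = subprocess.run(
        ['ffprobe', '-v', 'error', '-select_streams', 'v:0',
         '-show_entries', 'stream=color_primaries,color_transfer,color_space',
         '-of', 'default=noprint_wrappers=1', 'output.mp4'],   # placeholder file name
        capture_output=True, text=True)
    print(result.stdout)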

1 Like

(Not sure if this isn’t something for the Lighting Research thread.)

So far I’ve been using this teensy COB LED, because I needed something to fit into the projector and for some reason this was where an early round of research led me.


No info on the CRI/Ra value, though.

Its diameter is just about large enough for 8mm, but even the slightest misalignment leads to vignetting. I didn’t notice it as much with the projector, but now it bothers me, so I ordered a CRI >=95 bulb which should arrive today.
Ahead of that I made a test with a regular LED bulb I had lying around (Lepro 13.5 W E27 LED, 1521 lumen, 2700 K, CRI > 80), just to see whether it would be bright enough for my setup.

Attached are two captures (unfortunately 1 frame apart). The right one is with the COB LED using these sheets as a diffuser: ASLAN® RP 35 rear-projection samples from Foliencenter24 (I stuck white on clear for this).
The diffuser is rather close to the lamp (0.5-1 cm) because I wanted more light, and then about 3-4 cm from the film plane.

The left frame is captured with the Lepro lamp, no extra diffuser.

For both captures I tried getting a good histogram beforehand, all channels overlapping.
Lepro gains: 2.022, 2.797
COB gains: …well, forgot to write them down and can’t check now. But something around 3.0, 1.5
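(In case it helps: assuming those are the red/blue colour gains of Picamera2 - an assumption on my side - fixing them for a capture would look roughly like this:)

    from picamera2 import Picamera2

    picam2 = Picamera2()
    # disable auto white balance and fix the red/blue gains (values from the Lepro capture)
    picam2.set_controls({"AwbEnable": False, "ColourGains": (2.022, 2.797)})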

I have no idea whether one of these is objectively better. I’m just fascinated by how different they look. (This is the default development done by Windows).

Looking at them side by side I see now that the Lepro one has a slightly glowing film border, so there was probably a bit of flare going on. I just placed the big bulb behind the film - no extra casing.

@cpixip How did you retrieve those CIE charts?

… with my own software, based loosely on the Colour Python library.
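The gist of it is just scattering the pixel chromaticities of a developed frame over a CIE 1931 diagram. With the Colour library that can be done along these lines (a bare-bones sketch, not my actual plotting code; the file name is a placeholder):

    import cv2
    import numpy as np
    from colour.plotting import plot_RGB_chromaticities_in_chromaticity_diagram_CIE1931

    # load a developed frame (sRGB-encoded, scaled to 0..1); 'frame.png' is a placeholder
    img = cv2.cvtColor(cv2.imread('frame.png'), cv2.COLOR_BGR2RGB) / 255.0

    # subsample the pixels so the scatter plot stays manageable
    pixels = img.reshape(-1, 3)[::97]

    # scatter the chromaticities, with the sRGB/rec709 triangle drawn for reference
    plot_RGB_chromaticities_in_chromaticity_diagram_CIE1931(pixels, colourspace='sRGB')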

Well, here’s a shot at your raws. First the “cob led”:

Clearly, it’s slightly overexposed. Some tree branches in front of the sky are burned out. Anyway, here’s the CIE-diagram:

Nice and easy, not too saturated colors. You could ramp up saturation in the post, depending on taste.

Ok, here’s the result for your “lepro led”:

Noticeably darker, so a few more tree branches are visible. Interestingly, the nearly blank image areas of the sky get a light purple cast - this has been observed with Kodachrome film stock before. This little riddle is not yet solved…

Well, and here’s your CIE-diagram for that data:

For my taste, this looks shifted a little too much towards blue-magenta. Did you set white balance and exposure correctly on this capture? By the way, one of the data viewers on DaVinci’s “Color” page is actually just about the same CIE diagram I am using here - so you might want to load your images into DaVinci for a test. I have not checked that viewer in detail yet.

EDIT: Here’s a comparison of my CIE-plot (of the red lady image above):

with the DaVinci scope (“Scopes → CIE Chromaticity” tab):

It seems to show basically the same thing.

2 Likes

Thanks for the DaVinci hint, I missed that.
And it’s very interesting! That CIE diagram shows the output color space that is set in the timeline settings. However, I have no idea whether that is really a representation of the source images or whether the image is converted to that color space. I assume it is the former. If that’s the case there’s really a lot of color information in there. :open_mouth:

E.g. this is what I get for P3-DCI:

And this for DaVinci’s supposedly biggest color space, DaVinci Wide Gamut:

I cropped the image so that the film borders are not part of the color evaluation.

Likewise your display of pots:
P3-DCI


DaVinci WG

I think I need a new display… :smiling_face_with_tear:

Edit: I’m still a bit confused about what exactly is displayed in those diagrams. Are these the “true colors” of the source material, or is it just an interpretation for the respective color spaces? There is clipping, but shouldn’t there be more overlap…? Why are the greens in the “pots” scene so patchy in one diagram and full-green in the other? :thinking:

When I load your barn image into my DaVinci v19, I get a totally different result. Here’s the frame

and here’s the CIE-scope

Timeline setup was

and the “Camera Raw” tab used these settings:

At this point in time, I have no idea why our results are so different. My own CIE-diagram looks like this:

and seems to be similar to the DaVinci diagram I obtained - but your diagrams look quite different…? Again, no idea what is going on here.

Well, I won’t have internet or a computer for the next week, so I’ll have to wait a bit for this puzzle.

Good morning. Your CIE/timeline is set to Rec709. Try changing the Output Color Space in the Timeline Settings to P3 or DaVinci WG, respectively.
Also, my scopes are from DaVinci 18.6 - might be visualized a little differently there, too.

… well, something is not working correctly here. Maybe the latest DaVinci version has some bugs here which have not yet been noticed? Even with the following setting you were suggesting

I get the following CIE-display

which does look different from your result.

As my own CIE diagram indicates that all colors are well within the rec709 gamut, changing the output color space should not matter at all. That is what I am seeing, but it contrasts with your results.

Besides, if you compare the image content of your raw with my CIE diagram, it’s easy to find the source of the two saturated yellow-orange blobs noticeable there. These are the pumpkins (quite appropriate for this time of year that they show up so prominently :wink: ). Your “barn” CIE diagram with the Wide Gamut setting instead shows a prominent red-orange streak outside the color gamut the human visual system can perceive - but where is that color in your source image?

In my DaVinci CIE plots you can kind of make out these two yellow-orange pumpkin blobs, especially if you know where to look. This correlation between image content and diagram is largely missing in your diagrams. I do not yet understand why, as I am unable to recreate this on my machine (Windows PC/latest DaVinci v19).

Note also that I think the actual images you posted look slightly different from mine, more bluish than my version. Again, nothing I understand at this point in time.

EDIT: Ok, I found it. You are actually switching both the timeline and the output color space. That is, I need to change my timeline setting from above to

In other words, both timeline and output are set to “DaVinci WG/Intermediate”. If I do this, I can reproduce your kind of CIE displays - which, however, in my humble view, is of little use. If you activate the color picker and go to the pumpkins, it will mark, with a circle, the red streak in the Wide Gamut case. Clearly, this is not the color of a pumpkin, not in real life, and not on any video display.

(Side note: I noticed that the CIE diagrams do not reliably update when changing the “Timeline Settings”. Occasionally, I need to restart my DaVinci v19 to see a change.)

Well, there is certainly room for improvement here. But as long as the majority of my S8 material actually stays within rec709 on input, and most output devices render sRGB or rec709 reliably while occasionally failing on higher-quality material, I think I’ll stick with the following timeline settings for the time being, until someone can convince me that a better setting is available:

1 Like

OK, I found out what the reason for the difference between our results is.

My project was set up with the values and color space transformations from the youtube video I linked above. So I’m getting the above graphs with the Raw/Clip set to this (plus the color transformation from this to WG and then to Rec.709):

If I go with default settings, Rec.709 and sRGB, the graph looks the same as yours.

It also changes wildly depending on the adjustments I make in the Camera Raw tab, and I can’t get the P3 D60/Linear frame to look identical to the Rec.709/sRGB frame.

I think what is happening is an interpretation of the raw footage based on the selected color space. The question is whether there are actually more or fewer colors in the image that are “revealed” that way, or whether it’s just some kind of conversion/adaptation…? I am still confused and looking for more info.
If we had a camera with a defined color space, things would be easier to grasp, I guess.

@cpixip What is your CIE chart based on? Don’t you have to develop the dng first? :thinking:

It’s possible to get at least this far by calibrating with a known display chart.

I’ve written a little about that process (look at the giant triangle! :rofl: ) in this post. And then I eventually dug a little deeper over here. That second post includes all the links and command lines to turn one of those charts into an ICC (or Cube LUT) profile.

This gives you a nice, precise answer but the results are only valid when using the camera in the same lighting situation. Change the backlight and you’d need to recalibrate.
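For what it’s worth, applying the resulting .cube LUT to a capture in Python could look something like this (a hedged sketch using the colour-science package; the file names are placeholders):

    import cv2
    import colour

    lut = colour.read_LUT('hq_camera_calibration.cube')   # 3D LUT built from the chart (placeholder name)
    capture = cv2.cvtColor(cv2.imread('capture.png'), cv2.COLOR_BGR2RGB) / 255.0
    corrected = lut.apply(capture)                         # per-pixel lookup, RGB values in 0..1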

There’s also the question of metamerism. These calibration charts themselves are printed on a film stock with particular characteristics (and that only have so much possible color range). The closer that film stock is to what you’re actually scanning, the more useful the calibration. I got lucky: Wolf’s targets are printed on Ektachrome and half my footage is Ektachrome.

All of that said, there is a weakness here: at the end of all this effort you will be able to very accurately represent the now-faded colors in this old film. :sweat_smile: Any arbitrary changes you make to “restore” the color afterward make this whole process kind of academic since you’re not really using the calibration anymore. So, at best, I just treat it as a good starting point (mostly because I’m not using “normal” illumination).

2 Likes

Absolutely. Even more so, the “original” colors depend on the film stock used. A given scene would be rendered differently on Kodachrome vs Agfachrome or Fuji. Moreover, color balance will sometimes differ even between different 15 m rolls processed by the same lab. In the end, you will color-grade your footage, and then the gamut of your sensor does not matter much. It will in any case be wider than what your final video will feature. Have a look at my CIE plots above: the data points extend much further than any P3 variant, more like rec2020 or so.

Indeed. I do this development with my own Python-based software. This is the only way that I can be sure that the .dng is developed correctly, according to the books.

@cpixip Rolf, I am finally getting up to speed in python and the camera, and your postings have been an invaluable reference, thanks again for sharing your knowledge and work with the HQ sensor.

Is there a 101-level reference for a newbie with limited math skills - like me - on how to go from the 12-bit raw pixel values to a suitable color space?

I figured that the 12-bit unpacking is taken care of with something like:

    captured_image_raw_array = self.pihqcam.capture_array("raw").view(np.uint16) #16bit per pixel

The captured_image_raw_array is 4056 (actually 4032) by 3040, with 16 bits per pixel, holding the captured raw 12 bits in the lower 12 bits of the 16-bit unsigned integer. For clarity, these values are not yet debayered.

For my question above, setting aside the debayering, all I care about at this time is what to do with a raw R, G, or B value in order to bring the image’s color space to something adequate.

Thanks

1 Like

@PM490 Hi Pablo, there is a nice walk-through with respect to the steps involved at Jack Hogan’s blog. He goes into much more detail. However, here’s a short Python script which shows some possibilities. You should download the following raw file, “xp24_full.dng”, for the script. It was captured with the HQ sensor.

Any .dng-file is in reality a special .tif-file with some additional tags. In the following script, I show you two ways to access the data hidden in the .dng-file: either rawpy or exifread. rawpy is in fact an interface to libraw and could be used to do it all at once - that is, develop your raw file directly into a nice image. I am using rawpy here only to access certain original data and am doing the processing myself. You could instead use exifread or your own software to do so.

Anyway, here’s the full script, which should work out of the box once you have installed the necessary libraries: cv2, numpy, rawpy and exifread. I will go through each part of the code below:

import cv2
import numpy as np

import rawpy
import exifread

path = r'G:\xp24_full.dng'

# utility fct for zooming in
def showCutOut(title,data,scale=1,norm=True):
    
    x0   = 1100
    y0   = 1600
    size = 1024
    
    x0   //= scale
    y0   //= scale
    size //= scale
    
    cutOut = data[ y0: y0+size, x0: x0+size].astype(np.float32)
    if norm:
        cutOut = (0xffff*(cutOut - cutOut.min())/(cutOut.max() - cutOut.min())).astype(np.uint16)
    
    # minor cosmetic cleanup (single-channel data, e.g. the raw Bayer plane, is shown as-is)
    img_bgr = cv2.cvtColor(cutOut, cv2.COLOR_RGB2BGR) if cutOut.ndim == 3 else cutOut
    cv2.imshow(title, img_bgr)
    cv2.waitKey(500)

# opening the raw image file
rawfile = rawpy.imread(path)    

# get the raw bayer 
bayer_raw = rawfile.raw_image_visible

# check the basic data we got
print('DataType: ',bayer_raw.dtype)
print('Shape   : ',bayer_raw.shape)
print('Minimum : ',bayer_raw.min())
print('Maximum : ',bayer_raw.max())

# grab an interesting part out of the raw data and show it
showCutOut("Sensor Data",bayer_raw)

# quick-and-dirty debayer
if rawfile.raw_pattern[0][0]==2:

    # this is for the HQ camera
    red    =  bayer_raw[1::2, 1::2].astype(np.float32)                                 # Red
    green1 =  bayer_raw[0::2, 1::2].astype(np.float32)                                 # Gr/Green1
    green2 =  bayer_raw[1::2, 0::2].astype(np.float32)                                 # Gb/Green2
    blue   =  bayer_raw[0::2, 0::2].astype(np.float32)                                 # Blue

elif rawfile.raw_pattern[0][0]==0:

    # ... and this one for the Canon 70D, IXUS 110 IS, Canon EOS 1100D, Nikon D850
    red    =  bayer_raw[0::2, 0::2].astype(np.float32)                                 # Red
    green1 =  bayer_raw[0::2, 1::2].astype(np.float32)                                 # Gr/Green1
    green2 =  bayer_raw[1::2, 0::2].astype(np.float32)                                 # Gb/Green2
    blue   =  bayer_raw[1::2, 1::2].astype(np.float32)                                 # Blue

elif rawfile.raw_pattern[0][0]==1:

    # ... and this one for the Sony
    red    =  bayer_raw[0::2, 1::2].astype(np.float32)                                 # red
    green1 =  bayer_raw[0::2, 0::2].astype(np.float32)                                 # Gr/Green1
    green2 =  bayer_raw[1::2, 1::2].astype(np.float32)                                 # Gb/Green2
    blue   =  bayer_raw[1::2, 0::2].astype(np.float32)                                 # blue

else: 
    print('Unknown filter array encountered!!')

# creating the raw RGB
camera_raw_RGB = np.dstack( [1.0*red,(1.0*green1+green2)/2,1.0*blue] )
showCutOut("Camera Raw",camera_raw_RGB,2)

# getting the black- and whitelevels
blacklevel   = np.average(rawfile.black_level_per_channel)
whitelevel   = rawfile.white_level

# get the WP
whitebalance = rawfile.camera_whitebalance[0:-1]
whitePoint   = whitebalance / np.amin(whitebalance)

# transforming raw image to normalized raw (with appropriate clipping)
camera_raw_RGB_normalized = np.clip( ( camera_raw_RGB - blacklevel ) / (whitelevel-blacklevel), 0.0, 1.0/whitePoint )

# applying the "As Shot Neutral" whitepoint
scene      = camera_raw_RGB_normalized @ np.diag(whitePoint).T
showCutOut("Scene",scene,2)

# get the ccm from camera
with open(path,'br') as f:
    tags = exifread.process_file(f)
    colorMatrix1 = np.zeros([3,3])
    if 'Image Tag 0xC621' in tags.keys():    
        index = 0
        for x in range(0,3):
            for y in range(0,3):
                colorMatrix1[x][y] = float(tags['Image Tag 0xC621'].values[index])
                index = index +1 

# http://www.brucelindbloom.com/index.html?Eqn_RGB_XYZ_Matrix.html sRGB D65 -> XYZ 
wp_D50_to_D65  = [[0.4124564,  0.3575761,  0.1804375],
                  [0.2126729,  0.7151522,  0.0721750],
                  [0.0193339,  0.1191920,  0.9503041]]

# the matrix taking us from sRGB(linear) to camera RGB, white points adjusting (camera is D50, sRGB is D65)
sRGB_to_camRGB_wb   = colorMatrix1 @ wp_D50_to_D65

# normalizing the matrix in order to avoid color shifts - important!!!
colorMatrix_wb_mult = np.sum(sRGB_to_camRGB_wb, axis=1)  
sRGB_to_camRGB      = sRGB_to_camRGB_wb / colorMatrix_wb_mult[:, None]
    
# finally solving for the inverted matrix, as we want to map from camera RGB to sRGB(linear)
camRGB_to_sRGB      = np.linalg.pinv(sRGB_to_camRGB)     
    
# and we obtain the image in linear RGB
img = scene @ camRGB_to_sRGB.T
showCutOut("Image",img,2,norm=False)

# finally, apply the gamma-curve to the image
rec709 = img
i = rec709 < 0.0031308
j = np.logical_not(i)
rec709[i] = 323 / 25  * rec709[i]
rec709[j] = 211 / 200 * rec709[j] ** (5 / 12) - 11 / 200
showCutOut("rec709",rec709,2,norm=False)

Ok, let’s start. In the beginning, the necessary libs are loaded and the path to our source file is specified:

import cv2
import numpy as np

import rawpy
import exifread

path = r'G:\xp24_full.dng'

Obviously, the latter has to be adapted to your needs. A tiny display routine follows - it’s just a quick hack to show the different processing stages:

# utility fct for zooming in
def showCutOut(title,data,scale=1,norm=True):
    
    x0   = 1100
    y0   = 1600
    size = 1024
    
    x0   //= scale
    y0   //= scale
    size //= scale
    
    cutOut = data[ y0: y0+size, x0: x0+size].astype(np.float32)
    if norm:
        cutOut = (0xffff*(cutOut - cutOut.min())/(cutOut.max() - cutOut.min())).astype(np.uint16)
    
    # minor cosmetic cleanup (single-channel data, e.g. the raw Bayer plane, is shown as-is)
    img_bgr = cv2.cvtColor(cutOut, cv2.COLOR_RGB2BGR) if cutOut.ndim == 3 else cutOut
    cv2.imshow(title, img_bgr)
    cv2.waitKey(500)

I am using different means of displaying my data. Anyway, opening a raw file with Python is as easy as

# opening the raw image file
rawfile = rawpy.imread(path)    

# get the raw bayer 
bayer_raw = rawfile.raw_image_visible

Note that at that point, you’d be in a position to develop the raw with all the fancy stuff libraw/rawpy is offering, but that is not our aim here. Let’s see what data we actually got:

# check the basic data we got
print('DataType: ',bayer_raw.dtype)
print('Shape   : ',bayer_raw.shape)
print('Minimum : ',bayer_raw.min())
print('Maximum : ',bayer_raw.max())

# grab an interesting part out of the raw data and show it
showCutOut("Sensor Data",bayer_raw)

You should see something like that:

That is the raw Bayer-pattern. If you zoom in appropriately, you are going to see different gray patterns in each of the color patches. That’s how your sensor is encoding different colors.

Debayering is an art, and libraw/rawpy can do this for you. As I am not interested in full-resolution imagery here (I develop on my own only for analysis purposes), I employ the simplest debayering possible:

# quick-and-dirty debayer
if rawfile.raw_pattern[0][0]==2:

    # this is for the HQ camera
    red    =  bayer_raw[1::2, 1::2].astype(np.float32)                                 # Red
    green1 =  bayer_raw[0::2, 1::2].astype(np.float32)                                 # Gr/Green1
    green2 =  bayer_raw[1::2, 0::2].astype(np.float32)                                 # Gb/Green2
    blue   =  bayer_raw[0::2, 0::2].astype(np.float32)                                 # Blue

elif rawfile.raw_pattern[0][0]==0:

    # ... and this one for the Canon 70D, IXUS 110 IS, Canon EOS 1100D, Nikon D850
    red    =  bayer_raw[0::2, 0::2].astype(np.float32)                                 # Red
    green1 =  bayer_raw[0::2, 1::2].astype(np.float32)                                 # Gr/Green1
    green2 =  bayer_raw[1::2, 0::2].astype(np.float32)                                 # Gb/Green2
    blue   =  bayer_raw[1::2, 1::2].astype(np.float32)                                 # Blue

elif rawfile.raw_pattern[0][0]==1:

    # ... and this one for the Sony
    red    =  bayer_raw[0::2, 1::2].astype(np.float32)                                 # red
    green1 =  bayer_raw[0::2, 0::2].astype(np.float32)                                 # Gr/Green1
    green2 =  bayer_raw[1::2, 1::2].astype(np.float32)                                 # Gb/Green2
    blue   =  bayer_raw[1::2, 0::2].astype(np.float32)                                 # blue

else: 
    print('Unknown filter array encountered!!')

# creating the raw RGB
camera_raw_RGB = np.dstack( [1.0*red,(1.0*green1+green2)/2,1.0*blue] )
showCutOut("Camera Raw",camera_raw_RGB,2)

This code section looks at the “raw_pattern” spec in rawpy’s data structure and resamples the full Bayer pattern into separate color channels, half the size of the original. It then averages the two green channels and stacks everything together into a single RGB image. This image still looks weird,


showing the typical green cast of an unprocessed raw.

The next processing step is to adjust for the black and white levels in the data. Never assume you know which data range is used in your raw - you might be surprised! (For starters, an RP5 stores raw data differently from an RP4, for example.)

# getting the black- and whitelevels
blacklevel   = np.average(rawfile.black_level_per_channel)
whitelevel   = rawfile.white_level

# get the WP
whitebalance = rawfile.camera_whitebalance[0:-1]
whitePoint   = whitebalance / np.amin(whitebalance)

# transforming raw image to normalized raw (with appropriate clipping)
camera_raw_RGB_normalized = np.clip( ( camera_raw_RGB - blacklevel ) / (whitelevel-blacklevel), 0.0, 1.0/whitePoint )

# applying the "As Shot Neutral" whitepoint
scene      = camera_raw_RGB_normalized @ np.diag(whitePoint).T
showCutOut("Scene",scene,2)

There are several important things happening here. Note the np.clip operation which happens first, using the whitePoint of the raw. This ensures that you do not end up with magenta highlights. More details can be read in Jack Hogan’s blog, linked above. The second step (applying the whitebalance) actually recovers something usable - but we are still far from where we will end up! Here’s the intermediate result:


This is the image in the camera’s RGB space. It is a manufacturer-designed color space and needs to be mapped into an RGB space which our display can understand. We’re not there yet.

For this, we have to recover the only color matrix available to us in raw HQ sensor data. I am using here, just for the fun of it, the exifread library. As you will notice, it’s much less convenient than the rawpy approach - for starters, you need to know the image tag you are after:

# get the ccm from camera
with open(path,'br') as f:
    tags = exifread.process_file(f)
    colorMatrix1 = np.zeros([3,3])
    if 'Image Tag 0xC621' in tags.keys():    
        index = 0
        for x in range(0,3):
            for y in range(0,3):
                colorMatrix1[x][y] = float(tags['Image Tag 0xC621'].values[index])
                index = index +1

Anyway, once we have that matrix, we need to do some additional matrix calculations. There are two main reasons: first, the matrix we get from the .dng maps into the wrong white point (D50), so we need to throw in an adjustment matrix. Secondly, the mapping is the wrong way around, so we need to invert it. In addition, we need to take care that the transformation we are about to calculate is properly normalized. All this is handled here:

# http://www.brucelindbloom.com/index.html?Eqn_RGB_XYZ_Matrix.html sRGB D65 -> XYZ 
wp_D50_to_D65  = [[0.4124564,  0.3575761,  0.1804375],
                  [0.2126729,  0.7151522,  0.0721750],
                  [0.0193339,  0.1191920,  0.9503041]]

# the matrix taking us from sRGB(linear) to camera RGB, white points adjusting (camera is D50, sRGB is D65)
sRGB_to_camRGB_wb   = colorMatrix1 @ wp_D50_to_D65

# normalizing the matrix in order to avoid color shifts - important!!!
colorMatrix_wb_mult = np.sum(sRGB_to_camRGB_wb, axis=1)  
sRGB_to_camRGB      = sRGB_to_camRGB_wb / colorMatrix_wb_mult[:, None]
    
# finally solving for the inverted matrix, as we want to map from camera RGB to sRGB(linear)
camRGB_to_sRGB      = np.linalg.pinv(sRGB_to_camRGB)     
    
# and we obtain the image in linear RGB
img = scene @ camRGB_to_sRGB.T
showCutOut("Image",img,2,norm=False)

We should end up with the following image:


Well, that thing looks awful, right? What is happening? Now, this is the image data with linear RGB values, and that is not what your display is actually expecting. In a final step, we need to apply the appropriate gamma curve to this data:

# finally, apply the gamma-curve to the image
rec709 = img
i = rec709 < 0.0031308
j = np.logical_not(i)
rec709[i] = 323 / 25  * rec709[i]
rec709[j] = 211 / 200 * rec709[j] ** (5 / 12) - 11 / 200
showCutOut("rec709",rec709,2,norm=False)

and we end up with this:


– which seems to be ok.

Whether the pipeline I described above is a correct pipeline - frankly, I do not know. I came up with this way of processing years ago and never bothered to check it again, as I was getting results similar to other raw development programs and to DaVinci. Besides, the script above was assembled from old work and might contain errors I overlooked - but it should get you started on your own experiments. Again, Jack Hogan’s blog is a very good resource I highly recommend.

6 Likes

@cpixip At least we understand the different steps - thank you for these explanations. Of course I tried this script and made some comparisons with RawTherapee and the image captured as JPEG, but first I have two questions:

  • I notice that the image is resized by a factor of 2
  • What is, theoretically, the interval of the floating-point values at the end of the processing? To convert to 8-bit RGB, should we normalize or clip?

Have you also seen this article?

1 Like

Well, I mentioned that:

…and resamples the full Bayer-pattern into separate color channels, half the size of the original.

That is about the simplest debayering approach you can use.

Again, my script is the bare minimum of what you need to do - a guideline, nothing more. You could read all of the data I use from the file (for which I used rawpy) solely with exifread, or even with something you cooked up yourself, dropping the need for external libs altogether.
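For example, the black level, white level and “As Shot Neutral” white balance could be pulled with exifread alone, assuming exifread exposes those DNG tags the same way it exposes 0xC621 above:

    import exifread

    # 'path' as defined at the top of the script above
    with open(path, 'rb') as f:
        tags = exifread.process_file(f)

    blacklevel = float(tags['Image Tag 0xC61A'].values[0])            # DNG BlackLevel (first entry)
    whitelevel = float(tags['Image Tag 0xC61D'].values[0])            # DNG WhiteLevel
    neutral    = [float(v) for v in tags['Image Tag 0xC628'].values]  # DNG AsShotNeutral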

The article you linked to (yes, I read that article some years ago) describes pretty much the same thing as my script above, so I might not be too far off…

Things like “highlight” recovery, as well as a lot of the other fancy things raw converters do, are missing from my script on purpose.

A very good question. Theoretically, the floating-point interval should be between 0.0 and 1.0, corresponding to 0x00 and 0xff in the final 8-bit RGB. If you test-drive the image I supplied for your convenience, you will discover that a lot of image values end up outside of this range. In other words: strictly speaking, this raw image is not displayable in sRGB/rec709. That’s why a lot of raw converters have their own special recipes to handle such a situation (mapping the huge color gamut nature and cameras supply into the smaller color gamut of displays). But again, that’s outside the scope of my little script, which deals only with the basics.
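As a concrete illustration (my own addition, appended at the end of the script above and reusing its variables), you can count the out-of-range values and then, as the simplest of those recipes, clip to 0..1 for an 8-bit export:

    # how many samples fall outside the displayable 0..1 range?
    outside = np.logical_or(rec709 < 0.0, rec709 > 1.0)
    print('fraction of out-of-range samples:', outside.mean())

    # simplest possible handling for an 8-bit export: clip to 0..1 and scale
    out8 = (np.clip(rec709, 0.0, 1.0) * 255.0 + 0.5).astype(np.uint8)
    cv2.imwrite('xp24_developed.png', cv2.cvtColor(out8, cv2.COLOR_RGB2BGR))   # placeholder output name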

In general, and for our purposes, I would not try to develop raw files with your own software. The raw converter built into DaVinci is powerful enough to do the heavy lifting on its own; there is no need for an external converter. Just select (as described above) the “Clip” processing mode and be sure to check the “Recover Highlight” box. You will get all the image data into DaVinci, even the non-valid colors. And the DaVinci “Color” page gives you a wide variety of dials to adapt your material to the color gamut you are delivering.

1 Like

Thanks for these explanations.
If I now develop the raw with rawpy:

  import rawpy

  raw = rawpy.imread(path)
  image = raw.raw_image       # the raw Bayer data (not used further here)
  rgb = raw.postprocess()     # full default development by libraw

The result is correct, similar to that of RawTherapee.
But if I then compare it to the JPEG capture, the difference at the pixel level is significant.
I don’t know which libcamera algorithm produces this result.

1 Like

@dgalland - it’s not clear to me what is shown in your last post. What is the left image, what is the right image? Clearly, the resolution on the left is much lower than on the right. Also, the right image tends to show burned-out highlights in brighter image areas, whereas the left one is in general a little darker, with no burn-outs.

In case you are comparing the output of my little script with the full-blown processing of rawpy.postprocess - don’t. These are two very different things, and I hope I made that clear in my previous posts.

Well, libcamera basically does the same things to the raw data as described in my simple script, finally arriving at an sRGB/rec709 image which is stored as .jpeg or .png, or whatever. So libcamera’s JPEG should be very close in appearance to the image “developed” by the script above.

Note: development of a raw image into something like an sRGB/rec709 image needs the color matrix and other data from the tuning file you are using with the HQ sensor. So the results will differ depending on whether you use the standard tuning file or the scientific tuning file during capture.
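(For reference, and hedged as my reading of the Picamera2 documentation rather than something from this thread: selecting the alternative tuning file for the HQ sensor looks roughly like this.)

    from picamera2 import Picamera2

    # load the scientific tuning for the imx477 instead of the default one
    tuning = Picamera2.load_tuning_file("imx477_scientific.json")
    picam2 = Picamera2(tuning=tuning)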

Sorry, my post was not clear. On the left is the DNG image processed here by rawpy and finally converted into an 8-bit JPEG; on the right is the JPEG image processed by the Pi (ISP and libcamera). Same tuning file, same resolution, same shutter, same OpenCV JPEG encode with the same quality (I hope I’m not mistaken).

What puzzles me is the difference at the pixel level. There is indeed a difference in global contrast, but there also seems to be an increase in local contrast in the details. It’s not easy to say which is the “better” image.
Note:
I have disabled the libcamera sharpening algorithm.
The result is no different if the DNG image is processed by Photoshop or RawTherapee.
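A simple way to put a number on that pixel-level difference (my own suggestion; the file names are placeholders for the two captures):

    import cv2
    import numpy as np

    a = cv2.imread('dng_developed.jpg').astype(np.float32)      # rawpy development, saved as JPEG (placeholder)
    b = cv2.imread('libcamera_capture.jpg').astype(np.float32)  # in-camera libcamera JPEG (placeholder)

    diff = cv2.absdiff(a, b)
    print('mean absolute difference per channel (B, G, R):', diff.reshape(-1, 3).mean(axis=0))

    # amplified difference image for visual inspection
    cv2.imwrite('difference_x4.png', np.clip(diff * 4.0, 0, 255).astype(np.uint8))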