Alpha-Release of new picamera-lib: "picamera2" ... - and beyond

I think I understand. In the images above you didn’t rely on the AWB for temperature but instead used the external profile and a manually selected neutral point?

So using raw/DNG we can use our own pipeline, tailored for scanning rather than general photography. But further up in the thread you said that:

Has anything changed in the “trust department” after seeing the post-processing results regarding colour accuracy?

When I’m done with the film transport and motor, I’ll turn my attention to the light. My initial plan is to use an incandescent light bulb at around 3000 K to mimic the old projection bulb. I’ll post the image results, heat situation and spectrum report in the backlight thread then.

Yes.

Ok, I was a little imprecise here. Libcamera injects two pieces of data into the DNG-file that I would not trust: the ‘color matrix 1’ and the scene’s estimated color temperature. Both are calculated on the basis of the existing tuning file and are generally not very good. That’s why I described above how to avoid them. First, by relying on the DCP input profile of Jack Hogan, the color matrix libcamera embeds in the DNG-file is simply dropped. Second, by setting the white balance manually, one gets rid of libcamera’s estimate of the scene’s color temperature. If you handle your raw images in this way, no trace of libcamera’s processing should be left.
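
To illustrate the white-balance half of this (the DCP part happens in your raw converter, e.g. RawTherapee), here is a minimal Python sketch using the rawpy package - the file name and gain values are made-up examples, not a recommendation:

import rawpy

# minimal sketch: develop the DNG while ignoring the white balance
# estimate libcamera wrote into it (file name/gains are examples only)
with rawpy.imread("frame_0001.dng") as raw:
    rgb = raw.postprocess(
        use_camera_wb=False,            # drop the "AsShotNeutral" estimate
        user_wb=[2.1, 1.0, 1.6, 1.0],   # manually chosen R, G, B, G gains
        no_auto_bright=True,            # keep intensities predictable
        output_bps=16,                  # 16-bit output for further processing
    )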

Under DCP input profile processing, that would in effect result in the use of the “color matrix 1” found in the DCP profiles. I am not sure about the quality of this one. Results for color matrix 2, which corresponds to D65 illumination, are shown above and seem to be acceptable (even though Jack Hogan did not use D65 illumination to create this matrix).

Interesting. So the custom DCP profile is just done for D65 and not StdA? Couldn’t custom profiles be created with that light in mind as well?
But that might also be a good way to go, to have a D65 light source. From my reading it outputs a pretty nice spectral response. Does it matter spectrum-wise what type of light a D65 light source is, whether it’s fluorescent or LED?

No, actually in general there are always two pairs of matrices in a DCP/DNG. One is for StdA, one for D65. Using the estimated color temperature, the actual matrix used in reading the file is interpolated from these two base matrices. However, if the color temperature is close to StdA (which it would be if you are using a tungsten lamp), the matrix used will just be the base matrix for StdA. Usually, “ColorMatrix1” corresponds to StdA, and “ColorMatrix2” to D65. To verify that this is indeed the case, one has to check that the tag “CalibrationIlluminant1” is indeed StdA. In Jack Hogan’s DCPs that is the case. There are two datasets, one for StdA, the other for D65. I used D65, as my test images were taken under that condition. But I have not taken images under StdA illumination, so I do not know how good the corresponding matrix is.
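
For reference, a small sketch of how a DNG reader interpolates between the two base matrices - per the DNG spec, the weighting is linear in inverse color temperature. The matrices here are placeholders, not real calibration data:

import numpy as np

T_STDA, T_D65 = 2856.0, 6504.0   # CCTs of the two calibration illuminants

def interpolated_matrix(cct, matrix_stda, matrix_d65):
    # outside the calibrated range, the nearer base matrix is used as-is
    cct = min(max(cct, T_STDA), T_D65)
    # weight is linear in 1/CCT: w = 1 at StdA, w = 0 at D65
    w = (1.0 / cct - 1.0 / T_D65) / (1.0 / T_STDA - 1.0 / T_D65)
    return w * np.asarray(matrix_stda) + (1.0 - w) * np.asarray(matrix_d65)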

Yes, of course.

Definitely. Using fluorescent lights is a bad idea. Also, not all white-light LEDs are created equal. D65 is a very specific spectrum - look it up on Wikipedia, for example. Fluorescent lamps usually feature prominent spikes in their spectra, and you do not want that. White-light LEDs also use fluorescent material, driven by a narrow-band blue LED, so the cheap ones are rather spiky as well. The better ones come closer to D65. You can find some discussions and hints here on the forum. Here’s an example of different spectra: our D65, the spectrum of a projector (Kinoton 75P) and the (bad) spectrum of my 3-LED setup.


I see. I thought that you couldn’t comment on the accuracy of the StdA matrix because it defaulted to the one provided by picamera, but now I see it is just because you haven’t tested it.

I initially thought that regardless of the lamp type, it needed to be close to the D65 spectrum to be called that. But yes, the difficulty of locating a cheap LED without the spikes is what made me consider an incandescent light. But I’ll try to keep that conversation to that specific thread and focus on picamera here.

I think I am finishing my work on libcamera/picamera2 for the moment. As we have learned, the current (Sept 2022) tuning file shipped for the IMX477 sensor has some irritating data entries, and the results one can obtain (as JPG output, for example) with libcamera/picamera2 are not as good as they could be.

Note that even libcamera/picamera2’s raw output shows deficiencies when one bases the raw processing on the information libcamera writes into the DNG-file. Specifically, do not use the “color matrix” embedded in the DNG-file, nor the “AsShotNeutral” tag for white-balancing.

Here’s a short summary of my findings:

  • The libcamera approach uses by default the ALSC module (Automatic Lens Shading Correction) in its processing pipeline. There are ALSC tables in the tuning file for the IMX477, even though the unit is sold without a lens! I was not able to obtain information about which lens was used during calibration, but it was almost certainly not the lens you are using in your setup.
  • Central to the whole libcamera processing pipeline is the estimation of the correlated color temperature (cct) by the AWB module (automatic white balance). This module does not perform reliably - manually setting the red and blue gains is advised (see the sketch after this list). It seems that in this case the cct is fixed at a value of 4500 K; all other modules will operate at that cct, regardless of whether this is correct or not.
  • The compromise color matrices (ccm) embedded in the tuning file show odd variations in their coefficients with respect to cct. Most probably, light sources with quite different spectra were used when taking the calibration images for the tuning file. This leads to non-optimal behavior in terms of color error. Some more details about this can be found here.
  • The gamma curve applied to JPG output data features too strong a contrast. While this helps in hiding the color issues, overall it results in a loss of fidelity.
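
As a concrete example for the AWB point above, here is a minimal sketch that bypasses the AWB module entirely by fixing the red and blue gains - the gain values are placeholders, since they depend on your light source:

from picamera2 import Picamera2

picam2 = Picamera2()
picam2.configure(picam2.create_still_configuration())
# switch off the AWB module and fix the red/blue gains manually
picam2.set_controls({'AwbEnable': False, 'ColourGains': (2.3, 1.7)})
picam2.start()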

I think at this point in time, the best option within the libcamera/picamera2 context is to capture raw DNG-files and base the processing on these, together with the DCP input profiles created by Jack Hogan and manual white-balancing.

1 Like

Short update: picamera2 is now in beta state, some things including syntax have changed from the alpha-release discussed in this thread. Here are some comments/updates:

  • Since libcamera uses the CPU for a lot of tasks instead of the GPU (as the old approach did), it is advisable to use nothing less than an RP4. You can get picamera2 functional on lesser hardware, but it’s no fun.
  • Both picamera2 and libcamera (which is actually doing the work) are still in a state of flux. So be prepared for things to break. Be careful with updates, as sometimes the different components of the whole processing pipeline do not match. For example, with a new 32-bit OS install, I needed to run rpi-update in order to get an HQ camera connected to an RP3 recognized by the system.
  • The manual of picamera2 has noticeably improved, as well as the example software available in the repository. Have a look.
  • The tuning files for camera sensors (an essential part of libcamera/picamera2) have increased in number and are now labeled “version 2”. This version is incompatible with older versions of libcamera (and vice versa). The data in the version 2 files seems, however, to be identical to the old versions.
  • The number of tuning files has increased, but some of them are odd, to say the least. There is no information available about the origin of the files or how they were created.
  • Specifically, the HQ camera tuning file delivers substandard images. Issues discovered so far: imprecise color temperature estimation, superfluous lens shading correction, wrong color matrices which lead to large color fluctuations with small changes in color temperature, and finally a wrong “gamma” curve.
  • [Addition 16.11.22]: at this point in time, there seems to be a bug in the IMX477 driver when setting the exposure values. While this might not be relevant if you perform that setting once at the beginning of a series of captures, it will lead to tiny intensity variations (flicker) if you change exposure times often (needed, for example, in HDR work). Here’s an example of this: five different exposures were captured rapidly one after the other; note that the two longest exposure curves show tiny intensity spikes. They should vary as smoothly as the other curves:

4 Likes

It’s been a while since the last post in this thread and I thought it is time for some further update.

Both libcamera and picamera2 have evolved; most notably, the manual of picamera2 has seen major improvements and I recommend consulting this document. Also, the example section has been greatly expanded - have a look. There is also a bunch of complete applications available - great for getting ideas on how to handle things.

Libcamera, and in turn picamera2, still has some inconsistencies in handling camera parameters with respect to film scanning. Specifically, some camera parameters which influence the behaviour of the camera are actually set in the tuning file and are therefore out of the program’s control as soon as the camera is created.

The tuning file is a .json file and can be edited, for example, with Notepad++. Be careful when editing and keep a backup copy of the file, as mistakes might result in the camera not starting at all.

There are currently two cameras available from Raspberry Pi which can be considered suitable for film scanning: the HQ camera and the recently introduced GS (Global Shutter) camera. The standard tuning files for both cameras feature an ALSC (Adaptive Lens Shading Correction) section - for film scanning, you want to get rid of this module. There are two reasons for that: first, the lens shading data is for an unknown lens and will not match your lens. Furthermore, the ALSC will introduce color shifts and vignetting, varying rather unpredictably with your material - this is probably something you want to avoid as well.

There are two ways to change the behaviour of the ALSC module. The first, rather bold one is simply to delete the whole section from the tuning file. Of course, in this case you lose the ability to do any lens shading correction. In my case, using a Schneider Componon-S 50 mm (calculated for the 35 mm format) with the HQ camera, it is utter nonsense to even consider lens shading compensation, so that’s what I am doing.

In case you are using a less capable lens, you might want to keep the lens shading algorithm running. But you certainly want to get rid of the adaptive part. This can be achieved with the following code:

from picamera2 import Picamera2

# load the default tuning file
tuning = Picamera2.load_tuning_file("imx477.json")

# find the ALSC section
alsc = Picamera2.find_tuning_algo(tuning, "rpi.alsc")

# switch off adaptive behaviour by setting n_iter to zero
alsc['n_iter'] = 0

# create the camera with the modified tuning
picam2 = Picamera2(tuning=tuning)

Note again that this procedure does not switch off the ALSC - only the adaptive behaviour is switched off. So you want to make sure that the lens shading data in the tuning file corresponds to your lens/optical setup.

The above approach is actually a generic way to modify any entry in the tuning file before creating the Picamera2 object.
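
For example, the same two helpers let you inspect any other algorithm section before the camera is created - here just printing which entries the AWB section offers (the section names are the ones used in the tuning file itself):

from picamera2 import Picamera2

tuning = Picamera2.load_tuning_file("imx477.json")

# inspect the AWB section to see which entries could be modified
awb = Picamera2.find_tuning_algo(tuning, "rpi.awb")
print(awb.keys())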

We will use the same approach to disable a second adaptive algorithm hidden in libcamera’s contrast module. This second adaptive algorithm could be the reason for the “spiky” intensity curves in my post above; it causes noticeable flickering in scanned frames. Anyway, here’s how to switch this algorithm off with picamera2 utilities:

from picamera2 import Picamera2

# load the default tuning file
tuning = Picamera2.load_tuning_file("imx477.json")

# find the contrast section
contrast = Picamera2.find_tuning_algo(tuning, "rpi.contrast")

# switch off adaptive behaviour by setting ce_enable to zero
contrast['ce_enable'] = 0

# create the camera with the modified tuning
picam2 = Picamera2(tuning=tuning)

Of course, both settings should be handled in a combined fashion, like so:

from picamera2 import Picamera2

# load the default tuning file
tuning = Picamera2.load_tuning_file("imx477.json")

# find the ALSC section
alsc = Picamera2.find_tuning_algo(tuning, "rpi.alsc")

# switch off adaptive behaviour by setting n_iter to zero
alsc['n_iter'] = 0

# find the contrast section
contrast = Picamera2.find_tuning_algo(tuning, "rpi.contrast")

# switch off adaptive behaviour by setting ce_enable to zero
contrast['ce_enable'] = 0

# create the camera with the modified tuning
picam2 = Picamera2(tuning=tuning)

Once the Picamera2 object is created, some more work has to be done before the camera can finally be started. Here’s my current code:

from picamera2 import Picamera2
import libcamera

# raw modes for the HQ camera
rawModes = [{"size": (1332,  990), "format": "SRGGB10"},
            {"size": (2028, 1080), "format": "SRGGB12"},
            {"size": (2028, 1520), "format": "SRGGB12"},
            {"size": (4056, 3040), "format": "SRGGB12"}]

mode      = 3
noiseMode = 0

# picam2 is the Picamera2 object created above
config = picam2.create_still_configuration()

config['buffer_count'] = 4
config['queue']        = True

config['raw']          = rawModes[mode]
config['main']         = {"size": rawModes[mode]['size'], 'format': 'RGB888'}

config['transform']    = libcamera.Transform(hflip=False, vflip=False)

config['controls']['NoiseReductionMode']  = noiseMode
config['controls']['FrameDurationLimits'] = (100, 32000000)

picam2.configure(config)

Let’s go through all the lines.

config = picam2.create_still_configuration()

grabs a standard configuration for still image capture from libcamera. This config is tuned for high-quality image capture; we need to change a few things for it to be usable in a film scanner project.

The first setting we are going to change is the buffer_count. The still config comes with only one buffer by default - too few for fast image grabbing. The line

config['buffer_count'] = 4

increases this to four buffers. Each buffer eats away a tremendous amount of CMA memory. On an RP3, you will want to increase this memory chunk by including (or changing) the line

dtoverlay=vc4-kms-v3d,cma-384

in the RP config file /boot/config.txt. For my RP4, I use cma-512 instead - your mileage may vary.

The next line in the above code,

config['queue'] = True

makes sure that the picamera2 lib is working with queues - this way, frames are delivered much faster. This option is actually already set to True by default; this line just makes sure it is.

The next two lines in the above code segment actually select the resolution the camera is working with. Both resolutions, raw and main, are kept to the same size to avoid internal scaling:

config['raw']          = rawModes[mode]
config['main']         = {"size": rawModes[mode]['size'], 'format': 'RGB888'}

Also, the output format is set to RGB888, which makes sure we are not working with alpha planes (a format with an alpha plane seems to be the default at the time of this writing).

Next, a transformation is specified - this is mandatory in a film scanner project, where the actual image might be flipped or mirrored:

config['transform']    = libcamera.Transform(hflip=False, vflip=False)

Finally, some more settings are applied:

config['controls']['NoiseReductionMode']  = noiseMode
config['controls']['FrameDurationLimits'] = (100, 32000000)

The default noise reduction mode in the still config is called HQ, which is a software-based noise reduction algorithm. There is another algorithm called Fast, which is the default for video configs. I am using 0 as the value, which corresponds to no noise reduction at all (mode Off). This is the fastest way to get images out of the camera, and as I am doing a lot of image processing on the scanned images anyway, I prefer to do noise reduction in my own software. If you prefer to have some noise reduction in camera, I suggest trying the Minimal mode, which is purely hardware-based. In the above code this would correspond to setting noiseMode = 4.

The FrameDurationLimits are set in such a way that libcamera is told to “forget about this”. Basically, the FrameDurationLimits constrain the possible exposure time settings. The above line allows a range from 100 µs to 32 seconds - enough for all practical purposes.

Finally, this configuration is piped into the camera by the line

picam2.configure(config)

At this point, the camera should be ready to roll, with automatic exposure and automatic white balancing. I prefer to scan with a fixed exposure, the lowest gain possible and fixed color gains. This can be achieved with the following lines:

picam2.set_controls({'ExposureTime': exposureTime})
picam2.set_controls({'ColourGains': (redGain, blueGain)})
picam2.set_controls({'AnalogueGain': 1.0})
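
If you prefer, the same can be done in a single call, explicitly switching off the automatic algorithms as well (control names as in the picamera2 manual):

picam2.set_controls({'AeEnable'     : False,
                     'AwbEnable'    : False,
                     'ExposureTime' : exposureTime,
                     'AnalogueGain' : 1.0,
                     'ColourGains'  : (redGain, blueGain)})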

The sensors’ tuning files are another interesting topic. I posted some details in this thread; the tuning file described there is now actually part of the standard RP distribution. It is called imx477_scientific.json and can be loaded just like the standard tuning file imx477.json. It already lacks the ALSC section, so no lens shading correction is performed with this tuning file.
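
Loading it works exactly like the tuning file handling shown above:

from picamera2 import Picamera2

# use the scientific tuning file instead of the default imx477.json
tuning = Picamera2.load_tuning_file("imx477_scientific.json")
picam2 = Picamera2(tuning=tuning)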

4 Likes

I was reading through the documentation for picamera2 and saw that they have added an HDR mode for use with the Raspberry Pi 5. Has anyone tried it, and if so, what were the results?

In the documentation it seems like it takes multiple exposures of an image; I wonder if they are doing some kind of Mertens merge. The section can be found under the header “HDR” here:
https://datasheets.raspberrypi.com/camera/raspberry-pi-camera-guide.pdf

1 Like

This documentation is very uninformative. There is no way to figure out what exactly they are doing. It might be something they (the Raspberry Pi Foundation) or the libcamera team came up with.

Yeah, unfortunately not much about what is going on under the hood. Let’s hope that more information gets added in the future. Maybe worth asking in the forums?

Well, sometimes you get interesting information from the Raspberry Pi Camera Forum. Sometimes not so much. Have a try.

I doubt that they are using exposure fusion à la Mertens. As a matter of fact, I was once a strong supporter of the exposure fusion algorithm - mainly because it was able to create good-looking results even for difficult scans. I am now pivoting away from exposure fusion toward using raw files instead, for several reasons. One of them is that I am currently not so sure that color reproduction is really faithful within the exposure fusion context. I am still researching this issue - but the use of Gaussian and Laplacian pyramids to merge the multiple exposures might lead to local color shifts which are challenging to counteract in postproduction. It will take some time until I have a good answer on this issue.

Another point is that the images to be merged via exposure fusion need to be registered very precisely. In my scanner, this is not the case, so I need a computationally expensive image alignment step before the exposure fusion. Using just a single raw capture as the scan result avoids that step.
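
For context, here is a compact sketch of that pipeline with OpenCV - alignment followed by Mertens exposure fusion. The file names are hypothetical, and this is the generic OpenCV implementation, not necessarily what any particular scanner does:

import cv2
import numpy as np

# load a stack of differently exposed captures of the same frame
exposures = [cv2.imread(f"frame_ev{i}.png") for i in range(3)]

# registration step: AlignMTB shifts the images onto a common reference
cv2.createAlignMTB().process(exposures, exposures)

# Mertens exposure fusion; the result is float, roughly in [0, 1]
fused = cv2.createMergeMertens().process(exposures)
fused8 = np.clip(fused * 255.0, 0, 255).astype(np.uint8)
cv2.imwrite("fused.png", fused8)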

A single raw capture will show noticeably more noise in dark image areas than an image produced by exposure fusion. However, any type of noise reduction will take care of this, especially since dark areas in the scan will normally end up as dark image areas in your final result anyway.

In principle, you could capture several raw images and combine them into a new raw with much-reduced noise in the dark areas of the image. I think something like this is happening in the RP5 HDR mode. Once that noise-reduced raw image is available, you can just run the usual libcamera processing to obtain an 8-bit-per-channel png or jpg (which one should not really call an HDR, as it is in fact an LDR - a low dynamic range image). In any case, expect a reduction in capture rate, as the usual sensors like the HQ need to capture the differently exposed images sequentially. Also, this approach will not work well if your image is moving while the different exposures are captured. I do not think that the RP5 does an image alignment step before combining the different exposures.
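
A sketch of that idea - my guess at the principle, not the RP5 implementation - would be a simple average over identically exposed Bayer frames:

import numpy as np

def average_raws(frames):
    """frames: list of 2D uint16 Bayer arrays from identical exposures."""
    stack = np.stack([f.astype(np.float32) for f in frames])
    # shot noise in the dark areas drops roughly with sqrt(len(frames))
    return np.round(stack.mean(axis=0)).astype(np.uint16)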

Speaking of the RP5 - with this machine, the foundation introduced a new way of processing the raw image data into a jpg/png. Judging from the forums (the RP camera forum and the Kinograph forum), there are still some issues to be ironed out. That is why my film scanner is still at the “Buster” software level.

I have a dedicated additional machine (RP4) running the newest software level for testing, but at the moment I will not upgrade my film scanner to any newer software version. It seems that under the hood quite a lot of things have changed, even for the RP3 and RP4, and I will wait until the dust has settled.

Also, for the time being, I will stay away from the RP5 hardware. As I understand it, the internal processing is done totally differently on the RP5 compared to the RP3 or RP4, as a newly developed IC is used in the RP5 for this task. It might be better than the old way, or worse - I have not seen any comparison yet.

3 Likes

@Ljuslykta So, the documentation on the new HDR possibilities of libcamera/Picamera2 on the RPI4 or RPI5 is difficult to understand.
If I understood the two guides correctly, it is not HDR at the sensor level but rather a variant of the AGC algorithm.
First, different modes or “channels” of the AE/AGC algorithm are defined.
Then there are different HDR modes:

  • SingleExposure: accumulation of the same channel? No merge or tonemap?
  • MultiExposure: merge + tonemap of two channels (on the RPI5 only)
  • MultiExposureUnmerged (on the RPI5 or RPI4): returns the frames without merging

But in any case, it does something!

Auto image and HDR SingleExposure image

Auto image and HDR MultiExposure image

So this requires studies, questions and clarifications on the forums

2 Likes

My scans having been completed, I can try the latest new features without consequences, the most sensitive being the introduction of the PI5. And yes, it is considerably faster than its predecessor. I redid my fps benchmark:

                           PI4     PI5
Metadata only              10.8    10.4
Make_array                 4.54    10.00
Jpeg encoder               1.35    6.00
Jpeg encoder, one thread   1.73    9.00
Jpeg encoder, thread pool  4.08    9.80
Dng encoder, thread pool   ??      1.00

Notes:
  • HQ camera, max resolution
  • 2 libcamera buffers
  • OpenCV JPEG encode, quality 95
  • Encode only, without file write or network send

I’m not entirely sure of the values for DNG encoding; it’s 100% Python, so it doesn’t benefit from multithreading.

It appears that the PI5 can easily support any sort of processing at the 10 fps received from the libcamera request loop (there does not seem to be any improvement on this point?).

4 Likes

I found a little more information on this. It seems that in “Single Exposure” mode some sort of temporal denoising algorithm is employed. The exposure for each image is reduced so that the highlights of the source are correctly recorded. Of course, that gives you quite some noise in dark areas. Here the RP5 employs a temporal noise reduction process, averaging over several consecutive frames. Averaging is skipped when a pixel’s value differs too much from the previous frame.
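
A toy illustration of such gated temporal averaging - the RP5’s actual thresholds, blending factors and pipeline position are unpublished, so this only shows the principle:

import numpy as np

def temporal_denoise(average, frame, alpha=0.25, threshold=12):
    """Blend a new frame into a running average, except where pixels moved."""
    diff = np.abs(frame.astype(np.int32) - average.astype(np.int32))
    blended = (1.0 - alpha) * average + alpha * frame
    # pixels that changed too much (motion) bypass the averaging
    return np.where(diff > threshold, frame, blended).astype(frame.dtype)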

According to statements from the RP team, this works well in video mode. You do not get a reduction in frame rate, dark noisy areas have much less noise than in the raw imagery, and because the images are exposed for the highlights, they also do not burn out. You will get some artifacts around fast-moving edges and in highly textured areas.

The resulting temporally denoised image is then tone-mapped (no information on this - maybe histogram-equalized?) to the final output image.

The “Multi-Exposure” mode is more similar to the classical way of acquiring HDR imagery - several exposures of a scene are really taken one after the other and combined into an HDR image (how this is done is currently unknown, but I suspect it’s not as complicated as Mertens). In any case, you will encounter a reduction in frame rate in this mode. The mode is therefore recommended by the RP guys only for still images. Again, some sort of - as yet unpublished - tonemapping is applied to the HDR image to arrive at the normal LDR (8 bits per channel) image.

@dgalland - does the .dng encoding on the RP5 still take about 1 sec/frame? I measure about 980 ms on my RP4 for writing the raw data as a .dng file to an SSD…

1 Like

… continuing my exploration of the RP5.

The RP5 features an upgraded ISP - which in turn leads to a new tuning file format. So there are currently two different directories where the tuning files are stored: /usr/share/libcamera/ipa/rpi/vc4 for the RP1-RP4 models, and /usr/share/libcamera/ipa/rpi/pisp for the new RP5 unit.

At least currently, imx477_scientific.json is missing from the RP5 tuning file directory. The new tuning file format has a lot of new entries - some of them are connected to the new HDR possibilities, others still need to be checked. For example, the temporal denoise seems to be active in standard mode - something I would not want to happen in a film scanner setup.

Furthermore, there are indications that even the raw image you get in the standard configuration is not a true raw image. The RP5 ISP normally works with compressed (packed) raw data. This is described as “visually lossless”. You can switch to uncompressed formats, which are described as “bit-exact”, but that is not the default. I asked on the picamera2 discussion page for clarification, but I bet that the raw the RP5 spits out in its standard configuration is not really the raw sensor image.

Anyway. I manually edited something similar to the scientific tuning file to work with the RP5 ISP. The results I got are interesting. Here’s a capture with the HQ camera and the standard RP5 tuning file:

The air blower on the right of the image is not purple in reality. Nor does the DaVinci Speed Editor in the background have a purple touch - it’s basically a greyish plastic unit.

Here, for comparison, is the same capture with my quick manual derivative of the scientific tuning file:

The air blower’s color is quite similar to what my eyes see. A slight yellow-greenish cast can be noticed, but the colors are certainly less off than in the standard configuration.

Both captures were done with automatic exposure and white balancing. Illumination was from a desktop lamp only. Granted, this is not a really well-designed experiment, but I would have expected a better performance with the imx477 standard tuning file on the RP5.

3 Likes

Some further tests/information:

– the above images were taken under the illumination of a desktop lamp. This lamp is actually a fluorescent lamp trying to mimic a normal “warm” light bulb - so, quite a challenging illumination for testing. Below are further results using normal daylight illumination.

Doing the same comparison under daylight illumination gives the following results:

  • Standard imx477.json tuning file, RP5-version

  • Quick-hack imx477_scientific.json tuning file, RP5-version

With the imx477.json tuning file, the estimated color temperature was 4636 K; with the imx477_scientific.json tuning file, it turned out a little more blueish, namely 5061 K.

An Opple lightmeter visible in the images gave me a color temperature of 7544 K.

– it seems that with the RP5, new compressed raw formats were introduced. There are hints that these new raw formats do not carry the full raw sensor information through the image processing pipeline. Specifically, in the picamera2 manual the compressed formats are labeled as “visually lossless” and the old uncompressed formats as “bit-exact”. If I read that correctly, the new compressed formats are not bit-exact.

Even worse: under the hood, if the RP5 is working with compressed formats (which I think is the default), when saving a .dng file the compressed data is first decompressed and then saved as .dng. In other words, your .dng does not contain the real raw sensor values, but something which is probably more coarsely quantized (that would be a simple way to “compress” the data). I think one can still get the real raw sensor values by explicitly forcing the RP5 ISP to work with an uncompressed format. Otherwise, with the same setup, you will get non-identical results between an RP4 and an RP5.
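
One way to force this would be to request an explicitly unpacked raw format in the configuration, using the same format strings as in the configuration code earlier in this thread (check picam2.sensor_modes for what your setup actually offers):

from picamera2 import Picamera2

picam2 = Picamera2()
# request the unpacked 12-bit raw format instead of a packed/compressed one
config = picam2.create_still_configuration(
    raw={"size": (4056, 3040), "format": "SRGGB12"})
picam2.configure(config)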

Just for fun, because there was a color card in the shot, I ran both of those through my favorite plugin for color correction. Both got pretty close to one another:

And here are the (48-way) .cube file LUTs that it was able to generate to convert from that particular setup to the results shown here: LUTs.zip (1.5 MB)

Just like your previous findings, the “standard” imx477.json file is quite far off in the blues and purples. Your imx477_scientific.json is very close and it looks like the only thing the color corrector plugin really did was nudge the white balance a little.

1 Like

As they should, given that you used the color chart in the image for calibration. :slightly_smiling_face:

The greens in the “standard” corrected version are a little too bright - those of the “scientific” corrected version come closer to the appearance of the real green things.

Interesting that your plugin was able to compensate for the bad gamma curve of the standard tuning file - which was deliberately chosen by the RP guys to get highly saturated colors.

Yes, I checked - we get 0.9 s per frame for the DNG conversion (maximum resolution, buffer only, without write).
It’s very slow because it is 100% Python, including that PiSP decompression.

Did you see
https://forums.raspberrypi.com/viewtopic.php?t=358223