Alpha-Release of new picamera-lib: "picamera2" ... - and beyond

@cpixip thank you for sharing your findings. These are incredibly helpful.

May I confirm if I understand the implications correctly?
If the matrix encoded in the raw file is an estimate derived from the AWB, it will change as the image changes?
The correct approach would then be to use a replacement matrix based on a D65 illuminant, which I believe is what you have done.

I was considering using raw files, which is what I have done when using the DSLR.
Again, thank you for your research in this area, and your postings. It is a great contribution to everyone (or at least me) climbing the HQ learning curve and getting up to speed on the particulars of the libraries.

Well, at least to my knowledge, these things are not documented very well. Here’s the story as far as I understand it currently (summer 2022):

  • Libcamera uses a tuning file to convert the raw camera image into the image a user gets, usually a .jpg file.
  • In this tuning file, there is a list of “compromise color matrices” (ccms), each with a corresponding color temperature (ct) - see the sketch after this list for how these entries look in the tuning file for the IMX477.

  • Directly from the tuning guide: “During operation, the algorithm will estimate a CCM - interpolating as necessary - using the current estimated colour temperature as reported by the AWB algorithm.”

  • The AWB algorithm is described in the tuning guide as well. Basically, AWB looks at the current image, comes up with an estimate of the color temperature, and this estimate is used in both the computation of the ccm as well as in the automatic lens shading correction.
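A quick way to look at these ccm entries is via picamera2’s tuning-file helpers. This is just a sketch - the values will differ with the tuning file version, and I am assuming the section is named “rpi.ccm”, as it is in the tuning files I have looked at:

from picamera2 import Picamera2

# load the default tuning file shipped for the HQ camera
tuning = Picamera2.load_tuning_file("imx477.json")

# the colour matrix list lives (as far as I know) in the "rpi.ccm" section
ccm_section = Picamera2.find_tuning_algo(tuning, "rpi.ccm")

# print the colour temperature and the 3x3 matrix (9 values) of each entry
for entry in ccm_section["ccms"]:
    print(entry["ct"], entry["ccm"])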

Some further (unsorted) notes here, as they might be important for our use cases:

  • if you set any red or blue gain, the AWB algorithm is switched off. In this case the color temperature reported down the processing pipeline will be constant; I think it is then set to 4500 K.

  • the AWB does have its problems with light sources based on narrow-band LEDs. If one turns off the default algorithm by setting "bayes": 0 in the tuning file, the AWB might come up with an estimate even in this case (it won’t be of any real value - I will elaborate on this in another thread).

  • color is also affected by the automatic lens shading algorithm (ALSC). The default tuning file for the IMX477 sensor features data from an unknown lens for this.

  • the software suite from the Raspberry Pi Foundation is moving toward making it possible to directly capture .dng-files. At this moment (summer 2022) the implementation does not seem optimal. There are DCP profiles available, calculated by Jack Hogan, at the bottom of this page. I have not checked these profiles yet, as I do not own any software to use them.

Here’s an attempt at summarizing what I have found out so far: at this point in time, I would not trust the approach libcamera is taking at delivering jpgs or .dng-files. The color science is basically undocumented and, for my taste, patterned too closely on what current mobile phone cameras are doing. That is: improving a cheap sensor and/or cheap lens by added computational processing. In the case of the IMX477, it seems that this is not necessary, as the colors are quite good with even the simplest (linear) color science. Furthermore, the standard tuning file shipped with the Raspberry Pi software is flawed in several places. Basically, it tends to deliver over-saturated colors - which is fine for the average user, but probably not the goal in film scanning applications.

I am currently looking into this for exactly this purpose: to get rid of the whole libcamera/tuning file stuff and instead come up with a processing pipeline based directly on the raw image data the IMX477 delivers. It remains to be seen whether this will work in terms of scan time, storage space and image transfer times and ultimately image quality.

Thank you.

I’m on the same page.

Looking beyond the sensor, my thinking is to go with a near-fixed camera setting, and to provide light control good enough for exposure fusion (Mertens).
Presently working on building the transport. The prototype is controlling the transport with a Pico. Just ordered a DAC to make a new light driver, also controlled by the same Pico. The transport is far from reliable, but prototype results are promising.

Here’s an update. I again took an image of the X-rite color checker, this time with the current beta release of picamera2 (v0.3.1) and the included app_full.py script. Full resolution capture, albeit under less-than-perfect illumination conditions.

I captured the following jpg image directly out of the app in auto-mode:

If you look closely: the color patches in the lower half of the color checker show some inset patches - these are actually the colors the patches should have. Clearly, there are some deviations visible. In general, the colors are too saturated and there is a slight shift towards blue in the whole image.

Well, this is the image libcamera/picamera2 is producing with default settings, based on the current default tuning file.

I captured a corresponding raw .dng-file of that setup as well. However, I did not use the color matrix 1 embedded in the .dng-file, as this matrix would just result in an image similar to the jpg displayed above. Instead, I calculated a least-squares optimized matrix, and here’s the result:

Actually, I am quite satisfied with this. This image is very close to how the scene actually looked during capture.

Here’s an enlargement of the color patches, again with insets displaying the actual color reference the patches should have:

Not too bad for the moment. There are some deviations, for example in the brightest and darkest neutral patches. They might stem from the ad hoc setup - I need to improve the illumination for this kind of capture. The cut-out also reveals that the captured image is quite noisy - again due to the non-optimal illumination setup.
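For reference, the least-squares matrix mentioned above can be computed along these lines. This is only a sketch with hypothetical variable names: measured holds the linear RGB values of the 24 patches sampled from the raw capture, reference the corresponding linear RGB values from the color checker reference data.

import numpy as np

def fit_color_matrix(measured, reference):
    # measured, reference: arrays of shape (24, 3) with linear RGB values
    # solve measured @ M.T ~= reference for the 3x3 matrix M
    # in a least-squares sense
    M_transposed, *_ = np.linalg.lstsq(measured, reference, rcond=None)
    return M_transposed.T

# applying the matrix to an image of shape (height, width, 3):
# corrected = image @ fit_color_matrix(measured, reference).T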

Two remarks: in case you are using DaVinci Resolve, you can do a similar color checker calibration for a sequence of .dng-files in the Color tab of the program. Here’s an example of how that tab looks when such a calibration has been performed (this is not using the same color checker image as above):

Second remark: there exist DCP profiles for the HQ sensor in this repository, created by Jack Hogan (I can really recommend his website “Strolls with my dog” for a lot of information about cameras).

Using the extended “Neutral Look” DCP as input profile in RawTherapee (plus white-balancing on the third neutral patch from the left) one gets this result (developed in RawTherapee):

1 Like

@cpixip Much to process here. If using the separate DCP input while capturing jpg/dng, we can obtain better colour accuracy. Is it still necessary then to try to bypass the image pipeline if the result is corrected?

Second question. The colour temperature estimation algo seems a bit flaky, and falls back to a fixed 4500 K when messing with the colour gain controls. Would you get better image results if the reported temperature were fixed at 4500 K and you used a light source with the same temperature?

With all the research you’re doing in the matter, I feel that we are inching closer to a “recommended software setup” for the HQ camera, for which I am very grateful.

Well, I used the DCP as input profile for RawTherapee. There is no connection between this input profile and the actual capture process.

In a sense, the DCP profile is just a recipe for how to interpret the data contained in the raw file. As far as I understand the whole process so far, in the simplest case a raw converter takes the color matrices contained in the DCP profile and calculates a new color matrix out of these, depending on an estimate of the color temperature of the scene. This estimate of the color temperature can come from the camera (via the information stored as Exif data in the DNG-file), or from you, simply by picking the white balance of a neutral patch in the image.

This is what I did in the above example: I used Jack Hogan’s DCP input profile “Neutral Look” (instead of the “Camera standard” - these settings can be found in the “Color” tab of RawTherapee → “Color Management”). I then switched to the “White Balance” tab and used the color picker on the third neutral patch from the right.

Reading a raw like this basically bypasses most of what libcamera is doing. For the image above, I actually used - out of laziness - the autoexposure of libcamera. Of course, libcamera still came up with a color matrix for the scene (which is embedded in the DNG-file and is the one used when you select “Camera standard” in RawTherapee), as well as with an estimate of the color temperature of the scene (which is again used in RawTherapee to get the white balance correct). I killed the embedded color matrix by turning on the DCP input profile, and I dropped libcamera’s white point estimation by sampling a neutral patch.

The whole processing pipeline of libcamera is still working and is actually used to create the jpg-preview image embedded in the raw data, but since you are reading the raw sensor data directly, this is kind of irrelevant.

Not really. First, as explained above, you can bypass basically all of libcamera’s twists by working with the raw data contained in the DNG. And in our use case “film scanner” it is easy to get/set the white balance for the camera correctly: simply expose the empty film gate 2 or 3 stops darker than usual, and you get a full-sized neutral patch for measurements.
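A rough sketch of how such a neutral-frame capture could be turned into colour gains. Note the assumptions: the raw stream is configured with an unpacked SRGGB format (so the Bayer channels can be sliced out directly after viewing the buffer as 16-bit values), and the black level is around 256 for 12-bit HQ data - verify both for your setup.

import numpy as np

def gains_from_neutral_frame(raw, black_level=256):
    # raw: 2D Bayer array in SRGGB order, e.g. obtained via
    #      picam2.capture_array("raw").view(np.uint16)
    data = raw.astype(np.float32) - black_level
    r = data[0::2, 0::2].mean()
    g = (data[0::2, 1::2].mean() + data[1::2, 0::2].mean()) / 2.0
    b = data[1::2, 1::2].mean()
    return (g / r, g / b)    # (redGain, blueGain), usable as 'ColourGains'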

There’s another catch if you are working with the DCP input profiles. In those DCPs there are two color matrices embedded which have been optimized for two corresponding illuminants; usually, these illuminants correspond to StdA (a warm Tungsten lamp) and D65 (blue cloudy skies around noon).

The actual matrix used to read your raw data will be calculated out of these two matrices by interpolation. But if your working color temperature is close to one of the optimized ones, there will clearly be less interpolation. That is: it’s probably better to use D65 as illumination source - which is what a lot of white-light LEDs are striving to approximate anyway.
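As far as I understand the DNG specification, the interpolation between the two base matrices is done linearly in inverse colour temperature (mired). A sketch of this - not the exact code of any particular raw converter:

import numpy as np

def interpolate_color_matrix(cm_stda, cm_d65, cct,
                             cct_stda=2856.0, cct_d65=6504.0):
    # cm_stda, cm_d65: the two 3x3 calibration matrices from the DCP/DNG
    # cct: estimated scene colour temperature in Kelvin
    cct = np.clip(cct, cct_stda, cct_d65)
    # the weight varies linearly in 1/T (mired) between the two illuminants
    w = (1.0 / cct - 1.0 / cct_d65) / (1.0 / cct_stda - 1.0 / cct_d65)
    return w * cm_stda + (1.0 - w) * cm_d65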

I think I understand. In the images above you didn’t rely on the AWB for temperature but instead used the external profile and a manually selected neutral point?

So using raw/dng we can use our own pipeline which is tailored for scanning rather than other photography. But further up in the thread you said that:

Has anything changed in the “trust department” now that you have seen the colour accuracy of these post-processing results?

When I’m done with the film transport and motor, I’ll turn my attention to the light. My initial plan is to use an incandescent light bulb around 3000 K to mimic the old projection bulb. I’ll post the results of the images, heat situation and spectrum report in the backlight thread then.

Yes.

Ok, I was a little bit imprecise here. Libcamera injects two pieces into the DNG-file I would not trust: the ‘color matrix 1’ and the scene’s estimated color temperature. These things are calculated on the basis of the existing tuning file and are generally not so good. That’s why I described above how not to use them. First, by relying on the DCP input profile of Jack Hogan, the color matrix libcamera embedded in the DNG-file is simply dropped. Second, by setting the white balance manually, one gets rid of libcamera’s estimate of the scene’s color temperature. If you handle your raw images in such a way, no trace of libcamera should be left.

Under DCP input profile processing, that would in effect result in the use of the “color matrix 1” found in the DCP profiles. I am not sure about the quality of this one. Results for the color matrix 2, which corresponds to D65 illumination, are shown above and seem to be acceptable (even though Jack Hogan did not use D65 illumination to create this matrix).

Interesting. So the custom DCP profiles were just done for D65 and not StdA? Couldn’t custom profiles be made with that light in mind as well?
But that might also be a good way to go, to have a D65 light source. From my reading it outputs a pretty nice spectral response. Does it matter spectrum-wise what type of light a D65 light source is, whether it’s fluorescent or LED?

No, actually, in general there are always two pairs of matrices in a DCP/DNG. One is for StdA, one for D65. Using the estimated color temperature, the actual matrix used in reading the file is interpolated from these two base matrices. However, if the color temperature is close to StdA (which it would be if you are using a tungsten lamp), the matrix used will just be the base matrix for StdA. Usually, “ColorMatrix1” corresponds to StdA, and “ColorMatrix2” to D65. To find out whether this is indeed the case, one has to check that the tag “CalibrationIlluminant1” is indeed StdA. In Jack Hogan’s DCPs that is the case. There are two datasets, one for StdA, the other for D65. I used D65, as my test images were taken under that condition. But I have not taken images with StdA illumination, so I do not know how good the corresponding matrix is.

Yes, of course.

Definitely. Using fluorescent lights is a bad idea. Also, not all white-light LEDs are created equal. D65 is a very specific spectrum. Look it up on Wikipedia, for example. Fluorescent lamps usually feature prominent spikes in their spectra - you do not want that. White-light LEDs also use fluorescent material, driven by a narrow-band blue LED. So the cheap ones are rather spiky as well. The better ones come closer to D65. You can find some discussions and hints here on the forum. Here’s an example of different spectra: our D65, the spectrum of a projector (Kinoton 75P) and the (bad) spectrum of my 3-LED setup.


I see. I thought that you couldn’t comment on the accuracy of the StdA matrix because it defaulted to the one provided by picamera, but now I see it is just because you haven’t tested it.

I initially thought that regardless of the lamp, it needed to be close to the D65 spectrum to be called that. But yes, the difficulty of locating a cheap LED without the spikes is what made me consider an incandescent light. But I’ll try to keep that conversation to that specific thread and focus on picamera here.

I think I am finishing my work on libcamera/picamera2 for the moment. As we have learned, the current (Sept 2022) tuning file shipped for the IMX477 sensor has some irritating data entries, and the results one can obtain (as JPG output, for example) by using libcamera/picamera2 are not as good as they could be.

Note that even libcamera/picamera2’s raw output shows deficiencies when one bases the raw processing on the information libcamera writes into the DNG-file. Specifically, do not use the “camera matrix” embedded in the DNG-file, or the “AsShotNeutral” tag for white-balancing.

Here’s a short summary of my findings:

  • The libcamera approach uses the ALSC module (Automatic Lens Shading Correction) by default in its processing pipeline. There are ALSC-tables in the tuning file for the IMX477, even though the unit is sold without a lens! I was not able to obtain information on what lens was used during calibration, but it was most certainly not the lens you are using in your setup.
  • Central to the whole libcamera processing pipeline is the estimation of the correlated color temperature by the AWB module (automated white balance). This module does not perform reliably - manually setting the red and blue gains is advised. It seems that in this case the cct communicated down the pipeline is fixed to a value of 4500 K. All other modules will operate at that cct - regardless of whether this is correct or not.
  • The compromise color matrices (ccm) embedded in the tuning file show funny variations in their coefficients with respect to cct. Most probably, spectrally quite different light sources were used when taking the calibration images for the tuning file. This leads to non-optimal behavior in terms of color error. Some more details about this can be found here.
  • The gamma-curve applied to JPG-output data features too strong a contrast. While this helps in hiding the color issues, overall it will result in fidelity loss.

I think at this point in time, the best option within the libcamera/picamera2 context is to use raw DNG-files and base the processing on these, together with the DCP input profiles created by Jack Hogan, combined with manual white-balancing.
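In case you want to do that processing in Python rather than in a GUI raw converter, one possible entry point is the rawpy library. This is only a sketch, and the file name is a placeholder:

import rawpy
import numpy as np

# load a DNG and access the Bayer data directly, ignoring the colour
# matrix and white balance libcamera embedded in the file
with rawpy.imread("scan_000001.dng") as raw:
    bayer = raw.raw_image.astype(np.float32)
    # ... from here: subtract the black level, demosaic, white-balance on
    # a neutral patch and apply your own colour matrix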

1 Like

Short update: picamera2 is now in beta state, some things including syntax have changed from the alpha-release discussed in this thread. Here are some comments/updates:

  • Since libcamera uses the CPU for a lot of tasks instead of the GPU (as the old approach did), it is advisable to use nothing less than an RP4. You can get picamera2 functional on lesser hardware, but it’s no fun.
  • Both picamera2 and libcamera (which is actually doing the work) are still in a state of flux. So be prepared that things break. Be careful with updates, as sometimes things between the different components of the whole processing pipeline do not match. For example, with a new 32-bit OS install, I needed to do a rpi-update in order to get an HQ camera connected to an RP3 recognized by the system.
  • The manual of picamera2 has noticeably improved, as has the example software available in the repository. Have a look.
  • The tuning files for the camera sensors (an essential part of libcamera/picamera2) are now labeled as “version 2”. This version is incompatible with older versions of libcamera (and vice versa). The data in the version 2 files seems to be identical to the old versions.
  • The number of tuning files has increased, but some of them are funny, to say the least. There is no information available about the origin of the files or how they were created.
  • Specifically, the HQ tuning file delivers substandard images. Issues discovered so far: imprecise color temperature estimation, superfluous lens-shading correction, wrong color matrices which lead to large color fluctuations with small changes in color temperature, and finally a wrong “gamma”-curve.
  • [Addition 16.11.22]: at this point in time, there seems to be a bug in the IMX477 driver when setting exposure values. While this might not be relevant if you perform that setting once at the beginning of a series of captures, it will lead to tiny intensity variations (flicker) if you change exposure times often (needed for example in HDR work). Here’s an example of this. Five different exposures were captured rapidly one after the other; note that the two longest-exposure curves show tiny intensity spikes. They should vary as smoothly as the other curves:
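For anyone who wants to reproduce this kind of measurement, here is a sketch (a hypothetical helper, using picamera2’s request API with a configured and started camera; in practice the exposure control needs a few frames to settle, so check the metadata to see which exposure time was actually applied to each frame):

import numpy as np

def measure_burst(picam2, exposure_time, num_frames=20):
    # capture a burst at a fixed exposure time and record the mean image
    # intensity of every frame - spikes in this curve indicate flicker
    picam2.set_controls({'ExposureTime': exposure_time, 'AnalogueGain': 1.0})
    means = []
    for _ in range(num_frames):
        request = picam2.capture_request()
        frame = request.make_array("main")
        means.append(float(np.mean(frame)))
        request.release()
    return means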

4 Likes

It’s been a while since the last post in this thread and I thought it was time for a further update.

Both libcamera and picamera2 have evolved; most notably, the manual of picamera2 has seen major improvements and I recommend consulting this document. Also, the example section has been greatly expanded - have a look. There are also a bunch of complete applications available - great for getting ideas on how to handle things.

Libcamera, and in turn picamera2, still has some inconsistencies in handling camera parameters with respect to film scanning. Specifically, some camera parameters which influence the behaviour of the camera are actually set in the tuning file and are therefore out of program control as soon as the camera object is created.

The tuning file is a .json-file and can be edited for example with notepad++. Be careful when editing and keep a backup copy of this file, as mistakes might result in the camera not starting at all.

There are currently two cameras available from Raspberry Pi which can be considered suitable for film scanning: the HQ camera and the recently introduced GS (Global Shutter) camera. The standard tuning files for both cameras feature an ALSC (Adaptive Lens Shading Correction) section - for film scanning, you want to get rid of this module. There are two reasons for that: the lens shading data is for an unknown lens and will not work with your lens. Furthermore, the ALSC will introduce color shifts and vignetting, varying rather unpredictably with your material - this is probably something you want to avoid as well.

There are two ways to change the behaviour of the ALSC module. The first, rather bold one, is simply to delete the whole section in the tuning file. Of course, in this case you lose the ability to do a lens shading correction. In my case, using a Schneider Componon-S 50 mm (calculated for the 35 mm format) with the HQ camera, it is utter nonsense to even consider a lens shading compensation, so that’s what I am doing.

In case you are using a less capable lens, you might want to keep the lens shading algorithm running. But you certainly want to get rid of the adaptive part. This can be achieved by the following code:

from picamera2 import Picamera2

# load the default tuning file
tuning = Picamera2.load_tuning_file("imx477.json")

# find the ALSC section
alsc = Picamera2.find_tuning_algo(tuning, "rpi.alsc")

# switch off adaptive behaviour by setting n_iter to zero
alsc['n_iter'] = 0

# load the modified tuning file into the sensor
picam2 = Picamera2(tuning=tuning)

Note again that this procedure does not switch off the ALSC - only the adaptive behaviour is switched off. So you want to make sure that the lens shading data in the tuning file corresponds to your lens/optical setup.

The above approach is actually a generic way to modify any entry in the tuning file before creating the Picamera2 object.

We will use the same approach to disable a second adaptive algorithm hidden in libcamera’s contrast module. This second adaptive algorithm could be the reason for the “spiky” intensity curves in my post above. It causes noticeable flickering in scanned frames. Anyway, here’s how to switch this algorithm off with picamera2 utilities:

from picamera2 import Picamera2

# load the default tuning file
tuning = Picamera2.load_tuning_file("imx477.json")

# find the contrast section
contrast = Picamera2.find_tuning_algo(tuning, "rpi.contrast")

# switch off adaptive behaviour by setting ce_enable to zero
contrast['ce_enable'] = 0

# load the modified tuning file into the sensor
picam2 = Picamera2(tuning=tuning)
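As an aside: the contrasty “gamma” curve mentioned earlier lives, as far as I know, in the same rpi.contrast section under a key called gamma_curve - a flat list of alternating input/output points on a 16-bit scale. If you prefer a flatter rendition, it can be overwritten in the same manner. Here is a sketch using a plain power-law curve (check the key name and the value range against your own tuning file before using this):

import numpy as np

# replace the default contrast/gamma curve with a plain 1/2.2 power law
# (assumes gamma_curve is a flat [in0, out0, in1, out1, ...] list with
#  16-bit input and output values - verify against your tuning file)
x = np.linspace(0, 65535, 33)
y = 65535.0 * (x / 65535.0) ** (1.0 / 2.2)
contrast['gamma_curve'] = [int(round(v)) for point in zip(x, y) for v in point]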

Of course, both settings should be handled in a combined fashion, like so:

from picamera2 import Picamera2

# load the default tuning file
tuning = Picamera2.load_tuning_file("imx477.json")

# find the ALSC section
alsc = Picamera2.find_tuning_algo(tuning, "rpi.alsc")

# switch off adaptive behaviour by setting n_iter to zero
alsc['n_iter'] = 0

# find the contrast section
contrast = Picamera2.find_tuning_algo(tuning, "rpi.contrast")

# switch off adaptive behaviour by setting ce_enable to zero
contrast['ce_enable'] = 0

# load the modified tuning file into the sensor
picam2 = Picamera2(tuning=tuning)

Once the Picamera2 object is created, some other work has to be done before the camera can finally be started. Here’s my current code:

from picamera2 import Picamera2
import libcamera

# modes for HQ camera
rawModes  = [{"size": (1332,  990), "format": "SRGGB10"},
             {"size": (2028, 1080), "format": "SRGGB12"},
             {"size": (2028, 1520), "format": "SRGGB12"},
             {"size": (4056, 3040), "format": "SRGGB12"}]

mode      = 3
noiseMode = 0

config = picam2.create_still_configuration()

config['buffer_count'] = 4
config['queue']        = True

config['raw']          = rawModes[mode]
config['main']         = {"size": rawModes[mode]['size'], 'format': 'RGB888'}

config['transform']    = libcamera.Transform(hflip=False, vflip=False)

config['controls']['NoiseReductionMode']  = noiseMode
config['controls']['FrameDurationLimits'] = (100, 32000000)

picam2.configure(config)

Let’s go through all the lines.

            config = picam2.create_still_configuration()

grabs a standard configuration for still image capture from libcamera. This config is tuned for high-quality image capture; we need to change a few things for it to be usable in a film scanner project.

The first setting we are going to change will be the buffer_count. The still config comes with only one buffer by default - this is too low for fast image grabbing. The line

config['buffer_count'] = 4

increases this to 4 buffers. Each buffer eats away a tremendous amount of CMA memory. On an RP3, you will want to increase this memory chunk by including (or changing) the line

dtoverlay=vc4-kms-v3d,cma-384

in the RP config file /boot/config.txt. For my RP4, I use cma-512 instead - your mileage may vary.

The next line in the above code,

config['queue'] = True

makes sure that the picamera2 lib is working with queues - in this way, frames are delivered much faster. This option is actually already set to True; this line is just to make sure it is.

The next two lines in the above code segment actually select the resolution the camera is working with. Both resolutions, raw and main, are kept to the same size to avoid internal scaling:

config['raw']          = rawModes[mode]
config['main']         = {"size": rawModes[mode]['size'], 'format': 'RGB888'}

Also, the output format is set to RGB888, which makes sure we are not working with alpha planes (these seem to be included in the default format at the time of this writing).

Next, a transformation is specified - this is mandatory in a film scanner project, where the actual image might be flipped or mirrored:

            config['transform']    = libcamera.Transform(hflip=False, vflip=False)

Finally, some more settings are applied:

            config['controls']['NoiseReductionMode']  = noiseMode
            config['controls']['FrameDurationLimits'] =  (100, 32000000)

The default noise reduction mode in the still config is called HQ, and this is a software-based noise reduction algorithm. There is another noise reduction algorithm called Fast, which is the default value for video configs. I am using 0 as a value, which corresponds to no noise reduction at all (mode: Off). This is the fastest way to get images out of the camera, and as I am doing a lot of image processing on the scanned images anyway, I prefer to do noise reduction in my own software. If you prefer to have some noise reduction in camera, I suggest trying the Minimal mode, which is purely hardware-based. In the above code, this would correspond to setting noiseMode = 4.

The FrameDurationLimits are set in such a way that libcamera is being told “forget about this”. Basically, the FrameDurationLimits constrain the possible exposure time settings. The above line allows a range from 100 µs to 32 seconds. Enough for all practical purposes.

Finally, this configuration is piped into the camera by the line

picam2.configure(config)

At this point, the camera should be ready to roll, with automatic exposure and automatic white balancing. I prefer to scan with a fixed exposure, the lowest gain possible, and fixed color gains. This can be achieved by the following code lines:

picam2.set_controls({'ExposureTime':exposureTime})
picam2.set_controls({'ColourGains':(redGain,blueGain)})
picam2.set_controls({'AnalogueGain' : 1.0})
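For completeness, these can also be combined into a single call (a sketch, with the same hypothetical variable names as above):

picam2.set_controls({'ExposureTime' : exposureTime,
                     'AnalogueGain' : 1.0,
                     'ColourGains'  : (redGain, blueGain)})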

The sensor’s tuning files are another interesting topic. I posted some details in this thread; the tuning file described there is actually now part of the standard RP distribution. It is called imx477_scientific.json and can be loaded just like the standard tuning file imx477.json. It already lacks the ALSC section, so no lens shading is performed with this tuning file.

4 Likes

I was reading through the documentation for picamera2, and saw that they have added an HDR mode for use with the Raspberry Pi 5. Has anyone tried that, and if so, what were the results?

In the documentation it seems like it takes multiple exposures of an image; I wonder if they are doing some kind of Mertens merge. The section can be found here under the header HDR:
https://datasheets.raspberrypi.com/camera/raspberry-pi-camera-guide.pdf

1 Like

This documentation is very uninformative. There is no way to figure out what exactly they are doing. It might be something they (the Raspberry Pi Foundation) or the libcamera team came up with.

Yeah, unfortunately not much about what is going on under the hood. Let’s hope that more information gets added in the future. Maybe it’s worth asking in the forums?

Well, sometimes you get interesting information from the Raspberry Pi Camera Forum. Sometimes not so much. Have a try.

I doubt that they are using exposure fusion à la Mertens. As a matter of fact, I was once a strong supporter of the exposure fusion algorithm - mainly because it was able to create good-looking results even for difficult scans. I am now pivoting away from exposure fusion toward using raw files instead, for several reasons. One of them is that I am currently not so sure that color reproduction is really faithful within the exposure fusion context. I am still researching this issue - but the use of Gaussian and Laplacian pyramids to merge the multiple exposures might lead to local color shifts which are challenging to counteract in postproduction. It will take some time until I have a good answer on this issue.
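For reference, the exposure fusion discussed here is typically done with OpenCV’s MergeMertens implementation - a minimal sketch, assuming three already registered exposures of the same frame (file names are placeholders):

import cv2
import numpy as np

# exposures: list of three aligned 8-bit BGR images of the same frame
exposures = [cv2.imread(name) for name in ("under.png", "mid.png", "over.png")]

merge = cv2.createMergeMertens()
fused = merge.process(exposures)             # float32 result, roughly 0..1
fused_8bit = np.clip(fused * 255, 0, 255).astype(np.uint8)
cv2.imwrite("fused.png", fused_8bit)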

Another point is that the images to be merged via exposure fusion need to be registered very precisely. In my scanner, this is not the case. So I need a computationally expensive image alignment step before the exposure fusion. Using just a single raw capture as scan result avoids that step.

A single raw capture will show noticeably more noise in dark image areas than an image produced by exposure fusion. However, any type of noise reduction will take care of this, especially since dark areas in the scan will normally end up as dark image areas in your final result anyway.

In principle, you could capture several raw images and combine them into a new raw with much reduced noise in the dark areas of the image. I think that something like this is happening in the RP5-HDR mode. Once that noise-reduced raw image is available, you can just run the usual libcamera stuff to obtain an 8-bit-per-channel png or jpg (which one should not really call an HDR, as it is in fact an LDR (low dynamic range image)). In any case, expect a reduction in capture rate, as the usual sensors like the HQ need to capture the differently exposed images sequentially. Also, this approach will not work well if your image is moving while the different exposures are captured. I do not think that the RP5 will do an image alignment step before combining the different exposures.
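A very rough sketch of that idea - averaging several identical raw exposures to reduce noise - assuming the camera is configured and started with an unpacked raw stream (e.g. “SRGGB12”) so the buffer can be viewed as 16-bit values:

import numpy as np

def capture_averaged_raw(picam2, num_frames=4):
    # capture several raw frames at identical settings and average them;
    # this lowers the noise floor in the dark areas of the scan
    frames = []
    for _ in range(num_frames):
        raw = picam2.capture_array("raw").view(np.uint16)
        frames.append(raw.astype(np.float32))
    return np.mean(frames, axis=0)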

Speaking of the RP5 - with this machine, the foundation introduced a new way of processing the raw image data into a jpg/png. Judging from the forums (the RP camera forum and the Kinograph forum), there are still some issues to be ironed out. That is why my film scanner is still at the “Buster” software level.

I have a dedicated additional machine (RP4) running the newest software level for testing, but at the moment, I will not upgrade my film scanner to any newer software version. It seems that under the hood, quite a lot of things have changed, even for RP3 or RP4 and I will wait until the dust has settled.

Also, for the time being, I will stay away from the RP5 hardware. As I understand it currently, the internal processing is done totally differently on the RP5 compared to the RP3 or RP4 - a newly developed IC is used in the RP5 for this task. It might be better than the old way, or worse. I have not seen any comparison yet.

3 Likes

@Ljuslykta So, the documentation on the new HDR possibilities of libcamera/Picamera2 on the RPI4 or RPI5 is difficult to understand.
If I understood correctly from reading the two guides, it is not HDR at the sensor level but rather a variant of the AGC algorithm.
Firstly, different modes or “Channels” of the AE/AGC algorithm are defined.
Then there would be different HDR modes:

  • SingleExposure: accumulation of the same channel? No merge or tonemap?
  • MultiExposure: merge + tonemap of two channels (on the RPI5 only)
  • On the PI5 or PI4 we would also have MultiExposureUnmerged, which returns the frames without merging

But in any case, it does something!

Auto image and HDR SingleExposure image

Auto image and HDR MultiExposure image

So this requires further study, questions and clarifications on the forums.

2 Likes

My scans having been completed, I can try the latest new features without consequences, the most significant being the introduction of the PI5. And yes, it is considerably faster than its predecessor. I resumed my fps bench:

                              Pi 4     Pi 5
Metadata only                 10.8     10.4
Make_array                     4.54    10.00
Jpeg encoder                   1.35     6.00
Jpeg encoder, one thread       1.73     9.00
Jpeg encoder, thread pool      4.08     9.80
Dng encoder, thread pool        ??      1.00

(values in fps)

Notes:
  • HQ camera at max resolution
  • 2 libcamera buffers
  • OpenCV JPEG encode, quality 95
  • encode only, without file write or network send

I’m not entirely sure of the values for DNG encoding; it’s 100% Python, so it doesn’t benefit from multithreading.

It appears that the PI5 can easily support any sort of processing for the 10 fps received from the libcamera request loop (there does not seem to be any improvement on this point?).

4 Likes