A few months ago the Raspberry Pi Foundation introduced a newer, faster machine into their lineup: the Raspberry Pi 5. It features a new chip developed by the RP guys themselves, and with the RP5 came some changes which affect the way image captures are done.
The new IC designed by the RP guys handles at least the CSI-receiving part and some initial computations on the raw data. Presumably, the data is then transferred to the GPU, which does the rest of the processing, as “instructed” by the libcamera framework. Details are hard to come by, as proper documentation is virtually non-existent. From my own tests, there is not much difference noticeable between results obtained with, say, a RP4 vs. the RP5. However, note the details discussed below.
To set the stage for the following remarks, note that any image processing pipeline consists of several processing stages. The sequence of steps does not vary much between cameras: a high-end DSLR does basically the same thing as your mobile phone or your RP5 with a HQ sensor attached.
The very first processing unit is a receiving stage which gets the raw data directly from the sensor. In fact, if you are shooting in raw, the data from this stage is saved directly in some kind of raw format; within our context that is usually a .dng-file. If you feed this data directly into, say, DaVinci Resolve, that’s all you need to be concerned with.
However, these raw images look awful when viewed directly. They are in a way equivalent to the old analog negative format and need to be “developed”. In the RP context, this is done by a piece of software called libcamera. libcamera debayers the raw image, applies white balancing (variable gains to the red and blue channels), does some noise reduction, estimates the color temperature of the scene, and applies a sensor-specific color transformation based on that estimated color temperature as well as a gamma curve to the original raw data. Once all that processing is done, the image actually looks reasonably close to what your eyes saw when taking the photo. This output of the libcamera processing pipeline is the image you get when you capture in .jpg- or .png-format.
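To make the order of these operations a bit more concrete, here is a heavily simplified, purely illustrative sketch of such a “development” chain in Python/numpy. It works on an already-demosaiced image, and the gains, the matrix and the gamma value are made-up placeholder numbers, not values from any real tuning file:

```python
# Illustrative only: a drastically simplified development chain, roughly
# mirroring the steps described above. The real libcamera/ISP pipeline does
# much more (lens shading, defective pixel correction, proper denoising, ...).
import numpy as np

def toy_isp(rgb, r_gain=2.0, b_gain=1.6, gamma=2.2,
            black_level=256, white_level=4095):
    """rgb: array of shape (H, W, 3) with demosaiced 12-bit sensor values."""
    # normalise the raw values to 0..1
    x = (rgb.astype(np.float32) - black_level) / (white_level - black_level)
    x = np.clip(x, 0.0, 1.0)
    # white balance: variable gains on the red and blue channels
    x[..., 0] *= r_gain
    x[..., 2] *= b_gain
    # sensor-specific colour matrix (placeholder numbers!)
    ccm = np.array([[ 1.6, -0.4, -0.2],
                    [-0.3,  1.5, -0.2],
                    [-0.1, -0.6,  1.7]])
    x = np.clip(x @ ccm.T, 0.0, 1.0)
    # gamma curve for display, scaled to 8 bit
    return (x ** (1.0 / gamma) * 255).astype(np.uint8)
```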
To handle all the processing described above (and I left out a few processing steps), a so-called tuning file is available for each supported sensor. It contains all the sensor-specific processing parameters, including the data to steer the various automatic algorithms which are implemented within libcamera.
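As a side note, picamera2 lets you load a specific tuning file explicitly instead of relying on whatever is picked by default. A minimal sketch, assuming the stock HQ sensor tuning file name:

```python
from picamera2 import Picamera2

# load_tuning_file() searches the standard tuning directories listed a bit
# further down, so the same name resolves to the vc4 or pisp variant
# depending on which Pi you are running on.
tuning = Picamera2.load_tuning_file("imx477.json")
picam2 = Picamera2(tuning=tuning)
```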
There are quite a lot of automatic algorithms available in the libcamera approach. While some of them, like exposure and white balance, are exposed to the user, others are somewhat hidden from view.
The standard tuning files for the HQ sensor shipped for the RP5 use a new format, compared to RP4 and below. This is necessary as they contain information for new features only available on the RP5, like HDR modes.
The tuning files are stored in two different places with the latest RP OS release:
- Tuning files for the RP5 are stored at
/usr/share/libcamera/ipa/rpi/pisp
- Tuning files for the RP4 and below are stored at
/usr/share/libcamera/ipa/rpi/vc4
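If you want to check what exactly ships on your system, a few lines of Python are enough to compare the two directories (this just lists file names, nothing more):

```python
from pathlib import Path

# the two tuning file locations mentioned above
pisp = {p.name for p in Path("/usr/share/libcamera/ipa/rpi/pisp").glob("*.json")}
vc4 = {p.name for p in Path("/usr/share/libcamera/ipa/rpi/vc4").glob("*.json")}
print("tuning files only available for RP4 and below:", sorted(vc4 - pisp))
```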
If you look into these two directories, you will notice that the directory for RP4 and below contains more files than the directory for the RP5. Most notably, the imx477_scientific.json tuning file is missing in the RP5 directory. This tuning file was created by me in an effort to correct a few awful choices in the standard tuning file, which runs under the name imx477.json. I have no information on whether the RP people will migrate imx477_scientific.json to the RP5, but I will release a RP5 version once I have parsed and understood all the new parameters available in the new format.
The standard tuning file imx477.json is subject to occasional changes. So if you do a software update on your RP, chances are that you will get different output today compared to your scan last week. Be careful when updating your RP’s OS - you might be in for a surprise, at least if you are using the libcamera-processed output of your scanning app (.jpg or .png).
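One way to guard against such surprises is to keep your own copy of the tuning file next to your scanning software and load it explicitly. A sketch, with a made-up directory name:

```python
from picamera2 import Picamera2

# "/home/pi/scanner" is just a placeholder for wherever you keep your copy;
# put the imx477.json you are happy with there and it will not change
# behind your back on the next OS update.
tuning = Picamera2.load_tuning_file("imx477.json", dir="/home/pi/scanner")
picam2 = Picamera2(tuning=tuning)
```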
But at least when you are working with raw images (.dng) on your RP5, you are fine? No, sadly enough, not quite.
The reason is that, with their new chip, the RP guys introduced a new “raw” format which they call “compressed” and label as “visually lossless”. It was introduced to lower memory throughput, in principle enabling higher framerates. The only “documentation” currently available is uncommented C++/Python source code - I am not going to reverse-engineer that bunch of code to find out what they are doing here. It is probably a kind of coarse log-transform, which reduces the quantisation levels of the original camera signal in a way which is indeed “visually” hard to spot. But as soon as you do some extensive image grading, I bet the coarser quantisation will show up.
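Just to illustrate what coarser quantisation means in numbers - and this is explicitly not the RP’s actual compression scheme, just a toy square-root companding of 12-bit data to 8 bit and back:

```python
import numpy as np

raw = np.arange(4096)                                  # all 12-bit code values
compressed = np.round(np.sqrt(raw / 4095.0) * 255)     # toy compand to 8 bit
restored = np.round((compressed / 255.0) ** 2 * 4095)  # expand back to 12 bit
# at most 256 distinct levels survive the round trip, instead of 4096
print("distinct levels after round trip:", len(np.unique(restored)))
```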
Now, while in principle the new “compressed” format is a good idea, the way the RP guys are using it is interesting. If you just use your old software approach (either rpicam-apps or picamera2), the RP5 will work with the new compressed formats!
Ok, you might say - I do not really care which raw format the RP5 works in, as I am not using the libcamera image output but work directly with the .dng-files. Those contain the original sensor data, right? Well, not really. In fact, the RP5 works internally in the “visually lossless” raw format, and when saving your .dng-file it simply decompresses that data again.
So: without any additional measures, with the RP5 you do not get the raw sensor data from the HQ sensor in the .dng-file, but something with potentially worse performance - it’s the “compressed” data the RP5 is working with in the libcamera pipeline.
This has a funny consequence. As Dominique Galland (@dgalland) discovered, just preparing the .dng-data (without writing it to storage) takes the RP5 about 1 sec. That is actually the same time the RP4 needs to prepare and write the .dng-file to disk. And in fact, I measured around 980 msec for this task on the RP5, and about the same on the RP4.
What happens here is the following. If you do nothing, the RP5 will work with the new “compressed” format. This format is nothing any raw converter (or DaVinci Resolve, for that matter) can read. So before writing the raw data to the .dng, that data is decompressed and then written to disk. In the picamera2 context, this decompression is actually done by pure Python code - and it takes about 1 sec on the RP5. The RP4 is slower hardware than the RP5, but because it does not need to “decompress” the raw data, it ends up just as fast as the RP5 when writing a .dng-file.
There is a way to speed things up and improve the raw quality on the RP5. You need to force the RP5 to work with uncompressed raw. This can be done by explicitly requesting such a format. Here’s the appropriate code line to do so:
capture_config = picam2.create_still_configuration(raw={'format':'SRGGB12','size':(4056, 3040)})
If the uncompressed raw format is selected on an RP5, you end up with about 160 msec for a .dng-save to an SSD drive, compared to about 980 msec when working with the “compressed” format. While 980 msec is too slow for my film scanner, the 160 msec when using the normal raw format is fine.
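For reference, here is how that configuration line might fit into a complete, minimal capture script (sensor size as for the HQ/IMX477 sensor; the file name and the timing code are mine, not from any official example):

```python
import time
from picamera2 import Picamera2

picam2 = Picamera2()
# request uncompressed 12-bit raw explicitly, so the RP5 does not fall back
# to its "compressed" format
capture_config = picam2.create_still_configuration(
    raw={'format': 'SRGGB12', 'size': (4056, 3040)})
picam2.configure(capture_config)
picam2.start()
time.sleep(1)                        # give the auto algorithms a moment to settle

request = picam2.capture_request()   # grab one frame
t0 = time.monotonic()
request.save_dng("frame.dng")        # write the raw data as a .dng-file
print(f".dng save took {time.monotonic() - t0:.3f} s")
request.release()
```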
As Dominique discovered, all libcamera operations take longer with the normal raw format than with the compressed format. Since the libcamera output is currently only 8 bit anyway, one is probably fine using the compressed format if .jpg or .png images are used for scanning. I will continue working with .dng-files, selecting the normal raw format as described above.