My telecine machine

Ok. That solves this mystery :upside_down_face:

I get about 20 frames per minute in my current capture setup. These are five different exposures per frame, streamed to a WIN10 PC. On the PC, it takes 1.6 seconds to process the five 2028 x 1520 frames into the exposure-fused result - but that is done offline, after the scanning. 20 x 5 frames = 100 frames per minute, which is comparable to your speed. However, the server used is only a Raspberry Pi 3+ with a 32-bit OS.
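For reference, one way to do that offline fusion step is OpenCV's Mertens exposure fusion. The snippet below is just a minimal sketch of that approach (file names and the number of exposures are placeholders), not necessarily identical to my actual processing pipeline:

```python
import cv2

# load the five differently exposed captures of one film frame (placeholder file names)
paths = [f"frame0001_exp{i}.png" for i in range(5)]
images = [cv2.imread(p) for p in paths]

# Mertens exposure fusion needs no exposure-time metadata
merge = cv2.createMergeMertens()
fused = merge.process(images)                    # float32 result, roughly in [0, 1]

# scale back to 8 bit for viewing/storage
fused_8bit = (fused * 255).clip(0, 255).astype("uint8")
cv2.imwrite("frame0001_fused.png", fused_8bit)
```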

Just swapping this with a RP4, I get about 27 frames per minute (again a stack of 5 exposures per frame). That would be equivalent to a total of 135 frames captured and transferred per minute. These timings include a 930 ms delay for the scanner to advance the film after capturing the exposure stack of a frame. I can also confirm that changing the RP4 OS from 32-bit to 64-bit results in a small speedup of 5% or so.

The numbers are obtained in the mode SBGGR12_CSI2P (40 fps camera frame rate), which is equivalent to mode 2 of the old picamera lib. With the camera running at full resolution (the old mode 3), the camera itself does not deliver more than 10 fps, and things would be noticeably slower.

(Note for all my following comments: the picamera2 lib is still in beta, so these things might change.)

Let’s now discuss the sensor modes in a little more detail. Within the picamera2 context, the HQ camera features 4 different basic modes (listed above). If you simply request a certain image size, libcamera will look at these basic modes and select whatever mode it thinks will work for you. It will then capture at that resolution, but deliver the image size you have requested by scaling the raw image appropriately. That might result in either upscaling or (most often) downscaling the raw image data.
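You can inspect which basic modes libcamera knows about for your sensor directly from picamera2 (keep in mind the lib is still in beta, so details may change):

```python
from pprint import pprint
from picamera2 import Picamera2

picam2 = Picamera2()
# list the raw sensor modes (size, bit depth, format, fps, crop limits) the camera offers
pprint(picam2.sensor_modes)
```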

If you do not want that behaviour, you will need to request a certain “raw” capture mode during initialization. That is described in the following discussion page at the picamera2 repository. In addition, the manual of picamera2 is now actually quite usable, so you might want to take a look here.
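As a minimal sketch (resolution and format chosen to match the 2028 x 1520 / SBGGR12_CSI2P mode mentioned above), forcing a specific raw mode looks roughly like this:

```python
from picamera2 import Picamera2

picam2 = Picamera2()
# request the 2028 x 1520 binned mode explicitly, so libcamera does not pick another mode
config = picam2.create_video_configuration(
    main={"size": (2028, 1520)},
    raw={"size": (2028, 1520), "format": "SBGGR12_CSI2P"},
)
picam2.configure(config)
picam2.start()
frame = picam2.capture_array("main")   # processed image at the requested size
```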

Another point I want to mention: you should be able to speed up image capture by using not the still configuration, but the video configuration. In the still configuration, the camera pauses, takes a few frames to figure out exposure and white balance, and only after that has been done is your requested frame delivered (at least this was the case in the alpha version of the lib). In any case, this is different in video mode, where the camera needs to run continuously.

Furthermore, the number of buffers in still configurations is by default lower than in video configurations, so you might run into resource problems when heavily using still configurations. You can, however, increase the number of buffers libcamera uses in the still configurations as well. There are also other differences between still and video configurations, most notably the amount of noise processing; you might be interested to check what parameters are used.
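For instance, the buffer count can be raised when creating a still configuration (the value of 4 below is just an arbitrary example); alternatively, a video configuration already comes with more buffers and no auto-exposure pause per capture:

```python
from picamera2 import Picamera2

picam2 = Picamera2()

# still configuration with more buffers than the default
still_cfg = picam2.create_still_configuration(buffer_count=4)

# or: a video configuration, which is geared towards continuous capture
video_cfg = picam2.create_video_configuration(main={"size": (2028, 1520)})

picam2.configure(video_cfg)
picam2.start()
frame = picam2.capture_array()   # grabs the next frame from the running stream
```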

I am currently investigating the possibilities and issues of the new picamera2 lib (or, more specifically, the libcamera/picamera2 approach), in case you want to look into this further.

The client-server approach that @Manuel_Angel is using is quite common in practice. Streaming 2028 x 1520 px images via LAN from a RP3 to a WIN10 machine at 4 fps results in a transfer rate of about 25 MBit/s - nothing a LAN cannot handle. And that is for live preview; in actual capture mode, the requirements are even lower, as not every captured image is streamed. On the high-end machine, exposure fusion can be done noticeably faster (about 1.6 sec for a stack of 5 exposures of size 2028 x 1520 px).
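The 25 MBit/s figure follows from the JPEG sizes involved. Assuming roughly 0.8 MB per compressed 2028 x 1520 frame (the exact size varies with scene content and quality setting), the back-of-the-envelope calculation is:

```python
fps = 4
jpeg_bytes = 0.8e6                     # assumed size of one compressed frame
mbit_per_s = jpeg_bytes * 8 * fps / 1e6
print(f"{mbit_per_s:.1f} MBit/s")      # ~25.6 MBit/s
```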

I have been proposing exposure fusion for a long time (search the forum here), as it is one of the ways to handle the large dynamic range of color-reversal stock. It worked fine with the old picamera lib based on the Broadcom ISP, but yields less than satisfactory results with the new picamera2, based on the libcamera ISP. The reasons are many and can be attributed to bad implementation choices in libcamera. Also, some of the standard tuning files shipped with the Raspberry Pi OS are less than perfect.

So another approach I am currently investigating is to capture raw data (not jpegs) and create image data from the raw images with my own pipeline (so libcamera is basically out of the equation). The results look promising so far, and better than what libcamera is able to deliver. However, a single raw capture per film frame does not have enough dynamic range. To handle the dark image areas, one has again to capture several images of a single frame with different exposure settings. How to combine these raw images into a single raw image with a bit depth of 16 bit or even float is my current subject of research.
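Just to make the setup concrete: a raw exposure stack can be grabbed with picamera2 roughly as sketched below. The exposure times, the clipping threshold and the naive scale-and-average merge are only placeholders to illustrate the idea, not a solved pipeline:

```python
import numpy as np
from picamera2 import Picamera2

picam2 = Picamera2()
# unpacked 12-bit raw stream, so the buffer can simply be viewed as uint16
cfg = picam2.create_still_configuration(raw={"format": "SBGGR12", "size": (2028, 1520)})
picam2.configure(cfg)
picam2.start()

stack = []
for exp_us in (500, 2000, 8000):                      # placeholder exposure times in µs
    picam2.set_controls({"AeEnable": False, "ExposureTime": exp_us, "AnalogueGain": 1.0})
    # in practice one has to wait a few frames until the new exposure is really active
    raw = picam2.capture_array("raw").view(np.uint16)  # row stride may include padding
    stack.append((raw.astype(np.float32), exp_us))

# naive merge: scale every exposure to the shortest one and average, skipping clipped pixels
ref_exp = stack[0][1]
acc = np.zeros_like(stack[0][0])
weight = np.zeros_like(acc)
for img, exp in stack:
    valid = img < 4000                                # drop nearly clipped 12-bit values
    acc += np.where(valid, img * (ref_exp / exp), 0.0)
    weight += valid
merged = acc / np.maximum(weight, 1)                  # float result, >12 bit usable range
```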

In any case, the sizes of raw images are not manageable when continuously streaming the image data via LAN to a PC. That’s why I am interested in your tests with SD cards and USB sticks. Generally, it is probably better not to use the SD card for a lot of storage operations (google “sd card corruption raspberry pi” for some fun reading).
