Although the picamera2 library is still in alpha development, it seems to be in fairly widespread use.
For this reason I have decided to adapt my DSuper8 capture software to the new library.
During the last few weeks I have carried out quite a few tests and, in general, the behavior of the library is correct. Only on a couple of occasions have I observed an internal function of the library blocking.
Camera management with the picamera2 library is very different from the old picamera library. However, from the user's point of view, the use of the software is very similar to previous versions.
I want to thank @cpixip for his contribution of the imx477_scientific.json file, which, in my opinion, provides very good image quality.
Thanks @Manuel_Angel for all the effort and the great support provided. My Bauer T192 scanner with the new DS8 software runs smoothly, is very stable, and gives great results.
I'm currently testing your software; it looks great so far. Some points:
I added a requirements.txt for easier installation of the dependencies (pip install -r requirements.txt):
# Automatically generated by https://github.com/damnever/pigar.
exif==1.6.0
matplotlib==3.6.3
numpy==1.23.5
opencv-python-headless==4.7.0.72
PyQt6==6.5.1
I wanted to start from the very beginning: calibrate the light (exposure time and white balance) by selecting only the empty film gate at a higher zoom so no black border is visible; then I added two indicators to the histogram:
# Add a vertical dotted line at x=128 (mid-grey target)
plt.axvline(x=128, color='gray', linestyle=':')
# Add a vertical dotted line at x=240 (highlight limit)
plt.axvline(x=240, color='gray', linestyle=':')
self.fig.canvas.draw_idle()
I can't get it to exactly 128 without touching the analogue gain, so I leave it there.
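The guide lines above can be reproduced in a stand-alone script; this is a minimal sketch using plain matplotlib with synthetic pixel data (outside the DSuper8 embedded canvas, so `fig`/`ax` here are local names, not the DSuper8 ones):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for scripted use
import matplotlib.pyplot as plt

# Synthetic 8-bit luminance data standing in for a captured frame.
rng = np.random.default_rng(0)
pixels = rng.normal(170, 40, 100_000).clip(0, 255)

fig, ax = plt.subplots()
ax.hist(pixels, bins=256, range=(0, 255))

# Guide lines: 128 = mid-grey target, 240 = highlight limit.
ax.axvline(x=128, color="gray", linestyle=":")
ax.axvline(x=240, color="gray", linestyle=":")
fig.canvas.draw_idle()
```

In the DSuper8 GUI the same two `axvline` calls go into the method that redraws the histogram, as shown in the snippet above.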
Now I check for an even illumination distribution over the film gate. I do this by setting the aperture to 16 on the Componon-S; with a longer exposure time I get this image (of a dirty lens, I guess), but no vignetting, so that's OK I guess:
Then I set the aperture back to ~4 and increased the exposure time until most of the spikes were around 240, which gave me the minimal exposure time for an image with all highlights.
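The "most spikes around 240" criterion can also be checked numerically rather than by eye. A sketch assuming an 8-bit greyscale frame; the function name and tolerance values are illustrative, not from the DSuper8 code:

```python
import numpy as np

def highlight_stats(frame: np.ndarray, target: int = 240, tol: int = 8):
    """Return (fraction of pixels near the highlight target,
    fraction of pixels clipped at 255)."""
    near_target = np.mean(np.abs(frame.astype(int) - target) <= tol)
    clipped = np.mean(frame == 255)
    return near_target, clipped

# Example: a synthetic frame whose highlights sit just under clipping.
rng = np.random.default_rng(1)
frame = rng.normal(235, 6, (240, 320)).clip(0, 255).astype(np.uint8)
near, clipped = highlight_stats(frame)
```

A large `near` together with a small `clipped` would indicate the exposure is at the highlight limit without blowing out.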
I have always found it very exciting to see my software running on other users’ computers.
I think it's very good that you adapt the software to your tastes and needs. It is exactly what I did at the time.
In order to include it in the software distribution, I would like to know more details about how you generated the requirements.txt file and whether the dependencies were actually installed correctly.
Following the suggestion of @d_fens, I have updated the user manual to include instructions for automating the installation of the client software dependencies using the pigar tool.
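For reference, a typical pigar workflow looks like the following. This is a sketch, not the exact steps from the manual; note that pigar changed its command-line interface between major versions, and the project path here is illustrative:

```shell
# Install pigar (ideally into a virtualenv).
pip install pigar

# Generate requirements.txt from the imports actually used in the source tree.
cd DSuper8            # path to the client source; adjust as needed
pigar generate        # pigar >= 2.x; older releases used flags such as `pigar -p requirements.txt`

# Install the generated dependencies.
pip install -r requirements.txt
```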
Recently I have been studying @Moevi.nl's suggestion regarding the possibility of running the DSuper8 software in its entirety, that is, both the server program and the client program simultaneously on the same RPi4 machine.
Having made the appropriate tests, I can confirm that it is indeed a valid option. The software runs fluently and without problems.
Logically, performance in terms of capture times is lower than when using a PC as the client, but if we are not too demanding, the system as a whole is perfectly usable.
The HDR fusion algorithm is responsible for most of the slowness. While on a PC the 6-image HDR fusion is carried out in a fraction of a second, on the RPi it takes approximately 5 s.
Capturing an HDR frame from 6 images takes about 2.6 s on a PC; on the RPi it rises to about 8 s.
On the other hand, the DSuper8 client software uses Python 3.10 and PyQt6, while the repositories of the Bullseye operating system provide Python 3.9.2 and PyQt5.
In other words, we have to compile and install the required versions of Python and PyQt6 on the RPi.
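A build of Python 3.10 from source on Bullseye typically follows the standard recipe below. This is a sketch: the exact 3.10.x version, the dependency list, and the job count are illustrative, and `make altinstall` is used so the system `python3` is left untouched:

```shell
# Build prerequisites (a representative subset; more -dev packages may be needed).
sudo apt install -y build-essential libssl-dev zlib1g-dev libffi-dev \
    libbz2-dev libreadline-dev libsqlite3-dev

# Fetch and unpack a 3.10.x release (version number is an example).
wget https://www.python.org/ftp/python/3.10.13/Python-3.10.13.tgz
tar xf Python-3.10.13.tgz
cd Python-3.10.13

./configure --enable-optimizations
make -j4
sudo make altinstall   # installs python3.10 alongside the system python3

# PyQt6 can then be installed into that interpreter:
python3.10 -m pip install PyQt6
```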