Picamera2: autoexposure question

Although personally I am not a big fan of using autoexposure, a question arose when I was incorporating the capabilities of the picamera2 library into my capture software.

Generally, in our captures, apart from the area of interest constituted by the frame itself, we have unwanted areas around it, such as the sprocket hole and dark edges of varying width.

In order to improve the precision of the library’s autoexposure algorithm, I have thought about setting a rectangle (ScalerCrop) that covers only the frame, even though the entire sensor area is subsequently captured, as happens in raw captures.

My question is this: does the autoexposure algorithm actually use the ScalerCrop or does it simply calculate the exposure using the entire sensor area?

Thank you for your answers.

After asking the question, I kept thinking about the matter.
I decided to run several experiments which, in short, were very simple and gave definitive results.

First of all, it must be clarified that, as I have verified, and unlike the old picamera library, picamera2 does not adjust the exposure time.
With picamera2 the exposure time remains constant and the autoexposure algorithm adjusts the analog gain.

Having said that, I carried out the following experiment:
I took the image of a frame with very high contrast.
With my own software, using the zoom control, I selected a small part of the image; that is, I selected a small ScalerCrop rectangle.
Then I moved the ScalerCrop over the surface of the image and observed the value of the analog gain.

In the darkest areas of the image the analog gain exceeded the value of 8.
In the brightest areas, the same analog gain was little more than 1.
In intermediate areas, intermediate results were obtained: 2.5, 4…

In conclusion, it can be stated that the autoexposure algorithm does in fact use only the part of the image contained inside the ScalerCrop.
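For reference, the experiment boils down to a few picamera2 calls (a sketch; the crop rectangle coordinates are purely illustrative):

from picamera2 import Picamera2
import time

picam2 = Picamera2()
picam2.configure(picam2.create_preview_configuration())
picam2.start()

# Restrict what the AGC/AEC sees: (x_offset, y_offset, width, height)
# in sensor coordinates. The rectangle below is illustrative.
picam2.set_controls({"AeEnable": True, "ScalerCrop": (1000, 800, 400, 300)})
time.sleep(1.0)  # give the algorithm a moment to react

print("AnalogueGain:", picam2.capture_metadata()["AnalogueGain"])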

That’s not quite true: the analog gain applied depends on the shutter.
With good lighting, the analog gain will remain close to 1, which is desirable.

See also the description of the metering_mode parameter.

The AGC/AEC algorithm takes a long time to converge. Here, for example, is the sequence of requests needed to converge to auto exposure starting from a fixed exposure.
The behavior is curious! Normally we should be able to rely on AeLocked, but this is clearly not the case.
Returning to auto:
AeLocked: False Exposure: 14657 Analog: 1.0 Digital: 1.001832365989685
AeLocked: False Exposure: 14657 Analog: 1.0 Digital: 1.001832365989685
AeLocked: True Exposure: 14657 Analog: 1.0 Digital: 1.001832365989685
AeLocked: False Exposure: 14657 Analog: 1.0 Digital: 1.001832365989685
AeLocked: False Exposure: 14657 Analog: 1.0 Digital: 1.001832365989685
AeLocked: False Exposure: 14657 Analog: 1.0 Digital: 1.001832365989685
AeLocked: False Exposure: 14657 Analog: 1.0 Digital: 1.001832365989685
AeLocked: False Exposure: 10000 Analog: 1.3350716829299927 Digital: 1.0002212524414062
AeLocked: False Exposure: 10000 Analog: 1.2278177738189697 Digital: 1.000914454460144
AeLocked: False Exposure: 10000 Analog: 1.1428571939468384 Digital: 1.0008269548416138
AeLocked: False Exposure: 10000 Analog: 1.0756303071975708 Digital: 1.0000661611557007
AeLocked: False Exposure: 10000 Analog: 1.020937204360962 Digital: 1.0002498626708984
AeLocked: True Exposure: 9742 Analog: 1.0 Digital: 1.0010063648223877
AeLocked: True Exposure: 9342 Analog: 1.0 Digital: 1.002004861831665
AeLocked: True Exposure: 9028 Analog: 1.0 Digital: 1.0002583265304565
AeLocked: True Exposure: 8400 Analog: 1.0 Digital: 1.0010716915130615
AeLocked: True Exposure: 8028 Analog: 1.0 Digital: 1.0014674663543701
AeLocked: True Exposure: 7800 Analog: 1.0 Digital: 1.0021942853927612
AeLocked: True Exposure: 7657 Analog: 1.0 Digital: 1.002753496170044
AeLocked: True Exposure: 7571 Analog: 1.0 Digital: 1.002395749092102
AeLocked: True Exposure: 7514 Analog: 1.0 Digital: 1.000394582748413
AeLocked: True Exposure: 7457 Analog: 1.0 Digital: 1.0008976459503174
AeLocked: True Exposure: 7400 Analog: 1.0 Digital: 1.0035778284072876
AeLocked: True Exposure: 7400 Analog: 1.0 Digital: 1.0001429319381714
AeLocked: True Exposure: 7371 Analog: 1.0 Digital: 1.0017125606536865
AeLocked: True Exposure: 7371 Analog: 1.0 Digital: 1.0002228021621704
AeLocked: True Exposure: 7342 Analog: 1.0 Digital: 1.003018856048584
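For reference, a trace like the one above can be produced with a simple metadata loop (a sketch; the print format is mine):

from picamera2 import Picamera2

picam2 = Picamera2()
picam2.configure(picam2.create_preview_configuration())
picam2.start()

picam2.set_controls({"AeEnable": True})  # switch back from fixed exposure
for _ in range(25):
    md = picam2.capture_metadata()
    print("AeLocked:", md["AeLocked"], "Exposure:", md["ExposureTime"],
          "Analog:", md["AnalogueGain"], "Digital:", md["DigitalGain"])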


@dgalland,

Thanks for the document, which I was unaware of; I have always used the picamera2 library documentation.

In my tests I have observed that the exposure time always remains constant and only the analog gain changes. The shutter is probably fixed by default, which is why the exposure time does not vary.

For my part, in my software, I have implemented an algorithm that varies the exposure time until the analog gain returns to the value 1, which is certainly desirable.
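A sketch of the kind of search I mean (my own approach, not a library feature; it assumes, as observed above, that a manually set ExposureTime stays fixed while the AGC varies the gain, and that picam2 is a started Picamera2 instance):

import time

def find_unity_gain_exposure(picam2, exposure_us, tolerance=0.02, max_iter=10):
    for _ in range(max_iter):
        picam2.set_controls({"ExposureTime": exposure_us})
        time.sleep(0.5)  # give the AGC a few frames to react
        gain = picam2.capture_metadata()["AnalogueGain"]
        if abs(gain - 1.0) <= tolerance:
            return exposure_us
        # The AGC compensates the fixed exposure with gain, so scaling the
        # exposure by the measured gain drives the gain back toward 1.
        exposure_us = int(exposure_us * gain)
    return exposure_us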

I am going to study the document and hope to clarify my ideas regarding the operation of the camera.

Hi Manuel

With AeEnable set, I don’t think you can always obtain an AnalogueGain of 1. The split between ExposureTime and AnalogueGain is determined in the tuning file according to the exposure mode. So you need to let the algorithm do its job, and increase the lighting if the AnalogueGain is always too high.

On the other hand, unlike picamera, you can obtain the request metadata without JPEG encoding, which is sufficient, faster, and useful for determining when the AGC/AEC algorithm has converged, and also for HDR when you want to reach a set of fixed exposures:

metadata = picam2.capture_metadata()

You can also obtain the image and metadata without JPEG or DNG encoding, since it is desirable to do the encoding in one or more threads or processes:

request = picam2.capture_request()
image = request.make_image("main") # image from the "main" stream
# ...queue to a thread
metadata = request.get_metadata()
request.release() # requests must always be returned to libcamera

The question of multithreading or multiprocessing needs to be studied carefully. In your program you use Herman’s idea of a pool of threads to send images to the socket. In my opinion this serves no purpose at all: there is only one socket, protected by a Lock, so a single thread would do just as well!

So the simplest, and probably sufficient, approach is one thread for JPEG or DNG encoding and sending to the socket. If that’s not enough, use a pool of threads for encoding plus one thread for submitting to the pool, retrieving in order and sending. And if that’s still not enough, use processes rather than threads.
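A minimal sketch of the single-thread variant (picam2 is a started Picamera2 instance and sock a connected socket, both assumed to exist; cv2 stands in for whatever JPEG encoder you use):

import queue
import threading

import cv2  # stand-in JPEG encoder

def scan(picam2, sock):
    jobs = queue.Queue()

    def encode_and_send():
        # A single thread does both the encoding and the in-order sending.
        while True:
            array = jobs.get()  # RGB array handed over by the capture loop
            ok, jpeg = cv2.imencode(".jpg", array)
            if ok:
                sock.sendall(jpeg.tobytes())
            jobs.task_done()

    threading.Thread(target=encode_and_send, daemon=True).start()

    while True:  # capture loop: copy the image, release the request quickly
        request = picam2.capture_request()
        jobs.put(request.make_array("main"))
        request.release()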

Finally, note that for HDR, David Plowman’s trick is really interesting: for example, at full resolution you can get 5 exposures in just 15 requests…

Lastly, if at all possible, you should have at least two request buffers.

Regards
Dominique

@dgalland,

First of all, thank you for your comments.

I have been doing capture tests with the picamera2 library for quite some time. With AeEnable = True I have always found that the exposure time remains constant and only the analog gain varies, although the gain changes significantly from one exposure to the next. For this reason I have developed a function that looks for the exposure time that results in unity gain; it makes use of the request metadata to do so.
Certainly in libcamera’s AGC/AEC algorithm there are the SetFixedShutter(double fixed_shutter) and SetFixedAnalogueGain(double fixed_analogue_gain) functions, but I can’t find a way to use them from picamera2, where it is only possible to activate or deactivate AeEnable.

In fact my software is based on Joe Herman’s original software. I have kept the thread pool for historical reasons, but some time ago I ran tests and verified that the same thread was always used to send the images; that is, a single thread is enough to do the work. Possibly in the next version I will reduce the pool to a single thread.

David Plowman’s trick has been discussed in several @cpixip posts, but I really find it very confusing and I can’t see a simple way to apply it to my software.
In any case, by making use of the metadata, the convergence of the exposure time can be detected in a short time. In my software I have established a loop that stabilizes the exposure time before capturing the image.
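A sketch of such a loop (the tolerance and retry limit are illustrative; picam2 is a started instance):

def wait_for_exposure(picam2, target_us, tolerance_us=50, max_retries=10):
    # Keep requesting metadata until the sensor reports the wanted exposure.
    picam2.set_controls({"ExposureTime": target_us, "AnalogueGain": 1.0})
    for retry in range(1, max_retries + 1):
        real_us = picam2.capture_metadata()["ExposureTime"]
        if abs(real_us - target_us) <= tolerance_us:
            return retry  # settled: safe to capture the image now
    return None  # did not converge within max_retries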
I include a fragment of the server software log where you can see the stabilization of the exposure time during a capture of 6 bracketed images.

2023-09-13 20:08:20 - INFO - Camera in capture mode
2023-09-13 20:08:20 - INFO - Theoretical exposure time = 1000 us
Camera real time = 3988 us
Difference = 2988 us
2023-09-13 20:08:20 - INFO - Theoretical exposure time = 1000 us
Camera real time = 3988 us
Difference = 2988 us
2023-09-13 20:08:20 - INFO - Theoretical exposure time = 1000 us
Camera real time = 3988 us
Difference = 2988 us
2023-09-13 20:08:20 - INFO - Theoretical exposure time = 1000 us
Camera real time = 3988 us
Difference = 2988 us
2023-09-13 20:08:20 - INFO - Theoretical exposure time = 1000 us
Camera real time = 985 us
Difference = 15 us
2023-09-13 20:08:20 - INFO - Number of retries = 5
2023-09-13 20:08:20 - INFO - Saved <picamera2.request.Helpers object at 0x7f9370d5e0> to file <_io.BytesIO object at 0x7f9371f7c0>.
2023-09-13 20:08:20 - INFO - Time taken for encode: 54.61550899781287 ms.
2023-09-13 20:08:20 - INFO - Thread-05 released
2023-09-13 20:08:20 - INFO - Bracketing image taken. Exp. time = 985 us - Framerate = 40.0 fps - AG = 1.0 - DG = 1.01
2023-09-13 20:08:20 - INFO - Theoretical exposure time = 1741 us
Camera real time = 985 us
Difference = 756 us
2023-09-13 20:08:20 - INFO - Theoretical exposure time = 1741 us
Camera real time = 985 us
Difference = 756 us
2023-09-13 20:08:20 - INFO - Theoretical exposure time = 1741 us
Camera real time = 985 us
Difference = 756 us
2023-09-13 20:08:20 - INFO - Theoretical exposure time = 1741 us
Camera real time = 985 us
Difference = 756 us
2023-09-13 20:08:20 - INFO - Theoretical exposure time = 1741 us
Camera real time = 1729 us
Difference = 12 us
2023-09-13 20:08:20 - INFO - Number of retries = 5
2023-09-13 20:08:21 - INFO - Saved <picamera2.request.Helpers object at 0x7f9370d5e0> to file <_io.BytesIO object at 0x7f9371f7c0>.
2023-09-13 20:08:21 - INFO - Time taken for encode: 55.05618699680781 ms.
2023-09-13 20:08:21 - INFO - Thread-05 released
2023-09-13 20:08:21 - INFO - Bracketing image taken. Exp. time = 1729 us - Framerate = 40.0 fps - AG = 1.0 - DG = 1.01
2023-09-13 20:08:21 - INFO - Theoretical exposure time = 3031 us
Camera real time = 1729 us
Difference = 1302 us
2023-09-13 20:08:21 - INFO - Theoretical exposure time = 3031 us
Camera real time = 1729 us
Difference = 1302 us
2023-09-13 20:08:21 - INFO - Theoretical exposure time = 3031 us
Camera real time = 1729 us
Difference = 1302 us
2023-09-13 20:08:21 - INFO - Theoretical exposure time = 3031 us
Camera real time = 1729 us
Difference = 1302 us
2023-09-13 20:08:21 - INFO - Theoretical exposure time = 3031 us
Camera real time = 3018 us
Difference = 13 us
2023-09-13 20:08:21 - INFO - Number of retries = 5
2023-09-13 20:08:21 - INFO - Saved <picamera2.request.Helpers object at 0x7f9370d5e0> to file <_io.BytesIO object at 0x7f9371f7c0>.
2023-09-13 20:08:21 - INFO - Time taken for encode: 56.92817700037267 ms.
2023-09-13 20:08:21 - INFO - Thread-05 released
2023-09-13 20:08:21 - INFO - Bracketing image taken. Exp. time = 3018 us - Framerate = 40.0 fps - AG = 1.0 - DG = 1.0
2023-09-13 20:08:21 - INFO - Theoretical exposure time = 5278 us
Camera real time = 3018 us
Difference = 2260 us
2023-09-13 20:08:21 - INFO - Theoretical exposure time = 5278 us
Camera real time = 3018 us
Difference = 2260 us
2023-09-13 20:08:21 - INFO - Theoretical exposure time = 5278 us
Camera real time = 3018 us
Difference = 2260 us
2023-09-13 20:08:21 - INFO - Theoretical exposure time = 5278 us
Camera real time = 3018 us
Difference = 2260 us
2023-09-13 20:08:22 - INFO - Theoretical exposure time = 5278 us
Camera real time = 5278 us
Difference = 0 us
2023-09-13 20:08:22 - INFO - Number of retries = 5
2023-09-13 20:08:22 - INFO - Saved <picamera2.request.Helpers object at 0x7f9370d5e0> to file <_io.BytesIO object at 0x7f9371f7c0>.
2023-09-13 20:08:22 - INFO - Time taken for encode: 57.792126997810556 ms.
2023-09-13 20:08:22 - INFO - Thread-05 released
2023-09-13 20:08:22 - INFO - Bracketing image taken. Exp. time = 5278 us - Framerate = 40.0 fps - AG = 1.0 - DG = 1.0
2023-09-13 20:08:22 - INFO - Theoretical exposure time = 9189 us
Camera real time = 5278 us
Difference = 3911 us
2023-09-13 20:08:22 - INFO - Theoretical exposure time = 9189 us
Camera real time = 5278 us
Difference = 3911 us
2023-09-13 20:08:22 - INFO - Theoretical exposure time = 9189 us
Camera real time = 5278 us
Difference = 3911 us
2023-09-13 20:08:22 - INFO - Theoretical exposure time = 9189 us
Camera real time = 5278 us
Difference = 3911 us
2023-09-13 20:08:22 - INFO - Theoretical exposure time = 9189 us
Camera real time = 9160 us
Difference = 29 us
2023-09-13 20:08:22 - INFO - Number of retries = 5
2023-09-13 20:08:22 - INFO - Saved <picamera2.request.Helpers object at 0x7f9370d5e0> to file <_io.BytesIO object at 0x7f9371f7c0>.
2023-09-13 20:08:22 - INFO - Time taken for encode: 59.316120998119004 ms.
2023-09-13 20:08:22 - INFO - Thread-05 released
2023-09-13 20:08:22 - INFO - Bracketing image taken. Exp. time = 9160 us - Framerate = 40.0 fps - AG = 1.0 - DG = 1.0
2023-09-13 20:08:23 - INFO - Theoretical exposure time = 16000 us
Camera real time = 9160 us
Difference = 6840 us
2023-09-13 20:08:23 - INFO - Theoretical exposure time = 16000 us
Camera real time = 9160 us
Difference = 6840 us
2023-09-13 20:08:23 - INFO - Theoretical exposure time = 16000 us
Camera real time = 9160 us
Difference = 6840 us
2023-09-13 20:08:23 - INFO - Theoretical exposure time = 16000 us
Camera real time = 9160 us
Difference = 6840 us
2023-09-13 20:08:23 - INFO - Theoretical exposure time = 16000 us
Camera real time = 15985 us
Difference = 15 us
2023-09-13 20:08:23 - INFO - Number of retries = 5
2023-09-13 20:08:23 - INFO - Saved <picamera2.request.Helpers object at 0x7f9370d5e0> to file <_io.BytesIO object at 0x7f9371f7c0>.
2023-09-13 20:08:23 - INFO - Time taken for encode: 57.02523099898826 ms.
2023-09-13 20:08:23 - INFO - Thread-05 released
2023-09-13 20:08:23 - INFO - Last bracketing image taken. Exp. time = 15985 us - Framerate = 40.0 fps - AG = 1.0 - DG = 1.0
2023-09-13 20:08:23 - INFO - Sent image 74
2023-09-13 20:08:23 - INFO - n
2023-09-13 20:08:23 - INFO - 1 frame advance
2023-09-13 20:08:23 - INFO - Frame advance signal sent

In section 4.2.1.3 (More on the buffer_count) of the picamera2 manual the following appears:

• create_still_configuration requests just one set of buffers (as these are normally large full resolution buffers)

It seems that a single buffer is recommended for still image capture.

Best regards

Note that what is called “still image capture” is usually a two-step process. First, libcamera adjusts all its automatic algorithms (exposure, lens shading, local color adaptation, etc.) to something it thinks is ok. That adjustment takes into account whatever user settings are active, for example which AEC mode is selected.

Only after libcamera thinks it has found a state close enough to the user’s request does it take a picture, which is then transferred to your software. That is the reason why taking long exposures, say 1 s, usually takes double the exposure time. In this example, it would be 2 s.

Clearly, in the “take a single photo” use case, a single buffer is sufficient. The user simply waits until the requested photo has been delivered.

But that’s not the way a scanner app is operated. Here you want to get images from libcamera as fast as possible, and with just a single buffer libcamera ends up waiting for your application to release the one buffer you have allocated, so the frame rate drops. This is probably not what you want. In a scanner application you want to work with as many buffers as you can afford.
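For example, more buffers can be requested when building the configuration (a minimal sketch; the buffer count is illustrative):

from picamera2 import Picamera2

picam2 = Picamera2()
config = picam2.create_still_configuration(buffer_count=4)
picam2.configure(config)
picam2.start()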

However, since these buffers eat up a lot of contiguous memory space, you are quite limited here, especially when using anything other than an RPi 4 with 4 GB or more. You might need to increase this space in your config.txt; I already mentioned that above, but here’s the link to an old post of mine where I explain how to do it.
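The mechanism, for reference (the exact CMA value depends on your Pi’s memory; see the picamera2 manual):

# /boot/config.txt (or /boot/firmware/config.txt on newer OS images)
dtoverlay=vc4-kms-v3d,cma-512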

For auto-exposure, it seems contradictory to me to try to manually modify ExposureTime while the AGC/AEC algorithm is running with AeEnable=True. I think the result is unpredictable, and certainly not “true” auto-exposure. You have to let the algorithm do its work and, as explained, the AnalogueGain is determined by the exposure mode in the tuning file. In the default tuning file and in normal exposure mode, the gain will be 1 if ExposureTime is below 10000 µs.

As @cpixip explains, the aim is to capture continuous still images with the best fps, so you need to obtain and process them as quickly as possible.

Here are some measurements at full resolution on a Pi 4B 8GB:

Loop on capture_request() only:
1 buffer: 5.5 fps
2 buffers: 11 fps
Conclusion: at least two buffers are essential (beyond that, the gain is marginal).
This is the fastest case, close to the sensor’s nominal frame rate.
Use it when you are only interested in the metadata and not the image.

Loop on request.make_buffer or make_array:
5 fps
Some people wonder why a simple memcpy takes so long; a rather technical subject to study.

Loop on capture_file (in-memory BytesIO) with JPEG encoding:
1.2 fps
Conclusion: avoid capture_file in the picamera2/libcamera main loop; see section 6.4 of the picamera2 manual.

Loop on make_array plus JPEG encoding in one thread:
4.8 fps
Simple and efficient; a single thread seems sufficient, and the same thread can also do the network sending.
It remains to investigate whether more complex patterns (multiple threads, or processes instead of threads) increase the fps.
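A sketch of the kind of timing loop behind these numbers (picam2 is an already-configured, started instance; the iteration count is arbitrary):

import time

N = 100
start = time.monotonic()
for _ in range(N):
    request = picam2.capture_request()
    # array = request.make_array("main")  # enable to measure the copy too
    request.release()
print(f"{N / (time.monotonic() - start):.1f} fps")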

Finally, the trick for multiple exposures is a bit complicated but really effective: in your example I see 5 captures to reach one exposure, while the trick reaches it in just 3. It’s described in a discussion on the picamera2 GitHub.

@dgalland,

What I have observed in my tests is the following:

  • We have a manual exposure time of, for example, 4 ms.
  • We activate AeEnable.
  • We begin the capture and capture a long series of images.
  • Despite the different luminosity of the different images, the exposure time is always 4 ms, while the analog gain varies widely.

I just uploaded a video with a test that demonstrates this behavior.
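Condensed, the test amounts to this (a sketch; picam2 is a started instance):

picam2.set_controls({"ExposureTime": 4000, "AnalogueGain": 1.0})  # manual 4 ms
# ...then enable auto exposure without zeroing ExposureTime:
picam2.set_controls({"AeEnable": True})
for _ in range(100):
    md = picam2.capture_metadata()
    # ExposureTime stays at ~4000 us; only AnalogueGain follows the scene
    print(md["ExposureTime"], md["AnalogueGain"])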

In reality they are not five image captures but five metadata captures. The image is not captured until the exposure time matches the desired one, within a tolerance of 50 µs.


The AGC algorithm is not completely documented, but reading the code it seems that we can set ExposureTime or AnalogueGain even when AeEnable is active (not both parameters, obviously). In that case the algorithm would probably vary only the other parameter? This may be what is happening in your case. So try setting ExposureTime=0 before AeEnable (I’m far from my computer and can’t test).
Finally, it is not easy to know when the AGC algorithm has converged; it takes at least fifteen frames.
And for the trick it is likewise 15 metadata requests for 5 exposures.
Regards


@dgalland

Indeed that is the trick.

For the picamera2 AEC/AGC algorithm to adjust the exposure time, in addition to setting AeEnable = True, you must manually set the exposure time to 0.
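In picamera2 terms (per the manual; zeroing AnalogueGain as well hands that control back to the algorithm too):

picam2.set_controls({"AeEnable": True, "ExposureTime": 0, "AnalogueGain": 0.0})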

I have been developing a dark theme for the GUI of my software, so I did not get to test your suggestion until today. I was immediately able to verify that, after setting the exposure time to 0, the AEC/AGC algorithm automatically adjusts both the exposure time and the analog gain.

Now I have to see how to check the convergence of the exposure time.

Thank you very much for your wise suggestion.

Best regards


I tried to optimize the “preroll” phase of the exposure cycle by starting a thread that continuously loops through the exposures, listens for a “capture request”, and then pushes the (in my case five) image arrays to a queue.

The idea was to get the images immediately and not wait the ~10 frames until the camera settles to the requested exposure.

Now the issue is that the loop processing seems to take too long (even with no make_array in it), and I don’t get the five exposures in five requests as I would have expected, but rather duplicate or out-of-order exposures. Only in about one run out of 15 do I get all different exposures directly and finish in half a second at full resolution with make_array.

I suspect that running this in a Flask (debug) context slows it down on the Raspberry Pi 4; does it work immediately for you?

Just for reference, this is as good as it gets for me; the performance depends on the pause between captures, the number of image-saving threads, and of course the image size.

import os
import threading
import time
import queue
from picamera2 import Picamera2
import cv2
import uuid

class Camera():
    _instance = None
    _lock = threading.Lock()
    
    RESOLUTIONS = [(2028, 1520), (4056, 3040)]

    def __new__(cls, *args, **kwargs):
        with cls._lock:
            if not cls._instance:
                cls._instance = super(Camera, cls).__new__(cls)
                cls._instance._initialized = False
        return cls._instance
    
    def __init__(self):
        with self._lock:
            if not self._initialized:
                tuningfile = Picamera2.load_tuning_file("imx477_scientific.json")
                self.picam2 = Picamera2(tuning=tuningfile)
                self._configure_settings()
                self.image_queue = queue.Queue()
                self.capturing = False
                self.exposure_list = [639, 1278, 2556, 5112, 10224] 
                self.capture_trigger = threading.Event()
                
                self.saving = True  # saver_thread should keep running            
                self.saver_threads = []  # list for saving threads
                for _ in range(4): # Play with this
                    saver_thread = threading.Thread(target=self.save_images)
                    saver_thread.daemon = True  # Set the thread to daemon so it automatically closes when the main program ends
                    saver_thread.start()
                    self.saver_threads.append(saver_thread)

                self.output_directory = os.path.join("app", "static", "output")
                if not os.path.exists(self.output_directory):
                    os.makedirs(self.output_directory)
                self.picam2.start()
                
                self._initialized = True

    def _configure_settings(self):
        self.minExpTime, self.maxExpTime = 100, 32000000
        self.picam2.still_configuration.buffer_count = 2
        self.picam2.still_configuration.transform.vflip = True
        self.picam2.still_configuration.main.size = self.RESOLUTIONS[1]
        self.picam2.still_configuration.main.format = ("RGB888")
        self.picam2.still_configuration.main.stride = None
        self.picam2.still_configuration.queue = True
        self.picam2.still_configuration.display = None
        self.picam2.still_configuration.encode = None
        self.picam2.still_configuration.lores = None
        self.picam2.still_configuration.raw = None
        self.picam2.still_configuration.controls.NoiseReductionMode = 0
        self.picam2.still_configuration.controls.FrameDurationLimits = (self.minExpTime, self.maxExpTime)
        self.picam2.configure("still")
        self.picam2.controls.AeEnable = False
        self.picam2.controls.AeMeteringMode = 0
        self.picam2.controls.Saturation = 1.0
        self.picam2.controls.Brightness = 0.0
        self.picam2.controls.Contrast = 1.0
        self.picam2.controls.AnalogueGain = 1.0
        self.picam2.controls.Sharpness = 1.0

    def start_exposure_cycling(self):   
        self.capturing = True
        self.capture_thread = threading.Thread(target=self._continuous_capture)
        self.capture_thread.start()
        
    def stop_exposure_cycling(self):
        self.capturing = False
        self.capture_thread.join()  # wait for the capture loop to exit
        self.saving = False
        for thread in self.saver_threads:
            thread.join()

    def trigger_capture(self):
        self.capture_trigger.set() 

    def empty_queue(self):
        try:
            while True:
                item = self.image_queue.get_nowait()
                # Optionally, do something with item
        except queue.Empty:
            pass
        
    def _continuous_capture(self):
        # Helper function to match exposure settings
        def match_exp(metadata, indexed_list):
            err_factor = 0.01
            err_exp_offset = 30
            exp = metadata["ExposureTime"]
            gain = metadata["AnalogueGain"]
            for want in indexed_list:
                want_exp, _ = want
                if abs(gain - 1.0) < err_factor and abs(exp - want_exp) < want_exp * err_factor + err_exp_offset:
                    return want
            return None

        exposure_index = 0  # To track the current exposure setting
        capture_id = None
        remaining_exposures = set()
        # Clear the queue
        self.empty_queue()
        while self.capturing:
            if not self.capture_trigger.is_set():
                target_exp = self.exposure_list[exposure_index]  # Get the target exposure
                _ = self.picam2.capture_metadata()  # Get metadata, otherwise the loop runs too fast
                self.picam2.set_controls({"ExposureTime": target_exp, "AnalogueGain": 1.0})  # Set the camera controls
                # Advance here only while idling; during a capture cycle the index
                # is advanced right after each request, so advancing again at the
                # bottom of the loop would skip exposures.
                exposure_index = (exposure_index + 1) % len(self.exposure_list)

            if self.capture_trigger.is_set() and not remaining_exposures:
                start_time = time.time()  # Save the start time
                # Begin a new capture cycle
                capture_id = uuid.uuid4()  # Generate a unique ID for this capture cycle
                remaining_exposures = {(exp, i) for i, exp in enumerate(self.exposure_list)}  # Reset the exposures to be captured

            if remaining_exposures:
                # Capture and validate the image during a capture cycle
                request = self.picam2.capture_request()
                meta = request.get_metadata()
                image_data = request.make_buffer("main")
                request.release()
                print(f'Captured metadata: {meta["ExposureTime"]} and {meta["SensorTimestamp"]}')

                # Set the next exposure immediately
                exposure_index = (exposure_index + 1) % len(self.exposure_list)
                target_exp = self.exposure_list[exposure_index]
                self.picam2.set_controls({"ExposureTime": target_exp, "AnalogueGain": 1.0})

                matched_exp = match_exp(meta, remaining_exposures)
                if matched_exp:
                    _, i = matched_exp
                    image_info = {
                        "capture_id": capture_id,
                        "exposure": matched_exp[0],
                        "metadata": meta,
                        "image_data": image_data
                    }
                    self.image_queue.put(image_info)
                    remaining_exposures.remove(matched_exp)  # Remove the matched exposure from the remaining exposures
                    print(f'Used : {meta["ExposureTime"]} and {meta["DigitalGain"]}  ')
                    
            # If all exposures are captured, clear the capture trigger
            if self.capture_trigger.is_set() and not remaining_exposures:
                self.capture_trigger.clear()
                end_time = time.time()  # Save the end time
                elapsed_time = end_time - start_time  # Calculate elapsed time
                print(f"capture_multiple_exposures took {elapsed_time:.2f} seconds")

        

    def save_images(self):
        while self.saving or not self.image_queue.empty():
            if not self.saving:
                print(f"Still {self.image_queue.qsize()} images left to process")
            try:
                image_info = self.image_queue.get(timeout=1)
                capture_id = image_info["capture_id"]
                exposure = image_info["exposure"]
                image_data = image_info["image_data"]
                capture_dir = os.path.join(self.output_directory, str(capture_id))
                os.makedirs(capture_dir, exist_ok=True)
                filename = f"image_exposure_{exposure}.jpg"
                filepath = os.path.join(capture_dir, filename)
                cv2.imwrite(filepath, self.picam2.helpers.make_array(image_data, self.picam2.camera_configuration()["main"]))
                self.image_queue.task_done()
            except queue.Empty:
                if not self.saving:  # Check if saving should be stopped
                    break

if __name__ == "__main__":
    camera = Camera()
    camera.start_exposure_cycling()

    time.sleep(2)
    for _ in range(20):
        camera.trigger_capture()
        time.sleep(1.5) # Play with this
    
    camera.stop_exposure_cycling()