Note that what is called “still image capture” is usually a two-step process. First, libcamera adjusts all of its automatic algorithms (exposure, lens shading, local color adaptation, etc.) until it reaches a state it considers acceptable. That adjustment takes into account any user settings which might be active, for example which AEC mode is selected.
Only after libcamera thinks it has found a state close enough to the user’s request does it take a picture, which is then transferred to your software. That is the reason why taking a long exposure of, say, 1 sec usually takes about double the exposure time; in this example, roughly 2 sec.
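As a rough illustration of this two-phase behaviour, here is a minimal sketch assuming you drive libcamera from Python via the Picamera2 library; the file name and configuration choices are just examples, not the only way to do it:

```python
# Minimal sketch of a two-phase still capture with Picamera2 (the Python
# wrapper around libcamera). Names and values are illustrative.
from picamera2 import Picamera2

picam2 = Picamera2()
preview_config = picam2.create_preview_configuration()
still_config = picam2.create_still_configuration()

# Phase 1: stream preview frames so the automatic algorithms (AEC/AGC,
# AWB, lens shading, ...) can converge towards the requested settings.
picam2.configure(preview_config)
picam2.start()

# Phase 2: once the algorithms have settled, switch to the still mode and
# take the actual picture. This is why a 1 sec exposure can easily take
# about 2 sec end to end.
picam2.switch_mode_and_capture_file(still_config, "photo.jpg")
picam2.stop()
```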
Clearly, in the “take a single photo” use case, a single buffer is sufficient: the user simply waits until the requested photo has been delivered.
But that’s not the way a scanner app is going to be operated. Here, you want to get images out of libcamera as fast as possible. With just a single buffer, libcamera ends up waiting for your application to release the one buffer you have allocated, so the frame rate drops. This is probably not what you want. In a scanner application, you want to work with as many buffers as you can afford; see the sketch below.
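Again assuming Picamera2, a scanner-style loop with several buffers could look roughly like this; buffer_count=6 and the process() helper are made up for the example:

```python
# Sketch of a "scanner" style setup: allocate several buffers so libcamera
# can keep capturing while your application is still busy with the
# previous frame. buffer_count=6 is just an example value.
from picamera2 import Picamera2

picam2 = Picamera2()
# A video-style configuration with several buffers; a still configuration
# defaults to a single buffer, which would throttle the frame rate here.
config = picam2.create_video_configuration(buffer_count=6)
picam2.configure(config)
picam2.start()

for frame_number in range(100):       # e.g. 100 scanned film frames
    array = picam2.capture_array()    # image delivered as a numpy array
    process(array)                    # hypothetical: your own processing/saving code

picam2.stop()
```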
However, since these buffers eat up a lot of contiguous memory (CMA), you are quite limited here, especially when using anything other than an RP4 with 4 GB or more. You might need to increase this memory pool in your config.txt; I already mentioned that above, but here’s the link to an old post of mine where I explain how to do it.
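For orientation only (check the linked post and your OS documentation for the exact syntax on your setup), the CMA pool is typically enlarged with an overlay option in config.txt; the value 512 here is just an example and depends on your total RAM:

```
# /boot/config.txt (or /boot/firmware/config.txt on newer Raspberry Pi OS)
# Enlarge the CMA (contiguous memory) pool used for camera buffers.
# The overlay name and the largest safe value depend on your OS version
# and RAM; cma-512 is only an example.
dtoverlay=vc4-kms-v3d,cma-512
```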