This is fantastic news! I tried your code and it indeed works. Something must have changed in the latest updates to libcamera/picamera2 - it seems it is no longer necessary to ask for the same exposure several times in a row. If you now ask the sensor for a certain exposure time, it is immediately spot on, without the digital-gain circus the previous versions were showing.
Taking this into account, I am now getting about 1.8 sec/frame instead of the earlier 2.4 sec/frame. That is, capture and image transfer to the PC client take about 800 msec; the rest is spent moving the film (500 msec) and waiting for the mechanical movements to die out (another 500 msec). 800 msec is still about twice what is really needed, so there are still some bottlenecks in my code. It would probably be better not to transfer a captured frame immediately (as is done currently), but to use the movement period for transferring the data. Time to get back to programming!
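Something along these lines is what I have in mind - a minimal sketch only, where the capture, transfer and transport functions and all timings are just stand-ins, not my actual scanner code:

```python
# Sketch: overlap the PC transfer with the film transport and settle time.
# capture_frame(), send_to_pc() and the timings below are placeholders.
import queue
import threading
import time

send_queue = queue.Queue()

def send_to_pc(frame: bytes) -> None:
    time.sleep(0.4)                      # stand-in for the network transfer

def capture_frame() -> bytes:
    time.sleep(0.4)                      # stand-in for the actual capture
    return b"frame data"

def transfer_worker() -> None:
    """Ships captured frames to the PC client in the background."""
    while True:
        frame = send_queue.get()
        if frame is None:                # sentinel: scan finished
            break
        send_to_pc(frame)

threading.Thread(target=transfer_worker, daemon=True).start()

for _ in range(10):                      # a few example frames
    send_queue.put(capture_frame())      # hand the frame off, don't wait
    time.sleep(0.5)                      # film transport (~500 msec)
    time.sleep(0.5)                      # wait for vibrations to die out (~500 msec)

send_queue.put(None)                     # stop the background worker
```

With the transfer hidden behind the roughly 1 second of mechanical dead time, only the capture itself should remain on the critical path.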
– is that caused by the gate, or is it maybe related to the film still moving? As mentioned above, I need to wait quite a while (with my 3D-printed plastic scanner) until the mechanical vibrations die out.
It looks like the thread has moved on to other (very cool) information, but while reading through from the beginning, I got a little stuck trying to work through this part of @cpixip’s answer in my head.
If this were a reflective process, the direction would obviously matter… but when you’re passing the light all the way through the substrate in both cases, it should be the same either way, shouldn’t it? Starting from the front or the back, the light is always going to pass through the emulsion layer, the carrier layer and whatever else might be in there, including the scratches on both sides. So any attenuation from those layers would be cumulative, no matter the order they’re traversed, right?
Assuming a perfectly diffuse light source, Ray Reversibility even guarantees that scratches on either side will end up affecting things the same regardless of which side is being scanned. I’d have to think through the perfectly collimated light source case. The exact pattern of the scratches on each side might matter more in that case, but my gut still wants to say Reversibility applies. Even if it didn’t, the density and patterning of the scratches appears very similar on both sides so any impact would effectively be randomly distributed.
(Thanks for those surface light images, @PixelPerfect. I’ve been curious to see a real-world example. I’m still in the planning stages (year 8 of them) of my scanning machine and have aspirations of including a wet gate… but probably not at first.)
As an exercise, I overlaid/aligned both “backlight on” images in Photoshop so I could toggle between them. I couldn’t pick out any differences besides minor camera sensor noise and a very slight change in lens focus.
Did you have some other objection in mind that I haven’t considered here? It feels like the film’s facing direction shouldn’t matter at all, and PixelPerfect shouldn’t need to worry about redesigning the original Wolverine frame.
What a dramatic comparison! My favorite difference is the high-frequency fence slats at the back left. They’re almost completely missing in the Wolverine shot. If that isn’t justification for the time and effort we’re spending on these machines to squeeze out all of the available detail, I don’t know what is! Nice work so far.
Just a thought experiment: open up the f-stop on your camera and get the emulsion side of the film perfectly in focus. Will it matter if there is a scratch on the other side of the film? Not much, because it will be blurred out due to the shallow depth of field of the setup. Now move the scratch around to the emulsion side of the film, and you will certainly notice it, as this side is assumed to be perfectly in focus.
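To put a rough number on that shallow depth of field (the values below are just illustrative assumptions, not measurements from any particular setup), the usual thin-lens macro approximation DOF ≈ 2·N·c·(m+1)/m² gives something like this:

```python
# Back-of-the-envelope depth-of-field estimate with assumed values:
# DOF ≈ 2 * N * c * (m + 1) / m**2  (thin-lens macro approximation)
N = 2.8      # f-number, opened up as in the thought experiment
c = 0.005    # circle of confusion in mm (~5 µm, roughly one sensor pixel)
m = 1.0      # magnification, assuming roughly 1:1 reproduction

dof_mm = 2 * N * c * (m + 1) / m**2
print(f"depth of field ≈ {dof_mm * 1000:.0f} µm")   # about 56 µm

film_base_um = 120   # assumed order of magnitude for the film base thickness
print(f"film base thickness ≈ {film_base_um} µm")
```

With those assumptions the in-focus slice is only about half the thickness of the film base, so a scratch on the far side sits comfortably outside the focal plane.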
Yes, but if you reverse the facing direction of the film you could perform the same experiment and get the same results. You can focus right past the scratches and carrier to get the emulsion into razor focus even if it’s on the back side. In both cases the same photons are passing through all the same stuff. It should be independent of aperture and focal plane, right?
Given a narrow depth of field, you’d just have to be a little more careful which side of the film you had in focus (which is why we generally prefer wider depths of field), but the resulting images would be identical*.
*Again, assuming randomly distributed, similar densities of scratches on both sides. Technically for the same individual scratch the light would be deflected earlier/later by the thickness of the film and in the opposite refraction direction, but from a probabilistic standpoint, the results are the same. You could argue there is technically more information to be gained by scanning both sides and doing something interesting with the fusion of the resulting data… but to do anything useful with it, it’d probably require more precision in focusing, higher glass quality, and lower sensor noise floors than we hobbyists have access to.
It might be a neat experiment with a giant 8k sensor and sub-micron lens control. You’d be gathering a kind of “light field” data set (like those Lytro cameras). I’m not sure any of it could beat simply matching the index of refraction using a wet gate to hide the (non-emulsion) scratches, but it’s still interesting to think about, nonetheless.
Film subtracts from the passing light to form the image we are after. The light that didn’t make it through is the image one would like to capture.
The typical lens used here has its best resolution/sharpness somewhere between f/4.0 and f/5.6, which is not the wider depth of field one would ideally like. Reference info here.
In short, there are noticeable differences between the two sides. The better the condition of your scanner and of the film, the less noticeable or significant these may be, but there is a difference nevertheless.
And when in doubt… try! Many of the things I have learned in this forum came from trying both options. In the world of scanning (8mm in particular), the minimal incremental improvements in all areas of the scanner and the scanning process add up to remarkable overall improvements.
As you very well say, if we take the image from the emulsion side, the light that passes through the support from the other side is simply photons, which subsequently pass through the emulsion and form the image.
In the reverse case, taking the image from the side opposite the emulsion and assuming the image is perfectly focused, the photons already carry the image information. The image is already formed, but it has to pass through the support material with its possible imperfections (for example scratches, inhomogeneities in the material, manufacturing defects, etc.). Due to phenomena such as refraction, these imperfections will alter the original image found in the emulsion.
No, there is an important difference. In a usual scanning setup, you have an integrating sphere on one side of the film and the scanning camera on the other. The illumination enters the film from all possible directions - that is, a half sphere. The light also exits the film in all directions, but only a narrow angle of the total light is captured by the lens and projected onto the sensor.
In the area of a scratch on the carrier side of the film, the scratch will sample light from a different part of the illuminating sphere - which does not matter too much, as the light intensity is mostly direction-independent. A scratch on the side the camera is looking through does matter, as it will bend the nearly parallel rays the lens captures away from their correct position in the image.
Another issue is that a deep scratch on the carrier side of the film will not damage the frame’s information - there is a layer of protection, a fraction of a mm thick, between the scratch and the permanent image. The same scratch on the emulsion side will typically show up as a bright bluish line in the captured image.
Yesterday I experimented with trying to reach maximum speed. The best I can get is 0.6 seconds per frame, running continuously, with no blurry moving frames and 4 different exposures per frame.
The capture loop looks like this (a rough code sketch follows the list):
100ms: Capture EV-1, set EV-dummy
100ms: Capture EV-0, set EV-1
Async: Start movement in separate thread, transport duration 100ms
100ms: Capture EV+1, set EV-0
100ms: Capture EV+2, set EV+1
100ms: Capture dummy while movement starting, set EV+2
100ms: Capture dummy while movement stopping, set EV-dummy
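The sketch below shows roughly how I read that loop in picamera2 terms; it is not my exact code. The key assumption is that an exposure requested in one slot only takes effect a few slots (around 500 ms) later, so every slot captures one frame while already requesting an exposure for a future slot. The exposure times, the 100 ms slot length and the transport helper are placeholders.

```python
import threading
from picamera2 import Picamera2

BASE_US = 10_000                                    # assumed EV-0 exposure time
EV = {"EV-1": BASE_US // 2, "EV-0": BASE_US,
      "EV+1": BASE_US * 2, "EV+2": BASE_US * 4,
      "EV-dummy": BASE_US}

# (exposure captured in this slot, exposure requested for a later slot)
SLOTS = [("EV-1", "EV-dummy"),
         ("EV-0", "EV-1"),        # film transport is kicked off after this slot
         ("EV+1", "EV-0"),
         ("EV+2", "EV+1"),
         ("dummy", "EV+2"),       # movement starting
         ("dummy", "EV-dummy")]   # movement stopping

picam2 = Picamera2()
picam2.configure(picam2.create_video_configuration())
picam2.set_controls({"AeEnable": False, "AnalogueGain": 1.0,
                     "ExposureTime": BASE_US,
                     "FrameDurationLimits": (100_000, 100_000)})  # 100 ms slots
picam2.start()

def advance_film():
    pass   # placeholder: ~100 ms transport, timed so EV+1/EV+2 stay sharp

def scan_frame():
    frames = []
    for captured, requested in SLOTS:
        frame = picam2.capture_array("main")               # one 100 ms slot
        picam2.set_controls({"ExposureTime": EV[requested]})  # applied frames later
        if captured != "dummy":
            frames.append(frame)                           # keep EV-1 … EV+2
        if captured == "EV-0":
            threading.Thread(target=advance_film).start()  # async transport
    return frames                                          # 4 exposures, 0.6 s total
```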
It can be done if we know exactly how much delay the camera has. However, I think I will settle for something a little slower and more stable, using only a single RAW image per frame.
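For completeness, the single-RAW variant is about as simple as it gets. This is only a sketch: the file names, exposure value and the transport helper are placeholders.

```python
from picamera2 import Picamera2

picam2 = Picamera2()
picam2.configure(picam2.create_still_configuration(raw={}))   # include the raw stream
picam2.set_controls({"AeEnable": False, "ExposureTime": 10_000, "AnalogueGain": 1.0})
picam2.start()

def advance_film():
    pass                                   # placeholder: move to the next frame and settle

for frame_no in range(100):                # example: 100 frames
    request = picam2.capture_request()
    request.save_dng(f"frame_{frame_no:05d}.dng")   # one DNG per frame
    request.release()
    advance_film()
```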
Good question, I’ll investigate this.
It’s an interesting topic, and I was “worried” about doing all my captures wrong. But after testing, it is really difficult for me to see any difference. So I’ve decided the benefits don’t outweigh the cost, and I’ll continue capturing the “wrong” side of the film.
Exactly! Yesterday I captured my first 10 seconds of film (single raw/dng images from the “wrong” side) and put them in Davinci. This makes the difference even more visible: much more detail, much better color, no cropping of the frame, much less shaking (compared to frames captured during transport). The difference is so huge that perhaps I’ll upload a comparison video to youtube. In this case the benefits very much do outweigh the cost.
@PixelPerfect, I had a chance to try your class and HDR capture; it works great. Thank you for sharing.
I did some testing capturing at full resolution, and it seemed like it was not keeping up, but that may have been my limited knowledge of the library. I was able to capture full resolution by setting up a separate stream for the preview from the one used for capture.
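In case it helps anyone else, the separate-streams setup looks roughly like this; the stream sizes are just examples for the HQ camera, not a recommendation.

```python
from picamera2 import Picamera2

picam2 = Picamera2()
config = picam2.create_video_configuration(
    main={"size": (4056, 3040)},     # full-resolution capture stream (HQ camera)
    lores={"size": (640, 480)},      # small, cheap preview stream
)
picam2.configure(config)
picam2.start()

preview = picam2.capture_array("lores")   # fast, for the on-screen preview
frame = picam2.capture_array("main")      # full resolution, only when capturing
```

Keeping the preview on the small lores stream means the full-resolution main stream only has to be read when a frame is actually being captured.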
Nice idea, but I’m not sure I understand the approach:
Do you expect the camera to react with a 500 ms delay? How is setting the EV connected to capturing the image with the requested properties later in the loop? And why are you setting the dummy value?