My 8mm/Super8 Scanner build (in progress, LED help)

Ah, ok. I understand. It is difficult to say where this comes from. If you look at the red channel of your .jpg file,

you see a rather low signal in the red channel at these positions. Note that the same dark streaks appear at the frame border (bottom of your scan), between the bright sky and the frame's boundary. It seems to affect mostly strong-contrast horizontal edges.

It is difficult to say what causes this. The blue halo is noticeable in both your .jpg and raw scans, but not in the professional scan. So it's probably not caused by chromatic aberration of the movie camera's lens. However, the professional scan's resolution might be too low to even show it.

If it were caused by chromatic aberration of your scanning lens, one would expect the same issue to be present around the sprocket holes as well. That's not really what I see. Your lens seems to perform well, at least around the sprocket area. I am not sure about the right border of your raw scan - is the blue line visible here a result of your raw processing, or is it already in the image data?

So this type of halo remains a mystery to me, at least with the currently available information.


Hi @Rowan,

In my device for transporting the film I use an old 8mm/Super8 projector.
In this projector, as in many others, the space for the lighting system is quite limited. For this reason, from the beginning I have always used, with more or less success, commercial white light LED lamps.
The one I currently use is the following:

It is a very cheap lamp (€1.30). I do not know its CRI, as this parameter does not appear in the technical data.
However, in my opinion, it gives good results for the 8mm film scanning application.
For the white balance adjustment, I used at the time exactly the same procedure described by @cpixip.
Attached is the histogram of the image captured by the RPi HQ camera with no film threaded in the projector.
In the histogram there are actually three superimposed curves, corresponding to the blue, green and red channels. Only the red curve is visible, as it is drawn last.
An ideal histogram would be three overlapping vertical lines. It would mean that all pixels receive the same level of illumination. Of course this is impossible to achieve in practice. Instead we should try to get the base of the curves as narrow as possible, meaning there is little spread in light levels.
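
In case it is useful, here is a minimal sketch of how such a histogram can be evaluated numerically, assuming picamera2 and numpy (the still configuration and format are illustrative, and the channel order depends on the configured format):

    import numpy as np
    from picamera2 import Picamera2

    picam2 = Picamera2()
    picam2.configure(picam2.create_still_configuration(main={"format": "RGB888"}))
    picam2.start()

    # Capture one frame with no film in the gate.
    img = picam2.capture_array("main")    # H x W x 3 array
    picam2.stop()

    # Peak position and half-height width per channel: the narrower the
    # curve, the more even the illumination.
    for i in range(3):
        hist, _ = np.histogram(img[..., i], bins=256, range=(0, 256))
        print(f"channel {i}: peak at {hist.argmax()}, "
              f"half-height width {(hist >= hist.max() / 2).sum()}")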

The histogram and images in the above link were taken on the HQ camera using the old Picamera library.
As a comparison, I attach a histogram and an image of the same frame captured with the HQ camera but using the new Picamera2 library. The camera has been tuned with the imx477_scientific.json file (thanks for sharing, @cpixip).

Histogram

I am finalizing a new version of my DSuper8 software that already uses the new library. I hope to post it on the forum shortly, as I did with previous versions.

Regards


You don't have a gate holding the film flat, so of course you have that limitation.

In terms of the colour, you need a high-CRI white light; there's really no alternative (other than building a full-spectrum RGB light), and it will improve the dynamic range you're getting by an order of magnitude. Your scan, whatever the initial colour cast (which may be removable), is missing entire wavelength bands of colour because the light isn't conveying them to the camera, and that is what's causing the limited dynamic range.

Bear in mind that 90 CRI isn't very high at all. The price escalates as you go higher in CRI, and you should stick to a reputable brand like Yujiled and stay away from eBay. :slight_smile:


Having a rather brutal time trying to ‘learn’ the ins and outs of picamera2
The Picamera2 Library (raspberrypi.com)

All I want to do is switch to cpixip's scientific tuning file, but I can't find any instructions on how to do so - I thought that reading the manual might shed some light on it, but the commands at the start of the manual don't seem to work anymore; I heard a rumour that the updated OS has borked it? I'm hesitant to carry on reading. What a headache!! Argh!
Does anyone have any suggestions on where I might look for answers, or which path I should be looking at now?

Hi @Rowan,

Certainly the beginnings are not usually easy. We must break the ice before starting to develop our own ideas.

I usually update my RPi once a week and the HQ camera with the picamera2 library works fine.

Perhaps the manual does not explain certain aspects of the library in sufficient detail.

In my case, as I was reading the manual, I was testing sending commands to the camera in an ipython console.

I plan to publish the latest version of my software, which uses the picamera2 library, soon.

Actually, the final version of the software is already finished. Now I am preparing a user manual. As soon as it is finished, I will publish the software.

However, I am attaching the camera.py file, where you can see how I am managing the camera. I think it works quite well.

I have tried to make it clear and didactic enough.

Study it carefully. If there is something that is not well understood, please let me know.

Regards

"""
DSuper8 project based on Joe Herman’s rpi-film-capture.

Software modified by Manuel Ángel.

User interface redesigned by Manuel Ángel.

camera.py: Camera configuration.

Latest version: 20230430.
"""

from picamera2 import Picamera2, Metadata

from time import sleep

from logging import info

class DS8Camera():
off = 0
previewing = 1
capturing = 2

# Resolutions supported by the camera.
resolutions = [(2028, 1520), (4056, 3040)]

def __init__(self):

    tuningfile = Picamera2.load_tuning_file("imx477_scientific.json")
    self.picam2 = Picamera2(tuning=tuningfile)

    # Configured resolution.
    self.resolution = self.resolutions[0]

    # Geometry of the zone of interest to be captured by the camera.

    # Coordinates of the upper left corner.
    self.x_offset = 0
    self.y_offset = 0

    # Zoom value. Determines the width and height of the zone of interest.
    # Values between 0.4 and 1. They are given by the zoom control.
    self.roiZ = 1

    # Width of the image.
    self.width = self.resolution[0] * self.roiZ

    # Height of the image.
    self.height = self.resolution[1] * self.roiZ

    # Automatic exposure.
    self.autoExp = False

    # Automatic white balance.
    self.awb = False

    # Number of bracketed exposures.
    self.bracketing = 1

    # Stop points.
    self.stops = 0

    # Manual exposure time.
    self.ManExposureTime = 2500

    # Automatic exposure time.
    self.AeExposureTime = 2500

    # Actual exposure time requested from the camera.
    self.exposureTime = 2500
    
    # Minimum exposure time. It is fixed at 10 us.
    self.minExpTime = 10

    # Maximum camera exposure time.
    # According to technical specifications of the camera, it can reach a
    # maximum of 670.74 s.
    # For our application we will set it to 1 s.
    self.maxExpTime = 1000000        

    # Metadata of the captured images.
    self.metadata = None

    # Camera capture speed in fps.
    self.frameRate = 10

    # Camera settings.
    # These settings are applied with the camera disabled.
    # It is not possible to modify them with the camera active.

    # Allocate a single buffer.
    self.picam2.still_configuration.buffer_count = 1

    # Flip the image vertically
    self.picam2.still_configuration.transform.vflip = True
    self.picam2.still_configuration.transform.hflip = False

    # No images in preview.
    self.picam2.still_configuration.display = None

    # No streams are encoded.
    self.picam2.still_configuration.encode = None

    # Color space.
    # This feature is automatically configured by Picamera2.

    # Noise reduction:
    # This feature is automatically configured by Picamera2.

    # Duration time of the frames.
    self.picam2.still_configuration.controls.FrameDurationLimits = (self.minExpTime, self.maxExpTime)

    # Dimensions of the captured image.
    self.picam2.still_configuration.main.size = self.resolutions[0]

    # Image format 24 bits per pixel, ordered [R, G, B].
    self.picam2.still_configuration.main.format = ("RGB888")

    # Unknown parameters.
    # Default configuration.
    self.picam2.still_configuration.main.stride = None
    # self.picam2.still_configuration.framesize = None
    self.picam2.still_configuration.lores = None
    self.picam2.still_configuration.raw = None

    # Do not allow queuing images.
    # The captured image corresponds to the moment of the capture order.
    # To queue images the buffer_count parameter must be greater than 1.
    self.picam2.still_configuration.queue = False

    # Loading still image settings.
    self.picam2.configure("still")

    # Camera controls. These parameters can be changed with the
    # camera working.

    # AeEnable:
    # AEC: Automatic Exposure Control.
    # AGC: Automatic Gain Control.
    # False: AEC/AGC algorithm disabled.
    # True: AEC/AGC algorithm enabled.
    self.picam2.controls.AeEnable = False
    
    # Setting this control here raises the error "Control AEConstraintMode
    # is not advertised by libcamera". However, once the camera is started
    # it can be set normally.
    # AEConstraintMode:
    # 0: Normal. Normal metering.
    # 1: Highlight. Meter for highlights.
    # 2: Shadows. Meter for shadows.
    # 3: Custom. User-defined metering.
    # self.picam2.controls.AEConstraintMode = 0

    # AeExposureMode:
    # 0: Normal. Normal exposures.
    # 1: Short. Use shorter exposures.
    # 2: Long. Use longer exposures.
    # 3: Custom. Use custom exposures.
    self.picam2.controls.AeExposureMode = 0

    # AeMeteringMode:
    # 0: CentreWeighted. Centre weighted metering.
    # 1: Spot. Spot metering.
    # 2: Matrix. Matrix metering.
    # 3: Custom. Custom metering.
    self.picam2.controls.AeMeteringMode = 0

    # ExposureTime: value between 0 and 1000000 us
    self.picam2.controls.ExposureTime = 4000

    # NoiseReductionMode: configuration parameter.

    # FrameDurationLimits: configuration parameter.

    # ColourCorrectionMatrix

    # Saturation: value between 0.0 and 32.0. Default 1.0.
    self.picam2.controls.Saturation = 1.0

    # Brightness: value between -1 and 1. Default 0.0.
    self.picam2.controls.Brightness = 0.0

    # Contrast: value between 0.0 and 32.0. Default 1.0.
    self.picam2.controls.Contrast = 1.0

    # ExposureValue: value between -8.0 and 8.0. Default 0.0.
    self.picam2.controls.ExposureValue = 0

    # AwbEnable:
    # AWB: Auto white balance.
    # False: AWB algorithm disabled.
    # True: AWB algorithm enabled.
    self.picam2.controls.AwbEnable = True

    # AwbMode:
    # 0: Auto. Any illuminant.
    # 1: Incandescent. Incandescent lighting.
    # 2: Tungsten. Tungsten lighting.
    # 3: Fluorescent. Fluorescent lighting.
    # 4: Indoor. Indoor illumination.
    # 5: Daylight. Daylight illumination.
    # 6: Cloudy. Cloudy illumination.
    # 7: Custom. Custom setting.
    self.picam2.controls.AwbMode = 0

    # ScalerCrop:
    self.picam2.controls.ScalerCrop = (0, 0, 4056, 3040)

    # AnalogueGain: value between 1.0 and 16.0.
    self.picam2.controls.AnalogueGain = 1.0

    # ColourGains: value between 0.0 and 32.0
    self.customGains = (2.56, 2.23)
    self.picam2.controls.ColourGains = self.customGains

    # Sharpness: value between 0.0 and 16.0. Default 1.0.
    self.picam2.controls.Sharpness = 1.0

    self.mode = self.off

    # Starting up the camera.
    self.picam2.start()

    sleep(1)

# Initial settings.

# zoomDial

def setZ(self, value):
    self.roiZ = float(value) / 1000

    self.x_offset = int(self.resolutions[1][0] * (1 - self.roiZ) / 2)
    self.y_offset = int(self.resolutions[1][1] * (1 - self.roiZ) / 2)

    self.width = int(self.resolutions[1][0] * self.roiZ)
    self.height = int(self.resolutions[1][1] * self.roiZ)

    self.picam2.controls.ScalerCrop = (self.x_offset, self.y_offset,
                                       self.width, self.height)

# roiUpButton - roiDownButton
def setY(self, value):
    self.y_offset = value
    self.picam2.controls.ScalerCrop = (self.x_offset, self.y_offset,
                                       self.width, self.height)

# roiLeftButton - roiRightButton
def setX(self, value):
    self.x_offset = value
    self.picam2.controls.ScalerCrop = (self.x_offset, self.y_offset,
                                       self.width, self.height)

# Camera settings.

# awbBox
def setAwbMode(self, idx):

    if idx < 7:
        self.awb = True
        self.picam2.controls.AwbEnable = self.awb
        self.picam2.controls.AwbMode = idx
    else:
        self.awb = False
        self.picam2.controls.AwbEnable = self.awb

    if idx == 0:
        mode = "auto"
    elif idx == 1:
        mode = "incandescent lighting"
    elif idx == 2:
        mode = "tungsten lighting"
    elif idx == 3:
        mode = "fluorescent lighting"
    elif idx == 4:
        mode = "indoor lighting"
    elif idx == 5:
        mode = "daylight"
    elif idx == 6:
        mode = "cloudy"
    elif idx == 7:
        mode = "custom lighting"
    elif idx == 8:
        mode = "manual"
    else:
        return

    info("Adjusted white balance " + mode)

# blueGainBox, redGainBox
def fixGains(self, idx, value):

    self.metadata = self.captureMetadata()

    if (idx == 0):
        gred = value
        gblue = round(self.metadata.ColourGains[1], 2)

    elif (idx == 1):
        gred = round(self.metadata.ColourGains[0], 2)
        gblue = value

    # Guard against an unexpected index: leave the gains unchanged.
    else:
        return

    self.picam2.controls.ColourGains = (gred, gblue)

    sleep(0.2)

    info("Camera color gains: blue = " + str(gblue) + ", red = " + str(gred))
         
# Capture.

# captureStartBtn
def startCaptureMode(self):
    sleep(1)
    self.mode = self.capturing
    info("Camera in capture mode")

# Advanced settings.

# constraintModeBox
def setConstraintMode(self, idx):
    self.picam2.controls.AeConstraintMode = idx        

    if idx == 0:
        mode = "normal"
    elif idx == 1:
        mode = "highlight"
    elif idx == 2:
        mode = "shadows"
    else:
        return

    info("Adjusted auto exposure restriction " + mode)

# exposureModeBox
def setExposureMode(self, idx):
    self.picam2.controls.AeExposureMode = idx
    
    if idx == 0:
        mode = "normal"
    elif idx == 1:
        mode = "short exposures"
    elif idx == 2:
        mode = "long exposures"
    else:
        return

    info("Adjusted auto exposure mode " + mode)

# meteringModeBox
def setMeteringMode(self, idx):
    self.picam2.controls.AeMeteringMode = idx
    
    if idx == 0:
        mode = "centre weighted"
    elif idx == 1:
        mode = "spot"
    elif idx == 2:
        mode = "matrix"
    else:
        return

    info("Adjusted auto exposure metering mode " + mode)

# resolutionBox
def setSize(self, idx):
    self.picam2.stop()
    self.picam2.still_configuration.main.size = self.resolutions[idx]
    self.picam2.configure("still")
    self.picam2.start()
    if idx == 0:
        resol = "2028x1520 px"
    elif idx == 1:
        resol = "4056x3040 px"
    # Guard against an unexpected index, which would leave resol undefined.
    else:
        return

    info("Camera resolution " + resol)

# This function is used to capture the metadata of the images.
def captureMetadata(self):
    metadata = Metadata(self.picam2.capture_metadata())

    return metadata
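
In case it helps, a minimal usage sketch (assuming the class above is saved as camera.py; the calls shown are illustrative):

    from camera import DS8Camera

    cam = DS8Camera()              # loads imx477_scientific.json and starts the camera
    cam.setSize(1)                 # switch to the full 4056x3040 resolution
    meta = cam.captureMetadata()
    print(meta.ExposureTime, meta.ColourGains)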

@Rowan, I am also starting from scratch on the library, the sensor, and Python.

When eating an elephant, I do so one part at a time.
I am getting familiar with the settings of the camera using libcamera-hello, then progressing to libcamera-still. It is simple enough to play with. One tip: the json file from @cpixip needs to be in the directory from which you are running the libcamera command, or the entire path should be included. See this posting.
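
For example, something like this should pick up the tuning file (assuming it sits in the current directory; the output filename is illustrative):

    libcamera-still --tuning-file imx477_scientific.json -o test.jpg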

If you are already programming something, this posting is also a great step-by-step tutorial by @cpixip.

And lastly (as mentioned above), the implementation by @Manuel_Angel is a great worked example.

PS. I also found this post by @PixelPerfect, and the discussion above and below it regarding the library (which is about multi-exposure), a great piece of basic Python code for getting started and understanding how the library works.


Hi guys,

Just to get a feeling for how to tackle multi-exposure with the latest software stack available on the RPi with the HQ cam:

do you control the brightness of the :bulb: and keep the camera settings static, or do you use constant lighting and change the exposure time for bracketing?

It would be awesome to get an overview of the pros and cons you have experienced.
Greets
Simon

I suggest changing exposure times while keeping the illumination constant. There are several reasons for that.

  • with the current picamera2/libcamera software you can do that without waiting 3 to 5 frames for an exposure change, as was standard in the past.

  • the frames come with metadata, so you can build up your stack and trigger frame advance once the stack is full.

  • switching the illumination while keeping the exposure time constant requires some sort of sync between frames and the illumination switch. Not easy, as most RP cameras are rolling-shutter ones, so occasionally the switch will happen in the middle of a frame. You would need to detect that and throw away that frame. Or: devise some hardware triggering. While that can be done, the voltages involved are tiny and the electronics fragile.

I tried both approaches; varying the exposure resulted in three to four times higher capture rates. Quality-wise, I could not see any difference between the two approaches.


Thanks for your summary, great writeup as always!

I'm convinced that the speedup together with the easier handling is an advantage that can't be beaten. The goal is set - off to code!

@d_fens - some further hints with respect to your software development:

  • after you request a specific exposure value, it takes quite a number of frames for this request to "travel" to the camera. Once the camera has taken a frame with the requested exposure, it again takes a few frames for it to show up at the request frontend of the picamera2 software. Expect a delay of 11 to 12 frames after changing the exposure!

  • if you request a certain exposure from libcamera, say "2432", you will not get exactly that. Only a certain set of exposure values is realizable directly in hardware. All other exposures are simulated by libcamera by applying a digital gain different from 1.0. (That is the reason you can no longer set a digital gain in picamera2, as you could in picamera1.) In my example, the real exposure time would be something like "2400", with a digital gain of about 1.013 applied afterwards. Luckily, this shows up in the metadata associated with the frame. In general, the exposure requested will not be equal to the exposure delivered.

So there are some challenges to overcome if you go down the exposure-based HDR route. Here's what works for me:

  1. The driving camera software constantly cycles through the set of required exposure times. After a short delay, the frames delivered by the camera follow this cyclic exposure pattern.

  2. After a frame advance, the HDR-stack is emptied. The incoming data is scanned for the real exposure time of the current frame. Whatever this exposure time is, the frame is inserted at the correct place in the HDR-stack. Repeat with the following frames (they will have a different exposure time because of 1.).

  3. Once the full HDR-stack is obtained, trigger a frame advance. Wait for some time to allow the mechanical system to settle to a steady state. Start again with 2.).

With respect to point 2.) above, you have to take both exposure time and digital gain into account. Only the product of the two matters, and it should be equal to your requested exposure time.

In previous versions of libcamera/picamera2, the applied digital gain used to adapt rather slowly after an exposure change, which made things even more difficult - occasionally you would have to skip 2 or 3 frames to come even close to your requested exposure. This has improved in recent versions; I have, however, not checked the most recent one on this issue.

Anyway, there are two approaches you could take here:

  1. use only exposure times which are directly realizable by the hardware. In my example above, “2400” would be such an exposure time. In this case, digital gain will be 1.0 for certain.

  2. use whatever exposure times you want to use, but set a limit for the deviation of the digital gain from 1.0.

Here's the actual code I am using for this purpose with approach 2.):

            # if we find an appropriate exposure, delete it from the list
            # and signal that the frame should be saved
            if abs(self.metadata['ExposureTime'] - expo) / expo < self.dExpo and \
                    self.metadata['DigitalGain'] < self.dGain:
                # ... (store the frame in the HDR-stack slot for expo,
                # remove expo from the list and flag the frame as saved)

with dGain = 1.01 (the allowed variation of the digital gain) and dExpo = 0.02 (the allowed relative deviation between requested and actual exposure).

These limits work fine with exposure fusion algorithms (“Mertens”); for calculating a real HDR from the HDR-stack, you have to calculate for each frame the real exposure as

rExpo = self.metadata['DigitalGain']  * self.metadata['ExposureTime']

and use this for HDR-computation.
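
For reference, both routes are available in OpenCV; a minimal sketch, assuming frames holds the 8-bit HDR-stack and real_expos the corresponding rExpo values in seconds (both names are illustrative):

    import cv2
    import numpy as np

    # Exposure fusion ("Mertens"): no exposure times needed.
    fused = cv2.createMergeMertens().process(frames)
    ldr = np.clip(fused * 255, 0, 255).astype(np.uint8)

    # True HDR ("Debevec"): uses the real exposures, including digital gain.
    times = np.asarray(real_expos, dtype=np.float32)
    hdr = cv2.createMergeDebevec().process(frames, times)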

Finally, be sure to use either the alternative "imx477_scientific.json" tuning file, or switch off some of the automatic algorithms hidden in libcamera when using the standard one.


I agree with the opinion of @cpixip.

Although a procedure based on adjusting the lighting intensity should work in principle, it is not easy to put into practice.

Assuming that we could somehow adequately vary the intensity of the illumination, in my opinion we would still have to deal with undesired effects such as a change in the spectral composition of the light.

With the picamera2 library it is easy to change the exposure time; however, according to my tests and experience, the change is not immediate.
In my software, prior to taking a capture, I run a loop in which the desired exposure time is compared with the exposure time reported by the camera's metadata. When the two times coincide within a tolerance of 50 us, the loop is exited and the image is captured.

Specifically, I use this loop after the change in the exposure time and before capturing the image:

            # This loop has the function of ensuring that the camera uses
            # the new exposure time.

            numOfRetries = 0

            for i in range(config.numOfRetries):
                numOfRetries += 1
                metadataExposureTime = self.cam.captureMetadata().ExposureTime
                dif = abs(bracketExposure - metadataExposureTime)
                # info("Theoretical exposure time = " +
                #       str(bracketExposure) + " us\n" + " "*29 +
                #       "Camera real time = " +
                #       str(metadataExposureTime) + " us\n" + " "*29 +
                #       "Difference = " + str(dif) + " us")
                if dif <= config.timeExpTolerance:
                    break

            # info("Number of retries = " + str(numOfRetries))

To my surprise, although the picamera2 library does not use the GPU, it is faster than the old picamera library.
With the RPi 4, a capture that took an average of 4 s with the old library now takes 2.6 s under the same conditions with the new one.


I agree with the suggestions by @cpixip and @Manuel_Angel to use the camera-exposure-change approach. I explored the option of changing the light intensity, and the implementation is not as simple as changing the camera exposure.

@Manuel_Angel, from experimenting with white LEDs, it is possible to control the intensity (example implementation in this post). The results of the experiment do not show a significant change in the white balance (light spectrum) over the range appropriate to the camera used; note this video of the implementation with a DSLR D3200.

The best of both worlds may be a hybrid approach: boost the light intensity only when capturing the ultra-dark portions of the scene. That can be done by switching an extra LED set on/off, without the complications of intensity control.

Thanks for all the feedback.

One thing that crossed my mind as to why changing the lighting might be an advantage:

for the very dark areas of an (underexposed) film, the light passing through without burning out the highlights might not be bright enough to recover dark details, no matter how long the exposure time?

Do you think this is a valid point, or even important?

Adding here some additional information about delays and cyclic capture. When you request a certain exposure time, libcamera/picamera2 will need quite some time to give you that exposure time back. Here's the output of an experiment I did about a year ago (so with an old version of picamera2):

 Index - requested : obtained : delta
  0 -     2047 :     2032 :   0.031 sec
  1 -     2047 :     2032 :   0.034 sec
  2 -     2047 :     2032 :   0.034 sec
  3 -     2047 :     2032 :   0.044 sec
  4 -     2047 :     2032 :   0.031 sec
  5 -     2047 :     2032 :   0.034 sec
  6 -     2047 :     2032 :   0.034 sec
  7 -     2047 :     2032 :   0.044 sec
  8 -     2047 :     2032 :   0.031 sec
  9 -     2047 :     2032 :   0.034 sec
 10 -      977 :     2032 :   0.031 sec
 11 -     1953 :     2032 :   0.021 sec
 12 -     3906 :     2032 :   0.035 sec
 13 -     7813 :     2032 :   0.032 sec
 14 -    15625 :     2032 :   0.033 sec
 15 -      977 :     2032 :   0.034 sec
 16 -     1953 :     2032 :   0.033 sec
 17 -     3906 :     2032 :   0.032 sec
 18 -     7813 :     2032 :   0.033 sec
 19 -    15625 :     2032 :   0.034 sec
 20 -      977 :     2032 :   0.037 sec
 21 -     1953 :      970 :   0.029 sec
 22 -     3906 :     1941 :   0.038 sec
 23 -     7813 :     3897 :   0.028 sec
 24 -    15625 :     7810 :   0.036 sec
 25 -     2047 :    15621 :   0.033 sec
 26 -     2047 :      970 :   0.032 sec
 27 -     2047 :     1941 :   0.037 sec
 28 -     2047 :     3897 :   0.030 sec
 29 -     2047 :     7810 :   0.032 sec
 30 -     2047 :    15621 :   0.034 sec
 31 -     2047 :      970 :   0.033 sec
 32 -     2047 :     1941 :   0.036 sec
 33 -     2047 :     3897 :   0.031 sec
 34 -     2047 :     7810 :   0.034 sec
 35 -     2047 :    15621 :   0.035 sec
 36 -     2047 :     2032 :   0.031 sec
 37 -     2047 :     2032 :   0.034 sec
 38 -     2047 :     2032 :   0.034 sec
 39 -     2047 :     2032 :   0.044 sec

For frames 0 to 10, the requested exposure time was fixed at 2047 - but I got only a real exposure time of 2032, the rest "taken care of" by a digital gain larger than 1.0.

At time 10 I requested a different exposure time of 977. The first frame with that exposure time arrived only at time 21. That is the time the request needed to travel down the libcamera pipeline to the camera and back again to picamera2's output routine. Eleven frames is more than a second in the highest resolution mode! Again, I requested 977 and got 970 - close, but not a perfect match.

After time step 10 I cycle through all the exposure values, that is [977, 1953, 3906, 7813], for a few cycles. As you can see, the camera follows that cycle frame by frame. Once I switch back to my base exposure 2047 at time point 25, the camera continues to cycle until time 36, where it again delivers the requested exposure time.

So - in order to get the data for a full exposure stack, in cycle mode you need to wait for four consecutive frames at most, provided you cycle constantly through all exposures. Granted, you have no idea which of the four exposures will be the first to show up in the pipeline, but you know that the following three frames will be the other missing exposures. That is the neat trick of cycling constantly through all required exposures. I got this trick from David Plowman, the developer of picamera2.

Of course, you need to program some logic which saves the incoming data into the appropriate slots of your HDR-stack, and you will need some additional logic to trigger the frame advance and wait a little until all the mechanical vibrations have settled.
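
A much-simplified sketch of such slot-filling logic, assuming picamera2's request API (the exposure list, tolerance and configuration are illustrative; real code would keep issuing exposure requests continuously rather than blocking on each capture):

    from picamera2 import Picamera2

    EXPOSURES = [977, 1953, 3906, 7813]   # requested HDR-stack (us)
    TOLERANCE = 0.02                      # accepted relative deviation

    picam2 = Picamera2(tuning=Picamera2.load_tuning_file("imx477_scientific.json"))
    picam2.configure(picam2.create_still_configuration(buffer_count=1))
    picam2.start()
    picam2.controls.AeEnable = False      # manual exposure only

    stack, i = {}, 0
    while len(stack) < len(EXPOSURES):
        # Keep cycling through the requested exposures, one per frame.
        picam2.controls.ExposureTime = EXPOSURES[i % len(EXPOSURES)]
        i += 1
        request = picam2.capture_request()
        md = request.get_metadata()
        # Only the product of real exposure and digital gain matters.
        real = md['ExposureTime'] * md['DigitalGain']
        # Slot the frame into the stack position whose exposure it matches.
        for expo in EXPOSURES:
            if expo not in stack and abs(real - expo) / expo < TOLERANCE:
                stack[expo] = request.make_array('main')
                break
        request.release()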

As my scanner is mechanically not very stable (it's all plastic), my waiting time (the time between "trigger frame advance" and "store next frame in HDR-stack") is 1 sec; running the HQ sensor at its highest resolution gives me a standard frame rate of 10 fps. Nevertheless, capturing a single HDR-stack of a frame takes on average between 1.85 and 2.14 sec. Here's a log of such a capture (capture time vs. frame number):

Note that these times include my 1 sec mechanical delay, so the capture would be faster if your mechanical design does not need such a long movement delay. Again, this is the result when using the HQ sensor at its full resolution setting, i.e., at 10 fps.

Occasionally there are spikes where the capture takes noticeably longer, about 3 sec (two in the above plot). This might be related to some syncing issue, but I do not know the real reason. As it does not happen very often, I did not bother to investigate further.


Short answer: no

The range of exposure times is much larger than the range of intensity settings you might have available for your illumination. Doubling the intensity of an LED requires approximately doubling the current flowing through it - but all LEDs have a current limit…

If you really want a strong shadow-push, you would need to work with a quite dim LED setting for normal scenes - this potentially gives you more noise from other sources (for example, stray light from your environment will be more noticeable).

On the other hand, doubling the exposure time is basically free - you just need to scan a little bit slower.

Also, from my experience, an HDR-stack with 4 to 5 exposures is really sufficient for most film material. The important point is to fix the fastest (darkest) exposure in such a way that no highlights are blown out. That is, the brightest image areas should have values around 240 in an 8bit/channel image. I call this image the "highlight image".

The next image in the HDR-stack should have double this exposure time. You will discover that this image is generally quite a good scan on its own. That is the reason I call it the "prime image". Double the exposure time again for the first shadow image ("shadow 1 image") and once more for the second shadow image ("shadow 2 image"), and you should be done for all practical purposes. The shadow details you capture with this approach will display quite noticeable film grain, and under usual circumstances they will end up as rather dark areas in your final grade anyway. That is why I switched at some point from five exposures to only four - it is sufficient for all practical purposes.
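
In other words (a trivial sketch - the base value is illustrative):

    # Fastest ("highlight") exposure, chosen so that the brightest areas
    # stay around 240 in an 8bit/channel image.
    highlight_us = 2400

    # Each further exposure doubles the previous one:
    # highlight, prime, shadow 1, shadow 2.
    stack_us = [highlight_us * 2**i for i in range(4)]
    # -> [2400, 4800, 9600, 19200]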

A note on this: I do have Agfachrome MovieChrome film stock which cannot be captured in full glory even with the five-exposure approach. In very blue areas (sky), the red channel absorbs an incredible amount of light, staying pitch dark even in the brightest exposures. So far, I have not found a way to capture this faithfully - however, it plays no role in the exposure-fused result I am currently working with. It is noticeable in a real HDR - but that issue is reserved for future endeavors.

What about noise differences with long and short exposures?

Good question! Generally, the noise in a digital camera is mainly tied to how the analog amplifiers are set - in picamera speak, the analog gain. Digital gain has a similar effect and in addition somewhat reduces the number of available intensity values.

Doubling the exposure time should not lead to increased noise per se; just another range of illumination intensities will be mapped to the output range. For scanning operations, one should always use the lowest gains possible.

From another point of view, small-format film stock is incredibly noisy in dark areas - if you double the exposure time, the film grain becomes quite visible and annoying there. Luckily, those areas also end up in the rather dark regime in the final edit…

Agree.

Agree.

Provided a constant gain, I think it is the opposite. The sensor has inherent noise, and if a larger exposure provides a larger signal, the result should be an improved signal-to-noise ratio at larger exposures, when the illuminant intensity remains constant.

This article explains the particulars; some of it is beyond my depth of knowledge. This astrophotography review seems to reach the same conclusion.

The above hypothesis does not consider ambient light noise.

If my understanding is correct, it would indicate that there may be advantages to decreasing the light intensity for the lighter image exposures.

Another interesting conclusion is that, in general, stop motion with longer exposures (and less light) would give a better signal-to-noise ratio than continuous motion, which requires much shorter exposures (and more light).

Well, the article supports my statement: doubling the exposure leads to an increase in the signal-to-noise ratio, that is, to better noise performance.
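
As a quick sanity check, assuming shot-noise-limited capture: with N collected photons the signal grows as N while the photon noise grows as sqrt(N), so

    SNR = N / sqrt(N) = sqrt(N)

and doubling the exposure (2N photons) improves the signal-to-noise ratio by a factor of sqrt(2).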

Generally, camera noise is a complicated beast and depends on the specific application setting. Your references deal with low-light situations, something we should not have in a scanning application.

Your second reference falls into the trap of the many different sources of noise. In order to get his comparison shots, he pushes very dark exposures by one or even two stops. This is equivalent to changing the digital gain on your camera from 1.0 to 2.0 or even 4.0. In his experiment, he reduces the dynamic range during image capture but enlarges it again in postproduction. This is not a valid approach for analyzing camera performance.

In any case, the "astrophotography" use case is quite different from a scanning application. In a scanning application, you will want to work with as much light as you can: your camera will perform better, and light coming from sources other than your film frame will be less noticeable in the scan result. The latter point is especially important if your scanner has an open design (as mine has - I usually work in a darkroom because of this).

You are right, I misread it as halving the exposure… my bad.

Agreed: the gain change is done in post rather than in the sensor processing chain, but the effect is the same.