My Telecine Software

You might want to check the examples in the following post to get an idea of what the Mertens algorithm is capable of. As long as you do not have a very good HDR display, stay with Mertens and forget the other algorithms.

1 Like

Hi @Ral-Film

As you can see in other posts, my yart project is similar to Manuel’s; both are inspired by the original Joe Herman project. https://forums.kinograph.cc/t/super-8-scanner-based-on-dominique-gallands-software/2106

So my experience with HDR is the following.

  • As Manuel says, you first need to adjust the lighting so that the exposure time for an average image is about 4 ms (at ISO 100). The exposure time is limited upwards by the framerate and downwards by the sensor readout. For example, if the lighting is too weak, the exposure time cannot be increased enough to get the overexposed HDR image.
  • I am dubious about the value of making 6 exposures; in my opinion 3 are enough, especially since it takes about 4 frames to stabilize an exposure, which reduces the throughput considerably.
  • In my case I do not count in stops but in ratios; my experimentally chosen values are 0.1 for the underexposed image and 2 for the overexposed image, so for example exposures of 0.4-4-8 ms (this is quite consistent with Manuel’s values; see the small sketch after this list).
  • Amateur films can have large variations in exposure, so I think using automatic exposure to obtain the middle image is interesting: it gives a starting point. This is usually what I choose. Note that this further reduces the throughput because about 8 frames are needed to stabilize the automatic exposure. In the end, at a resolution of 1920x1080, the capture requires about 2 s per frame.
  • The Mertens algorithm is simple and effective; I could not get satisfactory results with Debevec.
  • My program can display or save the unmerged images in both preview and capture mode; this is really useful for adjusting the exposures.
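For illustration only, here is a minimal sketch of the ratio idea from the list above (the numbers are the ones quoted, not a recommendation):

# bracketed exposures derived from a middle exposure and fixed ratios
base_ms = 4.0                       # middle exposure, e.g. obtained from auto-exposure
ratios = (0.1, 1.0, 2.0)            # under / middle / over
exposures_ms = [base_ms * r for r in ratios]
print(exposures_ms)                 # -> [0.4, 4.0, 8.0]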

Thank you all for your valuable input.
For test purposes I have altered my diffusion filter and now get a much higher light output.
At ISO 100 I can now get an image with an exposure time under 1 ms.
The Mertens algo is working, however I am not really satisfied with my results.
The post suggested by “cpixip” shows an image of Petra in Jordan that illustrates a situation similar to the one I am facing.

The 1.0, 10.0 and 50 ms images are fixed manual exposures. For each setting I took several shots to ensure the camera had settled.
The final “Mertens image” is a combination of 5 images per frame spanning 5 stops.
Please refer to my uploaded image. The result still falls short of my expectations. The garbage can in the lower left shows nice wood structure. The combined “Mertens” image is not much better than the single 10 ms exposure.
I have ordered a new lens for other reasons and will continue optimizing sharpness and mechanical stability first.
Thanks again for sharing all your knowledge.
Have a nice weekend.

What could happen here is the following: the opencv implementation of the Mertens algorithm converts your 8 bit per channel images into a fused floating point image. However, this fused image usually ends up with brightness values below zero and above one ([0, 1] being the normalized brightness range for floating point images). If you simply output this image as a .png, for example, dark and bright image areas become clipped. That is an error I have seen in a lot of implementations. Be sure to rescale the output of the Mertens algorithm with something like

minimum = img.min()
maximum = img.max()
scaler = 1.0/(maximum-minimum+1e-6)

img = scaler*(img-minimum)

Another point to check: you have taken the 1 ms/10 ms/50 ms images manually. That does not guarantee that the images you are feeding into the Mertens algorithm are the same, as those are presumably taken during automatic capture. It could be that the camera does not really settle to the exposure time you are requesting. That issue is generic to most cameras (you have to wait 3-50 frames, depending on the camera) and was the reason my scanning approach varies the power of the light source for the different exposures, not the exposure time.

The three input pictures you have posted should give you a better result than the one you obtained, in which both highlights and shadows are poorly defined.

Generally, I would advise storing the separate exposures of a frame scan for archival purposes. As HDR techniques (especially display technology) progress, you might want to build a real HDR workflow, which is different from what the Mertens algorithm aims at.

While you can do a Mertens with only two appropriately chosen images, you will need to cover at least a range of 5 stops for scanning all the details of a standard Kodak color reversal film (other film stock is less demanding). For most camera sensors, this means you should not go below three different exposures. If you sample more exposures from a given frame, the Mertens algo automatically reduces the noise of the camera sensor. Also, you are better off with more exposures if, at a later stage, you want to produce real HDR footage (that would be then “Debevec”, but without tone-mapping).
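As an illustration of the stop-based reasoning (a sketch only; the 4 ms base exposure is just the value mentioned earlier in the thread):

import numpy as np

# space n_exposures evenly over a total range of 5 stops around a base exposure
base_ms, total_stops, n_exposures = 4.0, 5, 3
offsets = np.linspace(-total_stops / 2, total_stops / 2, n_exposures)   # in stops
exposures_ms = base_ms * 2.0 ** offsets
print(exposures_ms)   # -> approximately [0.71, 4.0, 22.6] ms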

1 Like

I want to make a few comments about it.

The Mertens algorithm (from opencv), I apply it as follows:

img = createMergeMertens.process(imglist)
img = numpy.clip(img * 255, 0, 255).astype(numpy.uint8)
return img

I have not made it up. It is what I have seen researching on the Internet.

@cpixip I would beg you to complete it in the way that you think is most appropriate.

As for automatic exposure: indeed, several captures of the same frame are needed to arrive at a stable exposure time.
For this, the DSuper8 software implements a loop that takes one image after another, sending each one directly to a null file until a stable time is observed; at that moment the real image, which is sent to the client, is taken.
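For readers who want a picture of such a loop, here is a minimal, simplified sketch (this is not the actual DSuper8 code; it assumes the picamera API and uses an illustrative 2% stability threshold, whereas the real loop is more elaborate):

import io
import picamera

with picamera.PiCamera() as camera:
    camera.shutter_speed = 0                # 0 = let the AGC choose the exposure
    sink = io.BytesIO()                     # throwaway sink for the settling frames
    last = None
    for _ in range(12):                     # safety limit; usually 6-8 frames suffice
        camera.capture(sink, format='jpeg', use_video_port=True)
        sink.seek(0)
        sink.truncate()
        current = camera.exposure_speed     # exposure actually used, in microseconds
        if last is not None and abs(current - last) < 0.02 * max(current, 1):
            break                           # two successive frames agree within ~2%
        last = current
    # 'current' now holds a reasonably stable exposure time for the real capture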
Anyway, I am not very fond of automatic exposure. My working procedure is very similar to that of @cpixip . I have come to this procedure by intuition and by doing many tests, rather than by technical criteria.

If manual exposure is used, in the logs of both the client program and the server, the real data of the exposure reported by the camera appear.

I take this opportunity to mention that I am preparing a new version of the software, in which I will take into account valuable information that has appeared in the forum.

I’m going to add the ability to save the images from bracketing.

Thanks to @cpixip for contributing his authoritative opinions.

Regards

1 Like

Hi @Manuel_Angel

I would suggest trying the following approach for a test:

img = createMergeMertens.process(imglist)

minimum = img.min()
maximum = img.max()
scaler = 1.0/(maximum-minimum+1e-6)

img = scaler*(img-minimum)

img = numpy.clip(img * 255, 0, 255).astype(numpy.uint8)
return img

and see whether it makes a noticeable difference with respect to highlights and shadows.

It’s been a long time since I went along that way - nowadays I am using my own Mertens implementation. So I am not sure if the current opencv implementation still outputs out-of-range data. You can check this by simply including a single additional line in your code, like in the following code segment:

img = createMergeMertens.process(imglist)

print(img.min(),img.max())   # this is the simple test line

img = numpy.clip(img * 255, 0, 255).astype(numpy.uint8)
return img

If you see in this output numbers below zero or above one, the current implementation of the opencv Mertens still suffers from bad scaling.

I observed out-of-range data mainly in strong contrast scenes with clearly defined shadow areas. Again, haven’t checked this for years, but I assume that this is still the case.

If you are rescaling the Mertens output to a valid range (as suggested above), you might see a noticeably reduced contrast, depending of course on your material. As I am doing a color-correction for each scene in postproduction anyway (daVinci Resolve is recommended), that is not too much of an issue for my workflow. However, it might not fit yours.

In fact, in a production setting, if the rescaling described above is necessary, it is probably better to keep the rescaling fixed (say, the interval [-0.5, 1.3] is mapped to the interval [0, 1]), or at least to use a more robust min/max estimator, something like

minimum = np.percentile(img, 1)                                 
maximum = np.percentile(img, 99.5)
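For completeness, the fixed variant mentioned above could look like this (the [-0.5, 1.3] interval is just the example value from above, not a measured one):

import numpy as np

# map an assumed fixed output interval [-0.5, 1.3] to [0, 1]
lo, hi = -0.5, 1.3
img = np.clip((img - lo) / (hi - lo), 0.0, 1.0)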

Some additional comments:

this is indeed the way to go if you are working with Raspberry Pi hardware/software. As far as I remember (again, ages ago), you would set the exposure time via shutter_speed and query exposure_speed until both match approximately - that usually takes 3-4 frames (it might be the other way around, can’t remember). For other cameras, you need to find another way to make sure things have settled.

I absolutely agree, as this kills the original dynamics of the movie, at least to a certain extent. If you are taking multiple exposures anyway, you can set up your system so that the whole dynamic range of your source material is captured for sure. I have described one way to do this in detail in this post and the ones following it. The basic idea: the exposure reference is the light source itself, which should be imaged in the darkest captured frame in such a way that it is not burning out (an 8 bit value of 230-250, for example). The other, brighter exposures are used to sample a dynamic range of 5-6 stops. From my experience, this is sufficient for all color-reversal film stocks I have encountered (Kodachrome, Agfa, Fuji, etc.).
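A quick way to verify that setup, sketched here under the assumption that 'darkest' holds the 8 bit capture of the shortest exposure and that the bare light source is visible somewhere in the frame:

peak = int(darkest.max())
print('brightest pixel in darkest exposure:', peak)   # aim for roughly 230-250
if peak >= 255:
    print('light source is clipping - shorten the darkest exposure')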

3 Likes

@Ral-Film and @Manuel_Angel
@cpixip and I were asking how you perform the Mertens merge, and we drew your attention to a frequent misunderstanding that is not well explained in the opencv documentation: you have to normalize the result between 0 and 1. Either as explained by cpixip, or like this:
image = cv2.normalize(image, None, 0., 1., cv2.NORM_MINMAX)
and this before converting back, as usual, to an integer between 0 and 255:
image = np.clip(image*255, 0, 255).astype('uint8')
Typically the Mertens opencv merge gives a result between -0.05 and 1.2, and without this normalization we lose all the benefit of the merge at the ends of the histogram. This is especially noticeable in the treatment of burnt whites, the main advantage of HDR. On the other hand, as @cpixip says, the contrast is reduced by this normalization, and a histogram equalization can be done in post.

First row: auto exposure 14 ms; over with ratio 3 → 42 ms; under with ratio 0.05 → 0.7 ms.
Second row: Mertens clipped; Mertens normalized; Mertens normalized and equalized.
Sorry, I couldn’t find an image in this reel that better highlights the difference in the white areas
@cpixip
Concerning auto-exposure or not: I understand your reasoning, but an amateur film is often a succession of scenes with very different luminosities, and even if the auto-exposure of the Pi camera is not perfect, it still gives a good starting point for the median image before taking the other HDR exposures. I also think it should make it possible to limit oneself to three exposures. The constraint is that the image must be framed without black bands that would disturb the calculation.

@Manuel_Angel
Concerning the wait for the exposure to stabilize, it is not certain that stability over two successive images is sufficient. As indicated by cpixip, you can compare the requested shutter_speed with the obtained exposure_speed, but the equality will not be perfect (the exposure is a multiple of the scan time of one pixel line of the sensor). For the Pi camera the most reliable method is to wait 4 frames, and with auto exposure it takes about 8 frames to get the result. All this slows down the capture; what is your throughput when you capture at 2028x1520 with 6 exposures per frame?

1 Like

Hi @dgalland,

If we are talking about automatic exposure, in principle we do not ask the camera for any exposure time with the shutter_speed variable. In the PiCamera class, to make automatic exposures, we must set shutter_speed = 0 and obtain the exposure time from the exposure_speed variable.

Certainly a variable number of captures is required until exposure_speed stabilizes. The loop I use in my software is more complicated than simply comparing two successive captures; most of the time 6-8 captures are required.

The Raspberry Pi camera capture functions were one of the biggest headaches I had during software development.

I always take captures in manual exposure, generally with 6 images per frame and a difference of 4 or 5 stops between the fastest and the slowest. This gives a wide range of exposures, enough to digitize any film normally exposed at source. The total process takes about 3 seconds per frame, including the capture, the sending to the main computer, the post-processing and the writing of the final file.
In my next version I hope to reduce this time.

Regards.

@Manuel_Angel
Yes the capture loop is a very tricky piece of code.
Maybe I’m wrong but if I look at your code DS8Server.py I only see a waiting loop in the case of auto-exposure but no waiting loop between successive brackets?
From my experience, after setting cam.shutter_speed to a bracketed exposure value, you have to wait 4 frames before you get the requested value in cam.exposure_speed?

@dgalland

Until now I thought I had a good capture algorithm. With your comments, I certainly now have doubts.
The following snippet comes from the PiCamera class documentation:

exposure_speed

Retrieves the current shutter speed of the camera.

When queried, this property returns the shutter speed currently being used by the camera. If you have set shutter_speed to a non-zero value, then exposure_speed and shutter_speed should be equal. However, if shutter_speed is set to 0 (auto), then you can read the actual shutter speed being used from this attribute. The value is returned as an integer representing a number of microseconds. This is a read-only property.

As you can see, according to the documentation, in manual exposure the variables shutter_speed and exposure_speed must have the same value.

I will do further tests to confirm it.

Anyway, everything can be improved, especially if we talk about software.

… some comments to clarify things:

I checked my old software for Raspberry Pi cams how the fact that the exposure was settled was checked. Here’s the original code:

# has the cam settled to the new exposure?
if int(self.camera.exposure_speed) - int(self.camera.shutter_speed) < 20:

    # ok, camera setting time has expired, send the result
    if self.camera.img_type == 1:
        # time to send the image!
        if image_size > 0:
            self.stream.seek(0)
            self.counter += 1
            self.camera.ldrStream.put(
                (self.camera.img_type, image_size, self.camera.shutter_speed, self.stream.read()))
            self.counter += 1
            utils.logStatus('Output - Image: %d pushed onto LDR queue' % self.counter)

Before that code segment, the desired shutter_speed was set like:

camera.shutter_speed = int(hdr_steps[img_count])

So the sequence of operation was:

  1. set desired exposure time via shutter_speed
  2. read exposure_speed while frames are being taken by the camera and wait until exposure_speed and shutter_speed have the same value, within limits. Then send the image to the client.

Edit: In fact, as I discuss in a post below, that is only partially correct. You will need to wait through 3 to 4 additional frames after this happens to be sure that the image is taken with the exposure value you have requested. See data plot in the post below.

As far as I can recall, waiting only for both values to be approximately equal was necessary because sometimes the automatic algorithms running in the background were not able to get both values equal.

Instead of the above procedure, from my experience, you can also simply wait 5-6 frames; that is the longest time I have encountered under normal circumstances that it takes one of the Raspberry Pi cameras to settle onto a requested exposure.

Note that the above code is old, somewhere around 2018 or so. In the meantime, the Raspberry Pi Foundation has released lots of new firmware for their cameras. Things might have changed, but as I am no longer using the “different exposure time” approach, I have not checked this.

I am currently running my scans with 5 exposures/5 stops, but only because my tunable light source is not really able to deliver an exposure range of 6 stops. I would also work with 6 stops if possible, because with that dynamic range, even the largest contrast you will encounter with color reversal film stock will be properly digitized.

While reducing the number of exposures to a minimum helps you dramatically in scanning time, you lose some other advantages. Let me elaborate:

  1. The dynamic range that a single exposure can cover is limited. Also, the response of the camera (if you are not working with raw data) is non-linear. So if you do not adapt the exposures to the image content (which, as I understand, @dgalland is doing), you might simply miss important image data in the shadows and highlights of over- or underexposed scenes. Also, when working with only a few images, you might notice (due to the nonlinearity of the camera’s response curve) transition zones (banding) in slowly varying intensity gradients of the frame. I have not noticed this with @dgalland’s 3-image procedure, but I have seen it in another Super-8 scanner that worked with only two exposures.

  2. Note that the Mertens algorithm drastically changes the visual appearance of your film material. It needs to do so, as the full dynamic range of a Kodachrome film is impossible to encode, transfer and display via existing technologies. But a Kodachrome no longer looks like a Kodachrome - compare the examples I posted elsewhere on this forum.

  3. If you capture the full dynamic range of the film (= 5-6 stops) and save the raw LDRs (exposures), you are in principle in a position to process the scanned data into a real HDR video. This involves a totally different approach than the Mertens algorithm and only makes sense if you have a real HDR display for viewing at your disposal. We’re not there yet, but I expect such displays to become available in the next few years. Of course, you can always rescan at that point, but remember that certain film stock fades away… That is the reason I archive every single exposure - which of course leads to a huge demand in disk space. Note that for the creation of a real HDR (not: “Mertens”), you will need absolutely precise exposure values.

  4. Taking several exposures of a single frame allows the Mertens algorithm to average each pixel over a number of exposures, which results in a reduced level of camera noise. This noise reduction scales with the square root of the number of frames taken and is actually also a tiny advantage with respect to working with raw frames.

  5. Lastly, I would recommend storing the output of the Mertens algorithm as a 16 bit per channel image, not an 8 bit per channel image (if you have the disk space) - a minimal example follows this list. While this might not be noticeable under normal circumstances, the increased color resolution might help you in post production. daVinci Resolve has no problems working with 16 bit per channel images.
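Here is a minimal sketch of that last point, assuming 'fused' is the rescaled Mertens output in the [0, 1] range (the file name is just a placeholder):

import cv2
import numpy as np

img16 = np.clip(fused * 65535.0, 0, 65535).astype(np.uint16)
cv2.imwrite('frame_000123.png', img16)   # PNG can store 16 bit per channel losslessly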

welcome to the club! :smile:

2 Likes

@dgalland
I have been doing tests and indeed, I have to admit that I had the wrong idea.
Experimentally I have been able to determine that the shutter_speed and exposure_speed variables do not become equal until the second capture. In the capture algorithm I have added a loop that takes 2 dummy shots before the final capture.
Obviously this will mean some increase in the capture time. I hope it is not much; the dummy shots are made with a very low jpeg quality parameter in order to keep the images small.
At the moment I cannot appreciate the impact on a real shot. Where I am I do not have the real machine and I am doing the tests on a simulator that uses a Raspberry Pi 4 as a client instead of a PC.
Thanks for the warning

@cpixip Impressive explanation. Thanks again.

@cpixip
I think there is no perfect equality between the requested shutter_speed and the obtained exposure_speed because the exposure_speed must necessarily be a multiple of the scan time of one line of pixels of the sensor.
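Just to illustrate the quantization (the line time below is a made-up value; the real one depends on sensor mode and framerate):

line_time_us = 18.9                  # hypothetical line scan time in microseconds
requested_us = 4000                  # requested shutter_speed
effective_us = round(requested_us / line_time_us) * line_time_us
print(effective_us)                  # the camera can only realize multiples of the line time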
@Manuel_Angel
In the picamera documentation it is important to read Misconception #1 and #2 and understand that the Pi camera is not a DSLR but rather a video camera. The camera sends a continuous stream of images at the requested framerate, which is why a requested shutter_speed change only becomes effective a few frames later.
So you have to ignore at least two or three frames after a shutter_speed change.
This slows down the capture a lot, which is why I limit myself to 3 exposures. (I don’t think changing the jpeg quality will change anything, because the jpeg processing is done later in the ISP and does not come into play in the exchange between the camera and the GPU of the Pi.)

Of course the shutter_speed modification must be done with exposure_mode='off', otherwise the AGC algorithm will compensate by changing the analog or digital gain. These gains must absolutely remain close to 1.
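A hedged sketch of that idea with the picamera API, assuming an already configured PiCamera instance named camera (the gain values are placeholders, not measured ones):

camera.exposure_mode = 'off'      # freeze the AGC so analog/digital gain stay put
camera.awb_mode = 'off'           # keep the colour gains fixed between brackets as well
camera.awb_gains = (1.8, 1.5)     # hypothetical red/blue gains - measure your own light
camera.shutter_speed = 4000       # bracketed exposure in microseconds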

There remains the question of auto exposure. I really think that for amateur films with scenes of very variable luminosity, auto exposure gives a good starting point for the median image before doing the HDR. It can be seen as an optimization. If the image is very dark, what is the point of making 6 exposures, most of which will be just as dark? I prefer to let the auto exposure do its work and then take my three exposures.

In this HDR case it is more complicated, because you have to switch to exposure_mode='off' to get the over and under images, then switch back to exposure_mode='auto' to return to auto exposure for the next frame (refer to my code for implementation details). In this mode, with all the necessary waits, I capture at about 2 s per frame.

yep, that might be. It’s been years since I wrote that code.

Just for fun - here’s a data capture of mine from the v1 camera dating from April 2017 (that was the time I was investigating this):

The green crosses indicate a certain camera.shutter_speed (precisely: the value plotted is camera.shutter_speed/100) and the red crosses the corresponding camera.exposure_speed (precisely: camera.exposure_speed/100). Note the one-image lag between camera.shutter_speed and camera.exposure_speed.

The blue curve is the actual recorded image intensity and should ideally follow the shutter_speed setting. Clearly, this was not the case in 2017, and even worse, the behaviour differed between a step-up and a step-down in exposure. Whether this is still the behaviour of the Raspberry Pi Foundation’s ISP I do not know - most probably, it has not changed too much.

Just for the record: the fact that camera.shutter_speed is equal to camera.exposure_speed does not seem to indicate that the exposure has really settled to the requested value. I think in the end I decided to simply wait an additional 3 or 4 frames after this happened. So my above comment about my code was only half right (sorry, but this was all a long time ago…)

Also, for the record, this is the behaviour of Raspberry Pi hardware when the camera is operated in capture_continuous mode (that is, “video” mode). When operating in “picture” mode, the camera in fact takes at least two frames - one for getting the current image statistics, and the second one for real. That is why it takes a Raspberry Pi camera two seconds to take a photo with a one-second exposure time (but that’s a different story).

You certainly have a point here. However, from my experience, quite a few scenes in a normal Super-8 movie do show a dynamic range within a single frame which exceeds what you can capture with three different exposures.

The following (A) is a rather typical example of a standard Super-8 frame

and this one (B) is also comparable:

If I had worked only with three exposures (that would be only the upper row of images in the examples above), in (A) the face of the man in the center of the image would have been ill-defined. (B) shows a more pronounced case, also with sub-optimal image detail in the face of the man as well as on the right shoulder, where the detail of the cloth starts to disappear into the darkness.

Granted, I could have worked with three captures and spaced them further apart than one f-stop, covering a larger exposure range that way. But even then I would only sample those very dark areas, in the brightest exposure, with a section of the camera transfer curve which is definitely non-linear (the knee of the S-curve). The same argument applies in reverse for the highlights in the darkest image (the bow of the S-curve).

Since I was able to capture five different exposures within about 2.5 seconds per frame, I went along that way for some time. Nowadays, my system instead uses a tunable light source to establish the different exposures, and the HQ camera I am using operates with constant settings throughout the whole capture.

1 Like

New version of DSuper8 software.

In this version the handling of HDR images has been improved.

To complete the Mertens algorithm I have taken into account the indications of @dgalland and @cpixip. Thanks to both. The dynamic range of the HDR images obtained using the Mertens method has improved markedly, although the contrast has certainly decreased.

The Debevec algorithm has also been improved and gives good quality results with 6 photos per frame.

I have also taken into account the suggestion of @Ral-Film, whom I thank for their contributions. Storage of the bracketing images can now be activated and deactivated at any time during the capture. The exposure time of each individual image is stored in its EXIF data.

Some minor bugs have also been fixed.

Hope everything works fine.

Greetings to the forum.

DSuper8 Wiring Diagram.pdf (45.5 KB)
DSuper8-Server.zip (14.2 KB)
DSuper8-Client.zip (786.9 KB)
Instructions.pdf (33.8 KB)

2 Likes

@Manuel_Angel
I looked at your wiring diagram and I don’t understand the purpose of the ULN2003 circuit in addition to the TB6600. Can’t the 3.3 V GPIO signals of the Pi drive the inputs of the TB6600 directly?

Hi @dgalland,

The driver based on the TB6600 chip uses opto-coupled inputs. To correctly bias the input LEDs, about 15 mA each is needed.

If we connect the pins of the GPIO port directly to the inputs of the driver, it does work, but in my opinion this situation is dangerous for the Raspberry Pi. We risk damaging the GPIO port from excess current. 15 mA is a very high current for the port pins.

The Darlington transistors of the ULN2003 chip have a very high current gain (more than 500), so that with an input current of less than 1 mA, we can obtain an output current of 500 mA. The GPIO port works with a minimal current.

Apart from this, we get two additional advantages:

  • The lighting can be controlled via software, using the transistors of the ULN2003 chip as a switch.

  • We avoid the extra circuitry that would be needed to drive the driver’s LEDs at a voltage lower than 5 V.

I leave a link to an interesting page with a lot of information about the GPIO port. Among other things, they talk about current limits.

Regards.

Hi everyone,
I have done some tests with a very high-contrast image to compare the results of the Mertens and Debevec HDR algorithms.
The starting images have been exactly the same.
Let each draw their own conclusions.

Bracketing images:

Mertens method:

Debevec method:

Regards

1 Like

@Manuel_Angel - can you elaborate a little bit more on how you applied the Debevec method?

The background of my question is the following:

The Mertens method is “just” a way to combine the best-looking parts of a series of exposures into a single image. It normally uses three different parameters to estimate this “best looking” criterion for each image part: usually measures of well-exposedness, saturation and local image contrast. Depending on how you set these parameters, you will get slightly different-looking results. While the output of the opencv Mertens implementation is actually a float image (even with negative pixel values), the intention is nevertheless only to map a stack of images covering a high dynamic range into a low dynamic range image (LDR - usually just 8 bit per color channel).

The Debevec method is actually something very different. It creates a real HDR (actually, a radiance image) out of the different exposures of a scene. To do so, the method needs the exposure information for each of the input images. It then uses that information to calculate the camera’s response functions. Once these response functions are known, it is possible to transform the input images into HDR space. That is usually the output of the Debevec algorithm.

Only after you have obtained the HDR image are you in a position to transform it back to an LDR, which you need for transcoding/transmission/display on standard hardware. This process is called “tone-mapping” (different tone-mapping algorithms are tested/displayed at the end of this page and this page, for example). Tone-mapping is separate from the Debevec algorithm; it is part art and part science, and there are a lot of scientific papers out there on the subject.
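For reference, here is a sketch of how this path looks with the OpenCV API (this is not necessarily how the result above was produced; 'images' is the list of 8 bit exposures and the exposure times are example values only):

import cv2
import numpy as np

times = np.array([0.0004, 0.004, 0.008], dtype=np.float32)        # exposure times in seconds

response = cv2.createCalibrateDebevec().process(images, times)    # estimate camera response
hdr = cv2.createMergeDebevec().process(images, times, response)   # float32 radiance image

# getting back to a displayable LDR is a separate tone-mapping step, e.g. Reinhard:
ldr = cv2.createTonemapReinhard(gamma=2.2).process(hdr)
ldr8 = np.clip(ldr * 255, 0, 255).astype(np.uint8)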

From my experience, I would have expected quite different results from your input images. Usually, if you normalize the Mertens output, it kind of just looks ok. If you do the same with the HDR that the Debevec algorithm outputs, you usually get a very dull image - especially in high-contrast cases.

Clearly, your Mertens result looks duller (less local contrast) than the Debevec image (where it is unclear what type of tone-mapping was applied). I would actually have expected just the opposite result, similar to what is described on this page.

Also, in your input image sequence, there’s a magenta color tint noticeable in at least the three brightest images - which shows up quite clearly in the Mertens output, but not so much in the Debevec result.

What is surprising is the magenta cast on the last three images. That this magenta ends up in the Mertens merge is absolutely normal.
Are you sure you have awb='off'? You should check manually whether this magenta cast is normal at high exposures, and also change the color gains and try the merge again.
Why the magenta does not show up in the Debevec merge is another question. It looks as if the Debevec merge is doing a white balance?