My Telecine Software

Hi @Manuel_Angel

I would suggest trying the following approach for a test:

img = createMergeMertens.process(imglist)

minimum = img.min()
maximum = img.max()
scaler = 1.0/(maximum-minimum+1e-6)

img = scaler*(img-minimum)

img = numpy.clip(img * 255, 0, 255).astype(numpy.uint8)
return img

and see whether it makes a noticeable difference with respect to highlights and shadows.
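
For reference, here is the same idea as a self-contained sketch, assuming imglist is a list of aligned exposures and that you create the Mertens merger yourself - the function and variable names are mine, not taken from your code:

import cv2
import numpy

def merge_and_rescale(imglist):
    # fuse the exposure stack with OpenCV's Mertens implementation
    merger = cv2.createMergeMertens()
    img = merger.process(imglist)          # float32 result, nominally in [0, 1]

    # stretch the actual min/max of the result to exactly [0, 1]
    minimum = img.min()
    maximum = img.max()
    img = (img - minimum) / (maximum - minimum + 1e-6)

    # convert to 8 bit for display or storage
    return numpy.clip(img * 255, 0, 255).astype(numpy.uint8)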

It's been a long time since I went down that path - nowadays I am using my own Mertens implementation, so I am not sure whether the current OpenCV implementation still outputs out-of-range data. You can check this by simply including a single additional line in your code, like in the following segment:

img = createMergeMertens.process(imglist)

print(img.min(),img.max())   # this is the simple test line

img = numpy.clip(img * 255, 0, 255).astype(numpy.uint8)
return img

If you see numbers below zero or above one in this output, the current OpenCV Mertens implementation still suffers from bad scaling.

I observed out-of-range data mainly in high-contrast scenes with clearly defined shadow areas. Again, I haven't checked this for years, but I assume it is still the case.

If you are rescaling the Mertens output to a valid range (as suggested above), you might see a noticeably reduced contrast, depending of course on your material. Since I do a color correction for each scene in postproduction anyway (DaVinci Resolve is recommended), this is not too much of an issue for my workflow. However, it might not fit yours.

In fact, in a production setting, if the rescaling described above is necessary, it is probably better to keep the rescaling fixed (say, map the interval [-0.5, 1.3] to [0, 1]), or at least use a more robust min/max estimator, something like

minimum = numpy.percentile(img, 1)
maximum = numpy.percentile(img, 99.5)
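
As a sketch of both variants (the interval bounds and percentile values are just example numbers, not tuned for any particular material):

import numpy

def rescale_fixed(img, low=-0.5, high=1.3):
    # map the fixed interval [low, high] to [0, 1]; being constant across
    # scenes, it does not change the contrast relation between scenes
    return numpy.clip((img - low) / (high - low), 0.0, 1.0)

def rescale_robust(img, p_low=1.0, p_high=99.5):
    # estimate black/white points from percentiles instead of min/max,
    # so a few extreme pixels cannot dominate the scaling
    minimum = numpy.percentile(img, p_low)
    maximum = numpy.percentile(img, p_high)
    return numpy.clip((img - minimum) / (maximum - minimum + 1e-6), 0.0, 1.0)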

Some additional comments:

This is indeed the way to go if you are working with Raspberry Pi hardware/software. As far as I remember (again, ages ago), you would set the exposure time via shutter_speed and query exposure_speed until both approximately match - that usually takes 3-4 frames (it might be the other way around; I can't remember). For other cameras, you need to find another way to make sure things have settled.
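
Just as an illustration, a rough sketch of that settling loop with the legacy picamera library - the tolerance and the waiting time per iteration are guesses, not measured values:

import time
from picamera import PiCamera

def set_exposure_and_wait(camera, micro_secs, tolerance=0.05, max_tries=10):
    # request a fixed exposure time (in microseconds) ...
    camera.shutter_speed = micro_secs
    # ... and poll the actually used exposure until it has settled
    for _ in range(max_tries):
        if abs(camera.exposure_speed - micro_secs) <= tolerance * micro_secs:
            break
        time.sleep(0.1)                  # roughly a few frame times

camera = PiCamera()
camera.exposure_mode = 'off'             # freeze the gains (normally only after the AGC has settled)
set_exposure_and_wait(camera, 2000)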

I absolutely agree, as this kills the original dynamics of the movie, at least to a certain extent. If you are taking multiple exposures anyway, you can set up your system in such a way that the whole dynamic range of your source material is captured for sure. One way to do this I have described in detail in this post and the ones following it. Basic idea: the exposure reference is the light source itself, which should be imaged in the darkest captured frame in such a way that it is not burning out (an 8-bit value of 230-250, for example). The other, brighter exposures are used to sample a dynamic range of 5-6 stops. From my experience, this is sufficient for all color-reversal film stocks I have encountered (Kodachrome, Agfa, Fuji, etc.).
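
Just to make the idea concrete (the base exposure, the stop spacing and the number of exposures are example values only):

# darkest exposure: chosen so that the bare light source reads roughly 230-250 in 8 bit
base_shutter = 500                                   # microseconds, example value

# brighter exposures, spaced 1.5 stops apart, covering about 6 stops in total
stops = [0, 1.5, 3.0, 4.5, 6.0]
exposure_times = [int(base_shutter * 2**s) for s in stops]
print(exposure_times)                                # [500, 1414, 4000, 11313, 32000]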
