HDR / Exposure Bracketing

HDR seems to be the most important missing link in most transfers for making video look just like the projected image. I’ve always been disappointed with the increased contrast and loss of detail in off-the-wall attempts of my own, and I haven’t seen many DIY transfers that achieve a good contrast range.

I’m looking at how to do HDR with the FLIR Blackfly BFLY-U3-23S6C-C camera and it looks pretty straightforward in FlyCap. If I read correctly, I can take 4 different exposures from a single trigger from my Hall effect sensor each time a frame stops in the gate, which is neat as it removes the need for any complex circuitry.

Has anyone else tried this, and if so, what settings did you find useful? The software seems to offer either varying shutter or gain for each exposure, though I don’t know yet how easy that would be to translate into stops.
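My rough understanding of the arithmetic so far (happy to be corrected): doubling the shutter time adds one stop, and sensor gain in dB converts at roughly 6 dB per stop. A quick sketch of that, with an arbitrary placeholder base exposure:

```python
import math

def shutter_stops(t_new, t_base):
    """Exposure change in stops from varying shutter time (doubling = +1 stop)."""
    return math.log2(t_new / t_base)

def gain_stops(gain_db):
    """Exposure change in stops from analog gain in dB (20*log10(2) ~ 6.02 dB per stop)."""
    return gain_db / (20 * math.log10(2))

# Example: a 4-exposure bracket spaced one stop apart, done via shutter time.
base = 1 / 500                                  # seconds - arbitrary placeholder
bracket = [base * 2 ** s for s in (-1, 0, 1, 2)]
print([round(1 / t) for t in bracket])          # -> [1000, 500, 250, 125]
```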

Yes, I have found HDR useful with still photo scans, but have not tried it with ciné film scans. And my HDR work has been done in post, not in the scanner. I generally use a +1, 0, -1 exposure with 3 images or a +1.5, +1, 0, -1, -1.5 exposure with 5 images.

The top 2 color images were a comparison of dye transfer and inkjet prints. I scanned an original dye transfer print, then made an inkjet print of the scan to compare the quality of the 2 printing methods. I was not able to achieve a close enough copy until I used invisible HDR on the scan.

dye transfer vs inkjet 2 D.D. Teoli Jr.

dye transfer vs inkjet D.D. Teoli Jr.

The B+W photo was done with HDR as well as Lightroom contrast grading from a 6x6 negative shot with a Hasselblad SWC. (I think I used +1, 0, -1 but it could have been +1.5, +1, 0, -1, -1.5…can’t remember.)

sunlit-slipper-silver-print-vs-inkjet-print-copyright-2013-daniel-d-teoli-jr

Vintage silver gelatin print left - Inkjet print right

Unless you have AI to repeat the contrast grading done by hand, this level of recovery is not achievable with HDR scans alone for ciné film. Every ciné frame would have to be worked on individually.

Daniel D.Teoli Jr. Archival Collection
Daniel D.Teoli Jr. Small Gauge Film Archive
Daniel D.Teoli Jr. VHS Video Archive
Daniel D.Teoli Jr. Audio Archive
Daniel D.Teoli Jr. Social Documentary Photography

Wow! Okay, that image where you actually recovered the window from all of that blown-out brightness is really impressive. I would imagine that it’s not possible with anything under 10-bit images.

All of the examples of this type that I have seen involve luminance. Is it possible to do HDR scans and recover whatever color is left in a film print that has faded to red? Could you use HDR to eke out whatever minuscule amounts of green and blue dye might remain in a 16mm print?


I don’t use the highest bit depths when working with TIFF scans. I can’t afford the storage space. I think it may have been an 8-bit scan and 8-bit in Lightroom.

When I get time I will do a shootout between 8-bit and 16-bit B&W. I know people overblow the RAW vs. JPEG debate, so I will have to check out the bit-depth debate for myself.

Here are tests of 31 generations of JPEGs.

I work in a tremendous number of areas for a broke, one-person archive… film, VHS, audio, photos, magazines and publications, ephemera… and have a huge number of scans to deal with. Plus my own photography’s demands for storage are also very large. I can’t afford the latest LTO drives and have to deal with individual drives. Magnetic HDDs are only so good for longevity. SSDs are terrible for longevity unless you keep them powered, and they can lose data fast in an unpowered state. So anything of archival import eventually gets put on M-Disc. And inkjet-printable M-Disc is $$, which is another costly area of my budget.

What I’m getting at is that I can’t always work in the highest quality available and many times make do with ‘decent’ quality rather than ‘best’ quality. Now they are talking about putting data on synthetic quartz the size of a Post-it note via laser. Who knows how much that will cost or if it is even feasible for the average Joe or Jane’s budget?

As far as color correction?

My video software is very basic and it does red color correction poorly. Talk to the users of the top post-production software to get an answer. I’ve seen a demo by Lasergraphics of an instant color correction of red-faded film via their scanner software. Looked impressive. Since I can’t do much with red films I turn them into B&W for now. If the lotto ever cooperates then all doors are open. But until that time, this is how I do it.

You can talk to Perry at Gamma Ray, he is into HDR film scans.

He said he does HDR in the scanner versus my method of doing it in post. My still scanner does not scan at various exposures, so again, that is how I make do with what I’ve got. The 2K Retroscan scanner can run scans at different exposures, but I have no idea how I’d combine them.

Good luck!

As I am working on a Super-8 telecine based on HDR, let me throw in my 2 cents as well.

First, a color-reversal film like Kodachrome 25 has a tremendous dynamic range. When it is projected onto the screen, our visual system is exceptional at viewing these high-contrast images.

Maybe in a few years, when HDR gear and formats are widely available, we will be able to work directly in HDR. Until then, we are stuck with display hardware barely able to display an 8-bit-per-channel image properly. Also, most common video file formats compress (among other things) the quantization fidelity of the color channels. So currently, a major challenge is to acquire the huge dynamic range of the original source material faithfully and later map it onto the reduced dynamic range of the displays currently available and in use.

HDR is used by a lot of people in quite different ways. So let’s clarify a little bit:

  1. The standard HDR acquisition pipeline uses a range of different exposures of a single frame. Normally, these different exposures are sufficient to calculate the transfer characteristic of the imaging system from the set of images. This is an essential step, as these curves are needed to map the normal images into the HDR color space. What you end up with is an HDR image which ideally captures the full dynamic range of the scanned frame. However, these images look rather dull when displayed on standard equipment, so an additional step is usually needed: “tone-mapping”. A tone-mapping algorithm converts the HDR image into an image which “looks right” on a normal display. There are a lot of tone-mapping algorithms around these days, and I can assure you that every one of them usually fails to achieve a satisfactory result. You end up manually tuning the mapping - something you might not want to do for every single scene of a movie. (A minimal sketch of this pipeline follows the list.)

  2. Occasionally, people do HDR work with 12-bit, 16-bit, etc. “raw” material. Here, no HDR capture is involved at all, only the later “tone-mapping” process. At most, artificially “exposed” images are created as intermediates, but they are all based on the single raw image. This is not HDR in the true sense; it is just the tone-mapping part: reducing a high-dynamic source to something a low-dynamic display can handle.

  3. There is another technique which combines a stack of differently exposed images into a single low-dynamic image. This is called “exposure fusion”, and that algorithm actually mimics, in some sense, the way our own visual system copes with the huge dynamic range of a projected image.
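For those who want to try approach 1 themselves: OpenCV ships the whole chain. A minimal sketch - the file names and exposure times here are placeholders, not values from my own pipeline:

```python
import cv2 as cv
import numpy as np

# Placeholder file names; the exposure times (in seconds) must match the captures.
files = ["frame_exp_m2.png", "frame_exp_0.png", "frame_exp_p2.png"]
times = np.array([1 / 2000, 1 / 500, 1 / 125], dtype=np.float32)
images = [cv.imread(f) for f in files]

# 1. Recover the transfer characteristic (camera response curve) from the stack.
response = cv.createCalibrateDebevec().process(images, times)

# 2. Merge the exposures into a linear HDR radiance map using that curve.
hdr = cv.createMergeDebevec().process(images, times, response)

# 3. Tone-map the HDR image back into something a normal display can show.
ldr = cv.createTonemapDrago(2.2).process(hdr)    # 2.2 = display gamma
cv.imwrite("tonemapped.png", np.clip(ldr * 255, 0, 255).astype(np.uint8))
```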

I have worked with all three of the above approaches, implementing each of them in my own software. From my experience, only approach 3, exposure fusion, gives you satisfactory results in the end. Most importantly, it is a mostly parameter-free approach, so a whole movie can be processed, from normally exposed imagery down to scenes which are 1 or 2 stops underexposed. I give here an example:

MY_00646

On the right side, you see the best single-exposure capture I could achieve for this particular frame. As one can expect, the highlights are blown out (cf. the rocks on the left roadside), while structure and color are lost in the dark areas (for example, the blue of the jeans is quite similar to the dark brown of one man’s coat in the frame capture). Well, only the mid-tones are captured ok.

Now, to the left is the result of the exposure fusion algorithm. Both the dark and the bright image areas are rendered better, and the mid-tone range is still ok.

This particular frame is from a Kodachrome 25 Super-8 movie. It was digitized with five different exposures, spanning an exposure range of 5 f-stops. Anything less would result in a loss of detail in either the bright or the dark areas of the image. Here’s another example, captured and processed with the same transfer parameters. There is no manual adjustment between the captures of the two scenes:

MY_01906

Again, on the right side you see the best-exposed single frame, and on the left side the result of exposure-fusing a stack of 5 different exposures, with a dynamic range of 5 f-stops.

Of course, capturing HDRs from a movie comes at a price: you need to stop the frame perfectly before you start acquiring images. Even the slightest movement of the frame will ruin your result. Also, at least the cameras I had available have a substantial lag between switching the exposure value and actually delivering the requested exposure. It turns out that you need to wait between 4 and 9 frames for the exposure to settle, depending on the camera model.

All this makes HDR capture a slow process. My system currently needs about one minute of capture time for one second (18 frames) of film. This is at a capture resolution of 2880 x 2160 pixels. For lower resolutions, the system is slightly faster. Exposure fusion is also a quite complex computational process. For the resolution quoted, it takes about 1250 ms to read the captured frames into memory, and then about 3725 ms to compute the fused result - per frame. Again, lower resolutions are substantially faster to process.


Single-image HDR is sometimes called pseudo-HDR, but don’t confuse the two. You will get different results from just plugging the single image into HDR software and hitting play versus processing 3 or more exposures in post that are used as if they were separate camera exposures. But it does not matter what people call it… I just do it.

If I had just plugged a single image into the HDR software, the results would not be the same as the single-image B&W HDR I sent in.

The best bet for HDR is if you can do in-camera multiple exposures, but that is impossible when things move.

Here is another single image HDR with a number of post exposures. (Don’t remember how many.) I don’t have the original handy, but lots of detail was missing in the original. It is not exactly invisible HDR, but close enuf for me.

Whoop Whoop fire breather D.D. Teoli Jr. mr

This HDR below is more of a ‘hyper-real’ style HDR, sometimes called painterly. Again, multiple exposures in post.

faces of gentrification lr daniel d teoli jr MR


Thanks guys for your replies, very very helpful.

I guess @cpixip I’m referring to exposure fusion. Your results are incredible, I’m not sure if I’m more envious of the closeness you’ve got to a projected image, or the fact you were able to shoot K25 in Super 8. When I started shooting film seriously the only supplies available were a few years past expiry with very variable quality (really dependent on how the seller had stored it and how honest they were about freezing it), and K40 had just been discontinued as well.

I’m hoping that the FLIR software I’m using won’t suffer from the exposure lag, as the HDR is a camera design feature and it has a high FPS capacity, but I can’t actually find any documentation so I’ll have to test it out and report back. My modified Eumig 610D only goes as low as 3fps so it will be impressive if I can get it to work without changing the motor out.

@bainzy - concerning “Kodachrome 25”… - well, my memory was twisted a little bit here. The Super-8 film stock I was actually using was Kodachrome K40. Kodachrome 25 was the stock I used for 35 mm photography at that time. I do mix that up occasionally…

Actually, I still own a single Kodachrome 40 cartridge from 1988, at that time priced at about 19 DM, which is more or less equivalent to about 10 € nowadays:

IMG_20200314_093134

Too bad there is no lab available any longer to process it. There was still a lab in Switzerland, but it closed around 2006…

Anyway. From my experience, the exposure fusion algorithm (Mertens/Kautz/Van Reeth) is the best bet for combining different exposures of a single frame into something which is “viewable”.

Most importantly, it is basically a parameter-free approach (there are parameters, and for optimal results, you want to tweak them together with the specific camera you are using). Once you find a sweet spot, it does not matter much if you scan differently exposed scenes or even different film stock - the results look ok. A great advantage in my view.

The original authors of the exposure fusion algorithm did not discuss in much detail how their approach actually works. Here’s my take on it: if you come from a photographic background, you know that it was usual practice in an analog photography lab to brighten and darken certain image parts which would otherwise come out too dark or not show enough texture. You would select a main exposure time for the print and hold your hand or appropriately cut-out paper pieces over certain image areas which would otherwise come out too dark. After that, you would do an additional exposure to further darken, locally, image areas which would otherwise be too bright in the final print.

In a way, exposure fusion automates this technique: the authors devised certain image operators looking for regions in each of the exposures of a scene which are

  • well-exposed
  • have a good color saturation
  • or a good local image contrast

Now, for combining these areas of interest, they chose a well-known image combination algorithm which is pyramid-based and was initially proposed by Burt/Adelson in 1983. That is more or less the basic idea behind exposure fusion, as far as I understand it.
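Here is a rough sketch of how such per-pixel weights can be computed - this is my reading of the paper, not the reference implementation, and the constants are purely illustrative:

```python
import cv2 as cv
import numpy as np

def fusion_weights(img_bgr, sigma=0.2):
    """Per-pixel weight for one exposure, in the spirit of Mertens et al.
    (my own reading of the paper, not the reference implementation)."""
    img = img_bgr.astype(np.float32) / 255.0
    gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)

    # Local contrast: magnitude of the Laplacian response.
    contrast = np.abs(cv.Laplacian(gray, cv.CV_32F))

    # Color saturation: standard deviation across the three color channels.
    saturation = img.std(axis=2)

    # Well-exposedness: Gaussian around mid-gray, combined over the channels.
    well_exposed = np.prod(np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2)), axis=2)

    return contrast * saturation * well_exposed

# The fused result then blends the exposures with these (normalized) weights,
# using the Burt/Adelson Laplacian pyramid to avoid visible seams.
```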

In any case, the best part of all this is that you do not even have to bother to understand or implement this algorithm yourself (well, I did that anyway… :joy:) - it is implemented as “cv.createMergeMertens()” in the OpenCV computer vision library. If you want to look at exposure fusion and all the other options (HDR creation + tone-mapping) the OpenCV library has available, the following link shows you example code in C++, Java and Python (OpenCV HDR algorithms) which you can use as a starting point for your own software.
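To make that concrete, a minimal usage sketch - the file names and the number of exposures are placeholders:

```python
import cv2 as cv
import numpy as np

# Placeholder file names: five exposures of one and the same film frame.
stack = [cv.imread(f"frame_0001_exp{i}.png") for i in range(5)]

# Exposure fusion (Mertens/Kautz/Van Reeth); no exposure times are needed.
# createMergeMertens() also takes optional contrast/saturation/exposure weights.
fused = cv.createMergeMertens().process(stack)     # float32, roughly in [0, 1]

# Clip and convert back to 8 bit for display or encoding.
cv.imwrite("frame_0001_fused.png", np.clip(fused * 255, 0, 255).astype(np.uint8))
```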

Concerning the exposure lag and so on: there are two different points here which are important. For one, the image which is taken by the camera at a specific point in time is first processed in the camera, then transmitted to your computer and there basically processed again by the device driver. All this happens before your software even sees the data. That pipeline introduces a fixed temporal delay, which however could be taken into account by a proper software design.

If you are working with low-cost hardware, you are going to meet another challenge when attempting HDR captures: your camera needs some time to actually reach the desired exposure level.

I specifically looked at three different low-cost cameras: the Raspberry Pi v1, the Raspberry Pi v2 and the see3cam_cu135 camera.

The Raspberry Pi cameras have all sorts of automatic stuff running which is difficult or even impossible to deactivate for full manual control. Specifically, you cannot immediately switch to the requested exposure. It takes at least 3-4 frames until the actual exposure time is even close to the requested one.

The same is true for the see3cam_cu135 camera I am currently using. Here’s a plot of see3cam_cu135 data showing that behaviour:

[Plot: mean frame brightness vs. frame number after an exposure change, for several requested exposure values]

What you see here vertically is the mean brightness of a fixed film frame. Horizontally, the frame number after an exposure event is displayed. As you can see from the traces, it takes 3 frames until the camera even reacts to the new exposure time (I started with two slightly different initial exposure settings, one close to 100, one close to 128 - that explains the two different lines at frame positions 1-3).

This initial delay is most probably the delay introduced by the camera + device driver pipeline.

After this initial delay, the camera switches to the vicinity of the requested exposure value, but not exactly to that value. Mostly, the exposure values seen at frames 4 and 5 do not match the requested exposure values. Even worse (look at the brown trace!), sometimes the exposure only decays very slowly to the requested value.

To make matters worse, how long that relaxation lasts depends heavily on both the initial and the final exposure values - and, as these traces show, you might have to wait more than 40 frames with that specific camera for the exposure to settle. That is clearly impractical if you are scanning thousands of film frames. If you do it anyway, you will notice some flicker in the exposure-fused imagery.
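One possible workaround - just a sketch, assuming your camera is reachable via cv.VideoCapture and actually honours CAP_PROP_EXPOSURE, which many low-cost cameras do not: instead of trusting a fixed frame count, discard frames after an exposure change until the mean brightness stops moving:

```python
import cv2 as cv

def grab_settled_frame(cap, exposure, max_wait=40, tol=1.0):
    """Request a new exposure on an open cv.VideoCapture and discard frames
    until the mean brightness stabilizes (or max_wait frames have passed)."""
    cap.set(cv.CAP_PROP_EXPOSURE, exposure)   # driver-dependent; may be ignored
    frame, last_mean = None, None
    for _ in range(max_wait):
        ok, frame = cap.read()
        if not ok:
            continue
        mean = float(frame.mean())
        if last_mean is not None and abs(mean - last_mean) < tol:
            return frame                      # brightness has settled
        last_mean = mean
    return frame                              # give up after max_wait frames
```

Whether this catches the slow-decay case (the brown trace) depends on how tight you set tol; both tol and max_wait would need tuning per camera.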

Actually, exposure fusion has the nice property of equalizing shortcomings of your camera images, provided you are using more images than necessary to achieve a certain dynamic range: errors in camera exposure will be reduced and camera noise will go down as well in the exposure-fused image.

With your FLIR camera, you might have a much better setup available than I have. I worked with such cameras when the guys were still “Point Grey Research”, long before they were bought by FLIR. But rapid exposure control was not something we needed or looked into at that time. I think it would be interesting for the forum if you could report your results here!