Evaluating Extended Exposure Fusion

Hello Everyone,

I’m reaching out to discuss the Extended Exposure Fusion method, a recent development in HDR imaging, particularly for handling exposure series. This method offers several enhancements over the traditional Mertens merge technique, and I’m considering whether it would be worth porting to OpenCV or C++. Here are the key advantages of Extended Exposure Fusion:

  1. Reduced Artifacts: It effectively minimizes common artifacts such as out-of-range issues and low-frequency halos.

  2. Single Exposure Application: Unlike Mertens merge, this method can also be applied to single exposure images, offering greater flexibility.

  3. Advanced Fusion Techniques: Incorporating advanced methods like Quaternion Factorized Simulated Exposure Fusion (QFSEF), it aims to improve results in challenging lighting conditions.

For a practical demonstration of this method, there’s an online demo available here: Extended Exposure Fusion Demo.

I’m interested in your opinions on two key points:

  • Do you think the Extended Exposure Fusion method is valuable enough to warrant its porting to OpenCV or C++?
  • How do you assess the quality of the results obtained through the online demo, especially in comparison with the traditional Mertens merge?

Your insights and experiences would be greatly appreciated!


Well, this is a kind of old article from 2019. From what I remember from reading it some time ago, the original exposure stack is artificially enlarged by, say, doubling the number of input images. The additional input images are “created” by applying a kind of squashing function to the original ones. The “normal” Mertens exposure fusion algorithm is then used for merging this artificially extended exposure stack.

Of course, the final result needs to be remapped into a usable exposure range for further processing.

Taking all of this together, you end up with more parameters to tune and potentially better results than with the usual exposure fusion. Since the input images are squashed before fusion, the proposed algorithm also yields a different look than standard exposure fusion. The details can be found in the paper.
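To make the pipeline concrete, here is a rough Python/OpenCV sketch of the idea as I remember it. The gamma curve used as the “squashing” function, the final clipping, and the file names are placeholders of my own, not the actual remapping functions from the paper:

```python
import cv2
import numpy as np

def squash(img, gamma):
    # Placeholder "squashing" curve (a simple gamma); the paper's actual
    # remapping functions are different.
    f = img.astype(np.float32) / 255.0
    return np.clip((f ** gamma) * 255.0, 0, 255).astype(np.uint8)

def extended_fusion(images, gammas=(0.5, 2.0)):
    # Artificially enlarge the exposure stack with remapped copies ...
    stack = list(images)
    for img in images:
        for g in gammas:
            stack.append(squash(img, g))
    # ... then run the standard Mertens fusion on the extended stack.
    fused = cv2.createMergeMertens().process(stack)
    # Remap/clip the result back into a usable 8-bit range.
    return np.clip(fused * 255.0, 0, 255).astype(np.uint8)

images = [cv2.imread(p) for p in ("under.jpg", "mid.jpg", "over.jpg")]
cv2.imwrite("eef_sketch.jpg", extended_fusion(images))
```

Note that with two extra remapped copies per input the stack triples in size, which is exactly the running-time concern mentioned below.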

The online demo is nicely done and I would invite people to run it with their own images. This “new” algorithm seems to produce softer (more even) results than the standard one. It’s probably a matter of taste which one someone might prefer.

As with every exposure fusion algorithm I have encountered, the appearance of the material is notably changed by the fusion step. While in the original footage you can still sense (with a little expertise) the underlying film stock (Kodachrome vs. AgfaChrome, for example), after exposure fusion, in a way “it all looks the same”.

The major drawback of the proposed method is the increased running time, as the number of images in the stack is increased on purpose. That will haunt you once you get into processing interesting frame sizes.

Otherwise, note that the implementation uses the original Mertens exposure fusion approach anyway, which is already implemented in OpenCV. You just need to add the remapping functions for input and output. This should not be too difficult, and since these functions are simple ones (only pixel operations), they could be implemented efficiently in NumPy/Python with decent speed.
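If the remapping functions really are purely pixel-wise, a lookup table is probably the cheapest way to apply them to 8-bit input in NumPy/OpenCV. The curve below is just a placeholder standing in for one of the paper’s actual remaps, and the file name is hypothetical:

```python
import cv2
import numpy as np

def make_lut(curve):
    # Build a 256-entry lookup table from a scalar tone curve defined on [0, 1].
    x = np.linspace(0.0, 1.0, 256)
    return np.clip(curve(x) * 255.0, 0, 255).astype(np.uint8)

# Placeholder curve standing in for one of the paper's remapping functions.
lut = make_lut(lambda x: x ** 0.7)

img = cv2.imread("frame.png")        # hypothetical 8-bit input frame
remapped = cv2.LUT(img, lut)         # one pixel-wise pass per image
# The output remap would be applied the same way to the fused 8-bit result.
```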

One caveat: the remarks above are from memory. It’s been at least one or two years since I looked into this paper.


I ran the online demo with the house data set and default settings. It appears the white balance is different for the output of the two methods. I think the color temperature of the extended exposure fusion output is closer to the original bracketed exposures. I can’t tell if the shift is global or local. I’m curious about others’ opinions, because color correction is not a strength of mine.

Others on this forum have criticized exposure fusion for its color shifts and have started working with a single raw image for each film frame.

My suggestion would be to use actual bracketed film scans, run the online demo for different values of the beta parameter, and compare the results before spending time coding in C++.

Question: would color shift be an issue if the input images were first transformed to a different color space, e.g. HSV/HSL or YUV/YIQ? Would it be possible to perform the fusion only on the intensity component and preserve the original color? (A rough sketch of that idea follows below.)
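Just as an illustration of the luma-only idea (not something from the paper): one could fuse only the Y channel in YCrCb with OpenCV’s standard Mertens merge and borrow the chroma from a single reference exposure, roughly like this. The file names and the choice of the middle exposure as chroma reference are assumptions:

```python
import cv2
import numpy as np

def fuse_luma_only(images, chroma_ref=1):
    # Convert each bracketed exposure to YCrCb.
    ycc = [cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb) for img in images]
    # Replicate Y to 3 channels so MergeMertens sees the format it expects.
    luma_stack = [cv2.cvtColor(y[:, :, 0], cv2.COLOR_GRAY2BGR) for y in ycc]
    fused_y = cv2.createMergeMertens().process(luma_stack)[:, :, 0]
    fused_y = np.clip(fused_y * 255.0, 0, 255).astype(np.uint8)
    # Keep the chroma of one reference exposure, replace only the luma.
    out = ycc[chroma_ref].copy()
    out[:, :, 0] = fused_y
    return cv2.cvtColor(out, cv2.COLOR_YCrCb2BGR)

images = [cv2.imread(p) for p in ("under.jpg", "mid.jpg", "over.jpg")]
cv2.imwrite("luma_fused.jpg", fuse_luma_only(images))
```

Whether this really preserves “the original color” is debatable, since the chroma of the reference exposure is itself unreliable in its clipped highlights and shadows.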

[Attached images: Input (bracketed exposures), Exposure Fusion result, Extended Exposure Fusion result]


– Note the quite noticeable white borders in the window frames looking outward with the EEF algorithm. They are not present in normal exposure fusion. The wall right of the cupboard is too yellowish for my taste in the EEF result.

The EEF result generally has more (local) contrast, and that leads to more saturation in the colors. It is hard to compare color fidelity from such rather arbitrary images. One would need to do some serious tests here, with color charts in different exposure zones. I have been planning this for a long time. But… :upside_down_face:

On page one of the archive further down there is a colorlogic color sheet which can be loaded into the demo by clicking the “reconstruct” button.

Initially I was keen on reducing the halos in the merged image; now I have to fiddle with the settings, as any subtle color shift seems to be amplified by the EEF method. Overall the image looks more vibrant to me, but as @cpixip said, it’s a matter of taste which image appearance one prefers.

I tried to contact someone who ported the SEF (single exposure with generated additional exposures) code to OpenCV, to ask if he could help port the EEF method as well; maybe he has something up his sleeve.