HDR vs High Resolution - which is better? cheaper?

I have thought for some time that I’d need a 4K scanner to best preserve the films in my collection - most of which are 16mm animated cartoons from the ’30s to the ’70s. However, as I learn more about HDR, it seems it may be a better, more cost-effective way to preserve my films for the future. HDR images likely contain more data and offer greater colour-grading flexibility. Further, it’s likely that HDR is actually closer to the ‘theatre experience’ most of these films were originally intended for, so long as you watch them on an HDR-capable monitor or projection system. I’m curious which technology makes the most sense to build for - I’m assuming it’s an either/or situation.
I may be way off base here, but wouldn’t it be cheaper to build a scanner at 2K resolution with HDR than 4K and no HDR?

@Jitterfactor - of course, you want to do both, if possible :upside_down_face:

Looking toward the future, it is probably worth noting that both developments are happening at once: image resolutions are going up and HDR is becoming mainstream. So, what’s in the basket for each?

Well, both topics, image resolution and HDR, require more resources and investment the better you want to get at them - in real $ or €, in scan and processing time, and in storage requirements and internet transfer times.

Resolution can be discussed endlessly. On one end of the scale are those who want to “see” every single grain of the original movie source; on the other end are those who apply heavy filtering anyway to get rid of all the grain and other image imperfections.

Note that the latter, grainless approach also gives you nice file sizes for streaming the material.

There is probably a personal sweet spot with respect to scan resolution, and it depends a lot on the material in question. A Super-8 film will be very different from a 16 mm film in terms of the resolution needed. Also, normal real-life footage is different from animation material - the latter has larger, smoothly varying image patches, and it might not look nice if homogeneous color patches feature too much noise. On the other hand, to faithfully capture the shape of the brush strokes, you might be tempted to go for a higher scan resolution and reduce the noise in post. In the end, it’s a balance between personal taste and the intended audience/distribution platform.

If you are aiming at archival purposes, the best scan resolution is probably the highest you can afford. You can always reduce resolution and grain during post-processing. Your archival copy might also be re-rendered later at a higher resolution, once higher-bandwidth distribution channels become available.

By the way, HDR is a generic term and needs to be split into different topics. Originally, HDR referred to an image-capture process which records the full radiance of a scene faithfully. You need special image formats for such a thing, like the OpenEXR format, for example.

Now, the images we are used to seeing in normal life are not radiance images - they are tweaked so that the midtone range features a nice contrast, while the highlights burn out into pure white and the shadows dip into black. An unprocessed radiance image, in contrast, looks rather dull.

In order to get a useful image, a secondary process called tone-mapping is necessary for a real HDR image. Basically, in the tone-mapping step, the contrast range of the radiance image is compressed in such a way that the result looks “right” to our eyes. Simultaneously, during tone-mapping, the immense range of pixel values of the original HDR (high dynamic range) image is reduced to obtain an LDR (low dynamic range) image which can be stored, transmitted and displayed on usual hardware.
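
To make the tone-mapping step a bit more concrete, here is a minimal sketch of a global Reinhard-style operator in Python. The file names are made up, the operator is just one of many possible curves, and reading EXR files requires an OpenCV build with OpenEXR support - so treat this as an illustration of the idea, not a production tool.

```python
import numpy as np
import cv2

# Hypothetical input: a linear radiance image stored as OpenEXR
# (reading .exr needs an OpenCV build with OpenEXR support enabled).
radiance = cv2.imread("frame_0001.exr", cv2.IMREAD_UNCHANGED).astype(np.float32)

# Global Reinhard-style tone curve: compress the (in principle unlimited)
# radiance range into [0, 1]. 'key' sets how bright the mid-tones come out.
key = 0.18
eps = 1e-6
# OpenCV loads channels as B, G, R
lum = 0.0722 * radiance[..., 0] + 0.7152 * radiance[..., 1] + 0.2126 * radiance[..., 2]
log_avg = np.exp(np.mean(np.log(lum + eps)))   # geometric mean = scene "average" brightness
scaled = radiance * (key / log_avg)            # expose the scene for the mid-tones
tonemapped = scaled / (1.0 + scaled)           # soft highlight roll-off

# Gamma-encode and quantise down to the usual 8 bits per channel (LDR)
ldr = np.clip(tonemapped, 0.0, 1.0) ** (1.0 / 2.2)
cv2.imwrite("frame_0001_ldr.png", np.round(ldr * 255.0).astype(np.uint8))
```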

The typical LDR image features 8 bits per color channel - that is probably also the bit depth your monitor is currently working with. An HDR image format like the above-mentioned OpenEXR stores floating-point pixel values and can therefore represent a practically unlimited dynamic range.

Somewhat in between are captures in RAW format. Today’s camera sensors operate with anything between 10 and 14 bits. That is too much for current display technology (remember: only 8 bits per color channel, maybe less, but not more), so people need to “develop” their RAW files. That step is equivalent to the tone-mapping step mentioned above for HDR images.
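
As an illustration of what this “development” step boils down to, here is a hedged sketch in Python. It assumes the 12-bit sensor data has already been demosaiced into a linear RGB array; the black/white levels, the plain gamma curve and the function name are all illustrative, not any camera vendor’s actual pipeline.

```python
import numpy as np

def develop(linear12, black_level=64, white_level=4095, exposure=1.0, gamma=2.2):
    """Map linear 12-bit sensor values onto an 8-bit displayable image.

    Conceptually the same decision as tone-mapping: which slice of the
    sensor's range lands in the 8-bit output, and what gets clipped.
    """
    x = (linear12.astype(np.float32) - black_level) / (white_level - black_level)
    x = np.clip(x * exposure, 0.0, 1.0)   # 'exposure' decides where the highlights clip
    x = x ** (1.0 / gamma)                # redistribute the levels towards the shadows
    return np.round(x * 255.0).astype(np.uint8)

# e.g. develop(raw_rgb, exposure=2.0) opens up the mid-tones at the cost of the highlights
```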

“Development of RAW” or “tone-mapping of HDR” imagery is actually a very subjective thing. The reason is that you need to decide which part of the immense tonal range available you actually want to use. Will you squeeze the whole dynamic range into your small LDR range, resulting in a dull image? Or will you allow a lot of shadow detail to be lost, recovering in turn dynamic range for your mid-tones? Or does it not matter if the highlights burn out, enabling you to keep some important information in the shadow regions?

There’s still another thing which is also called “HDR” and which I probably need to discuss here. Modern mobile cameras feature such a mode, for example. Even some sensor chips already have built-in “HDR” modes. What is this? Well, basically, the camera takes two or more exposures in short succession with different exposure settings and combines (tone-maps) them into a single LDR (8 bits per color channel) image. While the algorithms that combine these different exposures are usually not disclosed, my best bet is that they are similar to “exposure fusion”, which I have already described on this forum on several occasions (see here for some basic examples and here for a comparison between HDR and RAW capture).
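
For reference, exposure fusion is readily available in OpenCV as the Mertens merge; the sketch below shows the basic call, with made-up file names standing in for the bracketed captures of a single frame.

```python
import cv2
import numpy as np

# Hypothetical bracketed captures of the same film frame (file names are made up)
exposures = [cv2.imread(name) for name in ("frame_dark.png", "frame_mid.png", "frame_bright.png")]

# Mertens et al. exposure fusion: each pixel is weighted by contrast, saturation
# and well-exposedness and then blended - no radiance map, no explicit tone-mapping step.
fused = cv2.createMergeMertens().process(exposures)

# The result is floating point, roughly in [0, 1]; clip and quantise to an 8-bit LDR image.
cv2.imwrite("frame_fused.png", np.clip(fused * 255.0, 0, 255).astype(np.uint8))
```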

At this point in time, I am not aware of anybody having used a sensor or camera with built-in HDR capability. So the normal route to obtain an “HDR” (i.e. an LDR derived from a real HDR) is to take several differently exposed LDRs and combine them during post-processing in one way or another. Clearly, there is not really a cost factor attached to this other than your personal time, save for the need for extended storage space. You might need some technique to ensure perfect registration of the different exposures of a single frame. Otherwise, you can use the image resolution of any camera you can afford.

In my experience, the most limiting factor with HDR scanning is the additional time it takes to capture and process the different exposures.

Because of the time factor involved in HDR capture, utilizing a single RAW capture with a high-quality camera might be a better way to proceed. A single RAW file requires much less data to transfer and store. And the technique is already built into cameras, software and post-production workflows.

Note finally that commercially produced material is usually optimized in such a way that the dynamic range stays within certain limits anyway - so a real HDR capture might not be needed at all. That is even more true for animation, which is recorded in a very controlled way in the animation studio.


HDR scanning is a totally different thing to HDR delivery. HDR scanning is about capturing the full dynamic range of the film, i.e. all of the shadow and highlight detail. It isn’t an either/or proposition between resolution and HDR, and with very dense film you may need to do an HDR capture, as no sensor on the market has deep enough wells to capture all of the dynamic range in a single pass without blooming. Basically, get the highest-resolution, best-SNR sensor you can afford, then do two or three exposures of each frame and stack those exposures in such a way as to recover the shadow and highlight detail. Again, this isn’t about HDR delivery, it’s about capturing all of the detail on the film.
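
One plausible way to do that stacking, sketched in Python: it assumes the exposures are already registered and linear (no gamma applied), and the weighting function, white level and exposure ratios are illustrative rather than what any particular scanner does.

```python
import numpy as np

def merge_exposures(frames, exposure_factors, white_level=65535.0):
    """Merge registered, linear exposures of one film frame into a single
    high-dynamic-range frame, down-weighting clipped or crushed pixels."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    weights = np.zeros(frames[0].shape, dtype=np.float64)
    for frame, factor in zip(frames, exposure_factors):
        x = frame.astype(np.float64) / white_level          # normalise to [0, 1]
        # Trust mid-tones; give near-clipped highlights and crushed shadows ~zero weight
        w = np.clip(1.0 - np.abs(2.0 * x - 1.0) ** 4, 1e-4, None)
        acc += w * (x / factor)                             # rescale onto a common radiance axis
        weights += w
    return acc / weights    # linear HDR result - grade or tone-map afterwards

# e.g. merged = merge_exposures([dark, mid, bright], exposure_factors=[1.0, 4.0, 16.0])
```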


Some of the latest Sony sensors and some BMD and Arri sensors do have the multiple-exposure/HDR capability built in. The Arri XT uses this method. The Director uses multiple separate exposures for its ‘triple flash’ method, which gets detail from dense print film that other scanners can’t currently match.

@Ozfilm - thanks for the additional info!

Well, I am coming from an image-processing background, and for me, HDR in its true sense corresponds to capturing and processing an unlimited dynamic range. No display known to me can actually show such an image faithfully, at least if the image contains some extreme values. Think, for example, of a view out of a really dark cave, with the low-standing sun in the center of the image.

As far as I know, camera manufacturers claim to do “HDR”, but in reality it seems that they “only” enlarge the available dynamic range of their equipment. Here’s a quote from some ARRI marketing material:

Some manufacturers claim that 16 bit linear is the key to HDR, but it is important to make a distinction between the bit rate used in the camera and that used to store and transport the resultant images. ARRI uses 16 bit linear to capture and process images inside the camera, but then packages them in 12 bit, either as Log C in ProRes files or as a similar log curve in ARRIRAW files

While that is impressive, in my terminology this technique would still end up in the RAW category, because you still work with a finite number of intensity levels: 65536 different levels in the case of 16 bit, 4096 levels in the case of 12 bit. No question, an impressive range, especially if you distribute it evenly in log space, which matches human perception better, as ARRI does.
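
To illustrate why 12 bits in log space can carry what 16 bits hold linearly, here is a toy encoding in Python. It is emphatically not ARRI’s Log C curve, just a generic logarithmic mapping that gives every stop a similar number of code values.

```python
import numpy as np

def encode_log12(linear16, black=16.0):
    """Toy log encoding (NOT Log C): spread 16-bit linear values over a
    12-bit code range so that each stop gets roughly the same number of codes."""
    x = np.clip(np.asarray(linear16, dtype=np.float64), black, 65535.0)
    stops = np.log2(x / black)                       # 0 .. ~12 stops above 'black'
    codes = stops / np.log2(65535.0 / black) * 4095  # map onto 0 .. 4095
    return np.round(codes).astype(np.uint16)

# With 'black' = 16, the encodable range is log2(65535 / 16) ≈ 12 stops,
# and each stop gets about 4095 / 12 ≈ 340 of the 4096 code values.
```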

In fact, in my experience a range of 16 bits is more or less sufficient in the case of Super-8 material, which usually features a lot of under- or overexposed footage. I assume that normal cinema stock is much better exposed in terms of dynamic range, as the illumination is normally tightly controlled during capture. So the demands on dynamic range during scanning might be rather modest. It would be interesting to find out how much the density varies with typical film stock; that would give a lower limit on what the sensor of a film scanner needs to deliver.
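
Since film density is the base-10 logarithm of the attenuation, that conversion is a one-liner; the density figures in the example are purely illustrative, not measured values for any particular stock.

```python
import math

def density_range_to_stops(d_min, d_max):
    """Film density is log10 of the attenuation, so a density range converts
    directly into the dynamic range (in stops) the scanner sensor must cover."""
    return (d_max - d_min) / math.log10(2.0)

# e.g. a print running from 0.05 D to 3.0 D spans about 9.8 stops
print(density_range_to_stops(0.05, 3.0))   # ~9.8
```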

Sure, 16-bit is enough to capture the dynamic range of any film, and 12-bit log is as well, but no sensor’s well depth is enough to manage that in a single pass.
So you end up with missing detail in either the shadows or the highlights: pumping enough light through the film to get the shadow detail blows out the highlights, while keeping the highlight detail leaves underexposed (crushed) values in the dark areas. This is why you need multiple exposures at different light-intensity settings, plus reasonably clever image processing to merge those exposures - capturing all of the detail on the film and throwing away the under-/overexposed sections.
Print film, and especially IB Technicolour film, is extremely dense. It is much easier to capture the dynamic range of a negative than of a print.
The commercial scanners, with the exception of the Lasergraphics Director with its triple flash, struggle to do well with release prints, but do a great job with negs.

I work in scanner design and image processing; the limit in capturing the dynamic range of film is the well depth of the sensors, not the bit depth. You get around the well depth with multiple exposures at different light intensities.

Truly capturing the colour range is another discussion: mathematically, particularly for things like tinted and toned nitrate, it requires a minimum of 24 different colour exposures (rather than the traditional RGB) and some very hairy math.

Having said all of that, for home movies etc. a good current-gen Pregius sensor has low enough noise and deep enough wells to do a good job in a single pass - more than good enough for most uses. If, however, you are an archive, or need to grade the film extensively, then you will get a better result with multiple exposures.


I didn’t explain what we did when developing the Arri sensor: I am not referring to the 16-bit capability of the sensor, but rather to the fact that we made two separate read paths from each pixel, each with its own level of amplification. This is equivalent to taking two exposures at two different light levels. It happens while we are still in the analogue domain; the two signals are fed through 14-bit A/D converters, and those two digital signals are then combined into one 16-bit ‘HDR’ image. This gives us a wider dynamic range than cameras that take just one exposure, and similar dynamic range to capturing two separate images at different exposure levels. Sony has started doing something similar with its latest Pregius sensors. At the moment it is limited to two readouts per pixel, so you can get better results with your own design if you take three or more exposures at differing levels. But sensor tech is enabling this kind of HDR capture within the camera, and it is implemented in the Arri XT scanners.
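
A rough sketch of the idea in Python (not Arri’s actual processing): two readouts of the same pixels at different analogue gains are merged into one wider-range linear value. The gain ratio, the 14-bit scale and the switch-over threshold are all made-up illustration values.

```python
import numpy as np

def combine_dual_gain(high_gain, low_gain, gain_ratio=16.0, full_scale=16383.0):
    """Merge a high-gain and a low-gain 14-bit readout of the same pixels
    into a single wider-range linear value.

    Where the high-gain path is not close to clipping, use it (better SNR in
    the shadows); where it is, fall back to the low-gain path scaled up."""
    hg = high_gain.astype(np.float64)
    lg = low_gain.astype(np.float64) * gain_ratio    # bring both readouts onto one scale
    return np.where(hg < 0.9 * full_scale, hg, lg)

# The merged values span roughly gain_ratio * full_scale levels, which would then
# be re-quantised (e.g. to 16 bit, linear or log) for storage.
```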


… - interesting approach! Things are moving fast these days with camera technology.

About 17 years ago, I worked on the development of a real-time 3D camera using two Neuricam NC1802 “Pupilla” chips - these were 640x480 pixel CMOS optical sensors which utilized photodiodes with a logarithmic response as photosensitive elements. Because of this, the chip had an interesting dynamic range of about 120 dB (which should be equivalent to around 20 bits). The camera worked great and was able to resolve pedestrians dressed in dark clothes while standing between the high beams of a car. However, because each single pixel of that chip had a slightly different response curve, we needed to calibrate and correct every single pixel of the sensor individually in an FPGA in order to obtain a usable image. Also, the sensor resolution obviously wasn’t that great … :smirk:

By the way, here’s a scan comparison of a 12-bit RAW scan (via a Raspberry Pi HQ cam) and a classical HDR scan with 5 different exposures, using the same IMX477R sensor. There is not much difference in visual appearance between the two. That somewhat supports your statement that a single RAW capture with a good sensor should nowadays be able to handle standard film stock in a single scan pass.

The extensive tests from the Diastor project comparing high-end archival scanners found that the best scanner for release prints – even difficult Technicolor IB prints – was the Kinetta Archival Scanner. No need for triple flash. Captures 16-bit RAW files with plenty of dynamic range.

Hi Bob, that paper was limited from the start, as mentioned elsewhere. It’s a cool paper, but incredibly misleading if you take it at face value. My understanding is that both the Arri and the Director they used were out-of-date models run in archives. Some of the other machines may have been out of date as well. For the Kinetta, they got the samples directly from the company, so it was the latest model with the latest technology. They didn’t include the Scanstation or the HDS+ either.

That paper was published in 2018. LG moved to Sony sensors in 2019, and so did Kinetta. Arri, as already discussed, developed a new sensor. Many older scanners are CCD and DPX-only, and most don’t scan prints/dense film well, but many are still in regular use today for production (Blu-ray, streaming, DCP, etc.). The Arriscan was released in 2004 and the Director not long after, so the vast majority of the machines out in the wild will be old, out-of-date models. Arri & LG have invested in continuous development of their machines, pumping out improvements regularly. So of course, if you compare an unknown-model Arri to a current Kinetta, you’re not getting a balanced comparison.

The Kinetta is a good scanner, but it has its limitations, and one of them is that it can’t do continuous-motion HDR. Jeff himself has criticised that design; however, LG has perfected it and it works well. If you compare a 6.5K Scanstation using HDR against the Kinetta with the same Sony imager, you should find the Scanstation can produce a better scan. Comparing single-flash scanning, however, they should be equal or near-equal. The Director, with either a CCD camera or a Sony imager in it, should be better, as it captures discrete R/G/B using a mono sensor.