Picamera2: jpg vs raw-dng

@Manuel_Angel - still having fun with your image and daVinci.

I slightly adjusted the color grade and added two nodes to the node graph of the color page:

Node 01 performs a spatial noise reduction:


and Node 02 a sharpening operation:

This is the result:

Not bad for my taste with a workflow only utilizing daVinci and no additional software…

4 Likes

This result looks great! I am especially jealous of the sharpening. It’s quite a bit sharper without actually looking sharpened. I remember observing this in one of your previous posts a long time ago already (I believe it was a frame with a bus on a road in the mountains). Somehow, I can never quite get this right, and even when I only apply sharpening lightly, it always looks somehow over-sharpened.

daVinci’s sharpening is very sensitive and difficult to adjust. That’s why I prefer to use my own tools for that. The “bus” is such an example.

The strategy is always the same: you need to reduce the film grain as well as possible, otherwise only the film grain gets sharpened. That is what node 01 does in the above example. Normally, you would employ temporal denoise as well, because it is much more effective on film grain (while there is a spatial correlation with respect to film grain, temporally, film grain is uncorrelated). Since in this example I only had a single frame, there was no option for this step.

Once you have gotten rid of the film grain (by whatever option is available), a mild sharpening can be applied. The main controls in daVinci are the radius (stay close to 0.5) and the scaling, which seems to vary the amount of sharpening applied. But I have yet to find proper documentation of what these controls really mean in terms of image processing…
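For intuition, a classical unsharp mask is driven by a very similar radius/amount pair. Below is a minimal OpenCV sketch of that idea - to be clear, this is only my own illustration of the concept, not what daVinci actually computes internally, and the parameter values are just starting points:

```python
import cv2
import numpy as np

def unsharp_mask(img, radius=0.5, amount=0.3):
    """Classical unsharp mask: result = img + amount * (img - blurred).
    'radius' is the Gaussian sigma in pixels; 'amount' scales the added detail."""
    img_f = img.astype(np.float32)
    blurred = cv2.GaussianBlur(img_f, (0, 0), sigmaX=radius)
    sharpened = img_f + amount * (img_f - blurred)
    return np.clip(sharpened, 0, 255).astype(np.uint8)

# denoise first, then sharpen mildly - otherwise only the grain gets enhanced
frame = cv2.imread("denoised_frame.png")      # hypothetical file name
result = unsharp_mask(frame, radius=0.5, amount=0.3)
cv2.imwrite("sharpened_frame.png", result)
```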

@cpixip,

The result of the last treatment given to the image has been truly great.

The image is absolutely unbeatable: vivid colors, well saturated without exaggeration, detail in all areas, and what has surprised me the most is the increase in sharpness.

It has been fully demonstrated what can be achieved by knowing what we do and using good tools.

I have had version 17.3 of Da Vinci Resolve installed on my Linux machine for quite some time, though I have never used it.
It is clear that I have to learn how to use the program, although it commands a bit of respect: the user manual is more than 3000 pages.

It seems that taking the raw-dng route and further processing with DaVinci Resolve is a very good option to follow.

However, in principle, I see two problems:

  • Logically, a film will contain very bright scenes and others that are noticeably darker, scenes with little contrast and others with high contrast. So we can’t simply scan the whole film with the same exposure, and with the HQ camera and raw shots it is hard to decide on the right exposure; we don’t get the kind of help a digital still camera provides.
  • On the other hand, the consumption of resources is enormous. For a simple 15 m Super8 film, we would end up with 3600 dng files of 17.6 MB each, if we scan at the camera’s full HQ resolution. If we were content to scan at the intermediate resolution (2028x1520 px), the size would be reduced to 4.4 MB per frame.

Here is the same image scanned at the aforementioned resolution of 2028x1520 px, in case you consider it appropriate to give it the same treatment and compare results.

Thanks again for your excellent contributions.

Hi @Manuel_Angel - the current version of daVinci is 18.5. And yes, there is a steep learning curve involved. The reason: this is a very professional program and a lot of advanced stuff is somewhat hidden from the average user.

However, there are free tutorials available from Blackmagic Design and a lot more if you simply search on youtube for “davinci” and the topic you are interested in. I highly recommend obtaining the Studio version, which costs around 330 €. For our specific purpose, the temporal and spatial noise reduction features are important to have. The Studio version also features quite a lot of other goodies, like automatic scene detection.

Anyway, here’s the result with your reduced size raw image:

using the same processing path as the previous example. The differences are tiny, if you compare that to the previous result. This might be correlated to the input setting I used:


The decode quality is only equivalent to the resolution of the project, in my case 1920 x 1080.

The largest film roll I have contains approximately 50000 frames (about 46 min running time @18 fps) - this would amount to about 0.88 TB of data when scanning at full resolution. So a dedicated SSD like the Samsung T5 (1TB) is sufficient. And that is actually the disk I am using during scanning (both on the PC and directly on the RP).

2 Likes

I think that scanning raw might be even easier, in a sense. The following is getting a little bit technical, but I will try to keep things simple and (maybe) short.

The problem arises with the immense dynamic range a normal color-reversal film can feature. A normal camera with, say, 12 bits per color channel simply cannot resolve this dynamic range. One would need at least 14 bit, preferably more.

One way out of this is to capture several differently exposed images and combine them into a final one. This is the basis of the Mertens exposure fusion algorithm. Or, if you dare, to create a real HDR from the exposure stack.
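For reference, OpenCV ships a ready-made implementation of the Mertens algorithm; a minimal sketch of its use follows. The file names are placeholders, and note that my own scanner uses a different implementation than the opencv one:

```python
import cv2
import numpy as np

# a 4-exposure stack of the same frame (placeholder file names)
paths = ["frame_e0.jpg", "frame_e1.jpg", "frame_e2.jpg", "frame_e3.jpg"]
stack = [cv2.imread(p) for p in paths]

merger = cv2.createMergeMertens()      # default contrast/saturation/exposedness weights
fused = merger.process(stack)          # float32 output, roughly 0..1, may overshoot slightly

out = np.clip(fused * 255.0, 0, 255).astype(np.uint8)
cv2.imwrite("frame_fused.png", out)
```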

Another, alternative way discussed in this thread is to capture just a single raw-dng file and process this appropriately. Advantages are: speed, as just a single capture per frame is taken, and therefore much faster processing.

The disadvantage is that you need to get the exposure right. If you do it right, a raw 12 bit per channel capture will be visually indistinguishable from a Mertens merge.

But how do you set exposure? Well, if you overexpose your frame, you will lose all highlight detail. But there is a simple and sure recipe to avoid that situation. Simply adjust your exposure in such a way that the empty film gate gives you the full image intensity without being clipped. Since anything in the film gate, including transparent highlight areas in your frame, will reduce the amount of light arriving at your camera sensor, that data will surely not be clipped. In fact, since the raw-dng is a linear record of the intensities your sensor is recording, the situation is actually slightly more favorable than with the non-linear JPG output used in the Mertens approach.
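In practice, this check is easy to automate. Here is a minimal picamera2 sketch that captures the raw data of the empty film gate and reports how close its peak sits to the 12-bit clipping point - treat it as a sketch: the exposure value is just a starting guess, and the uint16 view assumes an unpacked 12-bit raw format on the HQ camera:

```python
import numpy as np
from picamera2 import Picamera2

picam2 = Picamera2()
# request an unpacked 12-bit raw stream so the Bayer data can be viewed as uint16
config = picam2.create_still_configuration(
    raw={"format": "SRGGB12", "size": picam2.sensor_resolution})
picam2.configure(config)
picam2.set_controls({"AnalogueGain": 1.0, "ExposureTime": 2400})   # starting guess
picam2.start()

raw = picam2.capture_array("raw").view(np.uint16)   # Bayer mosaic, values 0..4095
peak = np.percentile(raw, 99.9)                     # ignore a few hot pixels
print(f"empty-gate peak: {peak:.0f} / 4095 ({100 * peak / 4095:.1f} %)")
# reduce ExposureTime (or the lamp) until this peak sits just below clipping
```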

The downside of the raw-dng approach is actually hidden in the shadows. They will not be covered as well as is possible with a multi-exposure approach. Does it matter? Probably not, because of two things. First, shadow areas will show up in your final footage as rather dark areas anyway. A loss of precision will be barely noticeable, especially with small format film and its excessive film grain in those dark areas. And, if you employ an additional film grain reduction step, this step will also take care of the small errors caused by the low intensity resolution in dark areas of your single raw-dng.

I am still in the process of comparing these two approaches with each other, so I do not yet have a final answer on how much the use of a single raw-dng might affect image quality in dark areas. I think that multiple exposures might have an advantage when it comes to severely underexposed images - a thing which happens quite a lot with non-professional footage - and extreme color challenges, like old Ektachrome film stock faded into pink. But again, that needs to be tested.

In summary: if you set your exposure in such a way that the empty film gate gives you full white in the raw-dng without being overexposed, you’re safe for all of your scans, irrespective of the film material you are going to scan.

2 Likes

Just a quick update. I modified my scanning routine to take raw-dng and multiple exposures of a frame simultaneously. Then I color graded both sources so that they looked more or less identical. I realize now that using full frame raw-dngs puts quite a load on the hard disk’s throughput - 18 fps at about 18 MB per frame equals roughly 325 MB/s. So working in daVinci kind of becomes lame - even with render cache and proxies enabled.

Colorwise, both approaches are more or less identical when tuned that way. This mainly required pushing the saturation of the exposure-fused results from the default “50” to “72” or so. While being overall more or less identical, the exposure-fused result had a tiny bit more brilliance in the mid-tones. The most surprising result was the difference in noise between the raw-dng and the exposure-fused image. Will continue to experiment…

In addition to those explained by @cpixip above, I would add that another downside shows up when multiple scenes in the same film are significantly underexposed. The method to set the range will result in those underexposed scenes being quantized with fewer bits, and when gain is increased in post, the under-quantization will show.

HDR and Mertens multiexposure have the toll of processing time.

A poor-math-man alternative is to do multi-exposure bracketing. Use the method above to set light/shutter for a properly exposed scene. Then, at capture, do an additional (or two) longer shutter captures. In post, if a scene is underexposed, simply pick the alternate frame sequence for the scene, and done.

Another alternative I would like to try at some point is to blend two (dng) or more (jpg) different exposure sequences in Davinci Resolve, instead of using algorithmic stacking (Mertens).

The shortcomings of the well-exposed 12 bit dng and/or underexposed scenes will be complemented by the second exposure blend in Resolve. Some curves and nodes should do the trick. Another advantage of this approach is that the blending should also reduce the noise of the resulting image - maybe only a little, but still less.

While these alternative methods require more than one exposure, depending on the film content, it would be a small price for the benefits.

As has basically been said already by @cpixip, you can pretty much expose for the highlights and you should be close enough to be able to fix bad exposures in post. In my experience, this works very well in practice. In fact, I have set my exposure on my own scanner once and never changed it since for probably about 50 rolls of film shot in many different conditions (from dimly lit indoor shots to bright sunlight) and on different film stocks. I was even able to recover seriously underexposed shots, which brings me neatly onto this next discussion.

My particular seriously underexposed shots were so underexposed that viewing them on a projector you can hardly even tell what they show. Yet, with the same exposure I always use, the image can be recovered. I think I’ve mentioned it on another thread, but even if I expose for these underexposed scenes, the image does not look any better than it does using my usual exposure and then increasing the gain in post using the RAW dng files. As said by @PM490, the “under quantisation” is probably there when pulling up the gain in post on those underexposed frames, but frankly, if they are that underexposed, the film itself already doesn’t look great. In my experience, the underexposed film looks worse than the “under quantised” shadows, making the latter more or less insignificant to the quality of the final result.

1 Like

By fixed exposure I mean the same shutter speed, gain, and lens setting for the complete film. A normal scene would result in the full quantization, meaning blacks would be near zero and whites near full range (4096 for 12 bit), easy to see in the waveform. In a badly underexposed scene, whites would instead be significantly lower, sometimes below 400 (10 times less than normal).

Not sure I follow… How would the same exposure (fixed exposure) work equally well for a well-exposed scene and a seriously underexposed scene?

The under-quantization problems are easily seen in Resolve. After adjusting the gain/gamma, adjusting the lift controls moves the waveform in abrupt, step-like bands, rather than the ultra-smooth adjustments typically expected. It will also hinder an accurate color correction of dark areas.
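A tiny numpy simulation makes the effect visible - the numbers are only illustrative, matching the “whites around 400 of 4095” case above:

```python
import numpy as np

full_scale = 4095            # 12-bit raw
scene_peak = 400             # badly underexposed scene: whites only reach ~400

gradient = np.linspace(0.0, scene_peak, 1920)   # a smooth ramp as it leaves the film
quantized = np.round(gradient)                  # integer code values the sensor records

gain = full_scale / scene_peak                  # ~10x gain applied in post
lifted = quantized * gain

print(f"{np.unique(lifted).size} distinct levels after a {gain:.1f}x gain")
# ~400 levels instead of 4096 -> the abrupt, step-like bands seen in the waveform
```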

… as indicated above, I continue to experiment with direct RAW capture. Actually, I have tried RAW capture for quite some time, but always with mixed results. The raws you could get out of the HQ camera lacked a lot of information necessary for appropriate raw development. This situation has now improved to a point where a picamera2-based raw capture can be piped directly into daVinci as well as other software, for example RawTherapee, with good results.

For a long time, I promoted multi-exposure capture and exposure fusion on this forum as an easy way to acquire decent captures of old color-reversal film. While exposure fusion gives you quite pleasing results, there are some deficits hiding behind this algorithm. Specifically, the light intensities are modified, as well as the color values. Something you will do in color grading as well. But while in color grading you do this manually, in a guided way, exposure fusion does it automatically, behind the scenes.

Let’s look at an example of a frame captured simultaneously in RAW and as a 4-exposure image stack, exposure-fused via the Mertens algorithm:

Clearly, the outcome of the RAW is different from the exposure fusion, in many ways.

[Technical details: The exposure/illumination was set in the scanner in such a way that the empty film gate was at the 83% level of the full 12-bit dynamic range of the HQ camera sensor. That resulted in a level of about 240 in the darkest jpg output (the “highlight” jpg, pictured above). Burned-out image areas in the film frame reach only a level of approximately 70% - so nearly 13% of the light is eaten away by the blank emulsion of this Kodachrome. Clearly, that is not an ideal setting for the RAW capture approach; however, I wanted to be as close as possible to the optimal setting for exposure fusion.

For the whitebalance, the area of the sprocket hole was used as well. The RAW as well as the exposure fused footage was loaded into daVinci. To produce the above example, only lift and gain was adjusted in such a way that visually similar results were obtained.]

The first thing to note in comparing RAW and exposure-fused results is the difference in color. While the RAW result closely mimics the highlight LDR capture, the exposure-fused result drifts toward magenta. Clearly, the exposure-fused result outperforms the RAW in the shadow areas (see for example the difference in the people’s faces). Note however that no shadow or highlight adjustments are active in the above example. So the full potential of the raw has not yet been unlocked.

Speaking of highlights - comparing the boat’s hull in the lower-left corner of the frame: here the unadjusted RAW outperforms the exposure-fused result! Frankly, I did not expect this outcome. It is reproducible with other capture examples and seems to be connected to high-contrast scenes where the specific exposure-fusion algorithm I am using (it’s not the opencv one) has difficulties squashing all the information into an 8-bit per channel image.

Generally, the exposure-fusion algorithm performs quite well and delivers images which are nice to view. Here’s another example:

Note that the color cast visible in the highlight LDR shows up again in the RAW capture. The exposure-fused result looks better, color-wise. But remember - only lift and gain have been used to align the results of both capture paths.

Trying to make both ways more comparable, here’s the result of this effort:

Specifically, the RAW capture now has a shadow and highlight treatment (shadows: 50, highlights: -6). Color temperature and tint were also adjusted (Temperature: -390, Tint: 22). Saturation was also increased slightly, by setting Sat to 63.

Now RAW and exposure-fused captures become comparable. It seems that exposure fusion generally creates a slightly more “brilliant” image per se. This is to be expected from the way exposure fusion works and is a general trend. However, one should be able to handle this with further color grading efforts.

Now turning toward another issue discussed above - severely underexposed images. Here’s the example I selected:

While the highlight LDR shows nearly nothing, the unmodified RAW capture already displays a little more image content, and the exposure-fused result outperforms the RAW one. This footage is quite difficult to grade; I tried my best and arrived at this result:

Clearly, the RAW result is more noisy, as would be expected. The noise is mainly present in the red channel - this channel is traditionally noisier than the other color channels in HQ footage. Here’s the RAW frame developed not in daVinci, but in RawTherapee:

Notice the horizontal banding? To make it obvious, here’s the red channel alone:

So Pablo (@PM490) is somewhat correct in his assessment. But Jan (@jankaiser) is as well. One has to take into account that we are melting down a raw 4K image to at most a 2K output image - and if this is done right, one gains a bit of dynamic depth by the size reduction (averaging four sensor pixels into one output pixel halves the random noise, roughly one extra bit). Also, you probably would keep footage that dim nearly as dim as it already is - most probably you would not increase brightness so much that you turn night into day…

For my own scanner/workflow, I do see advantages in switching to a raw workflow. I will have to write additional software (for example, a sprocket-alignment procedure for raw files) and I need to look (again) more closely into the color science involved (I have the suspicion that the embedded camera metadata of picamera2 is still not 100% correct).

[For people doing their own experiments, maybe using RawTherapee as a development tool: at least my program version enables “Capture Sharpening” by default, like here:

RawTherapee default

Make sure to turn this off, like so:

RawTherapee correct

… that’s all for now :sunglasses:]

4 Likes

@cpixip,

A truly interesting comparative study.

For my part, I am in the process of adapting my software to make the captures in jpg (with or without HDR fusion), raw-dng, or both at the same time; this way, we can always choose whichever option suits us best.

The first results are quite promising. Raw captures, despite the considerable size of the generated files, are made in similar times to jpg captures with HDR fusion.

I’ve run into an unexpected problem. The computer I use for the captures is not suitable for DaVinci Resolve. Within a few minutes it reaches such a high temperature that it locks up and restarts spontaneously.

I have to do some batch testing with RawTherapee to see what happens.

3 Likes

@Manuel_Angel - daVinci is quite demanding. But the days when a program might crash a computer should be over. A new Studio version was released just a few days ago - you might try this one out. One common failure mode which still exists with high-demand software: trying to run it on a notebook. Notebooks are notorious for insufficient venting. Pair that with too much dust in the venting channels and you are asking for trouble. Cleaning might improve the situation, but I would never do editing on a notebook. Get the best desktop machine you can afford, and consult the software manufacturer for appropriate specs.

On the other hand - processing raws into 16 bit pngs outside of your editing program might give a speed advantage during editing and color grading. Raws tend to slow down daVinci quite a bit and dedicated raw developing programs might have a few more advanced options for development. Not sure what the optimal workflow will look like.
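As an illustration of such an intermediate step, here is a hedged batch-conversion sketch based on the rawpy library (LibRaw under the hood); the folder layout and the development settings are assumptions, not a recommendation:

```python
import glob
import cv2
import rawpy

for path in sorted(glob.glob("scan/frame_*.dng")):
    with rawpy.imread(path) as raw:
        # near-linear development: camera white balance, no auto brightening, 16 bit out
        rgb = raw.postprocess(use_camera_wb=True, no_auto_bright=True,
                              gamma=(1, 1), output_bps=16)
    # OpenCV writes 16-bit PNGs, but expects BGR channel order
    cv2.imwrite(path.replace(".dng", ".png"), cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR))
```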

3 Likes

Picking this conversation up from the other thread, now I’d be curious to see how the RAW version of that underexposed fish example turned out with a handful of identical exposures averaged together (without the fusion algorithm).

That should clean up most of the characteristic rolling shutter banding and get the noise floor down in the vicinity of the fused version.

Yep, probably. It’s a thing astrophotographers are doing all the time: merging hundreds of identical noisy exposures into a nice noiseless version. I looked into this some time ago, with mixed results. Besides, storing a single raw takes quite some time and space. Storing several raws for each single frame, for a film roll with 10s of thousands of frames, seems not very inviting from my point of view. There was a script by the developer of picamera2 which did frame-averaging before the frame was written as a .dng to disk. However, I have lost the location of this script. Maybe it can be found in picamera2’s github. The results of my tests at that time did not convince me to test this approach further.

Note that the above experiments were performed with a non-optimal exposure setting. Setting the empty frame at 85% of the full dynamic range is way too conservative. The next test will be with an exposure setting just slightly above the Kodachrome’s clear-film intensity. That is, the white level will not be 70% as in the above example, but 95% or so. That should improve things.

1 Like

Is this the script?
https://github.com/raspberrypi/picamera2/blob/main/examples/stack_raw.py

@justin - thanks, yes, that’s the script!

I agree that writing the intermediate captures to disk wouldn’t be great, even just from a wear-and-tear perspective on the drive.

Since I’m not using an RPi HQ camera, sometimes I feel like a bit of an outsider. My Lucid model is a little bit scary sometimes, streaming full-size RAW sensor data at ~18 fps over gigabit Ethernet. I only got a sense for how much data was being thrown around the first time I noticed the network traffic in Task Manager while the camera was running:


The API to retrieve the images just gives you a flat, uint16 array of the pixels, already in-memory. (And because it’s a monochrome sensor, I get to skip the debayering step.) It’s one line to copy it into an OpenCV mat. Then, 214ms later, after another four captures have arrived, it’s an OpenCV one-liner to average them together.
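For anyone curious, that averaging really is just this (a small numpy sketch of the idea; `frames` stands in for the five captured uint16 arrays and isn’t a variable from my actual code):

```python
import numpy as np

# frames: list of 5 identical-exposure captures, each a uint16 array from the camera
stacked = np.stack(frames).astype(np.float32)
averaged = stacked.mean(axis=0)            # random sensor noise drops by ~sqrt(5)
result = np.clip(averaged, 0, 65535).astype(np.uint16)
```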

From an RPi HQ perspective, I can see why multiple RAWs do seem a little impractical. But so far - from my different world - I’ve seen some really promising results. To try it out I dramatically underexposed some normal footage, but after pulling it back up in post, spending the extra fraction of a second to grab a few more images appears to be worth it to knock the high-frequency sensor noise most of the way back down, leaving mostly lower-frequency film grain noise.

(These are people in the background of some eBay footage–where the noise pattern was easily visible–under green light. By “dramatically underexpose”, even the highlights of her dress had the equivalent of an 8-bit gray value of 1 out of 255 before developing.)

From the RPi HQ perspective, the value trade-off isn’t quite as straightforward, but evaluating it as a general technique, for stopped-motion scanners, it’s a dependable way to buy another couple bits of dynamic range under otherwise tricky circumstances. And for faster cameras that don’t have to write to disk first, it’s an easy decision.

2 Likes

@cpixip,

Practicing with DaVinci Resolve and following your appropriate indications, I have managed to reproduce the treatment of the raw-dng image used as an example.

At the moment I have the free version, so when the “Spatial Threshold” section is configured, a watermark appears on the final image.

Cheers

2 Likes

Ok, some more remarks.

The 12-bit per channel dynamic range of the HQ sensor/camera is not sufficient to cover the range of densities one will encounter in small format color-reversal film. You will have problems in the very dark areas of your footage. Here’s some footage from the above scan showing some more examples [Edit: replaced the old clip with a clip using a higher bitrate]. (You absolutely need to download the clip and play it locally on your computer. Otherwise, you won’t see the things we are talking about. You should also notice some banding in the fish sequences and a somewhat better performance of the raw in highlight areas.)

As remarked above, this scan was not optimal for raw capture. Improvements possible:

  1. Work with an exposure setting so that burned-out areas of the film stock in question are at 98% or so of the total tonal range. One can get very close to 100%, as we are working in a linear color space (RAW). This is different from .jpgs which are non-linear by design and thus you have to be more conservative with the intensity mapping (I personally use the 93% mark, which is equivalent to a 240-level on an 8-bit per channel image).

  2. Don’t push the shadows that much in those critical scenes. That’s an easy measure as you just have to accept that this limit exists in RAW capture mode. Done.

  3. Employ noise reduction. There are two opposite positions out there: one states that “the grain is the picture”, the other one that “the content is the picture”. The latter one potentially opens up the possibility of drastically enhancing the appearance of historic footage and should help with the low-intensity noise of the RAW capture method as well. I will have to look into this with respect to RAW captures; here’s an example of what is possible on exposure-fused material:

(To the right the original, noisy source, left and middle section slightly differently tuned denoising algorithms, A 1:1 cutout of an approximately 2k frame.)

Another option to tackle the just-not-enough dynamic range of the HQ sensor is to combine several RAW captures into a higher dynamic range RAW. Two options have been discussed here:

  1. @npiegdon’s suggestion of capturing multiple raws with the same exposure setting and averaging out the noise. The above example of @npiegdon shows that it works. The left image shows, if you look closely, the horizontal “noise” stripes; the right, averaged picture lacks that “feature”.

  2. @PM490’s suggestion of combining two or more raw captures with different exposure times to arrive at a substantially improved raw.

Both approaches might be quite feasible, as the HQ sensor delivers 10 raw captures per second, so there is plenty of data available for such procedures. Only writing the raw data finally as a .dng to disk slows down the whole procedure, taking about 1 sec per frame.

While you need quite a few captures for approach 1 (I think the 5 exposures @npiegdon used in his examples are something of a sweet spot), you potentially only need 2 appropriately chosen exposures for approach 2. And the digital resolution achievable in the shadows would be better than in approach 1.

I’ve been running tests along these lines, albeit some time ago. Can’t really find any detailed information right now, have not kept enough records. Anyway, here’s the attempt to recover this:

First, the noise reduction in @npiegdon’s approach scales (assuming independent noise sources) like one over the square root of the number of captures. So initially you get a noticeable advantage, but further improvement requires ever more captures. If you can get a lot of images fast, that’s a great way to get rid of sensor noise. In the case of the HQ sensor, things move substantially slower - we get only 10 fps out of the sensor at 4k.
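A quick simulation shows both the square-root law and the diminishing returns (the signal level and noise figure here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
signal, sigma = 1000.0, 50.0      # arbitrary level and per-capture noise

for n in (1, 2, 5, 10, 20):
    captures = signal + sigma * rng.standard_normal((n, 100_000))
    measured = captures.mean(axis=0).std()
    print(f"{n:2d} captures: noise {measured:5.1f} (predicted {sigma / np.sqrt(n):5.1f})")
# 5 captures already cut the noise to ~45%; going from 10 to 20 gains only another ~30%
```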

Secondly, with a few tricks the HQ sensor can be persuaded to rapidly switch exposure times. This opens up the possibility to capture a raw exposure stack and combine the captured raws somehow. The upper row of the following image shows three different raw captures (transformed with wrong color science to sRGB-images - that was done some time ago)

img0 is the classical “don’t let the highlights burn out” base exposure, the two other raw exposures are one f-stop brighter.

Since raw images are in a linear color space, aligning the data should be trivial, using appropriately chosen scale factors. That’s what the lower row shows.
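In code, that alignment is little more than a black-level subtraction and a division by the exposure ratio. A sketch, assuming `img0_raw` and `img1_raw` hold the 12-bit Bayer data of the base and the +1-stop capture, and a black level of about 256 (an assumption - check your own DNG metadata):

```python
import numpy as np

BLACK = 256        # approximate HQ sensor black level at 12 bit - verify against your DNGs
F_STOPS = 1        # the brighter capture was exposed one stop longer

def to_linear(raw12):
    """Subtract the black level so pixel values are proportional to light."""
    return np.clip(raw12.astype(np.float32) - BLACK, 0.0, None)

dark = to_linear(img0_raw)              # base "protect the highlights" exposure
bright = to_linear(img1_raw)            # +1 f-stop capture

# in linear space the two captures differ only by the exposure ratio
bright_aligned = bright / (2.0 ** F_STOPS)
```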

Here are the results I obtained at that time (note the exposure-fused result in the lower left corner for comparison):

“img0|img1” indicates for example a combination of raw image 0 and raw image 1. You cannot simply average both images together, as the blown-out areas of image 1 destroy the image structure in those areas. The images were combined differently: if the intensity of a pixel was above a certain threshold, data from the dark image was taken; if it was below that threshold, data was taken from the brighter image.

In this way, a good signal was obtained in the shadow areas of the frame, with only two captures. However, in the vicinity of the chosen threshold, there was some slight banding noticeable. Therefore, a second approach was tried - namely blending in a soft manner over a certain intensity range.
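Continuing from the scaled captures above, such a soft blend can look like the sketch below. The transition band is an assumption that needs tuning per film stock; a hard threshold is simply the limiting case where the band shrinks to zero - which is exactly what produced the banding:

```python
import numpy as np

# dark, bright_aligned: black-level corrected, exposure-aligned linear raws (see above)
lo, hi = 500.0, 900.0      # transition band in dark-capture units (assumed, tune per film)

# weight 1 -> take the dark capture (safe highlights), 0 -> take the bright one (clean shadows)
w = np.clip((dark - lo) / (hi - lo), 0.0, 1.0)
merged = w * dark + (1.0 - w) * bright_aligned
```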

That blend needs to be optimized. You do not want shadow pixels from the dark raw image, and you do not want highlight pixels from the bright raw image. I did get some promising results, but at the moment, I can’t find them. As the color management/raw handling of the picamera2 lib was broken at that time, I did not continue that research further. One issue I have here with my scanner (because of its mechanical construction): there is a slight movement between consecutive exposures, and I need to align different captures. Not that easy with raw data. This might not be an issue with your scanner setup.

Anyway, I think it’s time now to resurrect this idea. Getting two differently exposed raws out of the HQ sensor will take about 0.2 sec; storing that onto an SSD attached to the RP4 will eat up 1 sec per frame, but because of my plastic scanner, I need to wait a second anyway for things to settle down mechanically after a frame advance. Potentially, that could result in a capture time of 1.5 sec per frame. I currently need about 2.5 sec for a 4-exposure capture @ 4k, so in fact the raw capture could be faster.

As far as I know, in the .dng files picamera2 creates, the color science is based on forward matrices Jack Hogan came up with. These are actually two matrices for two different color temperatures, one at the “blueish” part of the spectrum, one at the “reddish” part.

In your raw software, once you select “camera metadata” (or a similar setting), the raw software looks at the color temperature the camera came up with and interpolates a new color matrix from the extreme ones stored in the .dng file. I am not sure whether the color temperature reported by the camera in the .dng files is the correct one when using manual whitebalance (red and/or blue gains). I have to check this. That might throw off the colors.
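For the curious, the interpolation a DNG-style raw developer performs between the two stored matrices is, as far as I understand the DNG specification, linear in inverse color temperature (mired). A small sketch of that logic - the illuminant temperatures are just typical values:

```python
import numpy as np

def interpolate_color_matrix(cm1, cm2, cct1=2850.0, cct2=6500.0, cct=5000.0):
    """Blend two 3x3 calibration matrices the DNG way: linearly in 1/CCT,
    clamped to the calibrated range. cm1 belongs to the low ("reddish") illuminant,
    cm2 to the high ("blueish") one."""
    cct = np.clip(cct, min(cct1, cct2), max(cct1, cct2))
    w = (1.0 / cct - 1.0 / cct2) / (1.0 / cct1 - 1.0 / cct2)
    return w * np.asarray(cm1) + (1.0 - w) * np.asarray(cm2)

# if the CCT reported under manual white balance is wrong, the matrix picked here -
# and with it the colors - will be off as well
```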

For people who are familiar with raw processing, there are additional .dcp files specifically for the HQ camera you might want to try when going from raw to developed imagery. You can find and download them from here.

If you use either the color matrices embedded in the camera’s metadata or the .dcp profiles Jack came up with, you will always get an interpolated matrix, based on the current color temperature. Ideally, you would want to work with a fixed color matrix calibrated to your scanner’s illumination. That’s what I did until recently, when the Raspberry Pi foundation changed the format of the tuning file without announcement. Currently, I am using the “imx477_scientific.json” tuning file for .jpg capture, which features optimized color matrices for a lot more intermediate color temperatures. So, color science with the HQ camera still seems to be a mess - I did not yet have the time to sort this out.

So, wrapping up this long post: raw capture seems to have some advantages compared to exposure fusion, as well as some drawbacks. It seems that several raw captures combined into a super-raw might be the way to go forward. Raw captures record colors more consistently than the results of an exposure fusion; manually tuning highlights and shadows in raw captures seems to deliver the same or better results than the automatic way exposure fusion does it. There are issues in the dark parts of raw captures - they limit what you can do in post. Alternative approaches like combining several raw captures into a super-raw might be feasible, depending in part on the mechanical stability of your scanner.

5 Likes