@peaceman: very interesting results! Indeed, your comparison clearly shows the heavy image processing that is nowadays so common, and so unavoidable, with consumer sensors.
By the way - even the raw image of the HQCam already has some image processing applied. I have seen noise measurements of raw HQCam captures that hint at some spatial filtering applied at the sensor level. Also, there are certainly dead-pixel algorithms at work before the HQCam ships out its "raw" frame.
Usually it is hard to get such information, as chip manufacturers tend to keep their secrets, and it is nearly always impossible to switch this processing off.
Anyway - what would be very interesting to test (if you have the time and setup) is how the 12bpp of the raw image file compares against the dynamic range of a standard Kodachrome color-reversal frame with large shadow areas.
To elaborate: color-reversal film is usually exposed so that the lights don't burn out. During projection, detail loss in the shadows is not as annoying as burned-out highlights. Also, our eyes, as "consumers" of projected film frames, surpass the capabilities of current sensor chips by a wide margin.
In order to reproduce highlight details faithfully in a scan, one would usually choose an exposure for a single-frame capture so that the brightest image parts still stay within the dynamic range of the full digital image path.
For example, I am working with 8bpp (and HDR later in my processing pipeline). So my exposure is set in such a way that the sprocket hole of the film (which is pure white by definition) maps to something in the 240-250 value range (the maximum value representable with 8bpp is 255). With the 12bpp of a raw HQCam image, you could probably go up to values of 4000+ or so for pure white.
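Just to make the arithmetic explicit, here's a tiny sketch of how the 8bpp white target translates to the 12-bit range of a raw HQCam capture (the exact scaling is an assumption - real sensors also have a black-level offset):

```python
def to_12bit(value_8bit: int) -> int:
    """Scale an 8-bit code value (0-255) to the 12-bit range (0-4095)."""
    return round(value_8bit * 4095 / 255)

# The 240-250 white target at 8bpp lands around 3850-4015 at 12bpp,
# which matches the "4000+ or so" ballpark for pure white.
print(to_12bit(240), to_12bit(250), to_12bit(255))
```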
Now it would be interesting to see how such an exposure setting performs not in the highlights, but in the darker image areas of a frame. Specifically, how far can you go toward darkness?
There are two interesting points here. First, there will be more noise in the lower bits of a capture. This will raise the noise floor in dark, dense image areas compared to other areas. The second point to look at is the limited number of code values available in these dense film areas - only a few bits of the image data actually change there. This could lead to banding and quantization artifacts (it might not - that would be an interesting result).
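The quantization point can be illustrated with a quick sketch: count how many distinct code values a smooth dark gradient survives with at 8 vs. 12 bits (the 1.5% shadow range is an arbitrary illustrative choice, not a measured film density):

```python
import numpy as np

# A smooth dark gradient occupying the bottom ~1.5% of the sensor's range,
# quantized once at 8 bits and once at 12 bits.
gradient = np.linspace(0.0, 0.015, 10_000)

levels_8 = np.unique(np.round(gradient * 255)).size    # distinct 8-bit codes
levels_12 = np.unique(np.round(gradient * 4095)).size  # distinct 12-bit codes

print(levels_8, levels_12)
```

With only a handful of levels at 8bpp, any later shadow lift will stretch those few steps apart and banding becomes visible; the 12-bit capture has an order of magnitude more levels to work with in the same range.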
From my experience, a normal Super-8 frame basically never shows really dense areas, even in very dark shadows. Usually, the black mask around the frame is the densest part you normally encounter (and you would probably not care to scan these areas in high fidelity). However, in a fade you might encounter really dark areas in the frame itself.
Well, it would be interesting to see the limits of a raw capture here, given the challenges of this approach - you have to cope with large file sizes, low transfer rates and the need to "develop" the raw frame into something displayable on today's 8bpp displays... (granted, this last point is about to change with HDR displays)
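For completeness, here's a minimal sketch of what that "development" step could look like - black-level subtraction, normalization and gamma encoding down to 8bpp. The black level of 256 and gamma of 2.2 are assumptions for illustration; a real pipeline would also demosaic, white-balance and color-correct:

```python
import numpy as np

def develop(raw: np.ndarray, black_level: int = 256, white_level: int = 4095,
            gamma: float = 2.2) -> np.ndarray:
    """Minimal 'development' of a 12-bit raw frame into an 8bpp image.

    Steps: subtract the (assumed) black level, normalize to [0, 1],
    apply a simple display gamma, scale to the 8-bit range.
    """
    x = raw.astype(np.float64)
    x = np.clip((x - black_level) / (white_level - black_level), 0.0, 1.0)
    x = x ** (1.0 / gamma)  # simple gamma encoding for display
    return np.round(x * 255).astype(np.uint8)

# Synthetic mid-grey 12-bit frame, just to exercise the function.
frame = np.full((4, 4), 2048, dtype=np.uint16)
print(develop(frame))
```

This is of course the trivial version - the interesting (and compute-heavy) part in practice is everything this sketch leaves out.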