Well, I think one of Jack Hogan's profiles is just ASCII-encoded, so you might be able to extract the appropriate color matrices and forward matrices, as well as the corresponding calibration illuminants. This might get you going; but maybe the software creating the .dng files in the first place uses Jack Hogan's data anyway - as I mentioned, I do not know the current state of affairs here, but there had been developments the last time I checked.
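If the profile ends up as (or is embedded in) a .dng/.dcp file, the relevant tags can simply be read out; here is a minimal sketch, assuming exiftool is installed and using a purely hypothetical file name:

```python
import subprocess

# Hypothetical file name - any .dng produced by your capture software
# (or a readable .dcp profile) would do.
PROFILE = "hq_cam_profile.dng"

# Standard DNG tags holding the color science.
tags = [
    "ColorMatrix1", "ColorMatrix2",
    "ForwardMatrix1", "ForwardMatrix2",
    "CalibrationIlluminant1", "CalibrationIlluminant2",
]

result = subprocess.run(
    ["exiftool"] + [f"-{t}" for t in tags] + [PROFILE],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```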
Well, it is a 3D-printed integrating sphere. It used to be equipped with red, green and blue LEDs; nowadays I am instead using three Osram white-light LEDs, specifically the Osram Oslon SSL80. The latter give a better color definition. Some more information can be found here.
At this point in time, there is not really anything I could share. I am still investigating the best way to do it.
Basically, any raw capture is already an HDR - only the range of intensities is limited to the bit range the camera can deliver. This has two consequences. For starters, bright lights might saturate. Of course, you would adjust the shutter speed in such a way that this is not the case. However, once you have done this, you will get quantization noise in the low-illumination parts of your image, simply because the camera works with evenly spaced intensity levels. You can, however, capture a second raw with a longer shutter speed, pushing these dark areas into a better range. Of course, in this second capture, all brighter image areas will be blown out to white. Nevertheless, the shadows will be much better defined in your second capture.
For combining these two raws, you need to go from integer numbers to floating-point variables. Then you need to multiply the second, brighter exposure by an appropriate scaler which gives you (nearly) identical intensities for image areas present in both images. Once you have this rescaled second exposure available, there are various ways (hard/soft threshold, for example) to combine both raws into a new image which should have reduced camera and quantization noise in the darker areas of the image.
The information in the darker areas of the image will come from your second, brighter exposure, the information in the brighter image areas from your initial base exposure. That's it, basically. The challenges in this approach are hidden in the way the two exposures are combined; this will have an effect on how much the end result is affected by image and quantization noise. I am still doing some research here.
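Just to make the basic math concrete, here is a minimal sketch of such a merge, assuming both captures are already available as linear arrays with the black level subtracted; the thresholds, the white level and the way the scaler is estimated are placeholders, not a final implementation:

```python
import numpy as np

def merge_two_raws(base, bright, white_level=4095.0):
    """Merge a base exposure with a longer ('bright') exposure of the same frame.

    base, bright : linear raw data, black level already subtracted.
    white_level  : clipping value of the sensor data (12 bit assumed here).
    """
    base = base.astype(np.float64)
    bright = bright.astype(np.float64)

    # Pixels which are well exposed in both captures: not clipped in the
    # brighter exposure, not too dark in the base exposure.
    good = (bright < 0.95 * white_level) & (base > 0.05 * white_level)

    # Scaler mapping the brighter exposure onto the intensity scale of the
    # base exposure; ideally this is just the ratio of the two shutter speeds.
    scale = np.median(base[good] / bright[good])

    # Hard threshold: take the (rescaled) brighter exposure in the dark areas,
    # where it shows less camera and quantization noise.
    merged = base.copy()
    dark = (base < 0.25 * white_level) & (bright < 0.95 * white_level)
    merged[dark] = bright[dark] * scale
    return merged
```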
Technically, a true HDR records just the radiance of the scene. So there will never be any rec709 or any other profile hidden in an HDR. The same is true for raw images: a gamma-/contrast-curve is not applicable to raw data (usually - newer sensors kind of deviate). Raw values are linear values (again: mostly).
Let’s get a little bit more into the details:
- Raspberry Pi software captures (or at least used to capture) raw sensor data to disk in various non-standard formats. Some of these raw file formats were proprietary, some are close to Adobe's .dng spec. How close at the moment, I do not know.
- Raw data is just that: the data the sensor is actually sensing. There is no gamma-curve applied to this data; that is actually one of the last steps done during the development of the raw data. Other steps in the development of the raw include white-balancing and the application of the appropriate color matrices. And this is indeed the processing libcamera applies to the raw data to get from that raw sensor data to the jpg or png it usually outputs (a rough sketch of these development steps follows this list).
- Raw data is in a certain sense already an HDR signal. However, it is quantized to a certain range (10/12/14 bit) and therefore exhibits overflow and quantization noise. That spoils the fun occasionally.
- a real HDR is a floating-point representation of the radiance of a certain scene. Specifically, its values are unbounded. A real HDR is not equivalent to what is called an "HDR" on many internet sites or in sales material.
- exposure fusion via Mertens does not create a real HDR. The goal of exposure fusion is to transform a stack of low dynamic range images (LDR, usually 8 bit) into another image which can be viewed on a normal display (that is, actually another LDR image) and which captures the spirit of all the data contained in the LDR stack. To achieve this, the dynamic range of the original LDR stack is reduced to something which can be nicely displayed on any standard display (8 bit/channel).
- exposure fusion is similar to the normal HDR process, but does everything in one pass. The normal HDR process consists of two independent steps: 1) estimating a real HDR from the scene data and 2) tone-mapping the HDR into a displayable LDR image. Note again the huge difference: this is a two-step procedure, first creating an HDR, then tone-mapping it into a displayable LDR. Again, the result of a normal HDR process is different from the result of exposure fusion (a short code sketch contrasting the two approaches follows this list).
- As noted above, a jpg image output by libcamera has, as one of the final steps, a gamma-curve applied to its values (this gamma-curve could be rec709; in the standard tuning file of the IMX477, the gamma-curve is something someone thought "looks nice"). In any case, the gamma-curve applied, as well as the other image processing performed in a camera, makes the estimation of an HDR image from jpgs non-trivial. Debevec came up with a solution in 1997. He first uses the stack of LDRs to estimate the gain-curve of the camera. Once that gain-curve has been calculated, one has a tool to transform the jpg image intensities back to their original raw values. Combining these recovered raw values into a single image file finally gives you the HDR you are after.
- A true HDR looks rather dull, similar to an undeveloped raw. The reason: scene radiances normally cover such a broad range of values that it is impossible to display them on a standard LDR display. Only after a second step, the tone-mapping, does the HDR content become viewable on our average displays. I am not aware of any good tone-mapping algorithm for HDRs. (Just for the record: while an HDR should have the correct colors, a raw file does not. A raw file needs color science to be applied.)
- Specifically, developing a raw into something viewable consists of two steps as well: A) get the colors right (that step depends on the camera's Bayer filter) and B) get the intensities right (that usually amounts to remapping some intensity ranges which are either too bright or too dark and finally applying an appropriate gamma-curve). The latter step is very similar to the tone-mapping of the HDR discussed above.
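To illustrate these two development steps, here is a very rough sketch, assuming the raw has already been demosaiced into a linear RGB array normalized to [0, 1]; the white-balance gains and the color matrix below are placeholders, not the values of any real camera:

```python
import numpy as np

def develop(linear_rgb, wb_gains=(2.0, 1.0, 1.6), ccm=None, gamma=1.0 / 2.2):
    """Very rough raw development: white balance -> color matrix -> gamma-curve.

    linear_rgb : demosaiced raw data, linear, normalized to [0, 1].
    wb_gains   : per-channel white-balance gains (placeholder values).
    ccm        : 3x3 camera-to-display color matrix (identity if not given).
    """
    img = linear_rgb * np.asarray(wb_gains)   # step A, part 1: white balance
    if ccm is None:
        ccm = np.eye(3)                       # placeholder color matrix
    img = np.clip(img @ ccm.T, 0.0, 1.0)      # step A, part 2: color matrix
    img = img ** gamma                        # step B: remap intensities / gamma-curve
    return (img * 255).astype(np.uint8)
```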
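And to make the difference between exposure fusion and the two-step HDR process explicit, here is a short sketch using the OpenCV implementations; `images` is assumed to be a list of aligned 8-bit LDR exposures and `times` their exposure times in seconds:

```python
import cv2
import numpy as np

def fuse_and_hdr(images, times):
    times = np.asarray(times, dtype=np.float32)

    # One-pass exposure fusion (Mertens): LDR stack in, displayable LDR out.
    fusion = cv2.createMergeMertens().process(images)

    # Two-step HDR process: 1) estimate the camera response curve (Debevec) and
    # merge the stack into a real, unbounded radiance map ...
    response = cv2.createCalibrateDebevec().process(images, times)
    hdr = cv2.createMergeDebevec().process(images, times, response)

    # ... 2) tone-map the radiance map into something displayable.
    ldr = cv2.createTonemapDrago(gamma=2.2).process(hdr)

    return fusion, hdr, ldr
```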
One idea I have not yet tried in this context is the following: the new libcamera/picamera2 approach not only gives us access to the raw data, but also to the metadata of every single image. Now, we know that the 12 bit of the HQ camera is close to sufficient for most color-reversal material, provided that the exposure of each frame is optimized. So, what if we capture the film with autoexposure doing its fine work (optimizing the exposure of the frame to the working range of the sensor) and record with every frame the values the automatic came up with? Specifically, we should take note of the shutter speed and the digital and analog gain of each frame. With this information, one should be able to transform the .dng files (with varying exposure settings due to the autoexposure working) to a common reference exposure setting, which we need for converting the frame sequence into film footage. In the end, such an approach would be something like a poor man's HDR capture, with limited (12 bit), but adaptable (via the autoexposure algo) dynamic range. Might be something to experiment with, as only a single raw capture needs to be done for each frame…
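A hedged sketch of that rescaling step, assuming the per-frame metadata has been recorded with picamera2 (its "ExposureTime", "AnalogueGain" and "DigitalGain" entries) and that the raw frame is available as a linear array with the black level already subtracted; the reference values are arbitrary placeholders:

```python
import numpy as np

def normalize_exposure(raw, metadata, ref_exposure_us=10000.0, ref_gain=1.0):
    """Rescale a raw frame captured with autoexposure to a common reference exposure.

    raw      : linear raw data of the frame, black level already subtracted.
    metadata : picamera2 capture metadata recorded with this frame.
    """
    # Total 'amplification' the camera applied to this frame.
    frame_exposure = (metadata["ExposureTime"]
                      * metadata["AnalogueGain"]
                      * metadata.get("DigitalGain", 1.0))

    reference = ref_exposure_us * ref_gain

    # Bring the frame onto the common reference exposure; the result may exceed
    # the original 12-bit range, so keep it as floating point.
    return raw.astype(np.float64) * (reference / frame_exposure)
```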