First: did you change your illumination source between captures?
Yes and no. It’s the same lamp and diffusing sheet, but the lamp was further away and not semi-encased as in the projector. But that’s interesting. I thought the structures were less visible because I didn’t get the focus right.
The sprocket hole should have exposure values of 100% in the raw file, nothing less.
Makes sense… yup. I’m used to all the little helpers of the DSLR and am only halfway finished with adding a little histogram feature to my script. I should probably try out one of the more sophisticated scripts linked in these forums, but I like tinkering first.
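In case it helps, here is a minimal sketch of such a clipping check (my own illustration, not the poster’s script; the function name and the 12-bit assumption are placeholders):

# A minimal sketch: coarse histogram of a raw frame plus the fraction of pixels
# sitting at the sensor maximum (what the sprocket-hole test above relies on).
# Assumes a 12-bit raw array, e.g. loaded from a .dng or captured via picamera2.
import numpy as np

def clipping_report(raw: np.ndarray, max_value: int = 4095, bins: int = 64):
    hist, edges = np.histogram(raw, bins=bins, range=(0, max_value))
    clipped = np.count_nonzero(raw >= max_value) / raw.size
    print(f"pixels at sensor maximum: {clipped:.2%}")
    return hist, edges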
the digital gain stays at 1.0. You do not want any other value.
I set gain to 1.0 and exposure time to a fixed value, but no matter what I set, digital gain is something above 1.0. How do I know which exposure values are native to the Pi Camera?
Please note that the .dng-files the HQ/Schneider combo would produce can be read directly into DaVinci Resolve.
It’s the same workflow with the Canon. The MLV splits into individual dng-files which I can use directly. Similarly, I’d use Adobe DNG Converter on the Pi files, just to reduce their footprint (lossless compression).
How do you guys manage those big files, anyway? One 30-minute film would be around 800 GB of data (or around 500 GB if run through Adobe).
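(Rough arithmetic behind those figures, with the frame rate and per-frame size assumed rather than taken from the post: at 18 fps, 30 min × 60 s × 18 frames/s ≈ 32,400 frames; at roughly 25 MB per full-resolution HQ DNG that is about 800 GB, and the roughly 2/3 lossless reduction mentioned later in the thread lands near 500 GB.)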
This is not a discovery, but the result of optical calculations!
Distance from lens (lens optical center) to image: 50 × 1.0858 + 50 = 104.29 mm
Thanks for this immensely useful calculation! I’ve been trying to figure this out on my own and couldn’t find anything succinct enough for my monkey brain.
Small edit: I’ve been trying to apply this formula to other setups, but can’t relate some of the values. Is it the lens’s focal length in both places?
I’m using a slightly lower resolution sensor than the HQ camera, but after combining the separate raw RGB and IR exposures, they end up just shy of 17MB for each frame so the amount of data is in the same ballpark.
Since I knew my own collection of reels was around 4 hours of footage–and I don’t know if this is a good answer or not–I just bought a pair of 12TB hard drives and have them set up in a RAID 1 configuration in my PC so everything is duplicated for safety. It’s kind of wild how cheap “spinning rust” has gotten compared to any other kind of flash or optical storage.
While working with the frames I copy them over to a fast SSD (otherwise it’s painfully slow in the editor!) but most of their “cold storage” time is spent on the pair of HDDs. (As a final safety precaution because I’m paranoid, I keep the file hashes for each frame and confirm them whenever data is moved between drives.)
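A minimal sketch of such a hash check (paths, names and the SHA-256 choice are my own placeholders, not the poster’s actual setup); the manifest format is the one the standard sha256sum -c tool understands:

# Write a SHA-256 manifest for every .dng frame in a folder; verify later with
# "sha256sum -c hashes.sha256" or by re-running and diffing the manifests.
import hashlib
from pathlib import Path

def hash_file(path: Path) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(folder: str, manifest: str = "hashes.sha256") -> None:
    with open(Path(folder) / manifest, "w") as out:
        for frame in sorted(Path(folder).glob("*.dng")):
            out.write(f"{hash_file(frame)}  {frame.name}\n")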
You might want to read here (section “Image formation”) for more information. Actually, the German version explains things better. In any case, in the real world, lenses do have principal points and other properties which might throw such simplified calculations off track. Nevertheless: if you want 1:1 magnification, the general rule of thumb is that you simply place the object at a distance equal to two times the focal length and the sensor at the same distance as well. That gives you 1:1 imaging.
Specifically, for the standard setup of a 50 mm Schneider lens paired with the HQ sensor, you place the object at 2 × 50 mm = 100 mm distance. Also, you use the same 100 mm distance between lens and sensor. Again, that is not exact and will only give you a ballpark to improve upon.
That was what was quoted above and that’s what I am using in my setup as well.
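To make the rule of thumb explicit, here is a small sketch of the thin-lens relations behind these numbers (my own summary, ignoring the principal-plane caveat mentioned above): for a magnification m, the object sits at f·(1 + 1/m) and the image at f·(1 + m); with m = 1 both distances become 2f.

# Idealized thin-lens distances for a desired magnification.
def lens_distances(focal_length_mm: float, magnification: float):
    object_distance = focal_length_mm * (1 + 1 / magnification)  # lens to film frame
    image_distance = focal_length_mm * (1 + magnification)       # lens to sensor
    return object_distance, image_distance

print(lens_distances(50, 1.0))     # 1:1 -> (100.0, 100.0)
print(lens_distances(50, 1.0858))  # -> (about 96.05, 104.29), the figure quoted above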
Well, get yourself a few TB of fast SSD. And make sure your computer can handle the data rates.
Note that “lossless compression” requires additional computations when reading the files back from disk. Not much of a hassle if you have a fast computer, but it may slow down things when editing.
Another approach would be to reduce the size of the image data you are working with. Normally, I take the 4k raw files from the HQ sensor and pipe them through DaVinci. The developed raw files are scaled down to some reasonable resolution for my project, say 2400 x 1800 px for an HD target resolution, and then rendered out as either linear .tif or .png with 16 bit depth. This format is what I use as the basis for further processing, seldom looking at the original raws again. You can compress .tif and .png data as well; at least my hardware is able to handle that data within DaVinci in real time. But I usually do not bother, in exchange for a slightly larger disk space requirement.
In addition, I simply use sufficiently fast storage (a Samsung T7, 4 TB, for example) plus the ability of my NLE program to produce proxies for post-production work. You need to make sure that your USB3 ports and USB3 cables are both able to support the highest speed these disks can work with.
Well, you have triggered old memories here (which I was obviously trying to forget…). Citing myself,
" Another important deviation from the old picamera-lib is the handling of digital and analog gain. In the new picamera2-lib, there is actually no way to set digital gain. As the maintainer of the library explained to me, a requested exposure time is realized internally by choosing an exposure time close to the value requested and an appropriate multiplicator to give the user an approximation of the exposure time requested.
An exposure time of, say, 2456 is not realizable with the HQ sensor hardware. The closest exposure time available (due to hardware constraints) is 2400. So the requested exposure would be realized in libcamera/picamera2 by selecting: (digital gain = 1.0233) * (exposure time = 2400) = (approx. exposure time = 2456)."
and
“One approach to circumvent this is to choose exposure times which are realizable hardware-wise and request this exposure time repeatedly (thanks go to the maintainer of the new picamera2-lib for this suggestion), until the digital gain has settled to 1.0. For example, the sequence 2400, 4800, 9600, etc. should give you in the end a digital gain = 1.0. And usually, it takes between 2 and 5 frames to obtain the desired exposure value.”
I think one way to figure out which exposure times are directly realizable with the hardware is to request a certain exposure and monitor the digital gain which is returned in the metadata. Due to an automated process running in the background of libcamera, you will need to capture at least 10-20 consecutive frames before you read out the value of the digital gain. If it’s close to 1.0, you have found one exposure time you are after.
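A rough sketch of that probing loop with picamera2 (a simplification based on the description above, not the exact procedure used back then; the metadata keys are the standard picamera2 ones):

# Probe whether a requested exposure time is directly realizable by checking
# that the reported DigitalGain settles close to 1.0.
from picamera2 import Picamera2

picam2 = Picamera2()
picam2.configure(picam2.create_still_configuration())
picam2.start()

def probe_exposure(exposure_us: int, frames: int = 20) -> float:
    picam2.set_controls({"ExposureTime": exposure_us, "AnalogueGain": 1.0})
    metadata = {}
    for _ in range(frames):            # let the pipeline settle over a number of frames
        metadata = picam2.capture_metadata()
    return metadata["DigitalGain"]     # close to 1.0 -> exposure time is native

for exp in (2400, 2456, 4800, 9600):
    print(exp, probe_exposure(exp))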
Note: this was the status about 2 years ago. There has been work done on this, so things might have changed. I never bothered to check again - libcamera and picamera2 are, technically speaking, a mess and overall a rather bad design. And the libcamera people as well as the RP guys tend to change things happening under the hood without much information towards the user. That’s why I am very hesitant to update these two software packages once I have a running/working system.
Let me note that questions about picamera2 can be posted here. The maintainer of the picamera2-library is rather responsive and helpful. Also, searching the “Closed” issues you might discover some useful information.
Update: I had a discussion with David Plowman, the maintainer of picamera2, about the current state of affairs (Sep '24). According to him, DigitalGain is only applied “to the fully processed ISP output”. It is not applied to the raw output. So DigitalGain is relevant only if you store rec709-like images as .jpg or .png, for example. In case of linear raw files (.dng), only the AnalogGain gets applied. So, if you are working with raw files, no need to worry about the DigitalGain-value. And of course, you should set AnalogGain = 1.0 for the lowest noise level.
I thought so, too, but I’d wager there’s about 10 hours of material lying around here, so I’m going to run out of space fast :-).
I’m going back and forth between the two cameras and I think it’s mostly the file size that’s keeping me from simply using the HQ cam at full resolution. A few questions came up.
So far I’ve only been thinking about using Adobe DNG Converter, as it reduces file size to about 2/3 (lossless). It also has a lossy compression feature that unfortunately doesn’t clarify which type of compression it uses (JPEG or JPEG-XL), but the resulting bit depth is 8, so I assume it’s JPEG. The differences between both are… detectable if you overlay them in Photoshop with a difference layer, but even then you’d have to look closely. Unfortunately DaVinci can’t open the files for some reason.
Now I’ve tested slimRAW, and by e.g. using their “lossy VBR HQ” setting it reduces file size to about 6 MB, preserves 12-bit color depth, and the images look pretty much indistinguishable from their respective originals. The problem with slimRAW: It refuses to read Pi HQ DNG natively - I have to put the files through Adobe DNG Converter first. Even then it only picks up the full-resolution ones, and afterwards the files will only open in DaVinci…
I couldn’t find anything on slimRAW on the forums and nothing anywhere else regarding possible issues with slimRAW and Pi Camera files. Obviously there’s something that the Adobe Converter does that makes the Pi DNGs readable with slimRAW, but I don’t have the knowledge to know where to look for possible differences. Are there any options at all for writing DNG files from Pi Camera raw data?
Update: I had a discussion with David Plowman, (…) about the DigitalGain-value
I personally would not do it. It takes time to compress the data, it takes time to extract the data again. You only gain 1/3 of disk space.
Never do something like this. As you noticed, it kills your image quality.
Because the .dng-files created by picamera2 are kind of non-standard (only one CCM instead of two CCMs in a standard .dng)
Yes, there are. In my picamera2-based software, I actually create a .dng directly in memory and send it via LAN to a faster, bigger PC, where it is simply received and stored directly as a .dng-file. Once scanning is over, I simply drop all these .dng-files as a single clip directly into the media page of DaVinci. That’s all. A direct HQ/RP5 to DaVinci pipeline. Nothing could be easier. Details are distributed here on the forum in various threads; if I find time this weekend, I’ll write everything together into a single thread. I think @dgalland’s software is doing a similar thing.
Never do something like this. As you noticed, it kills your image quality.
But… does it? I agree that Adobe’s lossy algorithm isn’t an option because it reduces bit-depth to 8. But slimRAW doesn’t. I am able to see the differences in pixel arrangement when I look at these images at 500% zoom, but is that really relevant?
Granted, I don’t have any proper scientific ways to compare those three images, so you might convince me otherwise.
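If it helps, a quick way to put a number on it (my suggestion, not something from the thread): compute the PSNR between developed frames exported from the original and from the compressed DNGs; values well above roughly 50 dB are usually considered visually negligible.

# PSNR between two developed frames of equal size (e.g. 16-bit TIFFs exported
# from the original and the slimRAW-compressed DNGs). Paths are placeholders.
import numpy as np
import cv2

def psnr(path_a: str, path_b: str) -> float:
    a = cv2.imread(path_a, cv2.IMREAD_UNCHANGED)
    b = cv2.imread(path_b, cv2.IMREAD_UNCHANGED)
    peak = float(np.iinfo(a.dtype).max)          # 255 for 8-bit, 65535 for 16-bit
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

print(psnr("frame_original.tif", "frame_slimraw.tif"))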
Because the .dng-files created by picamera2 are kind of non-standard (only one CCM instead of two CCMs in a standard .dng)
Is there a way to save a “standard” DNG, though? (I really wish I had more knowledge about these things. It’s not easy to catch up).
I actually create a .dng directly in memory, send it via LAN to a faster, bigger PC, where it is simply received and stored as it is directly as .dng-file.
Do you mean raw data? Or is it already a dng before it’s sent over? How does the receiving PC store it as DNG?
Does the PiDNG library by itself have more options available? The examples are rather lacking…
Once scanning is over, I simply drop all these .dng-files as a single clip directly into the media page of DaVinci.
I wouldn’t do it any other way.
…Only, maybe, lossily compressed
Do you mean raw data? Or is it already a dng before it’s sent over? How does the receiving PC store it as DNG?
In my software, using functions from the picamera2 library, a .dng file is generated which is then sent via LAN to the main computer whose job is simply to save it to disk so that after the capture phase, post-processing can be carried out, for example with DaVinci.
I attach the Python function that carries out these operations:
# Take and send a dng file.
def takeAndQueueDng(self, imgflag):
    # The new exposure time is stabilized.
    if not self.cam.autoExp:
        self.stabExpTime(self.cam.exposureTime)
    # Raw image is captured.
    request = self.cam.picam2.capture_request()
    request.save_dng("file.dng")
    request.release()
    # Sending flag.
    self.imgSendThread.imgflag = imgflag.encode()
    # Sending dng file.
    with open("file.dng", "rb") as dngFile:
        self.imgSendThread.stream.write(dngFile.read())
    # Sending the exposure time.
    self.imgSendThread.exposureTime = self.cam.captureMetadata().ExposureTime
    # Thread start.
    self.imgSendThread.event.set()
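For completeness, the receiving side can be as simple as reading the bytes off a socket and writing them to a numbered file. The sketch below is only a guess at a minimal protocol (length-prefixed blobs over TCP); the actual implementations discussed here differ in the details (flag and exposure-time fields, threading).

# A minimal, hypothetical receiver: each .dng arrives as an 8-byte big-endian
# length followed by the file data, and is written straight to disk.
import socket
import struct

def recv_exactly(conn: socket.socket, n: int) -> bytes:
    buf = bytearray()
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            break
        buf.extend(chunk)
    return bytes(buf)

def receive_dngs(port: int = 8000, out_pattern: str = "frame_{:05d}.dng") -> None:
    server = socket.create_server(("", port))
    conn, _ = server.accept()
    frame = 0
    while True:
        header = recv_exactly(conn, 8)
        if len(header) < 8:
            break                               # sender closed the connection
        (length,) = struct.unpack(">Q", header)
        with open(out_pattern.format(frame), "wb") as f:
            f.write(recv_exactly(conn, length))
        frame += 1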
Well, of course. Any lossy compression algorithm is just that: lossy. Whether you will notice or not depends on the circumstances.
The simplest approach to smaller file sizes, faster processing and irrelevant image loss is often overlooked: reduction of image size. For example: an HQ-sized .png (4048 x 3023 px) might take 67 MB of storage; the same image scaled down to 2400 x 1800 px only requires 23 MB of storage space. And for most workflows, that is a sufficient resolution for S8 material.
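A minimal sketch of such a size reduction outside of DaVinci (file names and target size are placeholders), keeping the 16-bit depth intact:

# Downscale a 16-bit frame while preserving bit depth; INTER_AREA is the usual
# choice for downscaling.
import cv2

img = cv2.imread("frame_00001.png", cv2.IMREAD_UNCHANGED)   # keeps uint16 data
small = cv2.resize(img, (2400, 1800), interpolation=cv2.INTER_AREA)
cv2.imwrite("frame_00001_small.png", small)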
Well, there is indeed a spec by Adobe. The problem is that it is so complicated that no one implements it in full - save maybe Adobe and other software based on Adobe’s implementation.
.dng is intended to be an intermediate/universal raw format, freeing the user from all the individual implementations of the various camera manufacturers. But you have to “translate” your camera’s original format into the .dng-format. This is what Adobe’s raw converter is doing. So, that’s the “standard” way of creating a .dng-file.
In doing so, the converter needs to translate the camera’s metadata into the .dng-format’s metadata tags. How exactly that is done is not that clear. Specifically, a .dng-file contains color science information. How this gets translated into the .dng color information depends on the camera model and is, to my knowledge, undisclosed.
The usual .dng-format contains two color matrices, corresponding to a cold and a warm color temperature. In addition, the color temperature at the time the image was taken can be inferred from the metadata contained in a .dng-file. Any software developing the raw data is expected to use the image color temperature and interpolate the real color matrix from the two color matrices located at the far ends of the color temperature scale.
That’s the usual/“standard” .dng-file. There are other, more complicated forms of color metadata defined in the Adobe standard, but that’s not relevant here.
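To make that interpolation step concrete, here is a simplified sketch of how a raw developer might blend the two matrices; as far as I know, the DNG approach weights them in inverse correlated color temperature, and the matrices and temperatures here are placeholders:

# Interpolate between the two color matrices stored in a .dng, weighted by
# inverse correlated color temperature (simplified illustration).
import numpy as np

def interpolate_ccm(ccm_cold, temp_cold, ccm_warm, temp_warm, temp_image):
    # Clamp the image temperature to the range spanned by the two reference points.
    temp_image = np.clip(temp_image, min(temp_warm, temp_cold), max(temp_warm, temp_cold))
    w = (1 / temp_image - 1 / temp_cold) / (1 / temp_warm - 1 / temp_cold)
    return (1 - w) * np.asarray(ccm_cold) + w * np.asarray(ccm_warm)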
The software used in picamera2 is a free-software implementation which just writes a single color matrix as metadata - namely the one the camera thought was appropriate at the time the picture was taken. That is: either calculated by the AWB algorithm, or specified by you as a user, setting red and blue gain.
While this format is perfectly ok - it is in a sense even preferable, as no interpolation is necessary - it might cause trouble in software that expects to get two color matrices for interpolation.
In addition, the free software solution utilized in picamera2 sets some other tags slightly wrong, and that is also a potential reason for software refusing to process .dngs coming directly from the HQ sensor.
The important thing is: DaVinci understands the HQ/picamera2 .dng-format correctly. I doubt that a picamera2 .dng-file run through Adobe’s raw converter gives you the same color science as the original .dng. But: I have not checked that, as this additional conversion step does not help in any way. At least not in my workflow.
@Manuel_Angel already posted some code on how to store such a picamera2-generated .dng-file directly. As mentioned above, I may not have time to gather all the necessary information until next weekend.
Sorry to reply here with no real info to add to actual discussion.
I just have to emphasize to all of you who share your knowledge here that it is tremendously appreciated. All the bits and details make for a really interesting read, even if they are sometimes hard to digest.
Thank you all!
@cpixip it would be really helpful to others (me at least) if you could gather all the details of your process, but I know this is no quick task.
There’s so much info on this site that I get lost in “what is the current best way to go”, “what is just a matter of opinion” and so on, but it’s still interesting to read nonetheless.