Exposure and light

You’ll want at least 3-4 of those LEDs to get enough light. I used the same ones with the same sensor size.
I’d put even more light in there than you need, particularly for very dense, underexposed film.
I can’t remember what shutter speed I was using, sorry.

The helicoid/extension tube loses a lot of light when set up for 8mm.

Thanks for your helpful info.
What if I get something like this? It has integrated fans, it’s dimmable, and looks to have a high CRI.

Has anyone tried using these for making integrating spheres?

https://www.ebay.com/itm/385196904479?_trkparms=amclksrc%3DITM%26aid%3D777008%26algo%3DPERSONAL.TOPIC%26ao%3D1%26asc%3D20220705100511%26meid%3Dd9c377ed1d3f41e5823f294d2edd1331%26pid%3D101524%26rk%3D1%26rkt%3D1%26itm%3D385196904479%26pmt%3D1%26noa%3D1%26pg%3D2380057%26algv%3DRecentlyViewedItemsV2%26brand%3DUnbranded&_trksid=p2380057.c101524.m146925&_trkparms=pageci%3A32ea5fb0-bf21-11ed-bf52-02ea9ab5fb6f|parentrq%3Acabac2b61860a0f34f8d20baffff7f89|iid%3A1

I have not personally constructed an integrating sphere. Here are some examples from others using cake molds or bath bomb molds. I would be concerned about the interface between the two halves (see the third link). Do the two halves form a perfect sphere or an ellipsoid? If an ellipsoid, how does the deviation affect light uniformity?

http://www.moria.de/tech/integrating-sphere/

https://hackaday.com/2022/02/06/cannonball-mold-makes-a-dandy-integrating-sphere-for-laser-measurements/

https://www.whitworthnearspace.org/wiki/Determination_of_atmospheric_ozone_concentration_through_absorption_of_UV_and_visible_light_in_an_integrating_sphere

@justin: that is probably rather irrelevant. I have seen integrating “spheres” realized as a simple styrofoam box. Some enlargers used in the old days of analog photography worked on that principle.

More important is that the coating inside the sphere is (ideally) Lambertian. That Lambertian reflectance realizes the desired random mixing of light inside the sphere by multiple reflections. Small deviations from the spherical form will not make too much of a difference then.
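
For reference, an ideal Lambertian surface reflects with a radiant intensity proportional to the cosine of the angle θ to the surface normal, regardless of the direction of the incident light - which is exactly what randomizes the light after a few bounces:

\[
I(\theta) = I_0 \cos\theta
\]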

Finally, the openings of the sphere should be sufficiently small with respect to the sphere’s diameter in order to get an evenly distributed light intensity out of the ports.
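
As a rough sanity check of the port-size rule, here is a small Python sketch. The “total port area below ~5% of the sphere surface” guideline and the port diameters below are assumptions taken from general integrating-sphere design notes, not measured values:

import math

def port_fraction(sphere_diameter_mm: float, port_diameters_mm: list[float]) -> float:
    # ratio of total (flat) port area to the sphere's surface area
    sphere_area = math.pi * sphere_diameter_mm ** 2
    port_area = sum(math.pi * (d / 2) ** 2 for d in port_diameters_mm)
    return port_area / sphere_area

# Example: a 55 mm sphere with a 10 mm input and a 12 mm output port.
f = port_fraction(55.0, [10.0, 12.0])
print(f"port fraction = {f:.3f}")  # ~0.020, below the ~5% rule of thumb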

@agus35monk I am using a 55mm (oversized) ping pong ball, which is decent for an 8mm target. No coating was used; I did make sure that the visible portion shows neither the hemisphere seam nor the injection mark.

@justin thank you for sharing these references. The last reference uses an interesting aluminum cake mold, which is available in various sizes. The part looks like a promising half sphere, perfect for light sanding and coating. As mentioned by @cpixip, the relation between the diameter and the port size, and the reflecting surface, are critical.

As mentioned before, ImageJ is a great tool to measure the resulting flatness of the illuminant + reflector + lens + sensor + library chain, and it provides a great visual representation to confirm the system is flat.
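
For a quick numerical cross-check of the same flatness without ImageJ, something along these lines works (a sketch; the file name is just an example):

import numpy as np
from PIL import Image

img = np.asarray(Image.open("flat_gray.png").convert("L"), dtype=np.float64)
print(f"mean {img.mean():.1f}, std {img.std():.2f}, "
      f"min {img.min():.0f}, max {img.max():.0f}")
# a flat system shows a small std and a narrow min/max range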

Hi,

I was just revisiting my flat gray image when I found this pattern, and I’m just not sure what I’m looking at:

This is with the HQ cam at full resolution with the scientific tuning file, PNG compression disabled. What may cause this overly regular pattern to appear? I don’t think it’s from the lens or the camera (15 × 20 peaks if I counted correctly). Any hints?

@d_fens - could you elaborate on how you obtained this image? You mention the PNG format, but how did you arrive there? Taken with one of the libcamera-apps? Taken with your own (Python?) code? The first thing I would check is whether this pattern is already present in a raw capture (probably not, but who knows?). If it’s not present in your raw capture, the issue might be created by libcamera’s processing pipeline. Did you notice the pattern before? Libcamera enjoys frequent updates, and there is a slight chance that something broke.

Whatever you used to obtain this data, try to change things systematically. For example, compare the results with the scientific tuning file vs. the standard tuning file. Check if using a different output format has any effect on the result. Try alternative capture software - that is, use picamera2 if you captured your data with a libcamera-app, and vice versa.
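
For instance, a minimal picamera2 capture along these lines could serve as the cross-check (a sketch, untested; the control values are simply taken from this thread):

import time
from picamera2 import Picamera2

# load_tuning_file searches the standard tuning file directories
tuning = Picamera2.load_tuning_file("imx477_scientific.json")
picam2 = Picamera2(tuning=tuning)
picam2.configure(picam2.create_still_configuration(raw={}))
picam2.set_controls({"AeEnable": False, "AwbEnable": False,
                     "AnalogueGain": 1.0, "ExposureTime": 141,
                     "ColourGains": (1.58, 1.86)})
picam2.start()
time.sleep(1)                       # let the manual controls settle
request = picam2.capture_request()
request.save("main", "test.png")    # processed output
request.save_dng("test.dng")        # raw data for comparison
request.release()
picam2.stop()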

I am not aware that anybody else encountered a similar pattern. As the pattern repeats in sync with the image borders, it’s unlikely (but not impossible) that it is caused by intensity fluctuations in your illumination setup.

@cpixip I tried to reliably reproduce it using the example app from the picamera2 repository, and I think I may have found the cause: when requesting super short exposure times like 141 (that’s 114 “real” µs plus some digital gain), this pattern can be seen.
This is independent of raw/jpg/png, AWB on/off, etc., and of course requires the scientific tuning file - otherwise the image is more of a bowl than flat.
So the simplest test case is something along the lines of:

libcamera-still -o test.jpg --analoggain 1 --shutter 141 --tuning-file /usr/share/libcamera/ipa/rpi/vc4/imx477_scientific.json

It would be interesting if you could reproduce this?

Great result! I certainly will look into that, but it will take me one or two weeks. My setup is currently dismantled…

An exposure time of 141 is much shorter than the exposure times I usually work with, and I usually try to make sure that the digital gain is identical to 1.00, as I do not really trust this part of the libcamera approach/implementation. What happens if you request a shutter time the sensor can realize directly, without the digital gain adjustment? Second experiment: capture a raw simultaneously (I think this can be done with the -r flag) and check whether you can see the pattern already in the raw file. If so, it’s a sensor issue. Otherwise, I would tend to blame libcamera.

Again, if I understand your experiment correctly, you worked with a digital gain of about 1.24 - I always select exposure times which result in a digital gain of 1.0 (that should be 114 in your case?).

Here is an approximation of your setup, with a couple of limitations:

  1. The HQ sensor I have does not have the original IR filter.
  2. Analog gain settings were adjusted for best white balance.
  3. The White LED is 5000K.

The command line used was:

libcamera-still --gain 1.0 --shutter 141 --awbgains=1.58,1.86 --awb=daylight --tuning-file=/usr/share/libcamera/ipa/rpi/vc4/imx477_scientific.json --output test.jpg

I adjusted the settings to have approximately the same Z range.


In the plot, some known spots are visible. The lens is at f/2.8, and the white LEDs are set to max.

Not sure what the pattern you see is, but it does not appear to be a sensor issue. Hope this helps.

Had more time to play with the settings; below is another plot of the same capture that I believe is closer to @d_fens’ plot settings above. The pattern is not visible; the known lens spots are.

Additionally, I tried the following settings for a new capture.

libcamera-still --gain 1.0 --shutter 141 --awbgains=1.58,1.86 --awb=daylight --tuning-file=/usr/share/libcamera/ipa/rpi/vc4/imx477_scientific.json --denoise off --sharpness=0 --encoding=png --output test2.png

Using the PNG output file and the exact same plot settings, here is the plot.

With these camera settings… there is something resembling a pattern when plotting the PNG file.
To compare, below is the plot of the JPG file saved from the same capture… the pattern is not as visible.

Recap: all three surface plots above had the exact same ImageJ settings. The last two plots correspond to the same libcamera-still capture; the only difference is the encoding of the file analyzed.
With denoise off, sharpness off, and PNG encoding, a slight pattern was visible.

When using @d_fens’ simpler libcamera-still settings, no pattern was visible.

@d_fens, @PM490 - could you upload a raw capture, with the exact same capture settings, in which you obtained the periodic pattern? Also, it would be interesting to see whether this effect also happens when the digital gain is equal to 1.00 (for this, the exposure time needs to be changed).

For the .jpg or .png image to be reproducible and useful, it is important that no auto-whitebalance algorithm is active (that is not the case with @d_fens’ command line) and that the color gains are fixed to some sensible values.

As one can see from @PM490’s results, the pattern changes slightly when using .jpg vs. .png - that could simply be because the .jpg format compresses the image data with some loss in fidelity, while the .png format should not. So to hunt for the periodic pattern, it is best to use the .png format.

It would also be helpful to record some of the metadata associated with each image. Specifically, the digital gain as well as the gains for the red and blue channels are important parameters.

To recap what I know about libcamera’s way of producing a .jpg or .png:

  1. Once the user requests a certain exposure time, libcamera tries to approximate it by a shorter exposure time the sensor can realize in hardware, plus an appropriately chosen digital gain which scales the real exposure time to the value the user requested (see the sketch after this list). I currently do not know whether this is realized in the sensor’s hardware (in that case, the pattern might already be noticeable in the raw image).
  2. Once the raw image has been acquired, the red and blue color gains are applied to it. This is certainly done within libcamera itself - and of course, the output will depend on the color gains applied to the raw image. Since both of you have quite different setups (and some images were taken with auto-whitebalance active, some not), your results are a little difficult to compare.
  3. The next step is to apply a color matrix to the data. This step is actually quite complex, mainly because the color matrix chosen depends on a lot of variables estimated by libcamera. The actual color matrix chosen will be available in the metadata of the image as well. The matrix will not be identical to any of the matrices encoded in the tuning file, as it is computed from those matrices by libcamera.
  4. An appropriate contrast curve (which is Rec. 709 in the case of the scientific tuning file) is applied to the data of step 3. At this point, the image is “viewable” to a human observer.
  5. The image is finally stored in a file on your disk. Using .png should result in a larger file which retains all the information available from step 4; using .jpg will result in some smoothing of the image data - .jpg is not a lossless storage format. Specifically, the pattern visible in your test will be less noticeable when storing the data as .jpg.
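
To make steps 1, 2 and 4 concrete, here is a toy numerical model (my own sketch, not libcamera code; it skips the color matrix of step 3), using the numbers from this thread:

import numpy as np

requested_us, hardware_us = 141, 114         # requested vs. realizable exposure
digital_gain = requested_us / hardware_us    # -> ~1.24, as seen in the metadata

def rec709(x):
    # Rec. 709 transfer curve for linear input in [0, 1]
    return np.where(x < 0.018, 4.5 * x, 1.099 * np.power(x, 0.45) - 0.099)

rng = np.random.default_rng(1)
raw_rgb = rng.random((8, 8, 3)) * 0.5        # stand-in for linear sensor data
gains = np.array([1.58, 1.0, 1.86])          # red/blue gains used in this thread
out = rec709(np.clip(raw_rgb * digital_gain * gains, 0.0, 1.0))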

Technically, the periodic pattern could be the result of some overflow/calculation gone wrong in the processing pipeline described above. Another possibility might be that the lens shading algorithm reacts weirdly if its data is not present at all (as is the case with the scientific tuning file).

If you could post the actual captures (not the ImageJ plots), I could have a look with my tools at what the ghost pattern is really all about. Ideally, the combination of a .png output with the raw DNG file would be quite helpful.

@PM490 thanks for checking it out - I do think that this pattern is in all of your images, although with too low a frequency and amplitude to really pop out.

Like someone with a hammer to whom everything looks like a nail, I do see the patterns, as I annotated (badly) here:


I have a suspicion what might cause these patterns. The “rpi.alsc” module, responsible for the “adaptive lens shading compensation”, is missing from the scientific tuning file - for good reasons.

According to the available documents describing the actions of the tuning file, a module which is not present should never be activated.

For the following, we need to take a closer look at how the ALSC module operates. It gets its compensation data from the tuning file. Specifically, there are low-resolution tables like this one

"luminance_lut":
[
    1.548, 1.499, 1.387, 1.289, 1.223, 1.183, 1.164, 1.154, 1.153, 1.169, 1.211, 1.265, 1.345, 1.448, 1.581, 1.619,
    1.513, 1.412, 1.307, 1.228, 1.169, 1.129, 1.105, 1.098, 1.103, 1.127, 1.157, 1.209, 1.272, 1.361, 1.481, 1.583,
    1.449, 1.365, 1.257, 1.175, 1.124, 1.085, 1.062, 1.054, 1.059, 1.079, 1.113, 1.151, 1.211, 1.293, 1.407, 1.488,
    1.424, 1.324, 1.222, 1.139, 1.089, 1.056, 1.034, 1.031, 1.034, 1.049, 1.075, 1.115, 1.164, 1.241, 1.351, 1.446,
    1.412, 1.297, 1.203, 1.119, 1.069, 1.039, 1.021, 1.016, 1.022, 1.032, 1.052, 1.086, 1.135, 1.212, 1.321, 1.439,
    1.406, 1.287, 1.195, 1.115, 1.059, 1.028, 1.014, 1.012, 1.015, 1.026, 1.041, 1.074, 1.125, 1.201, 1.302, 1.425,
    1.406, 1.294, 1.205, 1.126, 1.062, 1.031, 1.013, 1.009, 1.011, 1.019, 1.042, 1.079, 1.129, 1.203, 1.302, 1.435,
    1.415, 1.318, 1.229, 1.146, 1.076, 1.039, 1.019, 1.014, 1.017, 1.031, 1.053, 1.093, 1.144, 1.219, 1.314, 1.436,
    1.435, 1.348, 1.246, 1.164, 1.094, 1.059, 1.036, 1.032, 1.037, 1.049, 1.072, 1.114, 1.167, 1.257, 1.343, 1.462,
    1.471, 1.385, 1.278, 1.189, 1.124, 1.084, 1.064, 1.061, 1.069, 1.078, 1.101, 1.146, 1.207, 1.298, 1.415, 1.496,
    1.522, 1.436, 1.323, 1.228, 1.169, 1.118, 1.101, 1.094, 1.099, 1.113, 1.146, 1.194, 1.265, 1.353, 1.474, 1.571,
    1.578, 1.506, 1.378, 1.281, 1.211, 1.156, 1.135, 1.134, 1.139, 1.158, 1.194, 1.251, 1.327, 1.427, 1.559, 1.611
],

defining multiplication constants for luminance (the table above is taken directly from the standard tuning file) and for the Cr/Cb color channels.

Now, in order to be applied to the full-resolution image, this low-resolution data needs to be up-scaled to the full resolution of the raw image. If the up-scaling is done improperly, a periodic pattern similar to the ones displayed above might appear.
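
To illustrate the idea, here is a sketch of how a coarse 16 × 12 gain table could imprint a block pattern when up-scaled naively. The nearest-neighbour interpolation is an illustrative assumption, not libcamera’s actual method:

import numpy as np

rng = np.random.default_rng(0)
table = 1.0 + 0.05 * rng.random((12, 16))    # coarse gain table (12 rows x 16 cols)

h, w = 3040, 4056                            # IMX477 full resolution
rows = np.linspace(0, 11, h).round().astype(int)
cols = np.linspace(0, 15, w).round().astype(int)
gain_map = table[np.ix_(rows, cols)]         # blocky up-scaled gain map

flat = np.full((h, w), 0.5)                  # perfectly flat test image
shaded = flat * gain_map                     # now shows a 16 x 12 block pattern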

Specifically, the correction tables in the IMX477 tuning file have a size of 16 × 12 entries.

It’s just an idea, and there are some points which argue against such a cause. @d_fens initially counted a semi-periodic pattern of 20 × 15 peaks, which does not really match the 16 × 12 table size in the tuning file. Also, according to the documentation, modules that are not present in the tuning file should never be active in libcamera’s processing pipeline. But at the moment, it’s the only place I can see where such a semi-periodic pattern might be created.

To advance this discussion, one should first check whether the pattern is already present in the raw data of the sensor. If not, libcamera’s processing is creating this pattern somehow. In that case, one would need to check each suspicious module in turn. The ALSC module would be a prime candidate for me…
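
One quick way to run that first check, assuming a DNG reader such as rawpy is available (the file name and Bayer layout are assumptions):

import numpy as np
import rawpy

with rawpy.imread("test.dng") as raw:
    data = raw.raw_image.astype(np.float64)
green = data[0::2, 1::2]                     # one Bayer plane (layout assumed)
profile = green.mean(axis=0)                 # collapse rows -> column profile
spectrum = np.abs(np.fft.rfft(profile - profile.mean()))
print(np.argsort(spectrum)[-5:])             # strongest spatial frequencies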

Happy to assist with the captures, since the setup is presently available.

The prior captures were done with a square mask on the sphere output, and the camera was/is not in perfect alignment with the center. In the captures today, I removed the square mask to eliminate the hot square in the plots.
The capture was performed with the following command:

libcamera-still --gain 1.0 --shutter 50 --awbgains=1.58,1.86 --awb=daylight --tuning-file=/usr/share/libcamera/ipa/rpi/vc4/imx477_scientific.json --encoding png --denoise off --sharpness=0 --rawfull 1 --raw 1 --metadata 20231025test1 --output 20231025test1.png

The raw (DNG) result was opened in Darktable (Ubuntu) and exported as an uncompressed 16-bit PNG. That 16-bit uncompressed PNG was used to produce the following plot.

The pattern is not only there, it is quite visible. To aid @cpixip’s investigation, all files are available here, with the exception of the 16-bit uncompressed PNG (whose size GitHub did not like).

Looks like this confirms it is indeed the case. Good hammer!

PS. Here is a hypothesis for why it is more visible in @d_fens’ capture.
If the pattern were inherent to or related to the sensor, it would be similar in both captures (with/without sharpness/denoise/jpg), but it is significantly less visible in some of the captures.
The illuminant + sensor combination has inherent noise. If the combined noise is greater in one case than in the other, the smoothing of the processing (denoise/sharpness/compression) would make the pattern less apparent there. If instead the illuminant noise is greater, the smoothing of the processing would actually make the pattern more visible.
In other words, if the noise amplitude is smaller than the pattern threshold (in the plot), the pattern, while still there, would be less visible.
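
If it helps, here is a tiny numerical illustration of one reading of this argument (all numbers made up): a weak periodic pattern buried in strong pixel noise becomes visible after spatial smoothing, because smoothing suppresses the uncorrelated noise much faster than the low-frequency pattern.

import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(2)
y, x = np.mgrid[0:512, 0:512]
pattern = np.sin(2 * np.pi * x / 64)             # weak pattern, amplitude 1
noisy = pattern + rng.normal(0, 5, (512, 512))   # buried in sigma=5 noise
smooth = uniform_filter(noisy, size=16)          # 16x16 box filter
print(noisy.std(), smooth.std())                 # ~5.0 vs ~0.7: pattern now dominates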

Oh, that is very interesting. So it’s not libcamera but the sensor which produces this pattern! And looking at the metadata of the capture (“20231025test1.txt”), the digital gain is a solid 1.000.

One suspect I could come up with is what some users call the “Star Eater Algorithm” (in more marketing-speak: “in-sensor defective pixel compensation”) - a thing which is implemented right in the sensor (here’s a discussion of this topic). Not sure at the moment whether there is an option to turn this off in either the libcamera-apps or via picamera2…

… ok, here’s an update on switching off “DPC”…

For libcamera-apps, in the software doc it is suggested:

libcamera uses open source drivers for all the image sensors, so the mechanism for enabling or disabling on-sensor DPC (Defective Pixel Correction) is different. The imx477 (HQ cam) driver enables on-sensor DPC by default; to disable it the user should, as root, enter
echo 0 > /sys/module/imx477/parameters/dpc_enable

And according to this recent issue, a similar strategy is required for picamera2:

The Linux kernel driver will let you disable this via a kernel module parameter. For the HQ cam, for example, do the following

sudo su # you will have to be root to change it
echo 0 > /sys/module/imx477/parameters/dpc_enable

So the next experiment would be to disable DPC completely and see whether this pattern is still there…

If it’s not available in the camera directly, you should be able to compensate for this in software, using the same technique used in astrophotography: for each set of images you shoot, you take a few dark and flat exposures, and those get stacked together into an average image. Darks are images taken with the lens cap on, to locate and compensate for amp glow in the sensor. And flats are images taken of a flat, neutral gray (the technique most people use is putting a couple of white t-shirts over the end of the telescope and taking images in daylight - though you can use a highly diffuse light source as well, as long as you know it’s flat).

Those are then used to locate any discrepancies in the sensor. This is basically a map of pixels and how much each needs to be compensated digitally to arrive at a completely flat image, free of exposure variations. Once you know that, you can apply the same compensation to the exposed frame, and the end result should be free of these patterns.
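
In code, the classic calibration looks roughly like this (a generic sketch of the astrophotography procedure, not any particular scanner’s routine):

import numpy as np

def calibrate(light: np.ndarray, dark: np.ndarray, flat: np.ndarray) -> np.ndarray:
    # offset- and gain-correct a frame using averaged dark and flat frames
    # (strictly, the flat should get its own matching dark - simplified here)
    flat_signal = flat - dark                    # flat with the offset removed
    gain_map = flat_signal / flat_signal.mean()  # per-pixel relative sensitivity
    return (light - dark) / gain_map             # corrected frame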

I am fairly certain scanners like the ScanStation do this as part of their camera calibration routines. While astrophoto people are a bit nutty about this and do it every night, I don’t believe it’s necessary to do it that frequently. The sensor map shouldn’t change that much over time, save for dead pixels or stuff like that.

Well, in another life, I worked with a special logarithmic image sensor where the individual pixels had a logarithmic response curve by design (that is different from the log image you might get from an ARRI Alexa or the like, where the pixels still respond linearly and the log is calculated afterwards). This logarithmic sensor required a different black-level and gain compensation for each individual pixel, and that compensation was also temperature-dependent. Once this worked (we implemented it in an FPGA), the dynamic range was fantastic for our purpose - a single fixed f-stop could be used for moonlit scenes as well as for sun-flooded scenery.

The thing here is the following: modern sensors tend to do quite a bit of image processing internally, so what you get out of the sensor nowadays is, strictly speaking, not really “raw” data. To make things even more complicated, the Star Eater algorithm in particular is non-linear in its effect. So the standard approach you suggested would not work in this case (as it is intrinsically a linear operation). But it is fine for compensating @PM490’s or my dark spots (I wonder how @d_fens got his sensor array so clean :grin:).