[v2 dev] Milestone 3 - Lighting

… here’s an example of a frame capture to illustrate the above description.

Top left is the frame capture at 1/1000 sec @ ISO 100. The sprocket hole appears white but is not burned out - this is how the LED power was adjusted. The dark shadows show no image structure, as the camera cannot capture the high dynamic range of the film stock.

Top right is the frame capture at 1/32 sec @ ISO 100. Almost all of the image is burned out, except for the dark shadows. Currently, I take a total of 5 exposures. The best single exposure of this image stack is the following image:

For my taste, the structure of the bright stone floor is burned out too much in this image, and the details in the shadows are fading too much into the darkness. That is why I decided against a single-image capture and instead to utilize the intermediate frame motion, which allows me to capture an image stack with different exposure settings for each frame.
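
For illustration only - a bracketed capture along these lines could look roughly like the following with the picamera library (a minimal sketch, not my actual capture code; the intermediate shutter speeds and the fixed white balance gains are placeholders):

```python
# Minimal sketch: capture a 5-image exposure stack spanning roughly
# 1/1000 s ... 1/32 s at ISO 100. Intermediate shutter values and the
# white balance gains are illustrative placeholders.
import time
import picamera

SHUTTER_US = [1000, 2400, 5600, 13000, 31250]   # microseconds

with picamera.PiCamera(resolution=(1296, 972)) as cam:
    cam.iso = 100
    cam.framerate = 30                 # max shutter is limited by the framerate
    time.sleep(2)                      # let the sensor gains settle
    cam.exposure_mode = 'off'          # freeze analog/digital gain
    cam.awb_mode = 'off'
    cam.awb_gains = (1.5, 1.5)         # fixed white balance (placeholder values)
    for i, shutter in enumerate(SHUTTER_US):
        cam.shutter_speed = shutter
        time.sleep(0.1)                # give the firmware time to apply the new shutter
        cam.capture('frame_exp%02d.jpg' % i, use_video_port=False)
```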

The stack of 5 exposures is combined via exposure fusion, which results in the HDR image in the lower left corner of the top image (strictly speaking, this is not a true HDR - but that is another topic). Here, the contrast of the HDR image is reduced so that all of the image detail is visible.

The HDR image is further processed into the final output image. Processing consists mainly of contrast adjustment, gamma-adjustment and sharpening. The final output picture is displayed in the lower right corner of the top image.
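
Just to make these steps concrete, here is a rough cv2/numpy sketch of the three operations named above, applied to a fused frame scaled to [0, 1] (this is only an illustration, not the actual processing chain; the parameter values are arbitrary):

```python
# Sketch only: contrast, gamma and unsharp-mask sharpening on a fused
# frame normalized to [0, 1]. Parameter values are arbitrary examples.
import cv2
import numpy as np

def postprocess(img, contrast=1.15, gamma=1.0 / 1.8, sharpen_amount=0.6):
    img = np.clip((img - 0.5) * contrast + 0.5, 0.0, 1.0)   # contrast around mid-grey
    img = np.power(img, gamma)                              # gamma adjustment
    blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=2.0)     # unsharp masking
    img = np.clip(img + sharpen_amount * (img - blurred), 0.0, 1.0)
    return (img * 255.0 + 0.5).astype(np.uint8)

fused = cv2.imread('fused_frame.png').astype(np.float32) / 255.0
cv2.imwrite('final_frame.png', postprocess(fused))
```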

If you compare one of the input images closely with the output image, you will notice that a slight color shift was introduced. As a result, the color of the sprocket hole is no longer pure white. Color correction and color grading are two close cousins…

2 Likes

… and here is some further information about the integrating sphere implementation:

The initial integrating sphere was designed with nearly twice the diameter (d = 98 mm) of the final setup (d = 50 mm), because I really wanted to stay below the 5% limit for the total port area. Here’s a photo of this initial setup:

The unit was split into two halves for easy printing. Three entry ports were used for the LEDs, which were each mounted on a rather large heat sink. These heat sinks were kept outside the unit in free air, so LED temperatures were no issue in this design. The next image

shows the internals of the second major iteration. Note the three baffles placed around the opening of the film gate. These baffles turned out to be important, as they block any direct light coming straight from the LEDs. Without the baffles, the illumination of the film gate was uneven in terms of color constancy.

The shape of the baffles was calculated in such a way that they have the minimum surface area necessary to block direct light from the LEDs (compare the size and shape of one of the baffles’ shadows with the film gate).

This design had the further advantage that you could easily replace or change the LEDs. However, the illumination levels were not really satisfactory. Using several LEDs mounted on a single heat sink was one option I tried, among others. Finally, I settled on a different, much smaller form factor for the integrating sphere as well as for the LEDs:

On the left of the image above you can see the old, large design with the external heat sink. A multicolor LED is fitted into the mounting tube of this old design. In the right half of the image you can see the type of heat sink/LED I am currently using. The LED is an Osram Oslon SL 80, mounted on a small heat sink with dimensions 10 x 10 x 10 mm.

The most important change of the third design was a diameter of the integrating sphere about half that of the original design. This increased the illumination intensity by nearly a factor of four. Furthermore, the mounting of the LEDs was changed: they were placed much deeper in the unit and tilted. Therefore, the baffles around the film gate of the original design were no longer needed and could be replaced by small tongues near the mounting holes of the LEDs (compare the illustration a few posts above).
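
A quick back-of-the-envelope check of those two numbers - the factor-of-four gain from halving the diameter and the 5% port-area rule mentioned earlier (the port diameters in the example are made-up values, not measurements of my unit):

```python
# Irradiance on the sphere wall scales with flux / wall area, i.e. with
# 1/d^2, so halving the diameter roughly quadruples the illumination.
# The port sizes below are illustrative, not measured values.
import math

def sphere_area(d):
    return math.pi * d ** 2                        # sphere surface area

def port_fraction(d_sphere, port_diameters):
    ports = sum(math.pi * (dp / 2) ** 2 for dp in port_diameters)
    return ports / sphere_area(d_sphere)

print(sphere_area(98.0) / sphere_area(50.0))       # ~3.84x brighter for the small sphere

# one 20 mm film-gate port plus three 8 mm LED ports (example sizes):
print(port_fraction(50.0, [20.0, 8.0, 8.0, 8.0]))  # ~0.059 -> just above the 5% rule
print(port_fraction(98.0, [20.0, 8.0, 8.0, 8.0]))  # ~0.015 -> comfortably below 5%
```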

As it turned out, the heat sinks alone were not sufficient to keep the LEDs cool enough, so an active cooling system was added, with a fan in the back of the unit and corresponding air ducts to the LED heat sinks:

This current setup performs OK so far. The temperatures of the LEDs stabilize at about 50°C, even if the unit is operated for days.

One disadvantage of my latest design is that changing or replacing an LED requires the whole unit to be unmounted and opened.

By the way - I remember much simpler designs based on the “integrating sphere” idea. For example, in the 80s I owned a color enlarger that used a simple cube-like box for light mixing. On the bottom of the box was a glass plate for the 6x6 cm film format. The other walls of the box were simply covered with styrofoam. The light from three adjustable color filters shone through a small hole in one of the sides of the cube onto the opposite side, but not directly onto the film gate. The performance of this simple setup was quite good.

2 Likes

This is excellent documentation, @cpixip. Thank you for all the time it took to walk us through the design choices you made. I think this is going to be extremely helpful when we get to lighting design in early 2020.

Not counting the 3D printing, what would you estimate the production cost to be in terms of materials and time to assemble?

Also, that software you are using looks great. Did you find that somewhere or write it yourself?

I really love the detailed description of the evolution of your integrating sphere, cpixip! For comparison, here is the lighting setup of the Wolverine:

Not quite as elaborate, but quite effective for the small size, and it provides fairly even lighting for the tiny light table. The LEDs feed into the relatively thick diffuser plate at a 90 deg angle. Because of the shape of the diffuser, direct light is almost guaranteed to be reflected by the surface instead of passing through. What the diffuser is really made of, I can’t easily tell. It’s littered with dots that appear to sit at different depths in the material.

1 Like

This is great. Thanks for the teardown, @mbaur!

@cpixip I was mulling over your design a bit more and was wondering a few things:

Did you notice any difference in running temperature on the different LEDs? Like the top one staying cooler because the hot air has an easy way out? If so, how about setting the baffles or tongues at 60 degrees instead of 120, effectively putting all the light sources on the top half? And to avoid direct light from the LEDs, maybe try adding a slight angle to the LED openings toward the front? That way the LEDs would be “aimed” more towards the back of the sphere, away from the opening where the light exits.

And the second question I thought about was the surface: Have you tried reflective spray paint to coat the inside of your sphere? There might be some rattle can paints that could work.

And thank you so much for publishing your whole design evolution. Especially the steps that didn’t work so well are very helpful in knowing what not to try :slight_smile:

… well, the 3D printing takes about 2 days on my printer, which is a cheap Anet A8 (200 €).

Add about half an hour for spray-painting the inner side of the sphere. Assembly is rather simple and might be done in a few hours. You take the 3 LEDs with their square PCBs and solder wires to the pads of each PCB. Then an adhesive pad is used to glue the LED PCB and the heat sink together. I guess you will need at most an hour for this task. The next step is to insert the LEDs together with their supply wires into the grooves of one half of the unit.

This cut-away rendering shows how this is done (I hope :slight_smile:). The straight dark red lines are the wires:

Once that is done, the back and front halves of the unit are pushed together. There’s a rim on one half and a corresponding groove on the other, so some sort of snap fit is achieved. Just to be sure, 3 long M3 screws are used (in the bottom part of the cut above, you can see the channel for these screws). All of that doesn’t take too much time, and that is the final assembly step.

With respect to costs: each LED-unit is composed of

  • one Osram LED on a 10x10 mm square PCB, about 3 € (I am in Europe)
  • heat conducting pad, about 10 cents
  • 10 x 10 x 10 mm heat sink, about 1 €

So that would be about 4 € per LED-unit, for a total of 12 € for the LEDs. The PLA material for printing the sphere is negligible, probably less than 2 €, while the printing time is not (2 days with my printer).

The real costs are probably hidden in the electronics which drive the LEDs - these are programmable constant current sources. For my setup, I did not want to use PWM-type LED dimming (which the usual LED strips do).

However, I can’t really give an estimate of these costs, since I used parts I already had available. Basically, the electronics consist of two dual DACs (10-bit resolution, SPI interface), a quad operational amplifier and 4 power FETs on heat sinks. Part costs might be around 30 €, at most. I created a first (through-hole) version of a PCB for this unit, but that circuit design still needs some improvements.
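
Just to illustrate the principle of such a DAC-programmed constant current source: in a generic op-amp + FET current sink, the op-amp servos the voltage across a sense resistor to the DAC output, so the LED current is simply V_DAC / R_sense. The sketch below uses an assumed 2.048 V reference and a 1 Ω sense resistor - these are not the values of my actual board:

```python
# Illustration of how a DAC value maps to LED current in a generic
# op-amp + FET constant current sink: I_led = V_dac / R_sense.
# Reference voltage and sense resistor are assumed example values.
V_REF = 2.048        # DAC full-scale output voltage (assumed)
DAC_BITS = 10        # 10-bit DAC, as mentioned above
R_SENSE = 1.0        # current sense resistor in ohms (assumed)

def dac_code_for_current(i_led_ma):
    """Return the DAC code that requests the given LED current (mA)."""
    v_dac = (i_led_ma / 1000.0) * R_SENSE
    code = round(v_dac / V_REF * (2 ** DAC_BITS - 1))
    return max(0, min(2 ** DAC_BITS - 1, code))

# The drive currents quoted further down the thread:
for name, ma in [('red', 294.7), ('green', 425.0), ('blue', 278.5)]:
    print(name, dac_code_for_current(ma))
```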

Which brings me to your second question, the software. I do develop my own software. As my system splits the work across three processors, the software environment I use depends a little bit on the processor.

First, there’s an Arduino Nano which drives four stepper motors, reads potentiometer values from two tension control arms and adjusts the LED currents to the requested values. This part of the software uses the Arduino IDE (so essentially C++).

The Arduino gets its commands via a serial interface from a Raspberry Pi 4. Connected to this Raspberry Pi is also the camera used to scan the film. It features a v1 sensor chip and a Schneider Componon-S 50 mm as the lens. Both the camera and the serial communication with the Arduino are handled by a server running on the Raspberry Pi, written in plain Python 3.6.

This setup is very similar to those of other people who have utilized a Raspberry Pi for telecine. A special library which is utilized in the server software is the “picamera” library. The server is able to stream low-res images (800x600) @ 38 fps, but drops down to a few frames per second for the highest resolution and color definition.

Anyway, I do use a special version of the picamera library which is able to handle lens shading compensation. This is absolutely necessary if - as I did - you change the lens of the camera (remark: via lens shading compensation, a partial compensation of the color shifts introduced by a lens change can be achieved. This turned out to be sufficient for my needs in the case of the (older) v1 camera. The newer v2 cameras become unusable if you change the lens).
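
The principle behind such a compensation can be illustrated with a flat-field capture of the empty, evenly lit gate. Note that the real lens shading table is applied to the raw Bayer channels on a coarse grid inside the camera firmware and its exact format depends on the modified picamera version, so the following numpy sketch only shows the idea:

```python
# Sketch: derive per-channel shading gains from a flat-field capture of
# the evenly lit, empty gate, and apply them to a scanned frame. This is
# only an illustration of the principle, not the firmware table format.
import cv2
import numpy as np

flat = cv2.imread('flatfield.png').astype(np.float32)     # B, G, R flat-field capture
flat = cv2.GaussianBlur(flat, (0, 0), sigmaX=15.0)        # suppress grain and noise

h, w, _ = flat.shape
center = flat[h // 2 - 8:h // 2 + 8, w // 2 - 8:w // 2 + 8].mean(axis=(0, 1))

gains = center[np.newaxis, np.newaxis, :] / np.maximum(flat, 1.0)  # per-pixel, per-channel gain

frame = cv2.imread('scan.png').astype(np.float32)
corrected = np.clip(frame * gains, 0, 255).astype(np.uint8)
cv2.imwrite('scan_corrected.png', corrected)
```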

Well, the Raspberry Pi server delivers a camera stream plus additional camera data to client software which is running on a rather fast Win10 machine. This software is also written in Python 3.6 and uses a few additional libraries. Specifically,

  • Qt-bindings (PyQt5) for the graphical user interface
  • pyqtgraph for graphical displays like histogram and vectorscope
  • cv2 for a lot of image processing
  • and finally numpy for the rest

This client software can request an exposure stack from the Raspberry Pi and essentially stores the original .jpg files from the Raspberry Pi camera on disk for later processing.

The next piece of software is a simple Python script which does the exposure fusion (Exposure Fusion). While cv2 has its own implementation, I actually opted to implement this myself in order to have more control over the internal workings. This script takes the different exposures of the same frame and combines them into a single intermediate, stored again as a 48-bit RGB image on disk.
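
For reference, the equivalent operation with cv2’s built-in Mertens fusion looks like this (shown only as a sketch of the step - my own implementation differs in the details):

```python
# Sketch of the fusion step with OpenCV's built-in Mertens algorithm,
# writing the result as a 16-bit-per-channel (48-bit RGB) intermediate.
import cv2
import numpy as np

paths = ['frame_exp%02d.jpg' % i for i in range(5)]
stack = [cv2.imread(p) for p in paths]

merger = cv2.createMergeMertens()       # contrast / saturation / well-exposedness weights
fused = merger.process(stack)           # float32 result, roughly in [0, 1]

fused16 = np.clip(fused * 65535.0, 0, 65535).astype(np.uint16)
cv2.imwrite('frame_fused_48bit.png', fused16)
```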

Note that this exposure-fused image is not a real HDR image. Actually, the client software mentioned above has the subroutines available to calculate real HDR imagery (which involves estimating the gain curves of the camera and then combining the exposure stack into a real HDR image), but I did not find a satisfactory way to tone-map the resulting HDR movie into a normal 24-bit RGB stream. So I settled on the exposure-fusion algorithm of Mertens.

Anyway, the 48-bit pseudo-HDR images are then read by my third piece of software, also written in Python 3.6 + PyQt5 + pyqtgraph + cv2 + numpy, which allows me to do sprocket registration and various color and other adjustments. Here’s a screenshot of this program to give you some idea:

The 48-bit intermediate image can be seen in the top-right corner of the frame display for reference. At this point in time, the software runs stably, albeit a little bit slowly. Also, too many experiments have left their traces in the code. So I am planning a total rewrite of this software once I think it has reached its final form…:wink:
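
The sprocket registration itself boils down to locating the bright sprocket hole in a strip along the film edge and shifting the frame to a reference position. The following is just a minimal illustration of that idea, not the code used in the program above; the column range and threshold are made-up values:

```python
# Minimal illustration of sprocket registration: find the bright hole in
# a fixed strip on the film edge and shift the frame vertically so the
# hole centre lands at a reference row. Strip and threshold are examples.
import cv2
import numpy as np

def register_frame(img, strip=(0, 120), ref_y=540, thresh=240):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    profile = (gray[:, strip[0]:strip[1]] > thresh).sum(axis=1)   # bright pixels per row
    rows = np.where(profile > (strip[1] - strip[0]) * 0.5)[0]     # rows crossing the hole
    if rows.size == 0:
        return img                                                # no hole found, keep frame
    dy = ref_y - int(rows.mean())                                 # vertical offset to reference
    M = np.float32([[1, 0, 0], [0, 1, dy]])
    return cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))

frame = cv2.imread('frame_fused_48bit.png')     # read back as 8 bit here for simplicity
cv2.imwrite('frame_registered.png', register_frame(frame))
```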

@mbaur: this teardown is interesting! Because: this setup is very similar to the illumination setups used in LCD screens (edge-lit backlights). The white dots which are visible are used to “extract” the light from the sideways-placed LEDs perpendicularly into the surface to be illuminated. Sometimes (in LCDs), these dots vary slightly across the surface to counterbalance the otherwise uneven light distribution of such a setup. So actually, ripping apart an old LCD screen to get the diffuser might be an interesting basis for some experiments with a brand new telecine lamp! :joy:

1 Like

I used to work for a company that designed and built integrating-sphere light sources. The very fragile barium sulfate coating was typically replaced by Spectralon, a trademarked form of PTFE (Teflon). In this case the sphere was made from a block of that material. It did not just reflect at its surface, but throughout the volume of the material, so the wall thickness had to be kept high.

There were specialty sources for which we used good old-fashioned Krylon 1502 Flat White spray paint. We had to use a surface scatterer, because we were working with very short laser pulses that would get stretched by passage through the Spectralon. The pulse spreading due to the integrating sphere geometry alone was much less (the spheres were quite small). But that is not an issue in the Kinograph application anyway.
Why is it that you, @cpixip, can’t use PWM LEDs? Is there an exposure effect due to too low a frequency driving the modulation?
Also, the 5% limit for the openings was sometimes violated (by a lot) without noticeable non-uniformity.

@mbaur:

Did you notice any difference in running temperature on the different LEDs? Like the top one staying cooler because the hot air has an easy way out?

No, not really. After adding the air ducts, the temperatures of the LEDs were quite similar, within a few degrees Celsius. There was a larger difference without the active cooling, exactly as you described above. Note that the LEDs are driven with slightly different currents, as they have different efficiencies. Typical values are 294.7 mA (red LED), 425 mA (green LED) and 278.5 mA (blue LED). That by itself also results in slightly different temperatures. The cooling with the little fan (it is a 5V fan as used for Raspberry Pi units) is quite effective.

If so, how about setting the baffles or tongues at 60 degrees instead of 120, effectively putting all the light sources on the top half?

That could probably be done by going back to the original design, where the wires and heat sinks are kept outside the unit. The current design holds the LEDs in place mainly by the supply wires. These wires run through straight channels in the body of the integrating sphere. Easy mounting, but you also have to dismantle the unit if you want to replace an LED…

And to avoid direct light from the LEDs, maybe try adding a slight angle to the LED openings toward the front? That way the LEDs would be “aimed” more towards the back of the sphere, away from the opening where the light exits.

This is actually the current design (have a look at the cut-away drawing I posted above). The LEDs are tilted by 45°, and the baffles are now just small tongues close to the LEDs which also act as fixation points.

And the second question I thought about was the surface: Have you tried reflective spray paint to coat the inside of your sphere? There might be some rattle can paints that could work.

No, I have not tried any other spray paints. I printed the second version in black PLA in order to reduce the light shining through the unit, but that did not help too much. From the theory of the integrating sphere, you do not want glossy, mirror-like surfaces. Ideally, the surface should be “ideally dull” (more technically: it should have a Lambertian reflectance function). Barium sulfate is a material which comes close to this. I simply used several coats of white radiator paint. That is far from ideal, but currently sufficient for my setup (it mixes the light from the LEDs OK, and the exposure times are short enough that I can work with them).

@bwrich:

Why is it that you, @cpixip, can’t use PWM LEDs? Is there an exposure effect due to too low a frequency driving the modulation?

At least that was what I was afraid of. The camera I am using, a standard Raspberry Pi camera, is a rolling shutter camera. I was afraid that if I used pulsed LEDs, the intensity variation of the LEDs might show up as bright and dark horizontal stripes in the images I am taking - or at least as some sort of color/intensity noise.

Of course, a high enough PWM frequency would certainly take care of any such effects. But I am not sure how high such a frequency would need to be. I think WS2812 LEDs, for example, typically work at 800 kHz. That would result in 800 pulses during a single exposure time of 1/1000 sec - probably enough pulses averaged over that exposure time to avoid any negative effects on image quality.

For me it was easier to use constant current sources, as I had the parts already available.

An additional comment: my decision not to use PWM was made years ago, and I cannot really remember the research I did at that time.

Anyway, while trying to clarify some points a little bit more, I discovered that the RGB LED I cited in my first reply to your question, the WS2812, seems to have a PWM frequency of only 400 Hz, not the 800 kHz I mentioned above - that is the frequency at which these units communicate. Now, 400 Hz is way too low not to show up as intensity stripes at short exposure times.
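
A quick sanity check of these numbers (the 20 kHz row is just an arbitrary middle value for comparison):

```python
# How many PWM periods fall within one exposure? With less than a handful
# of periods per exposure, a rolling shutter renders the modulation as
# bright/dark stripes.
def pulses_per_exposure(f_pwm_hz, t_exp_s):
    return f_pwm_hz * t_exp_s

t_exp = 1.0 / 1000.0                               # 1/1000 s exposure
for f_pwm in (400.0, 20000.0, 800000.0):
    print(f_pwm, pulses_per_exposure(f_pwm, t_exp))

# 400 Hz    -> 0.4 periods : severe banding to be expected
# 20 kHz    -> 20 periods  : probably acceptable
# 800 kHz   -> 800 periods : the modulation averages out completely
```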

The trouble you can get into with PWM’ed LEDs is well known; see for example this video Oh Come All Ye Faithful LED Fix Comparison, where a recording suffering from PWM is fixed. Also, this short article LED Color Shift Under PWM Dimming discusses other issues related to PWM’ed LED setups.

Things like this pushed me toward my decision to use constant current sources for driving the LEDs, just to be on the safe side.

2 Likes

@cpixip I am making some improvements to the light source for the second round of a frame-by-frame 8mm/Super 8mm DIY.

First and foremost, I have no clue about the particulars of the best light source. I did some experimenting with a circular polarizer, placing it in various locations and comparing, and had some interesting (and some puzzling) results. The setup is light source - film - inverted Nikkor 50mm f/2.8 - bellows - DSLR Nikon D3200.
When the polarizer was placed between the light source and the film, there seemed to be a slight improvement in details (dust/scratches) and the grain seemed less prominent. In other locations, the grain seemed to improve at the expense of fewer details. To say that the changes were subtle is an understatement… but given my clueless approach, I would like to ask if you have any experience before I chase the rabbit down this rabbit hole.
Thank you

Interesting - were you by any chance using pairs of polarizers? Or was it just the one circular polarizer placed between the light source and the film?

The Callier Effect has more to do with collimation but it’s possible a similar effect could be observed with polarized light… you could also have a well-collimated light source and I just don’t know about it :slight_smile:

I only used a single circular polarizer. I do not have a linear polarizer, although I am thinking about getting some linear polarizing film to do more testing.
I definitely do not have a well-collimated source.

If you introduce polarizers into your imaging path, be aware that there exists the so-called photoelastic effect. Many translucent plastics show stress birefringence, and with polarizers you can make colorful images of the stress distribution in a sheet of plastic.

With respect to “optimal” LEDs: I actually tried to approach that issue somewhat scientifically and calculated the transfer functions of my intended “LED -> film -> camera” setup. To do so, you need the emission spectra of the LEDs, the transmission spectra of the film stock you are working with, and finally the spectral response curves of the camera chip you are using. Needless to say, it is quite difficult to get hold of such data.
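
In essence, the calculation multiplies the three curves and integrates over wavelength. Here is a sketch of the idea; the data files are placeholders for digitized spectra, one relative value per line, sampled on the wavelength grid:

```python
# Sketch of the transfer-function calculation: LED emission x film
# transmission x sensor response, integrated over wavelength. The input
# files are placeholders for digitized spectra sampled on the grid 'wl'.
import numpy as np

wl = np.arange(400, 701, 5)                         # wavelength grid in nm

led = np.loadtxt('led_red_emission.txt')            # relative LED emission
film = np.loadtxt('film_transmission_red.txt')      # dye transmission of the film stock
sensor_r = np.loadtxt('sensor_response_red.txt')    # camera red channel response
sensor_g = np.loadtxt('sensor_response_green.txt')  # camera green channel response

signal_r = np.trapz(led * film * sensor_r, wl)      # main response in the red channel
signal_g = np.trapz(led * film * sensor_g, wl)      # cross-talk into the green channel

print('cross-talk green/red: %.3f' % (signal_g / signal_r))
```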

I did the calculations anyway and came up with an “optimized” setup. However, as it turned out, this setup was far from optimal. For example, the red LED was emitting way too far into the infrared, and the scanner picked up and amplified an uneven exposure of the film stock which I did not see in a normal projection of the same movie.

So in the end, I simply tried out different sets of LEDs until I got something workable for me (details are noted above somewhere). It seems to depend only slightly on the film stock, and much more on the camera and how its internal color calculations work - and of course on the design of the illumination system. I opted (as detailed above) for a very diffuse one - it reduces the number of scratches you are going to pick up. But, as @johnarthurkelly already noted, it reduces image contrast…

It’d be very interesting to see the work you’ve done calculating the transfer functions for your setup @cpixip!

Would you mind sharing your camera and lens setup as well? I believe this information could help shed some light on the concept of the imaging chain/system for other group members.

@PM490 @cpixip I actually think both of you would find this paper interesting:

Optical Detection of Dust and Scratches on Photographic Film

@johnarthurkelly: well, that was long ago (2018) and did not result in anything usable. What I did was basically the following: I searched the internet for the characteristic curves of movie stock and of the camera I was using at that point in time (a Raspberry Pi v1 camera). I then digitized these curves from 400 nm to 700 nm and combined them. This allowed me to calculate, for a specific LED, the main response as well as the cross-talk into the other channels. Here’s an example:

Unbenannt

The red graph is the expected main response, the green line is a slight cross-talk. The idea was to maximize the difference between the color channels to capture most of the colors. However, the set of LEDs that I ended up with did not perform well. Specifically, as far as I can remember, the following issues arose:

  1. The LEDs chosen had quite different efficiencies. I only had access to relative intensity graphs, and there was no easy way for me to correct for that.
  2. The red LED turned out to be too much on the IR edge, actually enhancing a fixed dark red pattern which might have been caused by uneven coverage of the daylight filter of the Super-8 camera. This was only visible in a test movie I bought via eBay and in my scans of that movie, not during normal projection of that movie. However, it prompted me to abandon that approach. I do not have a good example available any longer, but here
    vlcsnap-2020-03-02-20h19m59s476
    you can notice a dark red glow in the lower right corner of the frame, in the shadows. It is a pattern fixed to the film frame and has a shape similar to that of a too-small filter.
  3. I discovered that the cross-talk of the Raspberry Pi v1 camera chip, when combined with my Schneider 50 mm lens, is much more noticeable than the cross-talk I was trying to minimize with my transfer calculations anyway…

So in the end, I simply bought a bunch of LEDs and swapped them around until I got something which worked… :slightly_smiling_face:

Ok. Now for the camera/lens setup. Initially, I started with a Raspberry Pi v1 camera. I was aware that there is a slight color shift introduced by the mismatch of the microlens array of the sensor and the lens I was going to use: a Schneider Componon-S 50 mm.

But I developed a technique to compensate for that, basically taking several exposures, calculating an HDR image from them and then compensating in HDR space. However, that was a slow process.

When the v2 version of the Raspberry Pi camera came out, I immediately tried to switch to that camera - only to find out that the mismatch between the microlens array and a long focal length lens (compared to the standard lenses these cheap sensors are designed for) is even worse. However, at that point in time the Raspberry Pi camera software introduced the possibility of a lens shading table. I implemented that in my software, and I got reasonable results with the v1 camera sensor, using a modified picamera library. It was much faster than my HDR-space-based compensation.

With the newer v2 camera sensor, however, the results kept being disappointing. I could compensate the color shifts, but the cross-talk resulted in a noticeable loss of saturation towards the edges of the image frame. So I reverted to the v1 sensor chip.

I finally abandoned the Raspberry Pi setup altogether because I discovered that certain colors were not reproduced well enough - especially brown tones, which still showed a tendency to be saturated in the center of the frame but desaturated at the edges.

My current setup now consists of a See3CAM_CU135 camera mounted onto a Novoflex bellows with a custom 3D-printed attachment. On the other side, the Schneider lens is mounted, fixed at an f-stop of 5.6. Supposedly, this is the f-stop with the sharpest image. This setup currently sits on a freely movable platform which is moved as a single unit to get the frame into sharp focus. There is certainly room for improvement here.

Here’s an image for illustration

In the background, you can see the film gate attached to an integrating sphere, which I described in detail above. A note about the See3CAM_CU135 camera: if you consider this camera, it is best to use a Linux-based host computer and v4l2. At least, I was not able to interface the camera well enough with Win7 or Win10 systems to use it. Example captures can be seen in my post in this thread: HDR / Exposure Bracketing
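
On Linux, one simple way to grab frames from a UVC camera like this is OpenCV’s V4L2 backend. This is just a sketch - the resolution and pixel format below are example values, and camera-specific controls may still require v4l2-ctl or the vendor’s tools:

```python
# Sketch: grab a single frame from a UVC camera via OpenCV's V4L2 backend.
# Resolution and pixel format are illustrative example values.
import cv2

cap = cv2.VideoCapture(0, cv2.CAP_V4L2)
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'UYVY'))
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 3840)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 2160)

ok, frame = cap.read()
if ok:
    cv2.imwrite('see3cam_test.png', frame)
cap.release()
```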

And finally: thanks @johnarthurkelly for the link to the paper. Indeed interesting!

3 Likes

Thanks @cpixip for the setup, and @johnarthurkelly for the link to the paper. Interesting foundation for playing some more with the polarizer setup. Thank you

1 Like

@cpixip Can you explain your gate design? I see the two tabs for horizontal alignment on springs, but I also see two ramps on the right side of the film path. Looks very interesting, but I’m unclear on each part’s function.
Thanks!

@junker - Ok, here’s the story of this gate design.

My goal was to create a mostly 3D-printed design of a Super-8 film scanner. The illumination approach I am using is an integrating sphere (see above). For its intended function, this light source requires a film gate very close to the exit pupil of the sphere. If that is the case, small scratches are reduced by the mostly diffuse illumination.

My initial attempts with film gates salvaged from old Super-8 projectors turned out way too bulky, so I decided to take on the challenge and design a very flat 3D-printed gate. The specific challenge here was that my 3D printer, an ANET A8, can only print reliably at about 0.2 mm in the z-axis. That is larger than the average thickness of Super-8 film stock.

The core design is a block with a V-shaped groove, which guides the film only on the edges:

micro_gate - 10.0_E

The film channel of the V-groove has the minimum width possible with my printer, about 0.2 mm. Two identical blocks together with a mounting box comprise the film path:

The two V-grooves holding the film in place are attached to the enclosure via some 3D-printed springs; I borrowed that idea from the designs of xyz flexure stages which are routinely used for very precise movements. These springs are very stiff in the direction of the camera, so the film always stays in focus, but they move easily if the film width or film thickness changes.

Of course, it took several iterations before the spring action worked as intended. Here are some examples of the iterations this design went through:

If I do a next generation, I will fix the lower V-groove to the enclosure. Only the upper V-groove will be able to move via springs. This will minimize a slight wobble which is noticeable in the current design whenever a film splice is encountered. Super-8 film was spliced together with adhesive pads, and these wrap around on the side of the perforation, increasing the width of the film material on that side only. With the current design, both V-grooves adapt and move; ideally, only the upper V-groove should react.

Finally, the two ramps you are seeing in the final design are just guides which ensure that the film, when inserted manually into the gate, is aimed directly at the tiny V-grooves of the film gate. Without these ramps, it is rather difficult to insert the film correctly into the gate. Here are two enlarged views of these ramps and the V-grooves:

If you look closely, the ramps do not reach the level of the V-grooves, ensuring that the ramps never touch the film when the scanner is in operation. They are just there to help with the manual insertion of the film.

In closing, here’s a full view of the gate.

micro_gate - 10.0_A

I hope the images and remarks above have clarified the design; in the end, it’s a rather simple piece of plastic, with only three 3D-printed parts.

3 Likes