Lighting Research Results

On the barium sulfate/paint setup - did you mix that by weight or by volume? We’re thinking of trying it out and testing it with a spectrometer, along with some straight paint, just to see the difference.


@friolator It was by weight - I have a precision scale at home but no graduated beakers :smiley:

Of note, I also added about 5g of water for every 50g or so of solution to aid mixing and to bring it to a consistency I could apply more easily. Depending on the exact formulation of the paint base being used, this may or may not be necessary. I didn’t test alcohol (thinking that it might evaporate too fast to be useful) but it should work too, in theory.

I also have samples of the 3 beta versions of Stuart Semple’s “White 2.0” that I’ve been meaning to test - I’ve had really good luck using their “Black 2.0” on some light baffles at work, so I figured their white might be interesting!

Thanks!

That’s funny about the White 2.0 - I contacted them as well about that, but they didn’t have any samples to send me. They did say that one of the formulations they were testing used barium sulfate. We’re using Black 3.0 inside our camera housing.

I’m going to be working on the integrating sphere for our 70mm scanner this week. We’ll be 3D printing it in sections; once the LED board arrives and I have a firm reference, I can test some prints against it. It’s a monster, so it’ll sit on an aluminum platform. Our LED design has 6 COBs for sequential RGB scanning: three optimized for print and three for color neg. For the sphere, we’re thinking of a cylindrical design with a half-sphere at the back, near the entry port, largely because of how we have to situate it in our retrofitted Cintel scanner chassis - there isn’t enough room for a true sphere with ports the size we’d need.
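As an aside for anyone sizing a sphere like this: the first-order integrating-sphere relation shows why coating reflectance and total port area dominate the light budget. A minimal sketch with textbook numbers - the reflectance values below are generic assumptions, not measurements of any coating discussed in this thread:

```python
# First-order integrating-sphere relation (see e.g. Labsphere's technical
# guides): M = rho / (1 - rho * (1 - f)), where rho is the wall reflectance
# and f is the fraction of the sphere's surface area taken up by ports.

def sphere_multiplier(rho: float, port_fraction: float) -> float:
    """Average multiplier the sphere applies to the flux reaching the wall."""
    return rho / (1.0 - rho * (1.0 - port_fraction))

# Why the coating matters: with 5% of the surface cut away for ports,
# going from ordinary white paint (~80% reflective) to a barium sulfate
# coating (~97%) nearly quadruples the light at the exit port.
print(sphere_multiplier(0.80, 0.05))  # ~3.3
print(sphere_multiplier(0.97, 0.05))  # ~12.4
```

It also shows why oversized ports hurt: every percent of wall area you cut away comes straight out of the denominator.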

I already have a bottle of barium sulfate and just need to get the paint, so we can do some tests in the next couple of weeks.

@johnarthurkelly did you ever try a model spray gun?


I actually meant to - one of my former technicians was really big into painting models and I was about to talk with him about options before the series of unfortunate events took place in March here in NYC. I’m not particularly familiar with them so I definitely want to talk to someone who has used the tool.

@johnarthurkelly I’ve been meaning to ask you for ages: what was your rationale behind choosing the Yuji LEDs, beyond the CRI and colour spectrum of the specific model being close to the projector bulb?

I’m looking now for my own system, and it still seems there isn’t a lot of choice on the market for this type of COB usage - most are fairly low CRI. Opinions seem to differ on whether that’s an issue (some argue high CRI COBs are actually less optimal, and that it’s better to create a colour palette yourself from multiple COBs).

Apart from the Yuji ones which are £££ - I’ve found very cheap ones from Bridgelux and Cree that seem to fit the bill on various counts. This is an example: https://uk.farnell.com/bridgelux/bxrh-30h1000-b-73/cob-led-warm-white-1033lm-3000k/dp/3106841?ost=bridgelux+bxrh-30h1000-b-73

But I can’t figure out what the gap is between this, at the very cheap end of the spectrum, and a fairly expensive Yuji offering. Surely the cost hasn’t come down that much just owing to time. Are the specs as they seem (e.g. CRI of 97)? Usually you get what you pay for (not always, but generally speaking!), and the price disparity is huge…


The choice at the time came down to basically those aspects, plus availability and ease of working with. I knew that I’d need a significant amount of light output, and my design of the illuminant for my own machine couldn’t accommodate a large array of LEDs without the addition of condensing optics. I had a goal of getting the machine running by the end of 2018, so I went with the simpler design with a single LED. The Yuji fit the bill based on specs, but I wound up liking it a lot and I’ve had a great experience with it so far.

I debated switching to a 3- or 4-wavelength model, or even using a white LED with supplemental narrow-wavelength LEDs if the spectrum was deficient in some major way. I haven’t had any complaints with the spectrum of the Yuji diodes, so I haven’t spent much additional engineering time on that. I do think it is worth exploring, but I had such a plethora of fun engineering problems to solve just getting the machine running in the first place :wink:

I’ve heard very good things about Cree but haven’t explored much with them beyond reading the whitepapers. I can’t speak to exactly why Yuji’s price point is where it is, but I think they found a toehold with a Western market that was willing to pay for their COB models and the high CRI. The Bridgelux model you linked looks pretty good; it seems to be a little lower in relative spectral distribution down in the shorter wavelengths, but I’d be very curious to see how it compares to the Yuji model given the price point!

Thanks John, that makes complete sense.

I think there is huge value in reducing the cognitive load early on by not taking on too many engineering problems at once. Having done a lot of DIY projects like this in the past (building guitar amps, designing a fuel injection system for a classic car, etc.), I know it’s very easy to get overwhelmed and end up chasing your tail. Whereas aiming for a minimum viable product as fast as possible really does let you test your assumptions quickly, get some value early on, and stay focused.

I did some HV20 camcorder based transfers to take the camera out of the equation to start, which added to the overall cost, but it allowed me to get started much faster and get other problems (like triggering) out of the way. Not quite the same as your LED choice though, as the Yuji ones seem strong candidates for a permanent solution to the lighting problem.

I’m really interested in the idea of varying RGB intensity selectively in software to squeeze every bit out of the camera. The main problem as I see it is that, now that Frank Vine isn’t offering his solution to amateurs any more, the complexity goes up considerably - that approach will need very heavy software development.
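The core of that idea can at least be sketched in a few lines. Below is a purely illustrative balancing step - the set_led_levels() hook is hypothetical, and this is not a reconstruction of Frank Vine’s actual method:

```python
import numpy as np

def channel_gains(frame: np.ndarray, target: float = 0.90) -> np.ndarray:
    """Per-channel scale factors that push each channel's highlights
    toward `target` of full scale. `frame` is HxWx3, floats in [0, 1]."""
    # 99.5th percentile per channel: a robust "near-maximum" that
    # ignores a few hot pixels.
    near_max = np.percentile(frame.reshape(-1, 3), 99.5, axis=0)
    return target / np.maximum(near_max, 1e-6)

def set_led_levels(levels):
    # Hypothetical driver hook - replace with whatever controls your LEDs.
    for channel, level in zip("RGB", levels):
        print(f"LED {channel}: drive at {min(level, 1.0):.2f}")

# Example: a dummy preview frame where blue badly under-uses the sensor.
preview = np.random.rand(480, 640, 3) * np.array([0.9, 0.6, 0.3])
current_drive = np.array([0.5, 0.5, 0.5])
set_led_levels(channel_gains(preview) * current_drive)
```

In practice you would run this on a preview exposure of each scene (or each reel) and then hold the levels fixed, so that the colour relationships stay consistent across frames.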


Just gonna leave this link I stumbled upon here.

This guy designed his own LED board for a telecine. Might be useful info, though it’s way above my grasp of the subject.

Thanks for posting @WDaland I was looking for that link a few days ago and couldn’t remember what to google!

@don007 check out that link^

I plan to create something similar, but I want to make an LED ring that will be placed inside an integrating sphere. Basically I plan to create a much cheaper Lasergraphics ScanStation :stuck_out_tongue:. They are also using integrating spheres: Scanstation integrating sphere. Frank’s work and research inspired me to pick a monochrome sensor and do multiple exposures - it is the only way to capture the dynamic range of positive film. Something like cpixip has been doing, but with separate red, green and blue photographs. Since a monochrome sensor has a higher dynamic range, I hope I will need at most 9 exposures per frame.
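To make the “at most 9 exposures” concrete: with a monochrome sensor that means three LED pulses - red, green, blue - at each of three exposure levels. A minimal capture-loop sketch; led_pulse() and camera.capture() are hypothetical stand-ins for the real hardware interfaces:

```python
# Hypothetical stand-ins for the real LED-ring driver and monochrome
# camera, just to show the capture order for one film frame.
EXPOSURES_MS = [2, 8, 32]           # three exposure levels, 2 EV apart
CHANNELS = ["red", "green", "blue"]

def scan_frame(camera, led_pulse):
    """Capture 3 channels x 3 exposures = 9 monochrome images per frame."""
    captures = {}
    for channel in CHANNELS:
        for ms in EXPOSURES_MS:
            led_pulse(channel, duration_ms=ms)    # fire one LED bank
            captures[(channel, ms)] = camera.capture()
    return captures  # fuse per channel afterwards, then stack to RGB
```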

Hello. Great plan :slight_smile: but there is one big BUT. The problem is with the LEDs (at least that’s how I understood John): finding the right wavelengths for the light source, i.e. choosing the right red, green and blue to reproduce the colors correctly. As for HDR, I don’t even know if it makes sense. I tried my scans without HDR and I have all the details in the dark scenes there. When the film is shown on the screen, there is also no HDR :slight_smile: I want an exact copy of the film as you see it when it is projected. The thing with HDR is that Lasergraphics and BMC scan to HDR in real time, and I think they use a maximum of 2 exposures, one long and one short. My camera tops out at 36 fps at 5MP; for a real-time scan with 2 exposures per frame at 24 fps, I would need a camera that can capture 48 fps at 5MP. Problem number 2 is that even if I had that, I don’t know how to put the two exposures together in DaVinci :slight_smile:

PS: I think it’s best to have a 35mm-sized CMOS chip in the camera. Unfortunately, you will never get the best scan without such a chip. ARRI and BMC use 35mm-area CMOS chips. Lasergraphics supposedly also uses a CMOS camera (the JAI CMV20000), but now they have a CCD chip listed on the website, so I don’t know.

https://www.nacinc.jp/wp-content/uploads/2014/12/arriscan_catalog_EN.pdf

Exactly, the LEDs need to be tuned for the film stock. That is why I want to build such a ring myself - that way I can make multiple rings if needed. Almost all the home movies I need to archive are Kodachrome K40 (Super 8), which helps a lot in the tuning department. But there are several badly color-shifted print films that might benefit from this technique to capture what is left of the pigment.

I am going to use a simple intermittent motion mechanism, and I plan to record the sound separately. This way I am not limited by the framerate of my camera. It will be a much slower process for sure, but I believe it’s worth it.

Do you think the light source you want to build could also work for me? I’m not too worried about the colors, because after scanning I would calibrate the camera against a LAD file again; then I should have the colors correct, as they are on the film material.

Well, it will simply be a tightly spaced LED ring with some driver circuitry and a microcontroller - in my case at least three channels, for R, G and B, each of which I can trigger for a given pulse duration. It is to be used in an integrating sphere sized for 8mm film; I expect 35mm would require a larger sphere and thus a larger LED ring.
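On the microcontroller side, triggering a channel for a set pulse duration can be as simple as gating a driver-enable line. A minimal MicroPython sketch - the pin numbers are made up, and a real board would switch constant-current LED drivers rather than bare pins:

```python
# MicroPython sketch: gate three LED-driver enable lines for a set pulse time.
# Pin numbers are invented; real hardware needs constant-current drivers.
from machine import Pin
import time

channels = {
    "red":   Pin(12, Pin.OUT),
    "green": Pin(13, Pin.OUT),
    "blue":  Pin(14, Pin.OUT),
}

def pulse(channel, duration_us):
    """Fire one LED bank for duration_us microseconds."""
    pin = channels[channel]
    pin.on()
    time.sleep_us(duration_us)
    pin.off()

pulse("red", 5000)  # 5 ms red pulse, e.g. fired after the camera is armed
```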

@don007 I would be very interested to see details of a camera that has captured all the dark details of the film, whilst still giving a good balanced exposure. Pretty much every scan you see of movie film has blown out highlights or lost detail in the shadows, relative to the original film - the only exceptions being scenes with low contrast in the first place. Not saying it’s impossible, but I’m somewhat sceptical :smile:

In my experience no digital imaging camera has that kind of latitude, hence the need for HDR.

Well, I can confirm this statement from my experience too (for one, I designed a real-time 3D camera which could operate reliably in dark tunnels, seeing dark-clothed pedestrians right beside bright car headlights; and in another life, I researched the human visual system extensively).

Specifically, there is no single camera known to me which can record, in raw or otherwise, the tremendous dynamic range a Kodachrome 40 film can display. This film stock was meant to be projected, and the human visual system can easily cope with the large dynamic range a projected Kodachrome 40 image has.

A current camera chip with 12-14 bits per color channel can barely cope with that (quantization errors and noise will kick in) - therefore, if you want to record all the information available in the film stock, you will have to take multiple exposures. This in itself is, however, not “HDR”, by the way.

HDR in the technical sense is the combination of several exposures into a single radiance image. This involves estimating the gain curves of the camera/image-processing pipeline used. Once you have these gain curves, you can basically recover the radiances of the scene from the set of differently exposed images. This HDR image is, however, just that - a map of the scene radiances - and thus usually an image you will not be pleased to view.

These images look rather dull. This is mainly because the images we are used to seeing have an S-shaped transfer curve applied to them, crushing shadows and highlights and enhancing contrast in the intermediate tone regions.

So an essential second step of HDR image processing is a tone-mapping step, which transforms the raw HDR image into something we are used to viewing.

This is basically also the case when you are working with camera raw images - only that you have a much reduced dynamic range in the case of raw images, compared to a real HDR.

However, the optimal tone-mapping is very much scene-dependent, and I am not aware of any single tone-mapping algorithm which would be broadly applicable.
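For concreteness, both steps - radiance recovery and tone mapping - are available off the shelf in OpenCV. A minimal sketch; the operator and parameters below are just one choice among many, per the scene-dependence caveat above:

```python
import cv2
import numpy as np

# exposures: list of aligned 8-bit BGR captures of the same frame;
# times: their exposure times in seconds.
def merge_and_tonemap(exposures, times):
    times = np.asarray(times, dtype=np.float32)

    # Step 1: estimate the camera response (gain) curve, then merge the
    # stack into a single floating-point radiance map.
    response = cv2.createCalibrateDebevec().process(exposures, times)
    hdr = cv2.createMergeDebevec().process(exposures, times, response)

    # Step 2: tone-map the radiance map into something viewable on an
    # 8-bit display. Operator and parameters are scene-dependent.
    ldr = cv2.createTonemap(gamma=2.2).process(hdr)
    return np.clip(np.nan_to_num(ldr) * 255, 0, 255).astype(np.uint8)
```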

The tone-mapping step with HDR/raw camera images is also necessary for another reason: our normal output devices have, at least at this point in time, only 8bpp of dynamic range per color channel (some even less, using dithering to hide it). Granted, real HDR displays have been available for some time, but how many people own such a thing? Even if these displays hit the general market a few years from now, I doubt they will be a match for the old Super-8 projector gathering dust in the corner.

Summarizing, I think for the time being, nobody will be able to recreate a “real Super-8 movie experience” by currently available electronic means. This hardware is just not available.

That’s why I derived my own approach: taking 5 different exposures of a single frame, each spaced a little more than one EV apart, and combining them via exposure fusion. You can see here some results (and more information).
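The exposure-fusion step itself (Mertens et al.) also ships with OpenCV; a minimal sketch of that merge, though the actual pipeline described above may differ in its details:

```python
import cv2
import numpy as np

def fuse(exposures):
    """Mertens exposure fusion: blend a bracketed stack of 8-bit images by
    per-pixel contrast/saturation/well-exposedness weights. No exposure
    times, no camera response curve, and no separate tone-mapping step."""
    fused = cv2.createMergeMertens().process(exposures)  # float32, ~[0, 1]
    return np.clip(fused * 255, 0, 255).astype(np.uint8)
```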

The good part is that this process is fully automatic (no manual fine-tuning required) and that it recovers the full dynamic range of the Kodachrome 40 film stock.

However, in that course, the appearance of the Kodachrome 40 film is changed. The final image does not look at all like a projected image. And it does not really look like a real Kodachrome 40 image either, I must confess. But I think that is the price one has to pay when working with current technology.

Note that the final output of my pipeline is not an HDR in the strict sense - the bit depth of the output image is just standard 8bpp per color channel. Since I store the original captures (which are jpeg files, not even camera raw files), I could in principle, once HDR displays and video formats become widely available, rerun the pipeline and create real HDR imagery from the captures. But I guess I will never do this, as it would again involve manual interaction during the tone-mapping step.

Well, in closing, I think I need to remark that professional movie material is usually a different beast to consider. Here, the light situation is normally carefully controlled, with substantially less dynamic range used in any frame than in the average Super-8 vacation movie. I can imagine that in this case high-quality cameras and a processing path based on raw camera data can yield good results - certainly, if the scan is made from the original negative film stock.

But scanning hobby-grade color-reversal film with a single exposure per frame, recovering all the details in the shadows without blowing the highlights, when under-exposed sections directly follow some over-exposed sections? I’d be interested if someone is able to show me how to do this. :wink:


This is a great conversation. Thanks to all who have added their knowledge. I would love to see these approaches in the Kinograph software one day as capture modes. cpixip mode = 5 exposures + a post processing step that automatically gets run at the end. Dare to dream.

This information is mind-blowing - I never considered this. I have a lot of 8mm film with extremely high brightness and very dark shadows, and I never considered that there could be a way to save that information and still make an image worth watching that was not too bright or too dark… thank you all :slight_smile:
Amazing.