I've just emerged from the depths of FreeCAD! It's been a journey, but I'm glad to announce that a parametric model of my Kinograph Lighting Sphere has been crafted, with heavy inspiration from @cpixip's version.
I have released it on GitHub - you can find it here - and I sincerely hope this helps some of you out there.
Greets!
Here is an internet find of a presentation at SMPTE discussing multi-spectral scanning.
Don't shoot the messenger (me): I do not fully agree with the content, with the approach taken on the light, or with some of the explanations given (or lack thereof).
But it is certainly relevant to the discussion here; in fact, some of the slides look like the ones we posted on this thread a year before the presentation. Enjoy.
Well, from an archival point of view, the more data you can capture the better. Of course, a multispectral scan aims in this direction (more data points) as well. But the real art is to capture only what is needed and not more. From an archival point of view, capture what you can afford. But from a presentation/distribution/economical point of view, capture what you need, and nothing more. There is a reason why all images on the internet and elsewhere are stored with 3 values per pixel (be it RGB or YUV or whatever) and not 10 channels or more. You certainly won't get "better" colours by capturing 10 wavelengths instead of 3, and color filters on usual cameras are broadband on purpose. There are a lot of quite questionable statements in this presentation. Nevertheless, it was fun to watch. Things are moving on... Thanks Pablo (@PM490) for that link!
Not only are some of the statements questionable, but so is what was not said. The presenter repeatedly goes on about how inadequate RGB is for color, neglecting to mention that the human eye's cones are... well, RGB!
You complain that not all scans were done by a producer-approved operator. This was actually one of the main points of the report - that the result of the scan is highly dependent on the operator. Ideally the machine should produce the same result every single time, regardless of the operator. That machine does not yet exist.
Without going into specifics, yes there are certain scanners that are "blind" to certain colors.
This is not the fault of the scanner, but a question of application.
Films with applied colors (which were one of Diastor's main subjects) have very specific spectra that are vastly different from the spectra of modern color film. All scanners are built for the latter, and most thus have a narrow bandwidth - which is ideal for getting the most color information from modern film stock.
But when you have, for example, tinted film with similarly narrow absorption, certain colors can fall within the gaps of those scanners.
And if you use a broad-spectrum scanner, you will capture that color, but it will look wrong and will need manual adjustment.
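As a toy illustration of this gap effect (every number here is invented: a hypothetical dye with a narrow absorption band at 530 nm, and two hypothetical Gaussian green channels), one can simulate how much signal a narrowband versus a broadband channel loses to the dye:

```python
import numpy as np

wl = np.arange(400.0, 701.0, 1.0)                  # wavelength grid, nm

# hypothetical tinted-film dye: a single narrow absorption band at 530 nm
dye_T = 1.0 - 0.9 * np.exp(-0.5 * ((wl - 530) / 8) ** 2)

def channel(center, sigma):
    """Relative signal of a Gaussian illuminant/filter through the dye,
    normalized against clear film (so 1.0 means the dye is invisible)."""
    led = np.exp(-0.5 * ((wl - center) / sigma) ** 2)
    return (led * dye_T).sum() / led.sum()

narrow = channel(550, 5)    # narrowband green LED, ~12 nm FWHM
broad  = channel(550, 30)   # broadband green filter, ~70 nm FWHM

print(f"signal through dye: narrowband {narrow:.3f}, broadband {broad:.3f}")
```

The broadband channel registers a clearly larger signal drop than the narrowband channel centered only 20 nm away from the dye peak - the narrow absorption falls into the gap between narrow bands, which is exactly the "blindness" described above.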
There is no single scanner that can produce the best results out of all film color systems. You have to pick the right tool for each job. Diastor tried to assess how different commercial scanners captured color information from various historic film color systems.
And for transparency: I was part of a research team that followed after the Diastor project where we built several multispectral scanners.
Not going to happen. There are too many variables and too many things that one needs to know about the film itself that a machine cannot know. Different kinds of films require different settings and you need to know about the film before you load it. Negative, positive, color, black and white, intermediate, print, camera original reversal, they're all different and a scanner can't know what's there without a person to set things up correctly. If that person doesn't set things up correctly you will get a bad result.
The problem with the way this report was done, and this was acknowledged to me in private emails with one of the authors after it was released, was that there were archives who simply did not have the experience necessary to handle the film they were using. The Director, for example, belonged to an archive that holds primarily black and white film, and they simply didn't deal with color often enough to know how to properly set up the machine for it. They made incorrect assumptions and that affected the result. The authors of the paper then ran with those conclusions.
I should also point out that the scanner at that archive was a first generation Director running software that hadn't been updated in nearly a decade. I confirmed this with Lasergraphics. Conclusions about the current capabilities of the machines were drawn based on a machine using years-out-of-date software. It is painfully clear to anyone who is familiar with these machines that very little, if any, due diligence was done by the authors of the paper to ensure they were getting an accurate assessment of what the machine could do.
Films with applied colors (which were one of Diastor's main subjects) have very specific spectra that are vastly different from the spectra of modern color film. All scanners are built for the latter, and most thus have a narrow bandwidth - which is ideal for getting the most color information from modern film stock.
Please look at the report. They claim on the same page that a cell phone and light box can reproduce tinted color that a half-million-dollar purpose-built scanner cannot. The reason the blue tint in that test didn't appear is because the operator messed up the scan. The evidence of this is plainly visible in the scan itself. The person running the scanner incorrectly hit the "calibrate now" button in the software. This neutralized the blue. How do I know this? Because if it was scanned properly, the area inside of the perforations should be pure white. But it's pink. The pink added to the light in the scanner neutralized the blue tint, and that pink colored the blank space where the perf is.
You don't use the "calibrate now" button on Lasergraphics scanners unless you're scanning color negative. Not on B/W, not on reversal, not on prints. Doing so alters how the scanner sees the image, and the result is that the blue disappears.
Did the authors stop to think "gee, this seems odd - maybe we should double-check"? No. They decided that the scanner must be incapable of properly scanning the film, when in fact I have personally scanned some of that same blue-tinted film (which came from the Harvard Film Archive, one of our regular clients) on our Lasergraphics scanner. And you know what? It looks fine.
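The claimed failure mode is easy to simulate. Here is a toy sketch (the RGB numbers are invented, not measured from any scanner) of what a base calibration on blue-tinted stock would do: the per-channel gains that neutralize the tinted base simultaneously push the open perforation area toward pink and clipping:

```python
import numpy as np

base_raw = np.array([0.55, 0.60, 0.85])   # linear RGB of clear blue-tinted base
perf_raw = np.array([1.00, 1.00, 1.00])   # open perforation: unfiltered lamp light

# "calibrate now" on the tinted base: per-channel gains chosen so that
# the base comes out neutral (i.e. white-balanced against the tint)
gains = base_raw.max() / base_raw

tint_after_cal = gains * base_raw   # blue tint erased: equal R = G = B
perf_after_cal = gains * perf_raw   # red/green boosted past 1.0: pink and clipped

print(tint_after_cal)   # neutral grey - the tint has been calibrated away
print(perf_after_cal)   # R > G > B and above 1.0 - a pink, blown-out perf area
```

This reproduces the telltale sign described in the post: a neutralized image accompanied by a pink, overexposed perforation area.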
There are so many technical issues with the paper that it cannot and should not be taken seriously. I sent an email to the authors immediately upon reading it, and a couple of things I pointed out were corrected. But most of it was not, because it would have fundamentally altered the conclusions that were incorrectly reached because the underlying data was just bad.
Here is the email I sent to them, listing only some of the problems I saw in a first reading of the report:
PAGE 16: Table 1 incorrectly lists the sensor in the Director as a color Bayer area sensor. It is a monochrome area sensor.
PAGE 16/17: The Director does not use a white light source that shines continuously. It's a pulsed light source that triggers the camera exposure. The Director always does separate sequential R, G, B + (optional) IR exposures. This produces three monochrome images, which are then combined by the scanner software into a composite RGB image. The ArriScan is called out on page 17 as being a special case, when in fact the Director functions in exactly the same way.
Section 3.7: LASERGRAPHICS DIRECTOR OVERVIEW
The Director doesn't scan VistaVision. For 35mm, it does 2/3/4-perf only.
The older models are sprocketed, but the currently shipping model is sprocketless.
"The GUI nonetheless has two drawbacks: There is no possibility to zoom into the preview image to check focus or image structure, and although RGB histograms are displayed, they are so small as to be almost useless." - both of these claims are incorrect. The Director uses the same UI as the ScanStation. One can zoom in on an image, and the histogram/RGB parade display can be any of three sizes, the largest of which obscures most of the viewing area. On our system, this is the same physical size as the RGB parade in our color correction suite's hardware scope. We have had our ScanStation since 2013, and the software has been as I described since the first version we've used. If the software you used did not do this, it was likely an outdated version.
"As with the Northlight 1 scanner, the tinted piece of film appeared almost completely black and white; the black and white densities were excellent, but nonetheless without color (Figure 51)." - if the resulting scan is essentially B/W when running tinted film, the operator is doing something wrong. We have scanned tinted positive film numerous times for the Harvard Film Archive and others, on our Lasergraphics ScanStation, and this has never been an issue. If the operator runs a base calibration on positive stock, the operator isn't using the machine correctly. This would neutralize the color, resulting in a B/W image with blown-out exposures in the perforation area as you see in the example (just a guess as to what's happening). With the Northlight, the result should be the same: running a calibration will neutralize the tint to greyscale. When we scan positive film on either our Lasergraphics or Northlight scanners, we make exposure adjustments manually to avoid this.
"The option for output image file formats are 10 and 16 bit DPX and 10 and 16 bit TIFF" - this is incorrect. Aside from the fact that there have been probably two dozen output formats on Lasergraphics scanners for the past 5-6 years at least, none of them (as far as I'm aware) output non-standard 10-bit TIFF. It's 10/16-bit DPX and 8/16-bit TIFF.
Bear in mind this was in 2018 and some things have changed - you can now do VistaVision on the Director, I believe.
If you're writing a paper that claims to provide a comprehensive overview of the available scanners at the time, and you're comparing their capabilities, you'd best start by getting those capabilities correct. Mistaking a mono sensor for a Bayer sensor is a major issue, since they create the image in fundamentally different ways. There were so many mistakes like this throughout the document that I simply don't understand how anyone can take it seriously.
This long post is about specifics. Can you share specifics please?
I tried to go into a bit of detail in my previous post. Scanners that have a narrow band (which for most films is a plus) can be blind to certain colors.
What I didn't mention in my last post is that diffused light can also change color impressions (only for applied colors). I will not go into detail here, as this is just not the right forum.
For people building their own scanners for "normal" color film, the take-away should be: explore the possibilities of using narrow-band LEDs for lighting, rather than a broad white LED. You will pick up better color information.
I understand you are also part of a grant-funded new company for that purpose: https://scan2screen.com/
I was part of the grant-funded research project Scan2Screen. Out of this we spun off a start-up with the same name. This company has not received any funding. I am one of the founding members.
Thanks for your long reply, friolator. I was not involved in the Diastor paper. I've been a cameraman in my past and know from technical tests that there will always be people unhappy with the results. As I read the test, it was done as a "real life test", i.e. with all the human shortcomings being part of the design. To put it to the extreme: what good is a scanner that in theory can achieve a perfect result, if a normal operator cannot achieve it?
Rather than insinuating that the authors had financial interests in some of the companies involved, or even going so far as to suggest that they consciously falsified data, why don't you set out and make a new test with currently available scanners and get that one peer reviewed?
You attach an e-mail that you have sent to the team. On point 1 you say that the table incorrectly lists the Director as having a Bayer sensor. But in that table it says under Director: "Interpolation Bayer sensor: No".
In point two you say that the report says that the Director uses a white light source. Yet the report lists the Director under "scanners that use a lighting system composed of three lights".
If we always believed the pessimists, there would be no progress.
In 1903, the New York Times predicted that airplanes would take 10 million years to develop. Only nine weeks later, the Wright Brothers achieved manned flight.
What is the value in that kind of test? A normal operator who can't do the job correctly probably shouldn't be doing that job, to be honest. And to base conclusions on work done by people who don't know what they're doing doesn't exactly provide any value. So why bother?
In the years since this report came out it has been cited numerous times on forums and mailing lists as a comprehensive survey of commercial scanners. If the authors didn't intend for this to happen, then they should really reconsider publishing stuff like this in the future. But I'm pretty sure they did intend for it to be used exactly as people are using it, because that's how it's presented.
Well, it certainly isn't a good look that the one scanner that comes out on top is made by a company that one of the authors had previously worked for. Or that the primary authors are directly involved in a commercial entity that purports to address issues raised by this paper.
That aside, though, I've never said they falsified data. I'm saying they didn't bother to correct incorrect conclusions that were based on bad data, when the fact that tests were done incorrectly was pointed out to them. Instead I was told that I don't understand how tinted color works - almost verbatim what you've written above.
I have a business to run and too much other stuff going on to take the time to do a proper shootout. The purpose of this paper was to do that, but it was a botched attempt. Plain and simple.
You are looking at a revised version of the report, made after my email. A few days after the paper was released and I pointed this out, they revised it. I have the emails. If you look at the AMIA mailing list archive you'll see that a few days after the initial release there was a second email announcing a revised version. In between was the back and forth I had with them based on the email above. If you look at the original version of the report it is incorrect, but some of what I pointed out was fixed. They corrected a couple of other things too, mostly minor.
Believe me, you're preaching to the choir on that one. I worked for a guy who insisted we'd never be able to stream TV and movies. This was in 2006. His company was out of business not long after.
The difference here, though, is that film requires knowledge not only of the process but of the actual physical film itself and what steps it went through to get there. And I'm not talking broadly or abstractly; I'm talking about the specific reel one might hold in one's hand before loading a scanner. Some of that information can only be gleaned from handwritten notes on the can or on the film itself. Some of it comes from experience and being able to spot subtle things: like knowing that there was one lab in NY that did things a certain way for a few years in the 1970s, which helps to identify what process the film you're holding in your hands may have gone through. There's a lot of guesswork, and the film archive market isn't big enough to benefit from an AI that can do that work, especially if it doesn't have the necessary information.
My point here is that if you're handed a film, sometimes you have to be able to figure out from clues available what you're dealing with, so that you can set up the scanner properly. This requires knowledge about a bunch of different things, and also access to physical (not digital) data about the film, such as notes scribbled on cans. Could it be done automatically? Sure, maybe. Probably with enough money and resources. Will it? I don't think so, because I don't think the money will ever be there to do this.
Bottom line here: a paper was published and it was based on bad data. When that was pointed out to the authors, they chose to fix some of the technical details but not to revisit the major underlying assumptions that caused them to reach incorrect conclusions. To my mind, this paper, and any subsequent paper by the same authors, has no credibility, and their work shouldn't be taken seriously until this is corrected properly.
There is ample discussion above. I'm not sure what you meant by color information; if information is defined as the color content of the film, that is not generally the case. At the cost of information, narrow band would possibly have advantages in color separation.
Not sure I understand the statement. According to the grant, it was for developing a scanner. That IP is the company, and scan2screen advertises that fact very well.
No, it is not a good idea to use narrow-band scanners for scanning film, for several reasons. First of all, films designed for projection, at least, are intended to be illuminated with a broadband light source. Or, to put it more technically, the spectrum of the dyes is expected to be sampled by a continuous sampling function over a large spectral range of light waves. And by the way, the receptors in your very own eye sampling the image on the screen are exactly this: broadly tuned.
Only in the context of broadly tuned filter functions do film dyes work as expected and designed. If you instead sample only three discrete narrowband data points in the spectrum, you will most probably end up with a horrible color science result (for example, you will have a hard time matching the metamers present in human color vision). Case in point: any narrowband sampling does indeed miss certain frequencies in the light spectrum. Pushing this to the extreme: any narrowband scanner will be blind to quite a large spectral range of pure monochrome colors. In the context of film scanning, people using narrowband scanners are only saved by the fact that film dyes normally do not create pure monochrome light - the dyes are actually rather broadly tuned over quite a range of frequencies.
To make this point very clear: the use of narrowband filters and sensors does you no good in film scanning. Correct color science becomes difficult to impossible.
Note that hyperspectral cameras (which use narrowband filters, but many, not only three!) are used for example to distinguish ripe fruits from rotten ones, but not for taking nice looking images. Any Alexa, Red or whatever other camera brand you might pick up will have instead camera filters which are (you guessed it): broadband.
What changed in a year to flip your thinking back again?
For film developed yesterday that still appears as intended, I will agree with all of your points. Using narrow-band illumination (especially at "unusual" wavelengths that don't correspond to the hardware in our eyes) just adds work to the color-correction side of things for little gain.
But for film developed 50 years ago with substantial fading that varies nonuniformly between channels, you already have a big challenge on the color-correction side of things, and broadband illumination introduces inseparable cross-talk between those channels that is essentially irreversible. In that case narrow-band is the only way to reconstruct the color... like you said yourself in February of last year.
The multi-spectral case in particular takes this to the extreme. The article that @PM490 mentioned earlier in this thread shows a nice simulation using 7 to 9 LEDs to reconstruct the full spectral response of each pixel to a spectacular degree of accuracy.
All it would take is a model of the film dye response for each interesting film stock to fit those spectra to three "dye" sliders and even predict the amount of fading in each (granted, using some fancy linear algebra). Shimmy the slider back to 100% to remove the fading, use your dye model to generate a new spectrum for each pixel, run that spectrum through the CIE human eye model to generate tristimulus RGB values, and you'd end up with something like perfect, original color.
For faded film none of that is possible with broadband illumination, is it?
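The fitting step described above can be sketched in a few lines. This is a toy model with invented Gaussian dye density spectra, seven hypothetical narrowband LED wavelengths, and a per-dye fading vector that is simply assumed known (estimating the fading is the genuinely hard part):

```python
import numpy as np

led_wl = np.array([430.0, 470, 510, 550, 590, 630, 670])   # 7 narrow bands, nm

def gauss(mu, sigma):
    """Dye density profile sampled at the LED wavelengths."""
    return np.exp(-0.5 * ((led_wl - mu) / sigma) ** 2)

# hypothetical yellow/magenta/cyan dye density basis (invented, not stock data)
dyes = np.stack([gauss(440, 35), gauss(545, 40), gauss(650, 45)])

true_c = np.array([0.9, 0.8, 1.0])   # original dye amounts ("sliders" at 100%)
fade   = np.array([1.0, 0.7, 0.4])   # assumed-known fading (cyan faded the most)
density = (true_c * fade) @ dyes     # simulated 7-band density measurement

# least-squares fit of the current (faded) dye amounts from the measurement
c_fit, *_ = np.linalg.lstsq(dyes.T, density, rcond=None)

# "shimmy the sliders back to 100%": divide out the fading, regenerate spectrum
restored = (c_fit / fade) @ dyes
```

Because the simulated measurement lies exactly in the span of the dye basis, the fit recovers the faded dye amounts exactly here; with real, noisy data and imperfect dye models it is only approximate, and a final step would map the regenerated spectrum through the CIE observer functions to tristimulus RGB.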
The exact opposite is true. Read some more papers by Giorgio Trumpy. Here's a good start.
Also - when film was copied, they used narrow filters, exactly to get as clean a color signal as possible. You have to think of color as information, and during scanning your primary goal should be to capture the cleanest data possible. This is best done with narrow bandwidths, to avoid cross-contamination between the different color channels.
Color science can be counter-intuitive at times. I've worked professionally as a cameraman and later as a scanner operator. I'm still learning new facts every day and prefer research over intuition.
The path from film to our eyes is a very complex one and the aim should be to take out as many variables as possible.
Note that hyperspectral cameras (which use narrowband filters, but many, not only three!) are used for example to distinguish ripe fruits from rotten ones, but not for taking nice looking images.
When making digital replicas of art, it is standard to use multispectral imaging - and of course with narrow bands.
Any Alexa, Red or whatever other camera brand you might pick up will have instead camera filters which are (you guessed it): broadband.
Cameras have broadband filters because they want to capture as much light as possible, to have a high sensitivity.
I have worked with many different scanners (Scanity, Kinetta, Filmfabriek, Arriscan) and they all require tedious grading to approach the appearance of the original. With our tests using multispectral scanning, we have achieved far superior results. We have done test scans for clients like the Eye Filmmuseum, the Library of Congress, and the Bundesarchiv.
But let's halt this fruitless discussion here. In a little while we'll be allowed to post more examples of our work. Let the results speak for themselves.
A few days after the paper was released and I pointed this out, they revised it.
That is how science works. If you know that the paper was revised, why do you then post your corrections years later? To the casual reader this might look like you pointed out errors that were not corrected, when the opposite is the case.
A normal operator who can't do the job correctly probably shouldn't be doing that job, to be honest. And to base conclusions on work done by people who don't know what they're doing doesn't exactly provide any value. So why bother?
Again, I was not involved in the tests or in the report. I can only relay my impression of it. Having scanned many hundreds of hours of film at 4K, I am very much aware of how important the operator is in the scanning process. I know how vastly the results of a scanner vary depending on who is (ab)using it. The operator is an integral part of the scanning process - one could also say: a systematic risk factor.
At the National Library we did some extensive tests of available film scanners around 2012, where we sent the same film samples to manufacturers and asked them to scan them to the best of their abilities. These results were never published, but I can relay that some results were ruined by poor operator choices. Therefore the argument that Diastor should have used only manufacturer-approved operators never made sense to me. Diastor tried to compare results achieved by an average operator. After all, most manufacturers advertise the simplicity of their scanners.
I do understand that from a manufacturer's point of view it is more than disappointing to get sub-par test results.
In the end, a film scanner is a tool, and you have to choose the right tool for the right job. There is no single scanner that is perfect for all film types. They all have their strengths and weaknesses. Color reproduction is only one of the variables. Don't rely on a single source for your scanner purchase; make your own tests for your use case. There are use cases where the Director is probably the best choice. Sometimes it's the Arriscan. Other times the Kinetta.
One more thing. You wrote that you don't like that some of the authors of the Diastor report now make a competing scanner, citing a conflict of interest. You have to look at the timeline.
The Diastor report was done in 2016, I believe. Scan2Screen was founded in 2024.
You imply that in 2016 the authors already knew they'd make a competing scanner 8 years later.
Or maybe it was like this: The Diastor report showed that no (then) current scanner could capture historic film colors in a consistent, satisfactory way. Thus a new research project (ERC Advanced Grant Film Colors) looked at how - and if - a multispectral approach would improve the results (amongst many other things). We built a very rough prototype of a hyperspectral scanner with 31 bands, using both diffused and direct illumination. The results showed that yes, a multispectral approach is superior to the 3-band approach. We were in touch with several scanner manufacturers to see if they were interested in a collaboration, but at that time none of them were - after all, the traditional route was well established and sold enough. And the market for high-end film scanners is not exactly big; around 2014 it looked possible that the two biggest players might vanish.
We then got funding for a new research project, with the aim of building a proper prototype of a more usable multispectral scanner. We obviously had to cut down on the number of bands and improve the speed. Using the results of our 31-band scanner, we could simulate which number of bands, and which bands, would be a good compromise between color separation, scanner speed and storage requirements. The result was a prototype film scanner which we named after the research project, Scan2Screen.
It had never been our intention to build a film scanner when the ERC project started. It just became a necessity in order to complete our research - how can we test the validity of our claim that multispectral film scanning is better if we cannot do multispectral scanning? The Scan2Screen research project ended in 2023, and rather than letting all that research go to waste, we took the scary jump from research into business and founded Scan2Screen in Switzerland and the US. While the underlying research had received funding (as a natural part of a university-based research project), the spin-off has never received any funding, as implied by you ("you are also part of a grant-funded new company"). The company is 100% self-funded, and parts of our proceeds - if there ever are any - will be shared with the University of Zurich. So there is no "free money" involved. But believe me: building a scanner from scratch is a very poor choice for a get-rich-quick scheme.
You may believe in a master plan hatched a decade ago; I tend to believe in Occam's razor.
Well, the last line in the first post above reads: "I have to think a little bit more about this." - and that's what I did.
Actually, on a side note: I started off my film scanning experiments with narrowband LEDs and Debevec's HDR formalism, because I thought maximal dynamic resolution and color separation would be the way to go. Right now, I have arrived at using just a single raw image capture per frame and white-light illumination. I learned along the way that the wavelengths of narrowband LEDs influence the colors for a given film stock (which is also a topic of the article that @PM490 mentioned above: "However, they produce higher colorimetric error. Moreover, each film has different dyes and the diversity of films in museums is huge, from the beginning of cinema to nowadays. We will not find one single triplet of LEDs that will fit all the film dyes' maximum absorption.") And: the fine color separation I got revealed things I actually don't want to see. Pulling up again an old scan from the beginning of this thread:
you might see what I am aiming at. I have no idea where the reddish tint on the right side of the frame is actually coming from, but it seems to be a "feature" of this specific Super-8 camera (this is HDR + three-narrowband-LED illumination). It looks like some kind of filter not fully covering the frame. In any case, it does not show up at all when scanning a similar frame with a single raw capture, opened and developed directly in DaVinci (white-light illumination):
Coming back to the main topic. I think one has to differentiate in this discussion between two quite different approaches. One uses the minimal required number of sensors (three, for human color vision) to arrive at a tuple of three color values. This is what most film scanners are doing. The other approach aims at actually reconstructing the complete spectrum from multiple narrowband measurements. Only once the full spectrum has been recovered is it used to compute the desired three-tuple of color values in whatever color space is convenient for further processing.
In this respect, the most interesting figure of the paper you/@PM490 cited is Figure 9. It shows the color difference measure as a function of the number of bands sampled. The color difference does not noticeably improve above about 7 channels sampled. But have a look at the left of this diagram, which is truncated at 4 channels: the color error increases substantially, approaching the color error quoted for the broadband Nikon D70 camera (2.7) in the paper. If I have time, I will compare the three-channel broadband approach with the three-channel narrowband approach and report the results.
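The second approach, reconstructing a full spectrum from a handful of narrowband measurements, can be sketched as a regularized least-squares problem. Everything below is invented for illustration (a made-up smooth "dye" spectrum, Gaussian band sensitivities, a simple smoothness prior), but it shows qualitatively why the error drops as the band count grows:

```python
import numpy as np

wl = np.arange(400.0, 701.0, 5.0)                  # wavelength grid, nm

def bands(n, sigma=15.0):
    """n Gaussian band sensitivities spanning the visible range."""
    centers = np.linspace(420, 680, n)
    A = np.exp(-0.5 * ((wl[None, :] - centers[:, None]) / sigma) ** 2)
    return A / A.sum(axis=1, keepdims=True)        # normalize each band

# hypothetical smooth transmission spectrum (a stand-in for real film data)
truth = 0.2 + 0.6 * np.exp(-0.5 * ((wl - 560) / 60) ** 2) \
            + 0.3 * np.exp(-0.5 * ((wl - 440) / 25) ** 2)

def reconstruct(n_bands, lam=1e-4):
    A = bands(n_bands)
    y = A @ truth                                  # simulated band measurements
    # Tikhonov-regularized least squares: prefer spectrally smooth solutions
    D2 = np.diff(np.eye(len(wl)), n=2, axis=0)     # second-difference operator
    x = np.linalg.solve(A.T @ A + lam * (D2.T @ D2), A.T @ y)
    return np.sqrt(np.mean((x - truth) ** 2))      # RMS reconstruction error

err3, err9 = reconstruct(3), reconstruct(9)
print(f"RMS error, 3 bands: {err3:.4f}   9 bands: {err9:.4f}")
```

With only 3 bands the smoothness prior has to bridge large unsampled stretches of the spectrum, so the reconstruction error comes out markedly higher than with 9 bands - the same qualitative behavior as Figure 9 of the cited paper.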
For my use case, digitizing existing historic color-reversal footage, the associated additional resources required for full spectrum recovery (at least around 7 channels) are not worth it. At this point in time, I am grading into rec709, and this color space is anyway not able to display the full color gamut of any film stock I encountered so far.
The approach of scan2screen is interesting and will lead to a much improved archival copy of precious film documents (with a lot of data). However, at least from my cursory reading, the approach is only half-way there:
While the characteristics of old projection equipment seem to be taken into account, what about the color characteristics of the screen the image was projected onto? Probably a minor factor, but nevertheless...
Fading over time introduces some non-linear color deviations which are difficult to account for. If there are color reference shots, things are easier. But the "Reconstruction of the faded Agfacolor negative" available on the scan2screen website is not too convincing to me. I am aware that active research is ongoing in this area.
In any case, the color gamut of current standard displays is unsuited for rendering the full width of the color gamut historical film material can encode. So everybody ends up employing some kind of adaptation in post to push what was scanned into the smaller color gamut that can be displayed. (Granted, things are improving rapidly in this respect.)
Reconstructing the visual experience of watching old film stock projected in a darkened room on now-obsolete projection equipment, with the help of digital displays (normally operated in rather bright viewing conditions), can only ever yield an approximation.
So do I (as well as many other members of this forum). Personally, I am a theoretical neuro-physicist and have researched various visual systems (humans and animals) for quite some time. I also founded a company developing 3D cameras from scratch, among other things.
Report posted on AMIA Mailing List on Feb 14, 2018
I sent the email above on Feb 16, 2018 (two days later)
Report revised to correct Director sensor type and a couple of typos on Feb 19, 2018
My problem is that the most pressing issue was never addressed, at that time or any time since: the results of the test clearly demonstrated, to anyone familiar with either scanner, that neither the Northlight nor the Director operators properly scanned the tinted film. As far as I can tell, the Feb 19, 2018 revision of the paper is the most recent revision.
Good science should follow the data wherever it leads. If it's pointed out that the way the data was gathered was incorrect, then at minimum they need to confirm that and prove that the data was gathered correctly. Otherwise all conclusions based on that data are suspect and shouldn't be trusted. Instead, when I pointed out the issues in the email above, I was told that they would correct some of them but that I was wrong about the tinted film. I am not wrong about that. If that film were sent to us today I could 100% reproduce the image in the report, as well as a correct image showing the tint, using the same machine but with two different approaches. The reason the tint was removed in the example in the report was that the operator told the software to do so, albeit unwittingly.
This was not good science; it was sloppy. And the fact that six years later they haven't corrected the conclusions leads me to believe they simply don't care.
Six years. And no, I'm not meaning to imply this was some grand scheme hatched in 2018. I'm just pointing out that going back now to change the report to reach the correct conclusions would make two of the scanners you said couldn't scan this kind of film look better than the report leads archives to believe they are. Which of course isn't what you want when you're (now, six years later) trying to sell the idea that your machine is the only one that can capture that film. Or when you've published multiple academic papers on the same subject, based in part on the incorrectly drawn conclusions of this one.
See, this is my point, which you're completely missing. Indeed the report reached that conclusion, but that conclusion is faulty. Did they ever send the film to Lasergraphics or to Filmlight to scan? As far as I'm aware, they did not (at least at the time; when I asked the owner of Lasergraphics, they hadn't). Instead they relied on an operator who they state, in the next paragraph, was inexperienced with this kind of film. Had they found an operator who knew how to use the machine to do the scan, and seen that the tinted film was correctly reproduced, there would have been no need for the further research you're saying this report concluded was necessary.
@Martin_Weiss In your "one more thing" you appear to reference statements by @friolator, so I will let him respond to those.
In regards to my prior statement, you appear to be quite defensive about my reference to the grant the startup may have benefited from. If the Swiss National Science Foundation Bridge Discovery Grant and the Universität Zürich allow it, there is nothing wrong with it.
The lengthy explanations do not change the past.
In particular in regards to:
In an effort to clarify my prior statement that you have so extensively challenged:
I am happy to accept, and make it very clear:
For transparency: you are a founding member of a new company, scan2screen, a startup which bears the same name (and logo) used for a grant-funded project, and the intellectual property basis of the startup is the prior research of its founders, as clearly enumerated on the startup company website. That research was done before the company was founded, while the participants (some of them now founders) worked for other institutions such as the University of Zurich.
@Martin_Weiss The part which I respectfully disagree with is:
In my opinion (and everyone is entitled to their own), if the startup did not pay for that prior research and intellectual property, one would have to be very naive to believe your statements on funding.
Given that the subject matter of this forum, and of this thread, is not startup intellectual property benefits, I will focus on the subject: The Backlight.
I look forward to seeing your knowledge contributions in this and other subjects, especially if those include specifics.
Cameras do not use broadband filters simply to gather more light. The filters are broad so that each color (RGB) spectral band is represented, as closely as practical, the way the human receptors perceive it, and that representation is captured as a numeric combination of the primary colors.
Since you have the perspective of a cameraman, and in the interest of sharing knowledge and dispelling arguments, I would refer you to the very well documented study of narrow-band illumination by the Academy of Motion Picture Arts and Sciences. In the videos, the presentation by Jonathan Erland illustrates well the perils of recording reflected light from narrow-band sources, perils equally applicable to capturing direct light filtered by the film. His presentation is substantiated with specifics; to quote from his technical Oscar acceptance speech:
Concerned with such issues basic to the cinema process as light and time: light, in the form of the quality of light produced by the newest instruments, including solid-state LEDs; and time, in the form of frame rates and the various techniques by which the cinematographer becomes the master of time itself.
His presentation of chromatic chaos is a master class on the effect of different light spectra. A portion of the presentation draws an analogy to paint and a blank canvas, where he says:
Within that metaphor, then, the combination of a 3200K-balanced stock and a 3200K light source (or a very close simulation thereof), the default condition, represents the blank, pure-white gesso canvas that an artist confronts as he begins a painting. In the course of executing the painting he may impose all manner of changes and effects upon that canvas; that, after all, is the essence of art. But he is entitled to expect the blank, pure-white canvas as his starting place.
A light source differing from the standard studio tungsten source, especially a discontinuous one, will preemptively impose a color cast on the canvas that will be difficult or impossible to subsequently correct.
While the above is in the framework of reflected light for film exposure, the effects of discontinuous direct light when scanning with a camera sensor (the white gesso canvas of film scanning) would likewise impose a cast on the colors.
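The loss of color separation Erland warns about can be shown with a small numeric sketch. Both surface reflectances, the flat illuminant, and the single 540 nm line source below are made-up values for illustration, and the sensor channel is idealized as flat:

```python
import numpy as np

# Two made-up surface reflectances: both peak near 540 nm, but one is much
# narrower. Under broadband light a sensor tells them apart; under a single
# narrow line source at 540 nm they become nearly indistinguishable.
wl = np.linspace(400, 700, 301)
step = wl[1] - wl[0]

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

surface_a = 0.2 + 0.6 * gauss(540, 60)   # broad green reflectance
surface_b = 0.2 + 0.6 * gauss(540, 25)   # narrow green reflectance

broad_source = np.ones_like(wl)          # flat broadband illuminant
line_source = gauss(540, 5)              # narrow line source at 540 nm

def channel_response(source, reflectance):
    """Integrated response of one sensor channel with flat sensitivity."""
    return np.sum(source * reflectance) * step

ratio_broad = (channel_response(broad_source, surface_a)
               / channel_response(broad_source, surface_b))
ratio_narrow = (channel_response(line_source, surface_a)
                / channel_response(line_source, surface_b))
```

Under the broadband source the two surfaces produce clearly different readings (ratio well above 1); under the 540 nm line they read almost identically (ratio close to 1), because the line only samples the one wavelength where the two reflectances happen to agree.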
@npiegdon that approach basically attempts the "very close simulation thereof" referenced above.
By providing multiple discrete narrow bands, a discrete-spectrum source simulates a wider spectrum while maintaining the advantages of narrow-band illumination for color separation.
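As a numeric illustration of that idea (the LED center wavelengths and bandwidth below are made-up values, not a recommendation), the sketch sums one Gaussian band per LED and checks how much of the 420-680 nm range the combined source keeps above 10% of peak power:

```python
import numpy as np

# Hypothetical LED set: five center wavelengths (nm), common FWHM of 25 nm.
CENTERS_NM = np.array([450.0, 500.0, 550.0, 600.0, 650.0])
SIGMA = 25.0 / 2.355  # Gaussian sigma from FWHM

def combined_spectrum(wavelengths_nm, weights=None):
    """Sum of Gaussian narrow bands, one per LED, optionally weighted."""
    if weights is None:
        weights = np.ones(len(CENTERS_NM))
    wl = np.asarray(wavelengths_nm, dtype=float)[:, None]
    bands = np.exp(-0.5 * ((wl - CENTERS_NM) / SIGMA) ** 2)
    return bands @ weights

wl = np.linspace(420, 680, 261)
spd = combined_spectrum(wl)

# Fraction of the range where combined power stays above 10% of peak;
# gaps between adjacent bands would show up as dips below this threshold.
coverage = np.mean(spd > 0.1 * spd.max())
```

With these made-up numbers the five bands overlap enough to keep most of the range lit; how even the coverage is depends directly on band spacing and width, which is exactly the trade-off against pure narrow-band color separation.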
One should also consider that there are multiple modes of fading. A film dye can fade in intensity only, preserving its passband, and/or it may fade in dye color, where the dye's color is no longer the same as the original.
All visible colors can be described with X, Y, and Z values.
Colors with equal XYZ values will appear identical, even with different spectral power distributions.
No set of three physical primaries exists that can reproduce all colors.
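The first two statements can be illustrated numerically. The sketch below uses rough Gaussian stand-ins for the CIE color-matching functions (illustrative only; the real CMFs are tabulated by the CIE) and constructs a metameric pair, two different spectral power distributions with identical XYZ values:

```python
import numpy as np

# Rough Gaussian stand-ins for the CIE x-bar, y-bar, z-bar color-matching
# functions (illustrative only -- the real CMFs are tabulated by the CIE).
wl = np.linspace(400, 700, 61)

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

CMF = np.stack([
    gauss(600, 35) + 0.4 * gauss(445, 20),  # x-bar, with its secondary blue lobe
    gauss(555, 40),                          # y-bar (luminance)
    1.8 * gauss(450, 22),                    # z-bar
])

def to_xyz(spd):
    """Collapse a spectral power distribution into three XYZ values."""
    return CMF @ spd

# Metameric pair: add a null-space component of CMF to a base spectrum.
# The two spectra differ, yet their XYZ coordinates are identical. (The
# result may dip negative, so this is a numeric illustration rather than
# a physically realizable spectrum.)
base = gauss(550, 60)
_, _, vh = np.linalg.svd(CMF)
null_component = vh[-1]          # orthogonal to all three CMF rows
metamer = base + 0.2 * null_component

xyz_a, xyz_b = to_xyz(base), to_xyz(metamer)
```

The 3-row matrix maps any spectrum to exactly three numbers, which is why equal XYZ means equal appearance regardless of the underlying spectral shape.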
The solution is an XYZ image sensor (I am looking for one for the RPi 4 or 5). Unfortunately, while there are XYZ light sensors, I haven't found an XYZ image sensor.
One of those XYZ image sensors and a white light source, and it is the end of the discussion: every pixel will have its currently visible color.
In the meantime, one has to use the light and sensor that best fit one's requirements (and budget).
After almost two years, I will update/clarify my hypothesis above.
Bn should be captured with multiple narrow band LEDs within the blue channel of the sensor.
Rn should be captured with multiple narrow band LEDs within the red channel of the sensor.
The above may be achieved with a full-spectrum image sensor and three exposures.
For those on a budget (like me), two raw exposures of an RGB sensor should be as close as the budget permits.
Exposure 1 - White light, resulting in Yw (arithmetically calculated), Rw, Gw, and Bw.
Exposure 2 - Red LEDs and blue LEDs, resulting in Rn and Bn.
I would add that the light of Exposure 2 should be set to obtain the best quantization for the R and B channels, rather than to seek balance. That is especially useful for achieving adequate S/N in these colors on faded film.
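To make the two-exposure recipe concrete, here is a minimal sketch; the gain values, array shapes, and function names are all assumptions for illustration, not measurements:

```python
import numpy as np

# Hypothetical two-exposure merge for an RGB sensor.
# Exposure 1 (white light) supplies Rw, Gw, Bw; luminance Yw is derived.
# Exposure 2 (red + blue narrow-band LEDs) supplies Rn and Bn, each driven
# near full scale for better quantization; the gains undo that drive level.

REC709_LUMA = np.array([0.2126, 0.7152, 0.0722])  # Y from linear RGB

def merge_exposures(raw_white, raw_narrow, gain_r, gain_b):
    """raw_* are HxWx3 linear images; gain_* undo the per-LED drive levels."""
    rw, gw, bw = raw_white[..., 0], raw_white[..., 1], raw_white[..., 2]
    yw = raw_white @ REC709_LUMA              # Yw, arithmetically calculated
    rn = raw_narrow[..., 0] / gain_r          # rescale to a common reference
    bn = raw_narrow[..., 2] / gain_b
    return {"Yw": yw, "Rw": rw, "Gw": gw, "Bw": bw, "Rn": rn, "Bn": bn}

# Usage with synthetic 2x2 frames and made-up gains:
white = np.full((2, 2, 3), 0.5)
narrow = np.stack([np.full((2, 2), 0.9),
                   np.zeros((2, 2)),
                   np.full((2, 2), 0.8)], axis=-1)
channels = merge_exposures(white, narrow, gain_r=1.8, gain_b=1.6)
```

The Rec.709 luma weights are one common choice for deriving Yw from linear RGB; any consistent set would serve the same role in this sketch.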
The hypothesis is in line with the mission statements borrowed from @friolator and @cpixip.
@npiegdon @cpixip thank you for sharing valuable knowledge on the subject; this is a great topic and there is much to learn.