At long last Blackmagic Design has listened to customer requests and announced 8mm and Super 8mm gates for the Cintel Scanner. Probably won’t ship until the end of the year. See here - https://youtu.be/_Ed-zXtG8oQ?t=1936
I saw it at NAB this week. Don’t get your hopes up. They’re saying max res for 8mm will be HD “because there’s no need to go bigger.” That, by the way, is straight from the engineer’s mouth. (At NAB they have the people who build the scanner work the show.)
Also according to him, the scanner won’t be able to do 8 through 35; it will do 8/16 or 16/35. If you want all three formats you need to get two machines, each configured accordingly. How hard would it be for them to make a modular camera unit, with the optics installed, that can be user-swapped? That’s how Lasergraphics does it on the Director 13.5K for smaller gauges.
BMD will never get it when it comes to film scanners.
Wow! A formula for failure, deeply disappointing. Grant Petty has been spot-on about many things; it’s shocking he could get this so wrong. Thanks for being there to ask the tough questions. I’ve missed NAB the past three years now after attending annually for something like 35 years. The remark about resolution is odd, though. Sure, maybe HD will be the maximum native scanned rez, but Resolve Studio should be able to easily perform a high-quality AI-assisted up-rez from HD to UHD if needed.
The resolution argument the BMD engineer at NAB was making is an old one. It’s based on the idea that there is a limited amount of resolution the film can hold, therefore there is no point in scanning at a higher resolution than that. There are a few faults with this argument:
Most people say “well, the cameras are crappy and the lenses are no good, so there’s very little there.” This may be the case for some Super 8 cameras, but I’ve got some cameras in my collection (Leicinas, Beaulieus, Nizos) with incredibly sharp lenses. Cameras like the Leicina and Beaulieu allow you to use different lenses, so you can put on the same kind of glass you’d use on 16mm cameras. These cameras hold the film steadier in the gate, and with sharp lenses they produce much nicer images than a cheap Kodak Instamatic might.
They’re not considering what happens when you scale up. Any upscaling of an image, AI or not, involves making up something that wasn’t there. At least for now, AI scaling frequently mangles the image with phantom detail. But even with traditional scaling you lose sharpness when you blow the image up, so you either live with that or add edge enhancement to try to get something that looks right, and it never really does.
From an archival perspective, it’s completely the wrong thinking: the goal of an archival scan is to capture the FILM. Not the picture on the film, but the film. You are making a digital representation of the film and the grain, dye, etc is all part of that. More resolution means a truer representation of what’s on the film, not whether the picture on the film is “good enough.”
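The point about upscaling not creating detail is easy to demonstrate. Below is a toy NumPy sketch (a 1-D signal standing in for an image row; the decimation factor and “sharpness” metric are my own illustrative choices, not anything from a real scanner): decimate a detailed signal to a quarter of its resolution, interpolate it back up, and compare how much high-frequency content survives.

```python
import numpy as np

def upscale_linear(signal, factor):
    """Linear-interpolation upscale of a 1-D signal (stand-in for bicubic on images)."""
    n = len(signal)
    x_new = np.linspace(0, n - 1, n * factor)
    return np.interp(x_new, np.arange(n), signal)

def hf_energy(s):
    """Fraction of spectral energy in the top half of the band: a crude 'detail' measure."""
    spec = np.abs(np.fft.rfft(s))
    return spec[len(spec) // 2:].sum() / spec.sum()

rng = np.random.default_rng(0)
fine = rng.standard_normal(4096)       # scene with detail at every scale
low_res = fine[::4]                    # "scan" at quarter resolution (naive decimation)
restored = upscale_linear(low_res, 4)  # blow it back up to the original size

print(hf_energy(fine), hf_energy(restored))  # the restored signal has far less fine detail
```

The restored signal is the same size as the original, but the detail lost in the low-resolution capture never comes back; interpolation only spreads the samples you have.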
I honestly don’t understand why BMD even makes this scanner. It’s a weird afterthought in their product line that doesn’t get much love and doesn’t seem to make a ton of sense. I’d love one, but only to strip it down and use it as a base for a better machine.
These arguments collide with a very concrete reality: in practice, we never manage to increase the quality or capture more detail by increasing the capture resolution beyond about 1200 to 1600 pixels for Super 8.
If you have a large image that’s full of grain, resizing won’t be a problem, but you’ll end up with an image that has more grain, no more detail, and is still ugly.
Let’s stop thinking that grain is part of the picture. It’s our method of scanning that increases the visibility of grain; project a film and you’ll see some grain, but not nearly as much as in a scan.
And the more you increase the resolution of your scan, the more horribly visible the grain becomes.
Wholeheartedly disagree, but it is a choice everyone can make on their own.
For starters, it is always best to process above the intended target resolution (even if the intended target is HD, scan resolution should be higher).
The grain is part of the image rendering process (even at the projector). If one wishes to remove it, again, that’s a choice.
And if the end goal is 1200 pixels (vertically), that’s already more than HD’s 1080.
When it comes to film, more (bit depth or resolution) is more.
That’s what I said
That’s not what I said
It’s an obvious fact, but an irrelevant one.
I find it hard to deny that increasing the resolution drastically brings out the grain of the image.
Finally, apart from some misinterpretation (probably due to my very bad English), we are almost in agreement.
This is what I mean by a fundamental misunderstanding. Nobody is suggesting you can increase the quality or the level of detail by scanning at a higher resolution. The resolution of the PICTURE is set in stone (well, emulsion) by the camera, the film stock, the lenses, the lighting, the processing, and other factors.
But the grain is the structure of the image. Therefore a better representation of the structure is a better representation of the image as it exists on the film. You may not like the way grain looks and that’s your choice. But it is a simple fact that it’s part of the structure of the picture and it’s a big part of what makes film look like film.
From the perspective of someone doing work every day for major museums, libraries, filmmakers, and archives, I can tell you that the thinking is never that the goal is to match the resolving power of the original film. It is to exceed it by maximizing the image of the FILM itself, in digital form, at a resolution at or above the presentation resolution (that is, if the digital presentation is going to be a 4k DCP, you’re scanning at 4k or higher so that the image isn’t scaled up). If you don’t like grain, there are a million tools out there to address that.
Why? I mean, it’s a fundamental building block of film-based imaging and it absolutely is part of the picture, whether you consider it good or bad. You may not personally like it, or think it’s ugly, but that doesn’t mean it’s not part of the underlying structure of the image. Tools exist to get rid of it if that’s what you choose to do.
I think we have to be more open-minded here. Certainly in the 70s and 80s of the last century there was a trend in Super 8 filming to reduce the visible grain of the small format as much as possible. There were many reasons why people who could afford it opted for Kodachrome instead of, say, Agfachrome, which featured much more noticeable film grain. Even in 35mm still photography there were trends to push the film grain back (microfiche film stock with special developers) in order to approach resolutions comparable to larger formats. I have been guilty of such efforts myself.
Film grain is a random spatio-temporal texture which is superimposed on the original scene information. The original Super 8 system was tuned in such a way that some of this film grain was temporally averaged by the viewer’s visual system, exposed to a rather bright image in an otherwise rather dark projection environment. It is important to understand that a digital copy of a grainy film displayed on a standard display is not equivalent to this viewing situation. So the grain impression of a digital copy displayed on a computer screen will differ from the impression of the same film stock viewed in a projection setting.
Anyway, there has been a stylistic trend to add film grain even to purely digital material. This has nothing to do with archiving old film; it’s a deliberate artistic decision. As a funny side note: due to its nature, film grain increases the bandwidth demands of transmission channels. So there has even been research aiming to split the film’s content from the real or artificially introduced film grain and transmit only the film plus some grain statistics to the viewer. Then, on the viewer’s side, the appropriate film grain is artificially added back for display. (This is economically interesting for distribution networks.)
Another point worth noticing is that with advanced image processing algorithms you can indeed improve the image quality of a given film sequence. Nobody would dispute that color grading old film stock is an essential step of converting it to a digital format destined for the average viewer. How about some sharpening? Impossible if the film grain is still in your data. You will not recover fine image detail but only enhance the film grain (remember, technically it’s just spatio-temporal noise).
Again, it’s hard to display film grain on modern digital devices in a convincing way. You can increase the actual information content of your old film stock with advanced image processing techniques, if you take out the film grain beforehand. Taking out film grain also reduces your file sizes notably (and, I must confess, your processing times). In the end, it’s probably an artistic choice and will depend on your audience.
In closing, from an archival point of view you probably really want to scan the original film stock at the highest resolution you can afford. It would be an interesting question to research whether the tiny Super 8 frame really only needs 4K, or whether an even higher resolution is necessary to capture all the glorious detail of the material. But that is the archival scan; the product you deliver to customers viewing it on a TV screen or even a mobile phone might be a different one.
Yes, this is exactly my point.
Doing anything to reduce the grain during the scan, whether that’s intentionally scanning at a lower resolution or employing some kind of grain reduction/smoothing, is doing it in the wrong place. I don’t think it’s closed-minded to say that; it’s simply the most logical workflow to use. The tools exist (often for free) to do this kind of thing after the scan, and that’s the best place to do it, if that’s the look you’re going for. As soon as you do it in the scan, you hobble the image and limit what can be done later. We often handle film that probably can’t be run through the scanner more than once or twice, because it is on the verge of falling apart. So re-scanning at higher res later isn’t always an option.
There have been papers about the resolution issue, which I have around here somewhere; I’ll see if I can locate them. Kinetta posted an article a while back addressing this subject (“Memoriav Dossier – a response,” 2012, PDF), which shows examples of the same film scanned at different resolutions on the same scanner with the same optics. You can clearly see that there is more picture information in the 3.3K scans than in the 720p and 1080p HD scans (this was done at a time when 3.3K was the resolution of the Kinetta, and the paper was in response to an academic paper that showed the differences between SD and HD transfers).
When we scan home movies for a client, the most common thing they want is a flat, ungraded, slightly overscanned 2-flash HDR 4K scan. Alongside that, because it’s relatively simple to do in the ScanStation, they get an HD-resolution MP4 file with a basic in-scanner one-light color correction (essentially setting black to black and white to white, and the contrast to something less flat than the 4K). The second file is immediately viewable, sharable, etc., and is basically “correct” looking. But they have the 4K if they want to pull high-res stills, or do any editing or grading. And a lot of them do. And a lot of the time, the film is on its last legs, so it only makes sense to scan at the highest resolution possible, because in 10 years it may have fused into a solid block of acetate.
Also agree with cpixip.
But not at all with friolator…
The increase in resolution brings nothing: no additional detail, no improvement in sharpness. You just capture noise and all the imperfections of the film.
Apart from clearing your conscience for future generations, you deliver a product that is no better.
The example you mention from the Kinetta, which I already knew, has always struck me as suspicious. Why is the 720p image blurry? And why does the last image have more grain but no more detail than the center image?
If the concern is being able to have a large image for the future, I have more faith in improving the scaling algorithms than in cleaning up a noisy image.
There is nothing suspicious about it. Look at the subject’s head in all three images. They are the same size. The 720p and the 1080p have been scaled up so you can see what happens when you view a low-resolution scan at a larger size, as if you scanned at HD and viewed at UHD. The same thing would happen.
If what you want is a grain-free image, then that’s up to you. But no archive we have ever worked with, and we work with dozens of them, would ever want that done in the scan. In a viewing copy made from the scan? Maybe. But the whole point of archival digitization is to capture a faithful digital representation of the original. The original in this case is film. The film has what you characterize as “imperfections,” but that’s part of the image, like it or not.
Here is a paper about a physics-based approach to realistic film grain creation. Some parts might be interesting to forum members, as it touches a little on the origins of film grain (the classical black-and-white process; color reversal is a little more complicated).
I never said I wanted a grain-free image. I want an image that matches as closely as possible how this image would look if projected onto a movie screen.
Increasing the resolution too much amplifies this grain.
Thank you for this document, cpixip, I will read it carefully.
… here’s a short list of additional downloadable papers about various topics related to film grain:
- Film grain noise modeling in advanced video coding: this is one of the papers I mentioned aiming at reducing bandwidth requirements for grainy film material for efficient transmission
- Film Grain, Resolution and Fundamental Film Particles: an introductory overview of film grain and the resolutions involved
- Kodak T-grain Emulsions in color film: detailed information about Kodak T-Grain emulsion with a nice cross section of a color film
This is a great paper. Thanks for sharing the link.
The discussion about MTF curves and how every weakness in the system compounds is very useful. The sensor’s pixel count isn’t the only variable that determines what you’re going to get!
The couple of mentions of Abbe condensers was especially interesting to me, since I’ve been poking around in that direction lately. (I just obtained some cool results, including a couple of MTF measurements, that I’m hoping to post about in the ReelSlow8 thread in the next day or two.)
But the most apt line I spotted for the purposes of this discussion was in the Fig.24 caption on page 23:
Also note how the […] grain detracts from resolving information in the 100 year old film; the image is much noisier. Some believe that the [image with more grain] looks sharper because the film grain appears sharp, but comparing the images will show more image information in the [image with less grain].
Granted, the comparisons there are on more equal footing (vs. the Kinetta article), having been taken at the same resolution and only varying the method of capture instead. But I’m with cpixip on this one: grain is a spatio-temporal source of noise that corrupts the signal. Certainly you need enough resolution (and corresponding resolving power in the rest of your system!) to collect the signal in the first place, but if there are any adjustments we can make to reduce the magnitude of the grain, holding everything else constant, that’s probably the way to go.
Well, here’s another paper (actually a dissertation from 2013) you guys might find interesting. It talks a little about different illumination setups (the Callier effect, for example), but it’s mainly focused on different strategies for flaw detection. Caveat: I only skimmed it.
Actually what you said…
Perhaps I misunderstood, since it was in the context of 1080 max resolution of the BMD scanner. There is plenty of information on the subject that indeed, working above 1080 (including your own reference to 1200) is best.
This is a recurring subject, perhaps I should have referenced the prior post on the subject
Roland, no need to come to an agreement on the subject of resolution or grain. Everyone will use what is best for their particular situation. And while I do not share the same perspective on the subject, I respect that it is your preference.
Nevertheless, for the benefit of others, it is best to differentiate between personal preferences (which everyone is entitled to have their own), and best practices.
My stated mission adopted from @friolator
It may be obvious, but if the final target resolution for 8mm is intended to be 1080, and ANY post-processing is part of the workflow (cropping being the most basic), it is best for the scanning resolution to be higher than 1080. How much more?
Whatever you can afford (sensor and storage).
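As a back-of-the-envelope illustration of why a 1080 deliverable wants a taller-than-1080 scan (the overscan and stabilization percentages below are my own illustrative assumptions, not anyone’s measured workflow):

```python
# If the deliverable is 1080 vertical and the workflow crops away overscan
# and loses a little more to stabilization, the scan must start taller.
target = 1080
overscan = 0.10        # fraction of height sacrificed to overscan/crop (assumed)
stabilization = 0.05   # extra margin eaten by stabilization (assumed)

needed = target / ((1 - overscan) * (1 - stabilization))
print(round(needed))  # about 1263 vertical pixels minimum
```

Even with modest margins, the required scan height lands well above 1080, which is why a 2K-or-better scan is the comfortable choice for an HD deliverable.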
My summary, playing devil’s advocate:
- Would you like less grain? Select the illuminant appropriately, scan at a higher resolution than the intended target, and digitally adjust the amount of grain to your satisfaction (I used NeatVideo to do so).
- Would you like more grain? Select the illuminant appropriately, scan at a higher resolution than the intended target, and digitally adjust detail and sharpness to your satisfaction (DaVinci Resolve, for example).
Thank you @friolator @cpixip for the references. Great material.
In the context of the posting (8mm Cintel Scanner), regardless of whether one likes more grain or less grain, 1080, in my opinion, is not enough for a $30K commercial scanner. Message to BMD: too little, too late.
That’s an elegant way of deflecting my remarks.
Here is a little reminder:
It’s true that I was being a bit restrictive; I do use higher resolutions for Super 8 (1800-1900).
For information, I have a camera with a resolution of 4000x3000 pixels with a Sony sensor (I have tested many others before, but always industrial cameras), a 40mm Schneider-Kreuznach lens and indirect LED lighting that looks like the ball model presented on this forum.
My goal is not to convince you either, but for me it is clearly not right to consider the grain part of the film beyond what a projected film reveals. That amounts to accepting as normal a noise generated by a capture resolution that is too high, and one that is not easy to attenuate without removing detail.
So we each leave with our point of view, and that’s fine. Thank you for this exchange of opinions.
I used NeatVideo with 24MP source files, and the results are amazing. As indicated above, since grain is a spatio-temporal texture, the filter can be trained on the specifics of the grain, and the reduction or elimination can be adjusted to avoid affecting details. To do so, however, sufficient resolution/sampling of the texture (the grain) is necessary for the filter to work its best. It is also fantastic at eliminating sensor noise.
Thank you for the information and for sharing your views.