4K camera purchase for Kinograph v2 testing

What resolution is the original scan at in this example?

This is a bit of an apples-to-oranges comparison, since the image on the right has been so heavily processed. To be honest, I really dislike the sharper image, as it looks artificially sharpened to my eye, and not at all like 8mm film. On the other hand, the image on the left is too soft and not graded to bring out the details, but generally looks the way film is supposed to look (and also has the correct color balance). The edges are too soft, and I can’t tell if that’s from the camera or if this is a print or something with generation loss, but even if the image is blurry, the grain looks soft, which means the scan itself could be sharper.

Sorry, I’d very much disagree with this notion.

What did you use to shoot that? We’re actually about to start making some custom 35mm test charts for use in-house. We’ve got access to some pretty high-end cameras and lenses for that, and we’re still planning out what we’ll include in the charts, but I’m curious what the lp/mm count is on the inner circles.

@friolator This was shot on a pretty custom ACME camera. The sprocket assembly can be swapped from 35 to 16 with only two screws. B&H shuttles 16, 35, and VV. I made a large-format mount (Kowa Six) for it to shoot titles, filmouts, etc.

For the charts, I found a company near me that prints very high-resolution film transparencies. I design my charts in Illustrator and mount them on custom backlit board with Acme peg holes.

Here’s a pic of the inside with 35mm assembly.


I was relearning PCB work and was thinking of doing a test pattern printed to a transparency (similar to how a Gerber file is printed) to have very high-resolution calibration targets for 8 mm gates. These would not have sprocket holes, but would help check alignment between the camera/lens/gate. Thoughts?

Looking at my chart, the thinnest lines (in the middle circles) are 0.089 mm and the spacing between them is 0.100 mm.
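For what it’s worth, those numbers translate into a line-pair frequency roughly like this (a quick sketch; it assumes one line pair is one printed line plus one adjacent space, which is my reading of the chart geometry, not the chart’s official spec):

```python
# Back-of-the-envelope lp/mm for the chart's finest rings, assuming
# one line pair = one dark line plus one adjacent space.
line_width_mm = 0.089
spacing_mm = 0.100

pair_mm = line_width_mm + spacing_mm   # width of a single line pair
lp_per_mm = 1.0 / pair_mm              # resolution in line pairs per mm

print(f"{lp_per_mm:.1f} lp/mm")        # roughly 5.3 lp/mm
```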

That’s a great idea - not sure why it would not work; as long as the transparency is held nicely in the gate like the film, it would be perfect. This is the company I use for prints: Contact Us – Digital Imagesetting Film at Film Output

Thanks for the contact. FilmOutput (at least their price list) goes up to 3600 dpi… that would not be enough to make a good-resolution Super 8 target.
1 inch = 25.4 mm
3600 / 25.4 = 141.7 dots/mm. If we go 5 mm wide for S-8, that’s about 708 dots horizontal.
The key will be finding someone that uses a Gerber photoplotter. I know from PCB work that Gerber resolution far exceeds that; I’m just not familiar with which service companies would provide that for just the film. I’ll give FilmOutput a call and find out.
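The arithmetic above generalizes easily. Here is a small sketch comparing a few resolutions against an 8mm-scale target (the 5 mm target width is from the post; the higher dpi values are examples of photoplotter-class resolutions, not quotes from any vendor):

```python
# How many addressable dots land across a Super-8-sized target at
# different imagesetter/photoplotter resolutions.
MM_PER_INCH = 25.4

def dots_across(dpi: float, width_mm: float) -> float:
    """Addressable dots across a feature width_mm wide."""
    return dpi / MM_PER_INCH * width_mm

for dpi in (3600, 25000, 50000):
    print(f"{dpi:>6} dpi -> {dots_across(dpi, 5.0):7.0f} dots across 5 mm")
```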

@robinojones Sharing this because it was not easy to find companies that provide phototooling/photoplotting at meaningful resolutions for 8/S8. Most file-to-film companies are in the digital graphics domain, which limits their resolution to about 3000 dpi. That’s not good enough for 8 mm film.

Found Fineline Imaging; their capabilities go up to 50K dpi, and prices are reasonable up to 25K dpi. If conversion from EPS/PDF/AI is required, there is a conversion fee.

To recap, I am thinking of using KiCad to generate native Gerber lines of precise width/spacing to create a transparent test pattern for imaging tests, in the same form factor as a film strip (without perforations).

Having a film test target would be a nice tool in the Kinograph arsenal, helpful for camera comparison, lens comparison and testing the end-to-end limits of the setups.

Amazing, thanks for sharing. I see they accept DXF, so patterns could be made in Fusion 360 or other CAD software. Maybe a separate thread could be made for focus charts / test patterns.

Agreed… :wink: I didn’t have access to my computer, so I reused an image from a previous post. Here’s maybe a better comparison:

On the left is the raw input image, which was color-corrected to the best of my abilities. Scan resolution was initially 2880 x 2160 px; the camera used for scanning was a see3CAM_CU135, the lens a Schneider Componon-S 50 mm. Five different exposures were downscaled to 1440 x 1080 px for exposure fusion and a color correction. The result is displayed on the left of the above image.
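The fusion step can be sketched roughly like this - a toy numpy version that weights each bracket by how well-exposed each pixel is. The actual pipeline may well use a different algorithm (e.g. Mertens fusion, which also weights contrast and saturation); this only illustrates the idea:

```python
# Toy exposure fusion: weight each pixel in each bracket by its
# "well-exposedness" (a Gaussian around mid-grey), then average the
# brackets with those weights. A simplified stand-in for the real thing.
import numpy as np

def fuse_exposures(stack):
    """stack: list of float images in [0, 1], all the same shape."""
    stack = np.stack(stack)                         # (n, h, w)
    w = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    w /= w.sum(axis=0, keepdims=True)               # normalise per pixel
    return (w * stack).sum(axis=0)

# Simulated brackets of one gradient "frame" at different exposures.
scene = np.linspace(0.0, 1.0, 256).reshape(1, -1).repeat(8, axis=0)
brackets = [np.clip(scene * gain, 0, 1) for gain in (0.25, 0.5, 1.0, 2.0)]
fused = fuse_exposures(brackets)
print(fused.shape)    # same shape as a single input frame
```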

The film stock is original Kodachrome 40 color-reversal stock from 1981, which (for various reasons) had been developed nearly a year after the original exposure of the material. That is probably the reason for the strong grain noticeable. The camera was a generic-branded model; the original manufacturer was Chinon, Japan. The optics of this camera were OK, but definitely not high-grade.

To the right, the result of a spatio-temporal denoising/processing step at 1440 x 1080 px is displayed. For display in this post, the original images were scaled down to a resolution of 960 x 720 px.

Granted, such pronounced film grain is only observable in the small movie formats. And grain certainly adds to the aesthetics of the footage. But film grain is produced by the film stock used and has no real correlation to the content of the movie (that is: what is actually imaged in the movie). I therefore consider film grain a creative choice - in fact, you could re-add a suitable amount of film grain to the recovered footage on the right, if you like. Here’s a small example to illustrate the concept:

On the right, a little bit of the original grain (40%, to be exact) was added back to the restored image. Something similar to this would actually be my preferred output.
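The 40% re-grain can be sketched as a simple blend, assuming the grain is taken as the difference between the original and denoised frames (my reading of the description above, not necessarily the exact method used):

```python
# Re-add a fraction of the extracted grain to a denoised frame.
import numpy as np

def regrain(original, denoised, amount=0.4):
    """Blend `amount` of the grain (original - denoised) back in."""
    grain = original - denoised              # the noise component
    return np.clip(denoised + amount * grain, 0.0, 1.0)

# Tiny synthetic example: a flat grey "denoised" frame plus noise.
rng = np.random.default_rng(1)
denoised = np.full((4, 4), 0.5)
original = np.clip(denoised + rng.normal(0, 0.05, (4, 4)), 0, 1)
restored = regrain(original, denoised, amount=0.4)
```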

Just a side note: there is actually active research in film compression technology doing something quite similar: these groups analyze the original noisy footage, remove the noise (which in turn enables much better compression ratios), transmit both the denoised content and the noise information to the consumer side, and there reassemble the denoised image and noise information into an approximation of the original footage. I bet this will be standard compression technology in a few years.

About 15 years ago I was a manager at a company producing barrier-based 3D displays, using mainly 50-inch plasma screens. The barriers in front of the plasma screen were produced by a local PCB company, which sold us not the PCB but the large film sheet they would normally use to expose the PCB. The barrier patterns needed to be very precise for the whole thing to work. After some trial and error, we had quite stable results. If you image such a large print with a very good lens onto fine-grained 35 mm material (Kodak Technical Pan comes to mind), you should get a very nice personal calibration target for tests.

Well, some mathematics and measurements of your optical setup might be required to actually get the real resolution scale of your final 35 mm test target, but that is manageable.

Easier to use is ready-made calibration material. For example, for Super-8 you can still get SMPTE RP-32 test stock from Wittner Cinetec.

Also, there are (optical) companies that produce calibration targets with extreme precision. Here’s an example. These targets are usually exposed on perfectly flat glass substrates and are actually not that expensive (at least some of them).


Perry’s article is only somewhat accurate, really. Over-scanning helps, but only by a finite amount. Friends of mine have done tests on the Arri with 16mm negs and prints and found no difference at all for most film whether it’s scanned at 3K or 6K (it makes virtually no difference for 35mm prints either). That’s a non-Bayer scanner with native 3K resolution, or 6K with microscanning.

Too much sharpness is over-rated anyway. Projection softens the image; no projected print looks as sharp as straight off the scanner. Too much resolution doesn’t look as good as you may think for prints either. I can show you some tests in a few weeks: we’ll run some film with poor optical resolution through at 6.5K, and you’ll be able to see for yourself that it actually doesn’t look that good when you’ve scanned way above the resolution that is in the film.

And for most 16mm, 2.5K or 3K gets you full UHD resolution anyway (2160p), so you don’t have to scale up to get to 4K.

Of course over-scanning helps, but the professional companies that run ScanStations all do that anyway. Unless I’m badly informed, I think the software does it by default now as well; but if it doesn’t, you just zoom the camera first to fill the image - the camera’s resolution is 6464x4852 - and then you crop and downscale to whatever resolution you want. The over-scanned area isn’t that useful for you:

Note the garbage around the edges that I’ve pointed out there. That happens because the HDR scan is two frames that are merged, and the light (and the film) is in different locations relative to each frame. 16mm looks like this:

The point is that the way the ScanStation works, you end up with quite a bit of unusable image area either way. The resolution is only about 15% more than a machine with a 4K CCD line sensor scanning the picture edge-to-edge, and even less than a machine with a 4.3K sensor scanning the same way. Of course, depending on your scanner design you don’t need anywhere near this much over-scan, so a stock-standard 4K sensor can get you around 85% of the spatial resolution that the 6.5K ScanStation can, and the 3.2K sensor about 65% - still higher than UHD resolution. That all depends, though, on your gate design and how steady the film is - get the film rock-steady and you can zoom your camera closer to fill the image more and lose less to overscan.
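A rough way to frame this kind of comparison: effective resolution on the frame is just sensor pixels times the fraction of the sensor the frame actually occupies. The fractions below are illustrative guesses for the sake of the arithmetic, not manufacturer figures:

```python
# Effective horizontal resolution = sensor width * fraction of the
# sensor the film frame occupies. Fractions here are hypothetical.
def effective_resolution(sensor_px: int, frame_fraction: float) -> float:
    """Pixels that actually land on the film frame."""
    return sensor_px * frame_fraction

# A heavily overscanned 6.5K machine vs. a tighter near-edge-to-edge
# 4K gate, purely as an example of the trade-off.
print(effective_resolution(6464, 0.82))   # overscanned 6.5K sensor
print(effective_resolution(4096, 0.95))   # tight 4K gate
```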

This is not accurate. You (as a user) have no control over the zoom of the camera with the ScanStation. Those positions are set by the factory during calibration and are fixed. When you change from 16mm 6.5k to 16mm 5k, the scanner moves a bit to the right position (because it’s using an ROI in the sensor that’s smaller than the sensor size). But as a user, day to day, one has no control over the zoom. The scanner is always scanning the scanner’s gate area, plus a small bit more. Everything else is cropped out of that full overscan image.

It’s not because of the light, it’s because of the film position, and it really doesn’t matter. This is the very reason the scanner is designed with so much overscan: it allows you to take two images while the film is in different positions in the gate, and the salient portions of the image are merged. If there’s cruft around the edges, who cares? It’s not something one would ever use anyway. What matters is what’s in the frame.

It depends on the gauge. For the 6.5k ScanStation, the dimensions of the full aperture (the film frame) are 5.3k for Super 35mm and Super 16, 4.8k for Regular 16mm, 4k for Super 8, and about 3.3k for 8mm (though we usually recommend 4k for R8, because there’s typically picture between the perfs, and for home movies that can be the difference between seeing someone and not). See: Lasergraphics ScanStation 6.5k Maximum Resolutions | Gamma Ray Digital

Do you mean oversampling? Overscanning typically refers to the area of the image being scanned, and oversampling to the process of capturing more resolution than you need, then reducing it to a lower resolution. Oversampling of a Bayer image, in particular, is capable of producing images with the same color as a true RGB scanner, because of the downsampling process.

The Arriscan (and the Northlight) are 6k scanners but were never really marketed that way, because the intent is to oversample the image at 6k and output a 4k image. You can make a 6k file on both, but you lose the benefits of that oversampling when you do.
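The benefit of oversampling can be demonstrated with a toy example: averaging blocks of a noisy high-resolution capture roughly halves the per-pixel noise for a 2:1 downscale. (An integer-ratio downscale is used here for simplicity; a 6K-to-4K resample works the same way with fractional kernels.)

```python
# Oversample-then-downscale: block-averaging a noisy capture reduces
# per-pixel noise, which is one reason 6K->4K beats a native 4K scan.
import numpy as np

rng = np.random.default_rng(0)
scene = np.full((512, 512), 0.5)                      # flat grey target
capture = scene + rng.normal(0, 0.1, scene.shape)     # noisy 2x capture

# Average each 2x2 block: half the resolution, ~half the noise std.
down = capture.reshape(256, 2, 256, 2).mean(axis=(1, 3))

print(round(float(capture.std()), 3), round(float(down.std()), 3))
```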

Overscanning is only about how much outside of the frame you’re capturing. For most scans, we only recommend a small amount of it. For Super 8, more is better, because the top and bottom frame edges can be used for post-scan stabilization if you want it, as S8 tends to have a lot of up/down jitter. For 8mm, you don’t need it as much unless you want to see what’s between the perfs.

This is reasoning I simply cannot understand. The point of scanning film digitally, especially in an archival context, is to capture what’s on the film as faithfully as possible. Just because projection can soften an image doesn’t mean it should be the benchmark. If you don’t like the sharpness of the image, there are lots of tools that can be used to soften it, but that’s not something that should ever be done during the scanning process.

Yeah, sorry, you’re right about that. I would normally have known that; I must have been having a brain-freeze. 5.3K for Super-35 (no soundtrack).

Some film can have really bad gate weave in it, so the frame edge would help there with post-stabilisation.

Well, I could go on about this at length. Scanning a camera negative with a razor-sharp digital scanner has benefits, but it also comes with a lot of drawbacks, especially when the film’s cinematic qualities relied on leveraging the properties and limitations that film afforded it. Stripping the film of those limitations can result in scenes and effects looking substantially different from how they were intended; it can reveal the “flaws” that film covered up and expose unwanted levels of detail.

… that’s an interesting point. Digital technology allows one to transform an image to an extent where the original “soul” of the material is no longer present. In addition, the cinematic experience was usually a product of several generations of different film stocks joined together in the production chain from camera to cinema - no viewer has ever seen a camera negative in its full glory. So at the onset, it is not clear what kind of cinematic experience one is aiming at. Furthermore, the visual expectations of viewers have changed over the generations, which complicates matters even more. In the end, I think it is an artistic choice what one defines as the “ultimate digital copy” of an analog medium.

In my view, there are actually always two digital copies.

One is the original raw scan, at the highest resolution, both spatially and quantization-wise - the best you can afford. This is for the archive, capturing in the best possible sense what is left of the original film stock. Aiming for the best scan quality possible gives you enough headroom for upcoming, but not yet available, advances in technology which will allow you to further optimize the second type of digital copy.

This second digital copy is the one for viewer consumption. It is subject to viewer expectations and the current limits of coding, transfer and display technology. This digital copy is a statement in time, and it will change in a few years - think of DVDs being replaced by Blu-ray. Another example: while I am able to capture the full dynamic range of Kodachrome 40 (archive copy), I am not able to display this dynamic range with current display technology. So my viewer copy of Super-8 Kodachrome material does not look like Kodachrome at all. But in this copy, all visual details of the original scene are present. In a few years, I can hopefully take my archive copy and redo the viewer release with the appropriate dynamic range.

To summarize my point: technically, you want your initial scan to have the highest quality you can afford. But this material will never be viewed by your audience. What is released as the digital copy is defined by the expectations of your consumers, your technological platform, and your personal taste.


I think you may have missed part of my point. The camera negative for a movie isn’t the source material; it was an intermediary element only. That limitation was well understood and taken advantage of creatively. For example, dark scenes look very different on a print compared to a negative: there’s more detail and colour in a negative, but that may not be how the night scene is supposed to look. If it was shot day-for-night, that becomes more obvious; if it was intended to be so dark that you can barely make out the actors or props, then in a negative all the details covered over by the print’s properties are suddenly exposed to the viewer. There isn’t an easy way to preserve those scenes either, because if you put out a Blu-ray with the dark scenes as dark as they were on film, not only will they look like rubbish because of the poor dynamic range and compression, but no one will be able to see what they’re supposed to be following in a brightly lit living room.

That’s an “extreme” example, but there are plenty of other ways in which going to the negative can reverse the creative aspects of a cinematic film and reveal details/flaws that were not originally as obvious (hair and makeup, cheap set and costume materials, wires and matte paintings, stunt doubles, etc.). There are a bunch of decisions taken at the time of filming based on how the final product looks, not on how the negative itself looks as an intermediary element.

@filmkeeper - maybe I wasn’t clear enough in my previous post. As I indicated, I am aware that there are several copy-and-modify steps between the camera negative and what finally constitutes the cinematic experience. My point was that it is a) difficult to define “the cinematic reference experience”. Do you base it on a 70mm cinema viewing in LA, or on the 16mm copy of a copy viewed in a small village somewhere else on this planet?

On many occasions, the only reference left will actually be the copy you are scanning. In that case, you will need to interpret, with all the knowledge you have about the cinematic process (and maybe even the personal style of the director), what would be a good recreation of the original viewing experience.

For most of us, the copy we are going to scan will be several generations away from the original edit.

My point was further b) that current digital technology is unable even to recreate the experience of watching simple home movies faithfully. You simply cannot recreate the intrinsic interaction of a projected Super-8 movie with our visual system using current digital technology - let alone come close to the experience of watching one of the larger, more challenging formats.

So: we just need to acknowledge that digitizing movie material involves two things. A) Digitizing and storing what is left of a decaying copy of the original film (archive copy), and B) re-interpreting this material into a digital version which is as close as possible to the original (release copy).

And it is in this re-interpretation where your own taste and experience on “how the scene should really look” comes into play.

Overscanning includes the frame edge by definition. If there’s gate weave, you can increase the amount of overscan a bit (on the ScanStation this means you output a slightly larger file with a bit less crop) to ensure the frame edge doesn’t go outside the bounds of the file. But it’s rare that this is needed, except with elements that have been duplicated optically from old or damaged films.

We always approach scanning from an archival perspective, so we treat new and old film in basically the same way. One can easily duplicate, digitally, the softening that happens in optical duplication, if that’s the intended look. Thus, the film should always be captured in such a way that the result is as close to the image on the film as possible. One way of looking at it that I find helpful is to think of scanning the film, not the picture. Our job is to digitally reproduce the film as closely as we can, not so much the picture the film contains. Color, grain management, noise reduction - all the stuff that used to be part of the telecine chain should be completely decoupled from scanning, because all of it can be done as needed in post now.

We recently rescanned the 16mm A/B rolls for a film made in the 1990s by a local, well-known filmmaker. At first, he thought it looked “too sharp” because he was so used to looking at prints derived from the neg. But by the time they were done with the grading, he called to let me know that they felt like they had a whole new film. His words:

the images have a clarity and luminousness we didn’t think possible in digital. We feel like we have a 25-year old, brand new film!

This is just a basic 4k flat scan of 16mm color neg done on the ScanStation, but with the post work done using modern software-based systems.


This is the best mission statement, thank you.
Before working on it, I had never imagined what an 8mm film can give, and it is certainly beyond the resolution available with HD. One can limit one’s goals to the particulars of the project (and the resources to buy a camera), but I wholeheartedly agree that if you have the sensor and the storage, more is more.


I managed to pick up a couple of the FLIR Blackfly S 12 MP cams very cheap second-hand.
These ones FLIR Blackfly S BFS-U3-122S6C-C Camera | Edmund Optics

I’ve mounted one on my MovieStuff MkII scanner in the meantime. I’m using the Componon 40mm lens, which, when coupled with an extension tube, almost fills the Super 8 frame.

I’m not too sure what the best image format to use is. I’ve tried Bayer16, but its frame rate is too low (I need at least 15 fps).
I was thinking about using Bayer10p.
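For a rough sanity check, the raw data rates for the different Bayer packings work out like this. The sensor dimensions (4096 x 3000) and the ~400 MB/s practical USB3 ceiling are assumptions on my part; check the camera’s datasheet for the real limits:

```python
# Rough USB3 bandwidth math for the pixel-format question above.
# Sensor size (4096 x 3000) is an assumed 12 MP Bayer sensor.
WIDTH, HEIGHT = 4096, 3000

def mb_per_second(bits_per_pixel: int, fps: float) -> float:
    """Raw Bayer stream data rate in MB/s."""
    return WIDTH * HEIGHT * bits_per_pixel / 8 * fps / 1e6

# Bayer16 at 15 fps sits close to a practical USB3 ceiling of
# ~400 MB/s; packed 10-bit leaves comfortable headroom.
for name, bpp in (("Bayer16", 16), ("Bayer12p", 12), ("Bayer10p", 10)):
    print(f"{name}: {mb_per_second(bpp, 15):.0f} MB/s at 15 fps")
```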

I’ll share a few frames once I get things working ok :slight_smile:


Connect a speed controller like this one and scan more slowly.
