Capture software made just for film scanning

A big problem with this is that there are so many variables involved that it almost has to be a custom thing. Everyone’s scanner is just a bit different.

While some cameras might use USB, are they all speaking the same communication protocol, or are some using standard USB connectors with a proprietary method of communication on top? What about faster cameras that use 10G Ethernet? What about the style of scanner: some might build a constant-motion scanner, some might build intermittent-motion. HDR? Constant light, or flashes coordinated with the exposure? All of this is doable, but it’s a lot of work to write something like that in a generic enough way that it works for lots of people.

There are some open source libraries out there that address a lot of these issues, but pulling that all together into a complete package is really a major project.


This is only for the Blackmagic Cintel, not the Kinograph or DIY scanners. I had a chat with him; he said if he’s going to let companies have it, they will need to buy a license. He also doesn’t want to do the licensing himself (I may be able to get a company to do that, though).

All the software does is work around a problem Resolve has with importing Blackmagic’s own .CRI format. Since there’s basically no other software compatible with .CRI, it converts and debayers the images to .DNG; you then import the .DNGs (with the correct settings) into Resolve and continue the post-scanning process as normal.

When I said GigE I was putting all 10G Ethernet cams under the same banner.

But you’re right, there are a lot of variables. Still, surely it’s better than just running SpinView.

I was meaning a basic program to capture frames or an AVI sequence, with an easy-to-use GUI and some presets that would be common for film scanning. The software is not built for controlling scanners; it exists literally just to communicate with a camera and grab frames when the GPIO on the camera is triggered.
How you trigger the camera is up to you, and built into your own scanner’s hardware.
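
For example, the bare bones of that with PySpin (the Python binding for Spinnaker, which SpinView sits on top of) might look something like the sketch below. This assumes a FLIR camera with the trigger pulse wired to Line0; node and enum names vary by camera model, so treat it as a starting point rather than a finished tool.

```python
import PySpin

system = PySpin.System.GetInstance()
cam_list = system.GetCameras()
cam = cam_list.GetByIndex(0)
cam.Init()

# One exposure per pulse on the camera's GPIO (Line0 here).
cam.TriggerMode.SetValue(PySpin.TriggerMode_Off)   # must be off to change source
cam.TriggerSource.SetValue(PySpin.TriggerSource_Line0)
cam.TriggerMode.SetValue(PySpin.TriggerMode_On)

cam.BeginAcquisition()
for i in range(1000):                    # one image arrives per trigger pulse
    image = cam.GetNextImage(10000)      # grab timeout in ms
    if not image.IsIncomplete():
        image.Save(f"frame_{i:06d}.tiff")
    image.Release()
cam.EndAcquisition()

cam.DeInit()
del cam
cam_list.Clear()
system.ReleaseInstance()
```

Everything scanner-specific (transport, lights) stays outside this loop; the software only ever sees trigger pulses and frames.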

Constant vs intermittent motion is mostly just an exposure time change.
For constant motion, as the fps for a given film gauge increases, the exposure time has to decrease to reduce smearing. This could just be manual entry, or custom presets you set up.
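
As a back-of-the-envelope illustration (my numbers, not anyone’s spec): if the frame height on film is roughly equal to the frame pitch, the film travels one frame height per frame period, so the maximum exposure falls straight out of the fps and how much smear you’ll tolerate:

```python
# Rough motion-smear bound for a constant-motion scanner.
# Assumes the frame height on film is about equal to the frame pitch,
# so the film moves one frame height per frame period.

def max_exposure_us(fps: float, scan_lines: int, max_smear_lines: float = 1.0) -> float:
    """Longest exposure (microseconds) that keeps smear below
    max_smear_lines scan lines at the sensor."""
    frame_period_s = 1.0 / fps
    smear_fraction = max_smear_lines / scan_lines
    return frame_period_s * smear_fraction * 1e6

# Example: 10 fps, 2000-line scan, smear held to ~1 line -> 50 µs exposure.
print(max_exposure_us(fps=10, scan_lines=2000))  # 50.0
```

So a preset is really just a (fps, exposure) pair per gauge, which is easy to expose in a GUI.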

I’m not that interested in HDR etc, so I can’t comment. Most of the builds I see on here aren’t HDR. We just want to grab frames.

RGB mixing can be a scanner hardware control once again.
I’m not trying to recreate cine2digits

I’m talking more about software for newly built scanners to capture frames, not workarounds for existing scanners running their own software.

I understand what you’re saying, but the image capture process is so intertwined with so many other processes that I don’t think you could really make something generic. It would be possible to make something based on GenICam, which would work with any GenICam-compliant camera (which means a lot of cameras). But adding a GUI to it is a whole other thing, because it necessarily has to coordinate with other elements of the scanner (like the lights, even if through some hardware). I just think this is a really substantial project, and it would be hard to make something that satisfies enough people’s needs to make it worthwhile, let alone most people’s needs.

I don’t think the capture process has to be intertwined with so many other processes; as I see it, the scanner hardware is independent. Take the Robinoscan-rt-16-35, a hot build on here at the moment. It works very well, transporting film independently of a host computer. And my build: it’s a super simple motor pulling film past a gate, with a Keyence trigger pulsing the camera. Just add a machine vision cam and a computer to capture those frames and you’re done!

I think you might be thinking too much along the lines of your own build, which is very far beyond most people’s DIY skill level.
GenICam is just what I was meaning, but I mentioned ActiveUSB, which is made just for USB3 Vision standard cameras. A&B Software also make ActiveGigE, and now (which I didn’t realise until you triggered me to check) ActiveGenI.

I’ve been using StreamPix, made by NorPix, which works well: it has video scopes, a focus assist, uncompressed capture, etc. But it has a heap of features and tabs that aren’t relevant, and there’s a bit of a learning curve. It’s also over $1000.

To add my 2c, I currently use SpinView (Spinnaker) to capture frames and sync the RGB flashes (using the counter feature and delays). I think SpinView uses the GenICam standard, because when I open StreamPix all my camera settings are loaded automatically, and capture behaves the same in both programs.

The transports are completely independent from the capture software on my machine. In future it would be nice to have everything in software, but I don’t mind operating it like a tape deck in the meantime - and it’s not a compromise.

If you guys do something compatible with GenICam, count me in - I’d rather spend $1K on development of open source film scanning software than get StreamPix, which is a little awkward to use.

I did my first “proper” Robinoscan scan test using a licensed beta version of the software @filmkeeper mentioned - see below if you are curious how it turned out. It’s a very preliminary test (the camera is not properly aligned and I need to overdrive the lights to get more power), but the results are promising.

Edit: I did the RAW to DNG conversion using the aforementioned software.

It’s a RARE Tron teaser (pretty much unseen) I found in a box at a swap meet in L.A. It had turned red, but I recovered the colors as much as I could.

What it looks like under the loupe.

Scan here:
Password: tron


Wonderful scan. You just didn’t attach enough leader so it cuts out at the end haha, but very nice result nonetheless.

I’d love to see a video of your rig in motion to see how it’s actually handling the film.

I’m thinking more along the lines of someone who has scanned a lot of film. Sure, you can separate that stuff, and as in the Robinoscan, the system works like a “tape deck”: hit record in the software and play on the device, and just capture. And this is fine for some things. But as soon as you have a reel that has both color and B/W film on it (extremely common with home movies, I might add), you may find you need different settings for those films.

Or it may be that some of the framing is different from shot to shot. This happens a lot in 8mm, but it’s also very common in 35mm where whoever spliced the film wasn’t careful to splice along the same perfs as the rest of the frames, and you might wind up with a 3-perf frame. This throws off the racking for the rest of the reel, and you need to stop, back up, reframe, and go. A lot of this stuff can be easily automated, but you can’t do that with a “wild capture,” which is basically what you’re looking at.

I don’t use SpinView, but it sounds like what you’re looking for. If it does indeed use GenICam, then all you’d really need to do is set up different configurations for the various types of film you’ll be using and load them as needed. Then hit record, play on the scanner, and let it run.

I guess what I’m saying here is that software to do what you want does already exist. It may not be the most elegant solution, but I don’t think recreating that will necessarily result in anything much better.

It would be great to see something that could actually control a wide variety of devices (cameras, lights, motors), receive responses from sensors, etc. But that stuff is complicated to do in a generic enough way that it’s useful for everyone, and takes into account the vast variety of cameras, controllers, etc that everyone is using.

Now, something that’s specific to Kinograph would be interesting, as it would be open source and would be a starting point for someone to customize. But I’m not sure where the software plans stand at this time. @matthewepler ?

Yes, that happens with prints as well. If someone has repaired their print using a donor print, for example, you could be dealing with two completely different types of material.

Just about every 8/S8 reel of more than 150 ft is likely to have sections with vastly different exposure too.

Controlling for every possibility is hard, and there will always be something else. In the mindset of Kinograph, it may be appropriate to set up a framework for common control of different devices, as a layer that individual devices can connect to, and ultimately have software that uses this common framework for scanning.

I have been thinking along those lines for the 8/S8 build. In that context (home film, poor splices, varying exposures), it makes sense to have a simple protocol for light and transport over a USB port. I’m leaning towards using a Raspberry Pi Pico (an Arduino would work too). Basically, the transport and light become a box with a USB port.
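
On the host side, that ‘box’ could be driven with something as simple as the sketch below (pyserial, with a made-up two-command protocol - the command names, port, and reply convention are mine, purely to illustrate the idea):

```python
import serial  # pyserial

# Hypothetical command set for the transport/light box:
#   LED <r> <g> <b>  -> set light levels (0-255)
#   ADV 1            -> advance one frame
# The firmware replies "OK" when the action completes.

box = serial.Serial("/dev/ttyACM0", 115200, timeout=5)

def send(cmd: str) -> str:
    box.write((cmd + "\n").encode())
    return box.readline().decode().strip()  # block until the box says OK

send("LED 255 255 255")  # light on, full white
send("ADV 1")            # pull the next frame into the gate
```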

In the same manner, the camera can be another section. The context here is stop-motion, and in that context the camera is already boxed. For those without the budget for a machine vision camera (like me), the alternative to a Raspberry Pi HQ camera may be a mirrorless camera. Both may be controlled via Python (the mirrorless via gphoto2), allowing everyone to create their own front-end for their scanner’s particulars.
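
For the mirrorless path, a minimal capture with python-gphoto2 might look like this (a sketch, with error handling and file naming omitted):

```python
import gphoto2 as gp

camera = gp.Camera()
camera.init()  # attaches to the first USB-connected camera

# Trigger one exposure and copy the resulting file off the camera.
path = camera.capture(gp.GP_CAPTURE_IMAGE)
photo = camera.file_get(path.folder, path.name, gp.GP_FILE_TYPE_NORMAL)
photo.save(path.name)

camera.exit()
```

Advance the frame with the transport box, call capture, repeat - that’s the whole stop-motion loop.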

But for those just starting, if there were a ‘box’ that solved the light driver and transport control, it would be a huge leap forward for open source builds.

I agree with @friolator that covering every possibility for continuous motion adds a lot of complexity, but for a constrained build like Kinograph there may be a path forward.

Food for thought.

…There’s plenty of leader; the film just ends abruptly…

But if you have a spliced roll with different film types (b&w & color… ) wouldn’t you still need to first sit at the scanner workstation and manually jog to the different film types / mark in-out points for the different scanning settings before you start the non-supervised scan?

The material I scan is mostly shot and prepped by myself; sometimes I scan other filmmaker friends’ films, so it’s no problem to let it go as I know exactly what’s on the rolls. Also, scanning at that speed is mind-blowing for me, coming off optical printer intermittent systems.

SpinView is a good workaround for now, but it is missing scopes, and the UI is really meant to “test” FLIR cameras - they advise using it for testing only, not in a work environment. There’s a shitton of useless settings that could be taken out, and it’s a little buggy sometimes.

SpinView forces us to save the raw data to disk and do post-processing after. It would be nice to process stuff before saving the data (debayer / stabilization), as you said in another thread, @friolator.

SpinView also works like poo on Mac; it would be nice to get a cross-platform solution.

I really think GenICam-based software could work for many scanners, including Kinograph, if it’s made as frame capture software only, without scanner transport controls. I’m down to put some money toward development.

Yeah - that’s exactly my point about this really needing to be integrated with the rest of the system, if you’re going to go through the trouble of making a whole new capture interface. It’s just a lot of work. I’m in the middle of the software for Sasquatch, and it seems like every day I find something new I hadn’t thought of and will need to implement, and I was pretty sure I had thought of most of the stuff we’d need. …and this is going to be a pretty stripped down application in its first version.

We had discussed GenICam in another thread a few months back, but I don’t know where things are at with the Kinograph software. I’m sure Matthew has plenty going on with the hardware to start worrying about that right now! But I think GenICam is a good starting point because most industrial cameras do support it, even the lower-end ones. And it works with a variety of interfaces, from USB up through something high-end like CoaXPress, which we’re using.

Yeah, I’ve had 8mm stapled onto Super 8 mid-reel, and sections backwards. But this is the type of stuff I pick up when I run a customer’s film on the rewind bench and check and clean it.

This does get annoying, but I will normally overscan enough to accommodate it. If I’m stupid and didn’t overscan enough, I’ll just rewind a bit and go from there. I’ve never seen it as a problem. With 8mm, you can almost predict that every 25 ft it will jump up or down by the distance of a sprocket hole, from when they flipped the reel.

I’m not really sure how this helps your argument, though? This sounds terribly complicated. I was under the impression the “Kinograph” idea was to produce a scanner and parts that everyone could duplicate at home. Although, funnily enough, nobody on this forum actually sticks to the “Kinograph” build; everyone just does their own thing, totally different from the original build. This is the perfect example where people can just build their own scanner however they want, whack a machine vision cam on there, and start capturing.

This is an issue, and I don’t really have the best solution for it, apart from using a basic auto-exposure system that changes the exposure time but has a hard maximum exposure to avoid smearing; when it hits that limit, it pushes the gain up instead. This is pretty much how the software bundled with the Moviestuff scanners works, and for a basic user it’s totally fine. When I spoke to Boris at A&B Software, he said they might be able to produce an auto-exposure algorithm that looks at peak pixel values and adjusts to avoid overexposure. But you’d have to have a couple of sliders to adjust sensitivity and reaction speed. And it would cost around $9K lol.
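
For what it’s worth, the peak-value approach could be sketched in a few lines - this is my own rough interpretation, not A&B’s algorithm, and the constants just stand in for those sensitivity/reaction sliders:

```python
import numpy as np

MAX_EXPOSURE_US = 500.0   # hard stop to avoid motion smear
TARGET_PEAK = 0.90        # hold the brightest pixels at ~90% of full scale
SMOOTH = 0.2              # "reaction speed": fraction of correction applied per frame

def auto_exposure_step(frame: np.ndarray, exposure_us: float, gain_db: float,
                       full_scale: int = 4095):
    """One peak-based auto-exposure update. Returns new (exposure, gain)."""
    peak = np.percentile(frame, 99.9) / full_scale  # near-peak, ignores hot pixels
    correction = TARGET_PEAK / max(peak, 1e-6)      # >1 means underexposed
    correction = 1.0 + SMOOTH * (correction - 1.0)  # damp the reaction

    exposure_us *= correction
    if exposure_us > MAX_EXPOSURE_US:
        # Exposure pinned at the smear limit: make up the rest with gain.
        gain_db += 20.0 * np.log10(exposure_us / MAX_EXPOSURE_US)
        exposure_us = MAX_EXPOSURE_US
    return exposure_us, gain_db
```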

Obviously bad splices and whatnot should be fixed when prepping a reel. But you’re not going to go physically extracting the B/W reels from the color, or the souvenir prints picked up at Yosemite from the Kodachrome shot on the same trip. If you have control over the motors, you can do things like setting in and out points, and that opens the door to stuff like multiple settings per reel. This is a massive time saver, especially as resolutions go up and speeds go down. And it should be a basic feature of almost any film scanner.
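
In/out points with per-segment settings are basically just a lookup table. A trivial sketch of the idea (the field names and numbers are mine, purely illustrative):

```python
from dataclasses import dataclass

@dataclass
class Segment:
    in_frame: int
    out_frame: int
    exposure_us: float
    gain_db: float

# One reel, three stocks spliced together - marked during a preview pass.
reel = [
    Segment(0,    3200, exposure_us=250, gain_db=0.0),  # Kodachrome
    Segment(3200, 4100, exposure_us=400, gain_db=0.0),  # B/W section
    Segment(4100, 7300, exposure_us=250, gain_db=0.0),  # back to Kodachrome
]

def settings_for(frame: int) -> Segment:
    return next(s for s in reel if s.in_frame <= frame < s.out_frame)
```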

This was precisely my point, that this is a complicated subject and that basically redoing what something like Spinview does is a waste of effort. If one is going to go through all that trouble, go all the way and make it different enough that it’s worthwhile.

But auto-exposure is a terrible idea. The correct way to do a scan is to set the levels such that nothing is clipped or crushed in the scan, with enough dynamic range to allow for a certain amount of variation. It gets harder to do this when you have drastic changes (like color to B/W, or a badly faded print intercut with camera original), but the idea is that you find a single exposure setting that works for the reel and then deal with correction in Resolve or something like that after the scan. The scan should be done “flat” so that you’re not baking in color correction.

Auto-adjusting the levels makes for a nightmarish task of trying to fix errors later. It’s not so much about over- or under-exposure (though that would be an issue to consider); it’s about the fluctuations in brightness that get baked into the image. Fixing that later is really tedious, time-consuming work, and an auto-exposure algorithm is going to get triggered any time the camera pans to look outside a window during daylight, or from someone standing near some trees to a shot of the sky, for example. There’s also usually a lag where it has to react to the image as it’s going by, so you lose a few frames to crushing or blowout at drastic change points while the auto-exposure algorithm does its thing.

The right way to do it, the way a high end scanner does it, is to capture as much dynamic range as you can while setting the max brightness below peak and the minimum brightness above zero. This way you have a chance at pulling out shadow and highlight detail when grading later, where you can go through the reel shot by shot.

Once again, this is perfect for someone like yourself with experience on high-end machines, but I don’t think 3/4 of the members of this group are capable of that type of build, and the ones that are capable will be developing their own software - again, like yourself.

You said you hadn’t used SpinView, so you won’t understand what it’s like to use. But like Robinojones said, it’s good for evaluation. Please just trust me when I say it’s not worth using.

Yes, for sure, but the subject of whack home movies was brought up, and their density and exposure are all over the shop. I use the auto-exposure on my FilmFabriek HDS quite frequently for home movies, but otherwise I’ll have it locked off at the peak value. If I set it for one peak, 5 ft later they’d be filming inside in the pitch black, then 5 ft later they’ve followed the dog outside into the backyard and it’s super bright again.

I really think you’re chasing perfection with your system, and that’s totally fine, but not everyone else has to have the same standard as yourself, and it’s ok to have a simple solution. I don’t expect you to put money into this idea or agree with anything I say.


We are aiming for perfection with our scanner yes. But my suggestion to include more than just capturing is drawing on my experience of having dealt with millions of feet of film over the years. Nobody has to take that advice, I’m just putting it out there.

Also, I think you underestimate the capabilities of the people on this forum…

This is precisely why changing exposure/grading in the scanner is the wrong place to do it.

This is our process for scanning a typical home movie reel:

  1. Set the white point to 90%-95% of the white that’s in the perforation. This is pure light, always brighter than any light that’s going through the film. Setting this to about 95% ensures that you will never clip the image, because it would be impossible for there to be more light coming through. (A rough numeric sketch of this step follows the list.)

  2. Set the black level slightly higher than the calculated default in the Lasergraphics scanner. We set it at around 10% (the scanner sets it slightly lower for positive film). This ensures you will never crush any shadow detail.

  3. Shuttle the film to the end of the reel (or the beginning, if the reel is tails out) while watching the film. If you see major changes (say, a B/W section), confirm nothing is clipped or crushed. It usually isn’t, but the black levels will typically change. If it is different enough to warrant a different setting, create a marker and adjust the settings accordingly. Rinse, repeat.
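
Here is that rough numeric sketch of step 1, assuming a 12-bit camera and a known perforation region in the frame (the ROI coordinates, the use of the median, and the exact 95% target as written are illustrative, not exactly what the Lasergraphics does):

```python
import numpy as np

FULL_SCALE = 4095  # 12-bit sensor
TARGET = 0.95      # perforation white at ~95% of full scale

def exposure_correction(frame: np.ndarray, perf_roi) -> float:
    """Multiplicative exposure correction that puts the perforation
    (pure open-gate light) at the target level. 1.0 means already there."""
    y0, y1, x0, x1 = perf_roi
    perf_level = np.median(frame[y0:y1, x0:x1]) / FULL_SCALE
    return TARGET / perf_level

# e.g. scale the exposure time by this factor, grab a frame, re-check:
# correction = exposure_correction(frame, (100, 200, 0, 80))
```

With the perf pinned just under clipping, nothing in the image itself can clip, which is the whole point of step 1.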

This lets you do a few things at once: evaluate the film, confirm your settings, and ensure that you didn’t miss any bad splices while prepping.

In the end, you get a scan that looks a bit washed out, but with more than enough dynamic range to quickly recover the full image and still do any color balancing or correction, if necessary.

I guess my point is that the scan is the wrong place to do any color correction or exposure adjustment. The purpose of a scan should be to capture a digital version of what’s on the film, because the better tools for things like teasing out shadow detail are available in (free) applications like Resolve. These are much more capable than any scanner software’s rudimentary color correction tools (and that includes all scanners, from the RetroScan on up to million-dollar machines). The workflow should be essentially the same regardless of the scanner.


Hi all, sorry I’m late to this epic party. This is an important topic, so thank you to all who have contributed and will contribute.

My perspective is biased towards the realities of production. By that I mean:

a) it’s impossible to plan for every possibility and the ones you can plan for will change many times
b) software development is very expensive
c) there is no one solution for every use case

My other perspective is that of a software developer, which I am full-time for my day job. My experience there has taught me that you only build the minimum of what you need to get the most impactful result. From there, you only add features when you need them, and even then you avoid master plans and instead opt for incremental builds.

So, what does that mean for Kinograph and this idea of software we can all benefit from? Stick with me, I have been thinking about this off and on for years and this is the first time I’m trying to articulate it so I might ramble a bit…

Here are steps I think would be useful to take:

  1. Research user workflows and desired outcomes. We are making assumptions about other people’s workflows based on our own knowledge and experience. Collecting information directly from users about what they want and what they already do/have is an essential step in meeting our audience where they are instead of giving them something we think is what they want.

  2. Map out existing solutions and which workflows or parts of workflows they solve for already. Both paid and unpaid solutions should be considered. What are the tradeoffs? What gaps exist? What open source solutions exist we haven’t found yet?

  3. Make a list of prioritized features to build. Similar to the 80/20 principle, what are the most impactful features we can build with the least amount of effort? From there, we work our way up the complexity ladder iteratively, releasing them one at a time instead of building “the perfect” software and dropping it all at once and hoping it’s what people want. This will allow us to be efficient with our time and get feedback throughout the development process from our users.

  4. Create ownership of features. When we have our features, I think it best to form teams for each of those features so that we build real ownership and accountability for our product. Even if we hand parts of it to paid developers, it’s important that we make the design decisions ourselves, from the overall software architecture to the appearance of our GUI. This will ensure long-term continuity as well as a shared sense of responsibility.

  5. Take inventory of our resources and scope what we think we can do and what we might need help with. Before we start asking for money, we should know exactly what we’re asking people to support. And they should be able to see mock-ups, written documentation about our decisions, and clear structure in place for project management and reporting back. That all happens as a result of the previous steps. We may surprise ourselves and have much less to build than we thought, or the opposite. Until we take inventory we won’t know.

Lastly, I encourage everyone to read the text I just published on the wiki home page, particularly the “Design Principles” section. This is a framework for us to make decisions big and small, such as this one surrounding software.

My suggestion is that we start this software endeavor as a group discussion focused on defining our design principles for the software. Then, as we walk through the list above, we can refer to our principles as a group when in doubt to help us find our way.

Here are some examples below. These are meant only as a starting point.

  • Affordable - Wherever possible, we should encourage the use of already available tools instead of reinventing the wheel (unless that wheel is out of reach financially or legally). Additionally, what are the most efficient tools and design patterns to achieve the desired outcomes?
  • Flexible - our software architecture should be loosely coupled, so that new features are unlikely to break existing ones, and if a user wants to modify a feature, they are unlikely to break anything other than that one feature.
  • Accessible - software should be written in a language that is common and which balances performance with ease of learning, so as to encourage beginner coders to contribute. Design patterns should be easy to read and build upon. Ease of fixing and improving a feature should be prioritized over its clever complexity.
  • Repeatable - choosing a software stack that is likely to last a long time and is not dependent on shiny new things that tend to fade or need constant upgrading. Also, features should follow a pattern, so that it is easy for developers to take an existing feature as a starting point for making their own.
  • Good enough - for now, this means supporting our pilot partners in getting from raw scans to viewable movie files. This may mean our interface is ugly for a while, or that our list of features is very, very small. As long as we provide a workable solution for users to attain their scanning goals using existing options and/or our software, that is good enough.

IN OTHER WORDS…

Let’s focus on how we build first before we dive into a list of features.

And when we do start a list of features, let’s start with a non-feature: the core architecture. I believe it will be something like the Linux project. There will be a “kernel” which will act as an interface, and all features will be attached to the core but not to each other. Our first job, after answering the above general questions, will then be to design this architecture and test it with a minimal feature set.
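
To make the kernel idea concrete, here is one possible shape for it - purely illustrative, not a proposed API, and the event names are made up:

```python
from typing import Callable, Dict, List

class Kernel:
    """Minimal event bus: modules attach to the kernel, never to each other."""

    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable]] = {}

    def on(self, event: str, handler: Callable) -> None:
        self._handlers.setdefault(event, []).append(handler)

    def emit(self, event: str, **data) -> None:
        for handler in self._handlers.get(event, []):
            handler(**data)

kernel = Kernel()

# A transport module announces frames; capture and scope modules react,
# without knowing anything about each other.
kernel.on("frame_in_gate", lambda frame_no: print(f"capture frame {frame_no}"))
kernel.on("frame_in_gate", lambda frame_no: print(f"update scope for frame {frame_no}"))

kernel.emit("frame_in_gate", frame_no=42)
```

The point is that swapping a camera module or a transport module never touches any other module, only the kernel contract.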

Okay, that’s enough for now. I look forward to and welcome your thoughts and reactions!

3 Likes

I think this all makes perfect sense and seems like a good starting point.

I would second Pablo’s comment above that building a framework that all the different features hang off of makes sense, and that dovetails with what you’re suggesting about a “kernel” at the core of everything.

Making almost everything a kind of plug-in module makes a lot of sense here, because of the wide variety of cameras, camera interfaces, camera drivers, home-grown LED setups, Motion controllers (Arduino, Raspberry Pi, ClearCore), sensors, etc. It also leaves open the door for software-only add-ons, like scopes, or even post-scan processing steps like file format conversions.

With this kind of setup, you could basically choose the modules you want to include in your scanner and make it as simple or complex as you need.

The main thing with this kind of setup is really getting a very well thought out, lightweight, simple framework that’s flexible enough to handle all these different kinds of modules.
