Capture software made just for film scanning

We are aiming for perfection with our scanner, yes. But my suggestion to include more than just capture draws on my experience of having dealt with millions of feet of film over the years. Nobody has to take that advice; I’m just putting it out there.

Also, I think you underestimate the capabilities of the people on this forum…

This is precisely why changing exposure/grading in the scanner is the wrong place to do it.

This is our process for scanning a typical home movie reel:

  1. Set the white point to 90-95% of the white that’s in the perforation. This is pure light, always brighter than any light passing through the film. Setting it to about 95% ensures you will never clip the image, because there can never be more light than what comes through the open perforation. (A sketch of this rule follows below.)

  2. Set the black level slightly higher than the calculated default in the Lasergraphics scanner. We set it at around 10% (the scanner sets it slightly lower for positive film). This ensures you will never crush any shadow detail.

  3. Shuttle the film to the end of the reel (or to the beginning if the reel is tails out) while watching the film. If you see major changes (say, a B/W section), confirm nothing is clipped or crushed. It usually isn’t, but the black levels will typically change. If a section is different enough to warrant different settings, create a marker and adjust the settings accordingly. Rinse. Repeat.

This lets you do a few things at once: evaluate the film, confirm your settings, and ensure that you didn’t miss any bad splices while prepping.
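If you wanted to compute those two levels yourself from a raw preview frame, it boils down to something like this (a sketch only: Python/NumPy, with a made-up `perf_region` slice covering an open perforation; the 95%/10% factors are just our rules of thumb, not anything pulled from the Lasergraphics software):

```python
import numpy as np

def suggested_levels(frame: np.ndarray, perf_region) -> tuple:
    """Derive (black, white) levels per the never-clip / never-crush rule."""
    perf_light = float(frame[perf_region].max())  # pure light through the perf
    white = 0.95 * perf_light  # step 1: white point at ~95% of perf light
    black = 0.10 * perf_light  # step 2: black floor at ~10%, above the default
    return black, white

# e.g., for a 16-bit scan with the perforation in the top-left corner:
# black, white = suggested_levels(scan, (slice(0, 200), slice(0, 100)))
```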

In the end, you get a scan that looks a bit washed out, but with more than enough dynamic range to quickly recover the full image and still do any color balancing or correction, if necessary.

I guess my point is that the scanner is the wrong place to do any color correction or exposure adjustment. The purpose of a scan should be to capture a digital version of what’s on the film, because the better tools for things like teasing out shadow detail are available in (free) applications like Resolve. These are much more capable than any scanner software’s rudimentary color correction tools (and that includes every scanner from the RetroScan on up to million-dollar machines). The workflow should be essentially the same regardless of the scanner.

2 Likes

Hi all, sorry I’m late to this epic party. This is an important topic, so thank you to all who have contributed and will contribute.

My perspective is biased towards the realities of production. By that I mean:

a) it’s impossible to plan for every possibility, and the ones you can plan for will change many times
b) software development is very expensive
c) there is no one solution for every use case

My other perspective is that of a software developer, which is my full-time day job. My experience there has taught me to build only the minimum of what you need to get the most impactful result. From there, you add features only when you need them, and even then avoid master plans in favor of incremental builds.

So, what does that mean for Kinograph and this idea of software we can all benefit from? Stick with me; I have been thinking about this off and on for years and this is the first time I’m trying to articulate it, so I might ramble a bit…

Here are steps I think would be useful to take:

  1. Research user workflows and desired outcomes. We are making assumptions about other people’s workflows based on our own knowledge and experience. Collecting information directly from users about what they want and what they already do and have is an essential step in meeting our audience where they are, instead of handing them what we assume they want.

  2. Map out existing solutions and which workflows, or parts of workflows, they already solve for. Both paid and unpaid solutions should be considered. What are the tradeoffs? What gaps exist? What open source solutions exist that we haven’t found yet?

  3. Make a list of prioritized features to build. In the spirit of the 80/20 principle, what are the most impactful features we can build with the least amount of effort? From there, we work our way up the complexity ladder iteratively, releasing features one at a time instead of building “the perfect” software, dropping it all at once, and hoping it’s what people want. This will let us be efficient with our time and gather feedback from our users throughout the development process.

  4. Create ownership of features. When we have our features, I think it best to form teams for each of those features so that we build real ownership and accountability for our product. Even if we hand parts of it to paid developers, it’s important that we make the design decisions ourselves, from the overall software architecture to the appearance of our GUI. This will ensure long-term continuity as well as a shared sense of responsibility.

  5. Take inventory of our resources and scope what we think we can do and what we might need help with. Before we start asking for money, we should know exactly what we’re asking people to support. And they should be able to see mock-ups, written documentation about our decisions, and a clear structure in place for project management and reporting back. All of that happens as a result of the previous steps. We may surprise ourselves and have much less to build than we thought, or the opposite. Until we take inventory, we won’t know.

Lastly, I encourage everyone to read the text I just published on the wiki home page, particularly the “Design Principles” section. This is a framework for us to make decisions big and small, such as this one surrounding software.

My suggestion is that we start this software endeavor as a group discussion focused on defining our design principles for the software. Then, as we walk through the list above, we can refer to our principles as a group when in doubt to help us find our way.

Here are some examples, meant only as a starting point.

  • Affordable - Wherever possible, we should encourage the use of already-available tools instead of reinventing the wheel (unless that wheel is out of reach financially or legally). Additionally, what are the most efficient tools and design patterns for achieving the desired outcomes?
  • Flexible - our software architecture should be loosely coupled, so that new features are unlikely to break existing ones, and so that a user who modifies a feature is unlikely to break anything beyond that one feature.
  • Accessible - software should be written in a language that is common and that balances performance with ease of learning, so as to encourage beginner coders to contribute. Design patterns should be easy to read and build upon. Ease of fixing and improving a feature should be prioritized over its clever complexity.
  • Repeatable - we should choose a software stack that is likely to last a long time and that doesn’t depend on shiny new things that tend to fade or need constant upgrading. Also, features should follow a pattern, so it is easy for developers to take an existing feature as a starting point for their own.
  • Good enough - for now, this means supporting our pilot partners in getting from raw scans to viewable movie files. This may mean our interface is ugly for a while, or that our list of features is very, very small. As long as we provide a workable solution for users to attain their scanning goals using existing options and/or our software, that is good enough.

IN OTHER WORDS…

Let’s focus on how we build first before we dive into a list of features.

And when we do start a list of features, let’s begin with a non-feature: the core architecture. I believe it will be something like the Linux project: there will be a “kernel” that acts as an interface, and all features will attach to that core but not to each other. Our first job, after answering the general questions above, will be to design this architecture and test it with a minimal feature set.
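To make the kernel idea concrete, here is a minimal sketch, assuming Python and entirely invented names (`Kernel`, `CaptureModule`, publish/subscribe). It’s an illustration of loose coupling, not a committed design:

```python
from typing import Callable, Dict, List

class Kernel:
    """Central hub: features talk to the kernel, never to each other."""
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable]] = {}

    def subscribe(self, topic: str, handler: Callable) -> None:
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic: str, payload=None) -> None:
        for handler in self._subscribers.get(topic, []):
            handler(payload)

class CaptureModule:
    """Example feature: waits for the transport, then emits a frame event."""
    def __init__(self, kernel: Kernel) -> None:
        self.kernel = kernel
        kernel.subscribe("frame_in_gate", self.on_frame_in_gate)

    def on_frame_in_gate(self, _payload) -> None:
        # a real module would grab a frame from the camera here
        self.kernel.publish("frame_captured", {"frame": "..."})
```

The point of this shape is that removing a module removes only its subscriptions; nothing else has to know it ever existed.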

Okay, that’s enough for now. I look forward to and welcome your thoughts and reactions!

3 Likes

I think this all makes perfect sense and seems like a good starting point.

I would second Pablo’s comment above that building a framework that all the different features hang off of makes sense, and that dovetails with what you’re suggesting about a “kernel” at the core of everything.

Making almost everything a kind of plug-in module makes a lot of sense here, because of the wide variety of cameras, camera interfaces, camera drivers, home-grown LED setups, motion controllers (Arduino, Raspberry Pi, ClearCore), sensors, etc. It also leaves the door open for software-only add-ons, like scopes, or even post-scan processing steps like file-format conversions.

With this kind of setup, you could basically choose the modules you want to include in your scanner and make it as simple or complex as you need.
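As a hedged sketch of what that picking and choosing might look like (the module paths and the `register()` convention here are hypothetical, continuing the Python spirit of the kernel sketch above):

```python
import importlib

# user-editable: include only the modules your particular scanner needs
ENABLED_MODULES = [
    "kinograph.modules.arduino_transport",     # hypothetical module names
    "kinograph.modules.machine_vision_camera",
    "kinograph.modules.waveform_scope",
]

def load_modules(kernel) -> None:
    """Import each enabled module and let it hook itself onto the kernel."""
    for path in ENABLED_MODULES:
        module = importlib.import_module(path)
        module.register(kernel)  # assumed convention: one register() per module
```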

The main thing with this kind of setup is getting a very well-thought-out, lightweight, simple framework that’s flexible enough to handle all these different kinds of modules.

1 Like

Continuing the discussion from Capture software made just for film scanning:

If people agree with the above, the next action would be to split up, do user interviews, and find out what workflows and needs are out there. Who is interested in helping with that? I can help facilitate the production of materials to assist.

You might be thinking this is an unnecessary step and, if so, you’re not entirely wrong. My goal in making this suggestion, however, is to begin introducing a good process for adding new features or anything else to our endeavors. This is a good opportunity to start practicing the iterative production process of “research, prototype, release, repeat.” Once refined, it will be a blueprint for all contributors, ensuring further efforts can be autonomous as well as consistent.

1 Like

I can help; let me know what you are thinking. We should tally who’s interested.

1 Like

Yes thank you, me too :slight_smile:

I think a lot of users in the MovieStuff RetroScan Facebook group would be interested. Quite a few are interested in swapping out their cameras and capturing 10-bit 4K files, which isn’t possible with the stock software.

1 Like

Hello!

I’m putting together a 16mm film scanner that uses an old Eiki projector + Blackmagic Production Camera 4K EF.

This is the setup:

A reed switch triggers an Arduino, which sends a command over USB to the Mac. I’ve connected the Blackmagic 4K to the iMac via Thunderbolt 2 and am getting a UHD 10-bit signal. The issue is that I’m struggling with the software.
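In rough terms, the Mac side only needs to do something like this (a sketch assuming Python with pyserial, an Arduino that writes one byte per reed-switch pulse, and a made-up port name; the frame grab itself is the missing piece):

```python
import serial  # pip install pyserial

def capture_one_frame() -> None:
    # the missing piece: grab ONE 10-bit frame from the Blackmagic
    print("trigger received -> capture a frame here")

# the port name is whatever the Arduino enumerates as on your machine
with serial.Serial("/dev/tty.usbmodem1101", 115200, timeout=1) as port:
    while True:
        if port.read(1) == b"T":  # assumes the Arduino sends 'T' per pulse
            capture_one_frame()
```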

I want to find something very, VERY simple, but it seems there aren’t any software solutions out there ready to perform one simple task: take ONE frame in DPX format. Blackmagic’s software, Media Express, only lets you save stills as 8-bit TGA (it does 10-bit DPX, but only for video capture, generating many unnecessary copies of the same frame). I’ve also tried Dragonframe, but it’s too slow. Can you recommend any other software that can perform such a simple function on a Mac and work with the Blackmagic camera?

Many thanks!

I’m not sure, sorry, but I’ve seen this place use these cameras often; they have to get the projector running perfectly in time.

This guy also uses them: Probabilmente il primo scanner al mondo... - Matteo Ricchetti | Facebook

I’m not sure how fast the projector would be running. If it is slow, maybe consider stop-motion or animation capture options.

Dragonframe offers a free trial, so there’s nothing to lose.

Astronomy applications also provide some alternatives, but that camera is probably not one they typically support.

I have used digiCamControl, but it is only available for Windows and is aimed at typical mirrorless and DSLR cameras.

Thank you for your recommendation, Pablo. I already tried Dragonframe, as I used to do stop-motion animation and thought it would be a good option. While the quality of the frames is perfect (10-bit DPX files, huge but wonderful), the software is incredibly slow: it takes about 2 to 3 seconds per frame, and that’s on a high-spec iMac, not a laptop or a low-powered machine. This is why I wanted to ask about alternatives, as I wanted continuous motion from the stepper motor rather than a start-stop system with the reed switch, but apparently that is the only option for high-resolution frame capture.

Blackmagic Media Express’ stills are good, but not good enough: fine for a quick scan, but not for a really high-resolution one. I was thinking I could set up a three-position switch with OFF, FAST, and SLOW modes, for when I might want to use Media Express.

I’ll keep looking for software, but it’s highly unlikely I’ll find anything better than this. Thanks anyway!

Understood. If continuous motion is what you are looking for, things are at a different level.
For perspective, I use 12-bit raw at 24 MP, and the camera interface is USB 2. With digiCamControl I was getting about 20 frames per minute!

Another alternative is to use that camera in video mode and synchronize the projector to the camera’s frame rate. In general, I think continuous motion with a stepper will not yield enough speed to use the camera that way.

If you are using a modified projector transport and looking for high quality (which seems to be what you are after, given the 4K camera and the bit depth), stop motion is way easier.

What approximate frame rate are you getting out of the projector when free-running the stepper?

This is why I don’t understand why Dragonframe is so slow capturing each frame from the Blackmagic 4K. It’s connected over Thunderbolt 2! It should be near-instantaneous, as it is with Blackmagic Media Express. If you’ve got any tips on how to speed up that software, I’d really appreciate it.

I’m getting about 3-5 fps with the stepper; I can make it run faster or slower with a rotary switch.

About continuous motion: I can run the projector at 24 fps because the original motor is still in the projector, with a second transport drive (stepper motor + Arduino) for when I want to scan. Best of both worlds, I guess! But I won’t be able to run this film at 24 fps for scanning, mainly because it’s very old film, going back to 1948, edited by students. I fear running it at 24/25 fps might break the splices. On the other hand, taking one frame every 2-4 seconds seems like it will slow the process down excessively, but if it’s the only way…

The other option I’ve thought of is using a Blackmagic Pocket 6K instead of the 4K and automatically poking the stills button with a small rubber stick every time it has to take a picture (the stills feature cannot be triggered over a wire, only through the incredibly reliable Bluetooth). I really don’t understand manufacturers sometimes… why is it so hard to provide basic functionality like this?

Anyway, it seems the Blackmagic Production Camera 4K connected to Dragonframe, with a start-stop transport, will be the way to go for really high-quality scans, with the option of a dirtier scan through Media Express when I want something quick.

Again, thanks for the help!

1 Like

Hi Diego, in case you haven’t figured it out already: I love solving problems, and you have an interesting one.

I remember the Blackmagic SDI control protocol, but after a quick glance it looks like there is nothing in it to trigger a still.

Since I don’t have (and haven’t used) one of these cameras, please forgive me if what I say makes no sense.

One middle ground is to continue capturing as video. Say you can run the projector at 4 fps; the resulting recording will then contain 6 video frames for every film frame (if the camera records at 24 fps).

A script could then rename every 6th frame into a new frame sequence and delete the rest.
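Something along these lines, assuming the recording has already been exported as a numbered image sequence (the file names and folders here are made up):

```python
import glob
import os
import shutil

KEEP_EVERY = 6    # camera fps / transport fps (24 / 4 in the example above)
BEST_OFFSET = 0   # index of the sharpest frame within each group of 6

frames = sorted(glob.glob("dump/frame_*.png"))
os.makedirs("kept", exist_ok=True)
for out_index, i in enumerate(range(BEST_OFFSET, len(frames), KEEP_EVERY)):
    shutil.copy(frames[i], f"kept/frame_{out_index:06d}.png")
# once the kept sequence checks out, delete dump/ to reclaim the space
```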

One problem with the approach above is maintaining synchronization between the video and the transport, so that the “kept” frame always lands in the same place relative to the stepper/projector advancing. One idea (also completely untested, but it sounds easy) is to use an HDMI-to-VGA converter. VGA provides a vertical sync pulse, so you have a reference to feed the Arduino: advance the film on every 6th pulse, and the best frame will always be in the same position; with a bit of programming, the projector will stay in sync with the video.

In summary: the HDMI output of the camera goes to a VGA converter, the VGA vertical sync pulse (5 V levels) feeds the Arduino, and the Arduino times the stepper so the transport always advances on every 6th pulse, fast enough to get 4 frames per second. Record the video in the camera (it will be 6 times longer than the film), then take the frame sequence, decide which frame in each group of 6 is the best one to keep, rename every 6th frame into a new frame sequence, and delete the rest.

Nothing is as easy as it sounds, but this is the fastest, cheapest way I could come up with. And it will work with any video camera that has an HDMI output.

PS: VGA sync is a 5 V signal. Here is a good reference.

1 Like

Hi Pablo. Sorry for the late reply; I didn’t see your message.

Thank you very much for your ideas. I think in the end I’m going to go with the dual setup I mentioned before: fast for Media Express, slow for Dragonframe. I’ve been looking at ways to make it work as you describe, but I think it might complicate the machine and add parts that could break at any moment (HDMI-to-VGA converters aren’t rock solid, in my experience). I just hope Dragonframe gets faster with each software update, or that I eventually save up enough money for a dedicated camera for this project, like a FLIR, and can use it with better software.

Again, thank you so much for your help! I really appreciate your ideas.

1 Like

Apologies for the late reply.

I assume the Blackmagic camera is one you already had on hand? Unfortunately, it’s not going to be easy to use it for film scanning: you need to precisely match the frame rate of the transport system (the projector) to the camera, and then record in video mode to .DNG (not DPX) to get the most out of it. Syncing the two together will be an uphill battle. You’re much better off with a machine-vision camera like this (that’s the camera in the FilmFabriek HDS+, for example).

1 Like

Exactly, we’ve got a Blackmagic Production Camera 4K EF that we don’t use very frequently.
Why do you think DPX won’t work with this camera? It seems to output that format natively when connected through Thunderbolt to software like Dragonframe, producing very nice 4K UHD files. I believe it’s an uncompressed signal, but I might be wrong.

I will check out the camera that you recommend! Thank you so much for your help! :smile:

DNG is the camera’s native format: losslessly compressed raw Bayer data, whereas DPX is already debayered and more than 3x the size. The other reason you don’t want DPX is that it’s 16-bit or 8-bit, while the camera captures 12 bits of information (whether it gets all the dynamic range it can is, of course, doubtful), so going to DPX means either padding the data to 16-bit or losing dynamic range at 8-bit, which is not what you want.
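A two-line illustration of the padding half of that tradeoff (plain arithmetic, nothing camera-specific):

```python
sample_12bit = 0x0ABC             # a 12-bit sensor value (0..4095)
padded_16bit = sample_12bit << 4  # same information in a 16-bit container;
                                  # the 4 extra bits carry nothing new
```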

You can also look at a less expensive, lower-resolution option; you don’t have to go with the full 4K camera. The 3K one is also good.

Yeah, I thought you might say something like that, but it will help if you set a goal. Haha. :wink: Using a projector as the transport is fine for a dailies scanner, but the old archival film you’re talking about will often require multiple scan attempts per reel, and that’s where something actually designed to scan film pays off (if you can secure funding).

Do you think a one-frame capture could be triggered through the SDI port and stored on the camera’s SSD as a 12-bit DNG file? I’ve got a Windows machine with a DeckLink card; maybe Blackmagic’s API could be useful for this? Otherwise, I’ll have to stick to 10-bit DPX through Dragonframe, as the Thunderbolt 2 output of the camera only provides a 10-bit 4:2:2 video signal. Or maybe I could use the timelapse function, if I time it properly.

We will start saving for that lovely 4K FLIR camera! Hahah.

Again, thank you for your help!

You need an SSD that meets Blackmagic’s specs (like this one), and you put the SSD into the camera. Then, after the capture, you take the SSD out and dump the files. Buy two, so you can use one while the other is dumping to the PC.