The ReelSlow8 Film Scanner

I liked following the Sasquatch 70mm build thread so much, I thought it would be fun to also chronicle my progress from (nearly) the beginning.

My name is Nicholas and I’ve been planning to build one of these things for almost a decade. I figured it was time to stop planning and start building, so my goal now is to complete some measurable chunk of work each month in 2023 (starting in Feb) to hopefully have a machine that does something by the end of the year (and write about it here as I go along).

The Starting Point

  • About fifty 3" reels of 8mm and S8 Kodachrome family movies shot between 1954 and 1985.
  • A Lucid Triton 2.8MP GigE camera: IMX429, 2/3", monochrome, 1936x1464, global shutter.
  • The usual Schneider Componon-S f2.8/50mm lens.

I like monochrome because it might give better control over color, and it could squeeze a little more resolution (and fewer artifacts) out of the sensor than an otherwise identical one with a Bayer filter.

At 2/3", I’ll be running at roughly 1.35x magnification for 8mm film, which is a little higher than most of you RPi HQ folks. But the extra 1µm of resolving power in CoinImaging’s “resolving power vs. magnification” chart (vs. 1.0x), which matches this sensor’s pixel size almost exactly, made it feel worth it. The nice, deep wells on the Pregius gen3 sensors also felt like a good answer for keeping the number of HDR/fusion exposures low. I’ll be curious to actually measure the dynamic range I can get out of this thing.

Areas of Interest

To answer “which features am I hoping will make the ReelSlow8 different?”, I’ve got a few experiments I’d like to try in the coming months:

  • An overkill auto-focus system. I absolutely do not trust myself to dial in any part of this machine. I have this way-oversized (280mm) ball-screw linear stage that I’ll be mounting the camera on, controlled with a stepper. But even more than that, I want the image plane to be fully adjustable and under the control of software: one corner fixed and the other two motorized, so the software can get the image plane more exactly parallel than my hands on an adjustment micrometer ever could. For the main camera focus axis, I want to be able to chase microns. I had a bad experience with a different lens that had lots of axial CA, so I want this thing to be able to focus on each emulsion color layer individually, if need be.

  • A tiny 8mm wet gate. After poring over lists of strange chemicals (and their refractive indices) and having chemical supply companies turn me down for years, I finally found someone that would sell isobutylbenzene to a residential address! (The answer in the US is Chemsavers.com, but lab-grade reagents are ridiculously expensive. It’s frustrating when the one liter you can actually get someone to ship to your house costs more than the 50gal drum you’ll never be able to buy from Aliexpress… but that’s the world we live in, as far as I can tell.)

    For the curious: on all of Kodak’s literature about IBB, I’ve seen things like “persistent odor” mentioned a number of times, but never any description of the smell itself. I can now make a first-hand report: it’s a little like black licorice, but worse. :rofl:

  • Collimated condenser/Köhler lighting (vs. integrating sphere/diffuse). If I’m going to the trouble of a wet gate with the right index of refraction, I’d like to keep as much of the contrast ratio intact as possible.

  • IR or dark-field cleaning. Kodachrome is partially opaque to IR, so I’d like to tinker with dark-field imaging to maybe grab a dirt/scratch channel. This is a lower priority because I have high hopes resting on good pre-scan cleaning hygiene and the wet gate doing most of the heavy lifting here.

  • Lots of post-processing. The VideoFred scripts are an inspiration to me. Temporal inpainting and motion interpolation are very exciting. And–more recently–adding “AI” onto the front of either of those terms yields search results that look a little like magic. My target is something as good or better than the “Thanksgiving dinner” video at the bottom of this page.

  • The distribution of these films to my family is just as important as the scanning. They’ve actually already been digitized once, back in 2011, by a couple of my uncles. The best they had available at the time was a DV camcorder pointed into a projector + mirror box. The resulting 480i60 / 4:1:1 recording often had half its fields blown out by over-exposure whenever the scene brightness changed, the focus hunted constantly, and the projector’s small, square gate hid about 30% of the image. :sweat_smile:

    But more than the possible improvement in image quality, trying to find the clip you wanted was the trickier part. The end result was a pair of DVDs with about 4 hours of footage in the order they pulled the reels from the old box. Unless you were planning to sit and watch all the way through, the only way to see the thing you wanted was lots of seeking and maybe a disc swap. Some of the “kids” featured (now in their 40s and 50s) might have to skim the whole collection to find the only 5-minute clip they were in.

    My plan is to make a simple, private website with a chronological timeline cataloguing each scene, a family tree at the top that can be clicked to filter for a particular person, and a list of categories (Christmas, vacations, weddings, etc.) to further filter the list. Ideally I’ll have enough metadata that someone could find the clip they’re looking for in only a few seconds and optionally save a copy to their phone/tablet in one more click.

    It’s a small detail, but if an old family film is scanned in the middle of a forest and no one is there to watch it, does it really exist? I try to keep in mind that the accessibility of these memories is just as important as their image quality.

February: Light Source

Let’s kick this thing off with some real work that I did over the last few weeks!

I’ve had some wacky ideas for light sources in the past. (See my profile pic for one bad idea.) I’d settled on a linear (non-PWM) multi-channel source and got some great tips a few years ago on a circuit design that only used a single LM317 to drive several different current levels.

I didn’t have variable current control high on my list of priorities, but I’ve since read here that these sensors can be much slower at changing their exposure settings than you might expect. This sensor has a global shutter, so as a last resort I suppose I can use a timed-pulse scheme if the camera ends up being uncooperative.

Because this isn’t a continuously running system, I don’t need to freeze any motion, so I can get away with a lot less light. That means I get the convenience of low voltage and low current. This design tops out at about 2.5W (10V at 250mA), which fits neatly inside the spec for the 28AWG wires in a mini-DIN cable, so I was able to use a single connector for both power and signals.

This uses a combination of Cree, Luminus, and Yuji LEDs. There’s a bit of kitchen sink thrown in here, with white, RGB, and IR channels. Because my project is predominantly exploratory/experimental, I didn’t know what I was going to need, so I included everything, hehe. The RGB wavelengths are 450, 525, and 660nm, which match the optimal wavelengths for decorrelating the color channels (as best as I could tell) from the chart in the Diastor paper. The IR channel uses 940nm. I’ve tried the (much more expensive!) 1050nm range in the past and didn’t find any difference with Kodachrome besides less sensor sensitivity, so sticking with the mid-900nm range was easiest.

The voltage drops on each LED were close enough that I was able to get two in series for each channel except red (which has three), while running on anything over 10V. I’ve been trying to teach myself electronics over the last two years, so I also used this particular board to practice input and power safety design (hence the TVS, MOV, fuse, choke, and input protection, which is almost certainly all overkill when it’s going to be run from a clean bench power supply).

Here’s the circuit:

It uses the idea from that StackExchange link above where the LM317 delivers a few distinct current levels. The white Yuji LED needs a fairly precise 120mA to stay inside its CRI spec. But the single-wavelength LEDs can go all the way up to 350mA; I decided to keep them around 250mA to be conservative. Then, because I had an extra pin left on my connector, I wondered if it might be handy to toggle between two current presets for the sake of controlling exposure times. It’s not quite like having a fine control knob, but it was only an extra high-side PMOS switch, and now the individual channels can be toggled between 130mA and 250mA just by setting the “Bright” pin low/high.
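
(For anyone checking my math, the LM317-as-current-source arithmetic is just its 1.25V reference dropped across the sense resistor. The values below are the nominal resistances those currents imply, not necessarily the exact parts on my board.)

    $$ I_\text{set} = \frac{1.25\,\text{V}}{R_\text{sense}} \;\Rightarrow\; R_{250\,\text{mA}} = 5\,\Omega,\quad R_{130\,\text{mA}} \approx 9.6\,\Omega,\quad R_{120\,\text{mA}} \approx 10.4\,\Omega $$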

The challenging part of the layout was getting all the LEDs to fit in a tiny square. My aspheric condenser lenses are 50mm, and I wanted the lights to fit well inside that circle. And even though they’re going to be well-diffused on the light input side, I still wanted the color channels to be radially symmetric around their center, which led to a lot of fun criss-crossy routing on my plain two-layer board. (This was exacerbated by my insisting that all the LEDs be oriented the same way for ease of hand-populating the board with tweezers. Changing their orientation would have made the routing easier, but I know I would have made a mistake and installed at least one backwards.)

The M3 hole pattern at the top is only 16x23mm. Nice and tiny!

The inputs are tolerant of anything from around 2.5V up to 12V. I remembered I had a little Cherry MX key-switch tester and some solid-colored keycaps (from a different project) that coincidentally matched these colors. So I soldered a few wires directly to the key-switches to make a fun little test harness.

I have the light output turned way down for the sake of recording this demo video. Even just at 120mA and 250mA they’re painfully bright to be around normally. (The gray button I don’t press in the video is for IR. The one on the end is an old Cherry MX Lock switch that physically toggles on/off when you press it. That’s the “bright” signal.)

Eventually all of this will be connected to a microcontroller of some sort, but these clacky keys will work for testing in the meantime.

What’s Next

I’ve got a few immediate tasks before I can get started on anything more substantial. I’m planning to use MakerBeam’s 10x10mm and 15x15mm aluminum extrusions to hold everything together. Until it’s time to do wet-gate things, this is going to be a horizontal, tabletop build. So I need to get this mess connected to a rail system so I can start orienting things like imaging planes, the light PCB, and the condenser lenses:

So far this is an example of how not to build a stable platform. Most of the 3D printed parts will eventually be replaced by milled/lathed aluminum. The stacked XY and rotation tables will be moving to the light source (so I can properly center things per Köhler illumination) and I’d like to get the sensor+lens a little lower and closer to its center of mass instead of flapping in the breeze the way it is now. (The only adapters I could find for reverse-mounting the lens involved a Canon EF-mount, which makes the whole chain pretty hideous and a lot heavier than it needs to be. The step-down converters on the front are just there to protect the exposed rear lens element for now.)

After that I need to dust off some microcontrollers and convince myself I remember how to drive a stepper motor. Hopefully I’ll be able to tackle all of that in March.

Whew. That was a lot of typing. :smile:

I am excited to finally start participating here and adding what little I’m able to discover to the great pile of accreted knowledge on this board. Please let me know if any of the above was confusing or if anyone has any questions!

6 Likes

Nice work and writeup; I'm also really interested to hear how the sensor is going to perform. I have been going back and forth on which one to select. I wanted a resolution high enough to see the grain. The directed light + wetgate option for 8mm film is also something I'm interested in because of Diastor and the old Mueller HDS+ website. I thought it claimed not just more contrast but also higher resolution. The HDS+ 'faux' wetgate is already pretty effective according to the users in this thread (and runs at 3fps):

The thread also includes several other interesting scanner user experiences, btw.

Is the ball-screw linear stage accurate enough for the fine adjustments you want, with autofocus on the individual emulsion layers?
I was thinking of using a compliant mechanism to focus on the film. I have built several of those for a different project and they are pretty great; here is an example of what I mean:


This is just an example of a linear stage, btw; the topology can be used, but the dimensions would have to be altered for your use case.

2 Likes

Thanks. And I know what you mean. I’ve been going back and forth in my head for years. Whatever the outcome, I’m happy I’ve finally given myself a deadline so the “analysis paralysis” can come to an end! :sweat_smile:

I suppose I could see that being the case. The CoinImaging reviews show that the two are almost the same phenomenon, in the same units, plotted on the same chart. You stop counting resolution when you can barely make out the line pairs on a test target. Contrast is when the brightness difference between the lines and the background is still above a certain threshold, right? So increasing the contrast should give you more resolution for free (until you’re down to single-pixel line widths, in the case of a digital sensor).

The first time I saw the FilmFabriek’s little sponges years ago, I remember thinking it seemed like a very clever solution. I still have full immersion in mind, but am ready to double back to the good old “soaked washcloth / surface tension” method if things get too messy.

And quite a bit of contention. That was an… interesting read. :grimacing:

Although, even that sort of discussion is still useful to a beginner like me. I have about three hours of experience with 8mm film projection and zero hours of film scanning. So hearing about a few of the more common problems that are likely to come up from people with decades of experience using lots of different machines is valuable. Thanks for the link.

This is something I was hoping to measure and characterize (and share my results here). I have a dial indicator (with a data cable) that is spec’d down to a repeatability of 3µm. Once I get the motor wired up and under control, I was going to mount the indicator and run a bunch of automated tests to see what the movement actually looks like.

If the layers shown in this Kodachrome cross-section (third image in the carousel) are to scale, and we estimate using an overall film thickness of 0.145mm, each color layer should be around 7µm. This article gives another data point that agrees. On top of that, the lens’s own chromatic aberration will shift things around.

The results of those tests will also inform my decision on what level of microstepping makes sense (or whether I need to drive it through a reduction worm gear, etc.). This particular linear stage moves 5mm per shaft revolution. The stepper is the finer 0.9° per step type (vs. the more typical 1.8° step size), so on paper that should be 12.5 microns per whole step. So at 1/16th microstepping, it should already be sub-micron.

… I’m sure the mechanical reality of these parts is going to make things more interesting than that, though. At a minimum, I should at least have some cool graphs of jittery lines to show off here. :smile:

Even if it goes perfectly, we’ll see if any of that makes a difference at f/4.7. I know that an IR exposure (should I choose to include one) is almost certainly going to require per-frame refocusing due to CA, so the goal is just something reliable and repeatable that I don’t have to do by hand. In that case, overshooting the requirements seems like a better plan than undershooting them, hehe.

I’ve never had a chance to use them, but flexure stages are so cool! I was amazed by this demonstration (the minute or so starting at 6:10) on how CD players use EM-controlled flexures to track the pits in a CD in real-time while the disc is spinning.

The couple of times I’ve thought about something like super-resolution (what they called “microscanning” on page 4 of the thread you linked), it seemed like a compliant stage and a couple of copper coils should be enough (with lots of hand-waving) to turn any monochrome sensor into a pixel-shift sensor. I already have enough experiments queued up though, so I don’t expect to go down that path during this effort.

Using one with a screw for fine focus is an interesting idea. If the stage were made out of metal, like this one using at-home EDM(!), I would trust it more. With a 3D printed version, I’d be worried about material fatigue. My three year old son once asked me to print this little single-piece catapult. He left it in the “loaded” position overnight and the next day it would barely return even 1/3 of the way to its original position.

But, if the coarse focusing was very good to begin with and the screw was only used ±1 turn or so from its neutral position, it’d probably be fine for longer term use. My background is not in anything mechanical, so I don’t know exactly what that property is called. After a bunch of searching just now, I think it might be creep? So, if it has to be printed, maybe a lower-creep material like PC or nylon would be better for the task than something like PLA, at least, according to Reddit.

Either way, I love seeing all these ideas! All of the commercial scanners have to be optimized for throughput, which means they tend toward using a lot of the same design language. Once you get into the smaller, low-volume, niche builds where the speed doesn’t matter as much, there is a lot of room for creativity.

That thread, imho, is a gold mine of inside info; it's sources like this that show me that building scanners is a lot more about saving 'information from decay' than anything else.

If the ball-screw stage is repeatably accurate then that's very good, but they are also too expensive for me.

Yes, BAXEDM has made a very cool machine indeed; if I ever were to create an EDM machine, he's the one I'd hit up. But if I were convinced of my design, I'd sooner just order it to be made somewhere.

Pixel shifting is certainly cool, but I’d rather just use a higher resolution sensor when more pixels are needed.

You're right about the creep issues with polymers; it's especially bad with PLA! I personally only use it for unloaded parts. I had more success with PETG, and especially with PP, a slightly more difficult material. I haven't used nylon or ASA, since my printer can't actively remove moisture from the spool and filter the fumes (it's on the to-do list). But there are still plenty of other avenues to explore regarding additive manufacturing.

You can also create one using leaf springs and CNC-milled blocks, bolting everything together instead of using EDM. But my quest to create 3D-printed polymer compliant mechanisms is naturally driven by cost and availability.

1 Like

March: Z-Axis Testing

To build an overkill auto-focus system, I need to be able to split hairs. So I spent most of my month staring at stuff like this:

Stuff that was tested

The stuff doing the testing

  • Mitutoyo 543-501 indicator
  • Mitutoyo 905338 cable
  • A NogaFlex magnetic holder and base
  • An Arduino

As an aside, that Mitutoyo indicator is way better than its spec sheet suggests. It lists a repeatability of 3 microns, but I saw hundreds of consecutive readings at the exact same number (despite a motion of >5mm between each reading). It’s also a joy to read from electronically. Their hardware interface requires zero additional components; each pin can be wired directly to an Arduino digital I/O pin. It’s designed to only need ground “signals” and can work at any system voltage (from 1.6V to 14V, anyway), so it can be dropped in just about anywhere.
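
For anyone wanting to poll one of these from an Arduino, here is roughly the shape of the code I ended up with. This is a from-memory sketch: the pin numbers are arbitrary and the meaning of the 13 nibbles (sign, digits, decimal point, units) should be taken from Mitutoyo’s Digimatic documentation rather than from me. The gist is that everything is open-drain and active-low, which is why only “ground signals” and the internal pull-ups are needed.

    // Rough sketch of a Digimatic-style read; double-check the details against
    // Mitutoyo's own documentation before trusting it.
    const int PIN_REQ  = 2;   // request: pulled to ground to ask for a reading
    const int PIN_CLK  = 3;   // clock, driven by the indicator
    const int PIN_DATA = 4;   // data, driven by the indicator

    void setup() {
      pinMode(PIN_REQ, INPUT);        // released (high-impedance) while idle
      digitalWrite(PIN_REQ, LOW);     // latched value used whenever we switch to OUTPUT
      pinMode(PIN_CLK, INPUT_PULLUP);
      pinMode(PIN_DATA, INPUT_PULLUP);
      Serial.begin(115200);
    }

    // Fills 13 raw nibbles; decoding them into a measurement is left to the
    // official field descriptions.
    void readIndicator(uint8_t nibbles[13]) {
      pinMode(PIN_REQ, OUTPUT);                       // assert request (pull low)
      for (int n = 0; n < 13; ++n) {
        uint8_t nibble = 0;
        for (int b = 0; b < 4; ++b) {
          while (digitalRead(PIN_CLK) == HIGH) {}     // wait for a falling clock edge
          nibble |= (digitalRead(PIN_DATA) & 1) << b; // LSB first, if memory serves
          while (digitalRead(PIN_CLK) == LOW) {}      // wait for the clock to release
        }
        nibbles[n] = nibble;
      }
      pinMode(PIN_REQ, INPUT);                        // release the request line
    }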

Key Take-Aways

  • The ball screw exceeded my expectations.
  • Backlash was around 4.0µm on average and 4.9µm in the worst case, at all microstep settings. I was expecting an order of magnitude higher. (A human hair is 70µm thick!)
  • Sub-micron positioning is possible without exotic workarounds like planetary or worm gears. It can even be done at convenient/fast microstepping like 1/32nd or 1/64th instead of out at the edges around 1/256th.
  • The frame, rigidity, and rotational smoothness were all nicer than I was expecting for the $180 price point. There are a few other listings on Amazon for similar axes around $100 but those reviews say that a lot of corners start getting cut and the quality suffers.
  • Doing all of your stepper driver configuration over SPI is convenient. The TI chips on these Pololu boards can be daisy-chained to control multiple motors using only a few I/Os, faults can be queried for more info, and it’s generally a nice, rich platform.

I actually spent a good portion of the month chasing weird patterns, strange behaviors, and other things that didn’t make much sense. I’d been using 12V up until that point because that’s what the LED board uses and I was hoping to get away with only needing a single power supply. But once I bumped up the “bulk capacitance” at the V_motor pins and cranked my benchtop supply to its max of 32V (immediately releasing the magic smoke on the LED board that I forgot was still plugged in :grimacing: ), all of the “weird” behavior disappeared immediately.

Coincidentally: all of the circuit protection stuff I mentioned in the February update came in handy. Only a single TVS diode burned out and the rest of the LED board was fine. The real lesson here is that the PTC fuse should be rated lower than the TVS diode’s forward voltage. :sweat_smile:

After increasing the voltage, the testing became almost dull in its consistency. This motor can do short forward/backward motions all day long and it’ll read the same number on the indicator, with one caveat: after hundreds of cycles there will eventually be a micron or two of extension at the far end of the walk.

Whether this is actually the Mitutoyo indicator finally showing a weakness, I don’t know. (Every measurement inside that graph is technically inside its own 3µm repeatability error tolerance.) My working hypothesis is that the ball bearings are actually cutting a sort of “groove” in the lubricant after so many cycles. For my purposes, this is already well beyond the resolution I was hoping for and it shouldn’t matter. Worst-case, instead of doing a single auto-focus pass at the beginning of each reel, maybe it’ll make more sense to refocus every 50 frames or so.

There was only a single behavior I wasn’t able to track down: some combination of speed, length of walk, acceleration, (or some other factor that I’m missing) would cause it to lose position by about a micron or two every cycle. If you cut some or all of those variables down “enough”, the problem goes away and the position readings switch back to being as rock solid as the graph above. As much as I’d love to finish tracking this down, it’s kind of irrelevant for my purposes (I only need short, slow movements where it doesn’t show up) so I’m going to press forward instead.

Köhler lighting

I lost about a week in March to a stomach bug, so I didn’t get as much done on the Köhler lighting side of things as I wanted. I 3D printed lots of little adapters for these lenses and irises to make a kind of testing harness, but I haven’t really had a chance to quantify anything yet.

I mostly only learned that I don’t really have a working mental model for non-imaging optics. :sweat_smile: Anytime I thought I could predict what was going to happen, I was exactly wrong. So I’ve decided to try and reproduce the ThorLabs setup as closely as possible (from the 2nd video in the list there).

(Buying the lenses used there individually costs about $170 instead of the $11k(!) for the whole kit. I’ll take the 3D printed fixtures any day!)

What’s Next

Hopefully next time I’ll have more to say about smooth, even, “perfectly out of focus” direct light. My wife just had knee surgery a few days ago so I’m currently in charge of her along with our 3 and 7 year olds for the next five weeks, so I’m guessing April is going to be pretty light on hobby time. After that it’ll be full speed ahead again, though.

5 Likes

Was not familiar with the DRV8434S. I have been using the TMC2208 and was able to use it smoothly at 12V, but it will depend on how fast the steps are required, and also on taking advantage of linear acceleration.

As you mentioned, one of the critical items is the necessary added capacitance, and the TMC datasheet recommends using a low-ESR type capacitor (Equivalent Series Resistance, sometimes called Effective Series Resistance).

Another is the capacity of the power supply to withstand the peaks of the high current draws. I was using an ATX power supply with plenty of power available, and was also able to run the setup at 24V. I am now using 32V (a recycled power supply with limited current output) and will probably go to 36V when I buy one for the final build.

Köhler lighting looks interesting; looking forward to seeing your findings after you are back in the shop. Well wishes for your wife, and enjoy the family time.

Pololu’s product page made it sound like the DRV8434S was a newer chip. The datasheet mentions the year 2020, but there isn’t nearly as much emphasis on being quiet as in the TMC2208’s datasheet (from 2016). Hmm.

I’m guessing the driving voltage will also depend on the size of the motor. This is the first time I’m using anything as large as Nema 23. It’s definitely more motor than I need for this application, but the linear rail came with that size motor mount, so it was the path of least resistance.

I think I’m using something like linear acceleration. I didn’t perform any integrals like the document you linked–it was more just typing in the first bits of code that popped into my head–but it should be doing something close. To be fair, I didn’t inspect it with a scope, so it might have quirks I haven’t seen yet.
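
For the curious, the “first bits of code that popped into my head” version looks roughly like this: a naive ramp that shaves a few microseconds off the inter-step delay on every step until it reaches a cruising speed, then mirrors the ramp at the end. (Pin numbers are made up, the pinMode() setup is omitted, and a true constant-acceleration profile would follow the math in the app note linked above.)

    const int PIN_STEP = 5;
    const int PIN_DIR  = 6;

    void moveSteps(long count, bool forward)
    {
      digitalWrite(PIN_DIR, forward ? HIGH : LOW);

      const unsigned long startDelay  = 800;  // µs between steps from standstill
      const unsigned long cruiseDelay = 120;  // µs between steps at full speed
      const unsigned long rampStep    = 4;    // µs removed (or added back) per step

      unsigned long d = startDelay;
      for (long i = 0; i < count; ++i) {
        digitalWrite(PIN_STEP, HIGH);
        delayMicroseconds(3);                 // brief pulse; check the driver's minimum
        digitalWrite(PIN_STEP, LOW);
        delayMicroseconds(d);

        unsigned long remaining = (unsigned long)(count - i - 1);
        if (remaining * rampStep <= startDelay - d) {
          d += rampStep;                      // just enough steps left: start slowing down
        } else if (d > cruiseDelay) {
          d -= rampStep;                      // still accelerating toward cruise
        }
      }
    }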

Yeah, low ESR makes sense for this type of situation. I had some of these lovelies on-hand from a different project. They don’t mention low ESR specifically, but 47mOhm sounds pretty good. They’re also kind of ridiculous in the other specs, so I’m happy with them even if they’re a little oversized for the job. :rofl:

The TMC2208 has been 'superseded' by the also affordable TMC2209; I've been using them for a while as those 'modules' or within 3D printer boards. They are nice drivers, but not perfect. The silent StealthChop mode naturally has less power, and if you switch between modes there is a high likelihood of missing steps.
In the more powerful SpreadCycle mode it has been hard to find settings that quiet down certain steppers (it produces a nasty hiss in some combinations), so if you don't need the torque, use StealthChop mode.
Also keep in mind these drivers cannot source that much current if you plan to use NEMA23 steppers with them, although you could certainly just starve the stepper a little bit. And NEMA17 steppers are likely able to deliver enough torque at these speeds.

2 Likes

Good point. The TMC2208 peak current is 2.0A, the TMC2209 peak is 2.8A. The NEMA23 (above) is rated for 2.8A/phase. For the NEMA17 I used, the current was set to 1.5A, and the driver runs pretty hot, even with stop-motion.

The inter-step delay calculation is presented in the Atmel paper with a couple of formulas based on a square root; then immediately after, they provide a Taylor series to speed up the process. I used the Taylor approximation formulas: no square root, no integration required. Playing with the parameters will give the stepper some smooth (and fast) moves.
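
From my own notes on that app note (double-check against the original), the Taylor-series version boils down to computing the first delay once and then updating it with a cheap recurrence; the only square root is in the one-time c0 calculation, which could just as well be a precomputed constant:

    #include <math.h>

    // timerHz: timer tick frequency, stepAngle: radians per (micro)step,
    // accel: angular acceleration in rad/s^2.  Returns the delay in timer ticks.
    float firstDelay(float timerHz, float stepAngle, float accel)
    {
      return 0.676f * timerHz * sqrtf(2.0f * stepAngle / accel);
    }

    // n = 1, 2, 3, ... while accelerating; the app note runs the same formula
    // with a negative n counting toward zero for the deceleration ramp.
    float nextDelay(float previousDelay, long n)
    {
      return previousDelay - (2.0f * previousDelay) / (4.0f * n + 1.0f);
    }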

1 Like

April: Early Software and Köhler Lighting

As expected, I had almost no hobby time in the first half of April, but as my wife’s knee has been recovering, I found a little time to tinker over the past two weeks.

After a lot of time spent puttering with the 3D printed light path lens-holding setup I showed at the end of the March post, I still wasn’t getting anywhere. That initial design made adding/removing an element require sliding all the other pieces out of the track temporarily. So it wasn’t very friendly to the kind of experimentation I needed to nail down why I couldn’t hit both key Köhler points at once:

  • Condenser position adjusted until field diaphragm is in focus.
  • Field lens position adjusted until LEDs are imaged on the condenser diaphragm.

Despite moving the other elements around, I always seemed to end up with the two irises too close together or bumping into the endpoints of the aluminum extrusions I’d picked. The condenser diaphragm is supposed to adjust contrast, but when it’s close to the (in-focus) field diaphragm, it also starts to constrict the field of view.

So I made another iteration: longer rails (using MakerBeam’s bigger, 15x15mm extrusion profile) and a “cartridge” template that can be removed in-situ with a half twist:

The first set also had a tendency to bind up on the four rails while sliding, so I took this opportunity to make each lens cartridge a little thicker, which keeps them better aligned and easier to slide. This meant there was enough thickness to get a little better organized with labels, too. :grin:

Being able to pop the elements in and out individually paid dividends almost immediately. In my first tinkering session after I had everything assembled, I had one of those nice, accidental discoveries: while swapping the field lens out to try a different one, I happened to glance up and see that the LEDs were better focused on the condenser aperture than I’d ever seen them. In ThorLabs’ own setup, the light source incorporates one of those big aspheric lenses (the ACL5040U), and that was enough to act as a field lens without an extra lens element on top of it.

Now that everything was set up the way it was supposed to, I tried closing the condenser diaphragm. Controlling the contrast of your image by moving a little lever arm is cool!

Early Software

I’m using a GigE camera from LUCID Vision Labs (which I believe was founded by ex-Point Grey Research employees who left after the FLIR acquisition). The earliest step was just to get their library (the ArenaSDK) turning over and putting something on the screen. So I duct-taped ArenaSDK, OpenCV, SDL, and Dear ImGui together as minimally as possible to get camera images streaming in, stored in an OpenCV Mat, uploaded to the GPU as an OpenGL texture, and displayed (effortlessly, with friendly UI widgets) using ImGui.

There isn’t much to say at this point because it doesn’t do much. The first test was just to get a “line scan” feature up and running like the one shown in the ThorLabs video. I was happy that it only took a dozen or so lines of code and all four libraries played very nicely together. (I haven’t done any COM port integration to talk with the Arduino to control the LEDs or stepper motor(s) yet. That’s for next time.)
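
In case anyone else wants to duct-tape the same libraries together: once a frame has landed in an 8-bit, single-channel cv::Mat, the display path really is only a handful of calls. This is a trimmed sketch rather than my exact code (and it uses the lazy, legacy-OpenGL texture format; a core profile would want GL_RED plus a swizzle):

    #include <opencv2/core.hpp>
    #include <SDL_opengl.h>
    #include "imgui.h"

    // Upload a grayscale cv::Mat (CV_8UC1) into an OpenGL texture.
    GLuint uploadFrame(const cv::Mat& frame)
    {
      static GLuint tex = 0;
      if (tex == 0) glGenTextures(1, &tex);

      glBindTexture(GL_TEXTURE_2D, tex);
      glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // rows aren't 4-byte padded
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
      glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, frame.cols, frame.rows,
                   0, GL_LUMINANCE, GL_UNSIGNED_BYTE, frame.data);
      return tex;
    }

    // Hand the texture to Dear ImGui; assumes an ImGui+SDL+OpenGL frame is active.
    void drawCameraWindow(const cv::Mat& frame)
    {
      GLuint tex = uploadFrame(frame);
      ImGui::Begin("Camera");
      ImGui::Image((ImTextureID)(intptr_t)tex,
                   ImVec2((float)frame.cols, (float)frame.rows));
      ImGui::End();
    }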

A few years ago I found an Edmund Optics 3"x3" glass-slide USAF 1951 resolution target on eBay for 1/3 of its usual price, and it has been useful for characterizing camera sensors ever since.

Here’s the kind of thing you get to see when you adjust the condenser diaphragm through its full range (heavily cropped to keep it at native pixel size):

One limitation you can see even in this center crop is that there is more edge fall-off when things are adjusted properly. (The center line-scan appears to be tilted.) I’m not sure how much of that might still be fixed when I get a few more minutes to move these lenses around some more. If I knew more about non-imaging optics, I’d probably have a better idea where to look first, but I’ll resort to blindly guessing for now.

Down in the smaller group 7 elements you get an even better idea of what the contrast adjustment is doing:

2023-04-26 MTF50 Aperture Images

Again, the only thing changing between these images is an iris being very slightly adjusted. The camera, sample, and focus position are being held constant.

While looking around to see if someone had already done the hard work of quantifying these types of images, I stumbled on the delightful, free MTF Mapper app. You give it an image and it generates MTF response graphs from the rectangular-looking things it finds in them. It’s got a lovely interface and was kind of a joy to use. There’s even a command-line version that I’m wondering whether I might be able to call right from my own (upcoming) auto-focus code to offload that work, since its author already did a better job than I’d be able to.

All you need to do is enter a pixel size in the settings dialog to get the correct lp/mm units. I had a calibration slide on hand, so I could measure it more or less directly. (In doing so, I also discovered that my extension tube setup has all these images at 2.85µm per pixel, which is a bit more magnification, around 1.58x, than I’d expected (1.3x). It only barely fits a full R8 frame with only a small amount of perforation visible. I’m going to have to pull that out a little for Super8 frames.)
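
(That magnification estimate is just the sensor’s pixel pitch divided by the measured sampling at the target, assuming I have the IMX429’s 4.5µm pixel size right:)

    $$ m = \frac{4.5\,\mu\text{m (sensor pixel)}}{2.85\,\mu\text{m (at the target)}} \approx 1.58\times $$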

That’s the chart for the previous series of images. This is still being focused by hand, poorly. All of this was captured at f/4.7 on the usual Schneider Componon-S 50mm.

My admittedly basic understanding of these charts is that the higher and farther out the line makes it (to the dip, before you start seeing the “reflections”), the better, but you want a relatively linear response, too. So the green line is the best, even though the fairly wild, too-high-contrast orange line is technically higher.

If you can hand MTF Mapper an image with a longer line to characterize, you can flatten out some of the Fourier artifacts past the dip. So I moved the calibration target to a lower-numbered group with squares that fill the entire field of view, switched to green light, refocused as best I could manually, and took a series of images with each color light: R, G, B, white, and IR. The results were really interesting!

(The early part of the green line (below 175 lp/mm) is essentially the same as before. I believe it extends farther and flatter because the MTF was calculated on a larger region.)

The interesting part is that taking the same image of the same calibration target without moving anything but only varying the color of the LED shows much worse results. (This is basically the definition of axial chromatic aberration.)

The units on both axes in these MTF charts are the same as the Kodachrome response curve (that I first saw in @PM490’s scanning resolution thread), except the Kodachrome datasheet shows them on a log-log scale.

I wanted to see the response I was getting with all the colors on the same graph as Kodachrome, so I went about trying to read the values from the datasheet graph. Reading data on log-log scales manually isn’t particularly accurate, so, again, trying to find a tool to help automate things, I bumped into the (also free, also delightful) WebPlotDigitizer which lets you upload an image, describe the axes, and even auto-trace lines. It supports linear and log axes in any combination, and it did an excellent job of giving high-precision numbers from Kodak’s log-log graphs.

I used an especially large screenshot from the vector graph in the Kodachrome datasheet PDF and carefully adjusted the points that didn’t auto-detect correctly. I repeated the procedure for both Kodachrome 25 and Kodachrome 64/200 (which, curiously, have pixel-identical graphs for both ISO levels in both the 2002 and 2009 revisions of the datasheet; I wonder if that was an accident that never got corrected).

In case this is useful to anyone else, here are the MTF data points I was able to extract:

Each row is an X, Y pair with units of lp/mm and contrast response percent, respectively.

And now for the combined result with Kodachrome on the same graph as these test images:

White is looking pretty good, but focusing for an individual color does better! (IR wasn’t even worth plotting. It went to 0% almost immediately because it was so blurry.)

For the purposes of Nyquist and making sure this setup will be able to capture all of the signal available, when you look at the data table, there are some promising results:

The row highlighting corresponds roughly to where the green image is at double the frequency of the Kodachrome response, and everywhere above 40 lp/mm on Kodachrome, this system appears to be able to resolve as much contrast or more (often around 2x).

But the same isn’t true for white and especially not for the other individual colors, holding the camera focus constant. I’d been worried this whole time whether I was being a little… eccentric for wanting to investigate refocusing for each color channel, and at least after this early result, I feel a little vindicated. :sweat_smile:

It’s worth mentioning that there’s probably still a little more headroom on the table. These were all focused by hand, there isn’t any frame averaging so sensor noise is still a little higher, the light-path lenses are still loosey-goosey and not perfectly aligned, there is a ton of light spill from ambient room lights while my setup is still in prototyping mode, etc.

The obvious next question is whether the line for all three colors (or four if you count IR) can be pushed out as far as the green was when refocused individually. But, before I spend too many hours trying to turn this ball screw by hand to maybe-focus things and guessing whether I got close or not, I think it’s time for…

What’s Next

Auto-focus! It’s time to get the Arduino included with the rest of the early software and get that stepper motor turning in response to what the camera is seeing. If the problems @cpixip has described with the RPi HQ camera and its software library are any indication, I anticipate a bit of a learning curve with how long the different parts of the system are going to take to respond after changing the others. Will I need to wait after a color or focus change? How many frames? We’ll find out soon! :rofl:
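
The rough shape I have in mind for the focus loop is a simple hill climb on a sharpness metric like the variance of the Laplacian. This is only a sketch of the idea, not working code; grabFrame() and moveMicrons() are placeholders for the ArenaSDK and Arduino/stepper plumbing that doesn’t exist yet:

    #include <opencv2/imgproc.hpp>

    cv::Mat grabFrame();            // capture + crop to the center region (elsewhere)
    void moveMicrons(double um);    // nudge the Z axis (elsewhere)

    // Focus metric: variance of the Laplacian.  Bigger = sharper.
    double focusScore(const cv::Mat& gray)
    {
      cv::Mat lap;
      cv::Laplacian(gray, lap, CV_64F);
      cv::Scalar mean, stddev;
      cv::meanStdDev(lap, mean, stddev);
      return stddev[0] * stddev[0];
    }

    // Naive hill climb: keep stepping while the score improves, back up and
    // reverse when it doesn't, and stop once both directions have failed.
    double autoFocus(double stepMicrons)
    {
      double best = focusScore(grabFrame());
      int direction = +1;
      int reversals = 0;
      while (reversals < 2)
      {
        moveMicrons(direction * stepMicrons);
        double score = focusScore(grabFrame());
        if (score > best) {
          best = score;
        } else {
          moveMicrons(-direction * stepMicrons);   // step back over the peak
          direction = -direction;
          ++reversals;
        }
      }
      return best;
    }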

For a short month, this was a good one.

2 Likes

Thank you for sharing your work, findings, and the tools that you encountered.
Very interesting work, and you are certainly taking the illuminant to the next level.

Given that you are using a monochrome sensor, with a decent response to blue and a bit less to red (also reflected in your results), I am a bit puzzled by the significant difference in the results between B/R and G/W.
It seems like white is tracking between R/B and G, and as it decreases (where B and R contribute less) it basically tracks green closely.

The results for G are outstanding.

If you check the monochrome sensor response (what they call the Quantum Efficiency) for the triton camera, IR at 940nm is only 9%. That may be a factor in the poor results for the IR channel.

Look forward to seeing your next installment, and thanks again for sharing your work and your progress.

I think it’s still axial chromatic aberration that was causing the difference. Here’s the image from the Wikipedia page that I like to keep in my brain for this kind of thing:

Each color simply has a different focal plane. And IR is much farther out from the others (mostly because it’s hard to design a lens that works well with both VIS and IR simultaneously and Schneider doesn’t have a good reason to care about IR performance on an enlarging lens).

Here is the (2x, nearest-neighbor) center crop from all of the images used in that test. Each of these was taken by changing only the light color via the (relatively distant) breadboard wires. Nothing else was touched in the setup between each capture. That little fleck of dust on the resolution target near the bottom makes it easier to see the difference. It’s amazing how out of focus the IR channel is relative to the others!

My hypothesis is that on an RGB camera you’re always dealing with this effect a little bit. The best focus will be right around the ideal for green wavelengths, which minimizes how blurry the red and blue are, but you can’t usually get all three at once (without, say, using an expensive apochromat corrected lens).

I am curious to see whether the seven or so microns between each emulsion layer in color film will work for or against this effect. My gut instinct tells me that the original film engineers probably took this into account and ordered the emulsion layers to minimize this effect. I plan to test this by scanning the film from both the front and back and then measure any difference in the ideal focus positions for each color. My guess is that scanning it from the normal side, the focus positions will be stacked closer on top of one another.

Otherwise, the quantum efficiency of the sensor will start to come into play once this much stronger effect is defeated. I still have auto-exposure enabled for the sensor, so these are still relatively close in terms of intensity values. I didn’t record the exposure time for each channel. I’ll do that as my testing continues. Between some combination of controlling the exposure time and possibly averaging several frames of each color to eliminate more of the sensor noise, I’m hoping to negate any QE differences. Again, this is still speculation at this point.

Frame averaging to eliminate sensor noise should also push the MTF graph out to the right a tiny bit more, in theory. But there is a can of worms here where any vibration while capturing the frames for averaging wouldn’t only negate the benefits, but probably lead to much worse results than if I hadn’t done it at all. I think I might eventually operate this machine directly on the concrete floor of my basement. :sweat_smile:

2 Likes

@npiegdon - you’re really digging into this!

Here’s a real scan example underlining your thoughts:

The red/green/blue channels are cutouts at 1:1 pixel size, the overview is zoomed down to match, the original resolution of the image is 4056 x 3040 px. You should click on the image to display it in full size on your monitor.

This is from a setup with the Raspberry Pi HQ sensor combined with a Schneider Componon-S 50 mm. I am using a slightly tighter f-stop of 5.6, as this gives me slightly better overall results, and I am not bothering to operate the lens in reverse (as the optical path in my 1:1 setup is rather symmetric anyway). Nevertheless, one can notice that the green channel shows a better image definition than the blue channel, and the red channel is somewhere in between. I have tested the IR with the same result as you: it is quite blurry unless you use specifically the IR to focus.

I am rather certain that is caused by chromatic aberration, as I am able to get the red or blue channel in focus separately. Of course, then the other color channels get unsharp. My approach is to focus the green channel, as this channel’s information contributes the most to your result.

By the way - this film frame was scanned from the emulsion side, i.e. the camera viewed the film like a normal Super-8 projector would do.

2 Likes

Very interesting @npiegdon, thanks. I was somewhat aware of chromatic aberration, but did not see it that bad on my first scanner.

I actually used the aberration effect for precise focusing. While using the lens at f2.8, there would be a halo (blue in one direction of focusing, red in the opposite direction), and I would focus to minimize both. I suspect that the point is a compromise and maybe not the best focus for green, but it produced the overall best picture. Then I would set the lens to the half stop between f4 and f5.6, which provided the best resolution/sharpness/depth of field compromise.

Would a larger sensor have made it less critical? I was using a Nikon DSLR D3200.

These also highlight that W would be the additive sum of G and R/B in the appropriate proportions, making it softer. If I am interpreting this correctly, then there is no way to improve the white focus, since it will always be affected by the soft R and B additive components.

In that case, the result may be noisier for IR, but there is no reason it would be softer. So, as @cpixip indicates…

I would now also agree that is the case, thanks @cpixip. IR certainly behaves as expected, but R/B to G is significantly more than what I have seen (again with a larger sensor, and a straight light).

Would the lenses used for the Köhler setup also create a displacement at different wavelengths that contributes to/exaggerates the aberration results?

@npiegdon, kudos on your experiments, definitely a new great dig!

1 Like

It’s nice to see corroborating evidence. It looks like your focus was set just in front of the green focal plane, leaning on the red side.

Were those captured using a diffuse light source? I also have the same question as @PM490 about whether the longer, direct light path with these non-imaging lenses has any influence on this. I have a hunch that it shouldn’t matter, but I haven’t tested it yet. It’s a bit of a diversion from my end-goal of scanning my family’s films, but I kind of want to test all these variants (an integrating sphere in this case, and maybe trying all the same things with the RPi HQ camera for a couple of other questions I have) just for the sake of documenting the differences and sharing the information with the community.

I’m not sure how a larger sensor would impact this CA situation, but I know it would start pushing further into the limits of the Componon-S lens. With this 2/3" sensor the corner fall-off is already pretty bad using this style of lighting (even at f/4 and f/4.7) and increasing the magnification much past 2x starts to bump into its resolution limits.

Yep, ever since I saw this effect for the first time a few years ago during much earlier testing, that was my reason for striking out in this design direction.

My biggest fear right now is that moving the camera to adjust the focus position also theoretically changes the image size very slightly (at least according to a little optics sim I fiddled with for a bit). I’m hoping that amount is less than, say, a quarter pixel or so over the whole width of the sensor. Otherwise when the channels are recombined into a color image, that slight size change between each channel will show up as chromatic aberration at the edges and corners (despite a perfectly sharp center).

If it does end up looking bad when the channels are combined, I may need to add independent motion control to both the sensor and the lens. That’d be like a bellows riding on the (stepper-controlled) ball screw with another stepper attached to the bellows knob… which sounds like an exhausting amount of work and something that would require a ton of tuning. For now I’m just crossing my fingers that it won’t be a problem. We’ll see. :grimacing:

This may actually not turn out to be the case for a different reason: while I’m able to get a lot closer to sharp focus on the IR channel vs. the image above, I’ve never seen it get as sharp as the green channel. I think this comes down to the VIS vs. IR lens design trade-offs I mentioned earlier. When you get far enough outside the design range of the lens (and the Componon-S 50mm never had IR in mind), you start to see some strange optical behavior.

That said, your understanding (re: quantum efficiency and auto-exposure) sounds correct.

This is a very interesting question! I’m hoping to investigate the answer. :smile:

1 Like

They were captured with the opposite of your illumination setup, namely an integrating sphere. Here’s a somewhat older picture of the setup used.

Let me point to the discussion about different illumination setups in the dissertation I linked above; the interesting part starts at section 5.1.2 Callier effect, p 51 ff. In essence, the main differences in contrast (grain as well as image content) are seen with classical silver-based film, not so much with dye-based material (which I scan exclusively) and in the appearance of scratches, where the integrating sphere comes closer to the results of a wetgate with matched index fluid.

Another point in my decision to use an integrating sphere was the slight reduction in contrast you still see when working with dye-based material. Due to the large density range of color-reversal film, any contrast reduction helps you to get the material in the camera and finally on disk. The contrast of the final product will anyway be adjusted in post.

At this point I’d like to make a further comment: nowadays there is a balance between things you can improve hardware-wise (optics, camera sensor, etc.) and computational methods which you can apply to the digitized content. This is different from the situation 20 years ago. That’s the reason why mobile phone cameras nowadays come closer to DSLRs than they used to.

For example, the differences you observed in your Köhler illumination when opening or closing the iris can be rather easily replicated by appropriate blurring/sharpening in post.

There is, however, a secret hidden in such a computational approach: you need to be careful to sample the signal you are interested in with sufficient precision. That is, in the digital version, the signal needs to be there. And of course, the better the signal, the easier the post-processing.

I think this will not be an issue. Let’s not forget that physics puts an upper limit on the resolution of the optical image the lens of your Super-8 camera can project (the Airy disk). So even with an ideal, non-existent grainless film stock, there is a limit on the resolution a Super-8 film can transmit.

Granted, if you really want to “capture the grain”, you are aiming at different scales, and considerations like this might be relevant. But then the optical resolution your scanning system is able to achieve also comes into play. Plus the hidden image processing algorithms your chosen sensor comes with.

As mentioned above, and indicated also by the sections of the dissertation I linked to, grain appearance is a function of the film frame’s illumination, and of other factors as well, like the noise-cancelling performance of the human visual system in a somewhat darkened projection room, viewing the projected image on a silver screen (which had different textures, depending on the make). I highly doubt that anyone can recreate this experience nowadays on a standard display screen, normally not viewed in a darkened room.

In the end, it will be an artistic and/or economic (bandwidth requirements!) decision how much grain your final product will feature.

It certainly will, especially since the lenses used for illumination purposes usually show all kinds of image deficits. Will it matter in the final scan? Probably not much.

2 Likes

That was in the other recent topic. :grin:

I coincidentally saw a different mention of the same silver vs. dye difference only a couple hours before this post. It was in a short blog post linked here, where you are mentioned by name, hehe. That was the first time I remember seeing the phrase “chromogenic monochrome” used as a description to mean “not using silver particles”.

So I might be making a machine that is better for silver-based film when I don’t actually have any of that to scan (save for the calibration target itself). It would be a more realistic test in this early stage to have some of that SMPTE test film (on chromogenic monochrome film!) than continuing to use this laser-etched, silver-coated glass USAF 1951 target.

I don’t think I understood what you meant here. Do you mean that by choosing to allow a reduction in contrast it’s possible to relax many of the requirements of the hardware so the time between starting to build a machine and when it’s able to produce a meaningful result is shorter? (I would agree with that!)

That’s an interesting notion that it might be possible to use the “highest contrast” aperture with all those over-sharpening artifacts and simply post-process it down to something nice. We have so much compute available these days that it’s kind of ridiculous.

Like I’d mentioned in the other thread, I’m not particularly interested in the grain. I’m hoping that when I reach the wet-gate stage near the end of this journey, it’ll have the same muting effect as shown in Fig.24 of the “Film Grain, Resolution and Fundamental Film Particles” paper you linked there.

Rather than going for grain structure, my goal is to get as much of the original picture signal by maxing out each of these parameters (best focus for each channel, ideal contrast, eventual wet-gate, etc.). I’m reasonably sure that this 1936x1464 sensor is going to be able to sample all the frequencies I’m looking for.

Taking a step back, my reason for all of these experiments is because my instinct is telling me there is a good 10-15% of something being left on the table by the usual RGB/Bayer + single focus + integrating sphere approach. I wanted to explore a little more of the possibility space than we usually see in the homemade 8mm machines to see if I couldn’t dig up that 10%.

That said, I am trying to mentally prepare for the event that we find all my extra effort only gives 0.5 or 1% of whatever this nebulous “benefit” ends up being. In that case the effort will have still been useful, if only as a cautionary tale for others to avoid in the future! :sweat_smile: At the very least, I just want to transform some of my conjecturing into real data points. And… you know, maybe scan my family’s 8mm films while I’m at it. :grin:

That’s another interesting position that is counter to my intuition, especially given your own demonstration of the same effect when using an integrating sphere. (Granted, my intuition is severely underdeveloped in the area of optics, to the point of being wrong most of the time. So these contradictions are useful for helping me sort out my own mistakes in understanding.)

Is your claim that the magnitude of the aberration will just be lower with an integrating sphere? Isn’t the image focal plane determined solely by the imaging lens (between the sensor and subject) and the light wavelength? Would the angle of incidence of the illumination behind the subject have anything to do with it?

FWIW, last night I ordered some barium sulphate and those small cake pans that @PM490 has mentioned a few times. I want to test two things:

  1. The usual case: integrating sphere exit pupil placed as close to the subject as possible, just to compare/contrast my other results.
  2. Integrating sphere used as the input to the Köhler setup.

The reason for the latter is that despite trying to pack these LEDs in as tightly as possible on this PCB (in the top post), even with the “perfectly out of focus” lens setup, there are still (large, smooth, but) uneven hotspots in the image unless I put a diffuser or two at the beginning of the light path. Starting with an ideally even field and then using the lenses/apertures just to adjust the contrast seems like an interesting experiment.

… actually not. Both feature resolutions which are higher than anything a real Super-8 film will be able to display. USAF targets are usually produced by a photolithographic process; the SMPTE test film is produced with a high-resolution film emulsion. The definition this test film shows should be beyond the resolution of a normal Super-8 film.

Well, the density variation of a standard color-reversal film, say Kodachrome, easily covers a range equivalent to a 12 to 13 bit/channel sensor. That is: it is not possible to scan a Kodachrome film frame with a camera operating at a native 10 bit/channel resolution (the Raspberry Pi HQ sensor would do so in some modes). It is barely possible to scan a frame with high image contrast at 12 bit/channel, provided you get the exposure perfectly right (the HQ sensor does feature 12 bit/channel modes as well). We are talking here about the raw image, not the processed .jpg or .png which is output from the image pipeline. Most DSLRs max out at 14 bit/channel. These numbers are from my experience with various film stock, scanned with the “soft” integrating sphere illumination. If you increase contrast with a different illumination, you make it harder for the camera to digitize either the highlights or the shadows. That is the reason why many people use multiple exposures which are then combined in post processing. There are a lot of examples and discussions here on the forum about this topic. Again, you probably want to scan “soft” and modify your material in post processing.
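
As a rough rule of thumb, and assuming a maximum density somewhere around 3.6 to 3.9 for color-reversal stock, converting a density range into linear sensor bits is just:

    $$ \text{bits} \approx \frac{\Delta D}{\log_{10} 2} \approx \frac{3.6 \ldots 3.9}{0.301} \approx 12 \ldots 13 $$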

Well, that is about my approach. Let’s reflect a little bit on that. In the last century, most people working in film and photography aimed at the same target: getting as little grain as possible. Why? Because the grain spoils the tiny image structures a good lens can image onto the film. Some people (including me) managed to get resolutions comparable to large-format cameras with 35 mm film (Kodak Technical Pan + special developers). Some time after the digital transformation, it became chic to introduce artificial grain into digital media - again spoiling the original image definition by “adding” spatio-temporal noise. From my point of view, it’s a valid statement, but one has to understand that it is an artistic decision. It does not have much to do with replicating the “experience” of viewing a projected old Super-8 film.

Nowadays, there are ways of treating film grain (and sensor noise, for that matter) that were not available at the turn of the century. The basic processing flow goes like this:

  1. Get rid of the film grain as much as possible. This is usually done by exploiting the temporal coherence of consecutive film frames viewing the same area of the scene. Basically, you search for the same image patch in a few neighbouring frames. Ideally, the scanned data should be identical, but because of the film grain, it is not. This actually allows you to subtract the noise (film grain) from the real image. The “search” is usually some sort of motion estimation (a rough sketch of this step follows the list).

  2. Once the cleaned image is obtained, you can apply various computational methods (like sharpening, local contrast enhancement, etc.) to it. If you applied a sharpening operation before the film grain was taken out, you would amplify the grain without adding any real resolution to the image. Added benefit: a noticeable reduction in the bandwidth requirements for the material.

  3. If you are really into this, you can add back again the grain you have taken out before…
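
To make step 1 a bit more concrete, here is a very rough Python/OpenCV sketch of motion-compensated temporal averaging. It only illustrates the principle: dense Farnebäck flow stands in for the far more robust motion estimation real tools use, and the function name and parameters are placeholders, not anyone’s actual pipeline.

```python
import cv2
import numpy as np

def temporal_degrain(frames, radius=2):
    """Sketch of step 1: motion-compensated temporal averaging.

    frames: list of 8-bit single-channel (grayscale) frames.
    Each frame's neighbours are warped onto it with dense optical flow and
    the stack is averaged. Real image detail is consistent across frames
    and survives; grain, uncorrelated from frame to frame, averages away.
    """
    h, w = frames[0].shape
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    out = []
    for i, ref in enumerate(frames):
        stack = [ref.astype(np.float32)]
        for j in range(max(0, i - radius), min(len(frames), i + radius + 1)):
            if j == i:
                continue
            # Flow from the reference to the neighbour:
            # ref(x) ~ frames[j](x + flow(x)).
            flow = cv2.calcOpticalFlowFarneback(
                ref, frames[j], None, 0.5, 3, 21, 3, 5, 1.2, 0)
            # Pull the neighbour's pixels back onto the reference geometry.
            warped = cv2.remap(frames[j],
                               grid_x + flow[..., 0],
                               grid_y + flow[..., 1],
                               cv2.INTER_LINEAR)
            stack.append(warped.astype(np.float32))
        out.append(np.clip(np.mean(stack, axis=0), 0, 255).astype(np.uint8))
    return out
```

Because the warped image content is the same in every frame while the grain is not, the averaging suppresses the grain roughly with the square root of the number of frames used.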

Here’s a quick example to show you the effect:

The left half is the original image content, with a noticeable grain structure. To the right, the image processed as described above is displayed (only a sharpening step was performed under point 2).

This image was processed at a reduced resolution of 800 x 600 px, scaled down from an original 4056 x 3040 px source image. I normally work with a source resolution of around 1800 x 1350 px for this degraining process. Nevertheless, I think the difference in image definition is noticeable even in this small example.

No, I don’t think so. As you stated, this is a pure imaging lens effect. It’s there with your Köhler setup, it’s there with my integrating sphere.

Looking forward to your next update - it is really fun to follow your progress!


Ah ha, I understand what you meant now. And while I’ll agree that reducing the contrast will help you acquire the full range of the image more easily (possibly in a single exposure), isn’t a loss of contrast essentially the same as a loss of information? (It’s almost a tautological statement: we need multiple exposures to capture high-contrast images because there’s more information there than a 10- or 12-bit sensor can capture in one go.)

Outside of presenting the final results on HDR viewing equipment, I understand that at some point in the processing chain (hopefully near the end) there is a “mix down” step where you convert to some 8-bit/SDR “window”, but in general it seems like the goal should be to maximize the contrast/bit depth available right up until that step.
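
For what it’s worth, one common way of doing that combination and final mix-down is Mertens exposure fusion, which blends bracketed captures directly without recovering true HDR radiance values. A minimal sketch with OpenCV, assuming three bracketed captures of the same frame (the filenames are placeholders):

```python
import cv2

# Hypothetical filenames for three bracketed captures of one frame.
exposures = [cv2.imread(f) for f in ("under.png", "mid.png", "over.png")]

# Mertens exposure fusion: keeps the well-exposed parts of each capture
# without needing camera response curves or radiometric calibration.
fusion = cv2.createMergeMertens().process(exposures)

# The "mix down": fusion is float, roughly 0..1; scale and clamp to 8 bit
# only at this very last step.
cv2.imwrite("fused.png", (fusion.clip(0, 1) * 255).round().astype("uint8"))
```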

This particular sensor has a couple of 12-bit modes and a 16-bit raw mode for retrieving image data, but that’s just a description of the range of expressible values over the wire. I haven’t done any testing to see what sort of real dynamic range I’m getting from the sensor yet, although, starting from one of Sony’s Gen3 Pregius models, I have high hopes. With a well depth of 20Ke-, in theory there should be somewhere in the vicinity of 14 bits to play with, but reality probably won’t be quite so kind.
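
For anyone who wants to play along with the arithmetic, the theoretical dynamic range is just the ratio of full-well capacity to read noise. A quick sketch (the read-noise figure below is an assumption I still need to measure, not a datasheet value):

```python
import math

# Theoretical dynamic range from full-well capacity and read noise.
full_well_e = 20_000    # electrons, per the well depth mentioned above
read_noise_e = 2.5      # electrons RMS -- assumed placeholder, not measured

ratio = full_well_e / read_noise_e
print(f"DR ≈ {math.log2(ratio):.1f} stops ≈ {20 * math.log10(ratio):.0f} dB")
# → DR ≈ 13.0 stops ≈ 78 dB
```

which is why I’m keeping my expectations nearer 13 usable bits than a full 14.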

We have direct evidence of this claim from the 1996 demo of Kodak’s “Vision” system in this RobinoScan post. The host mentions film grain (in the context of it being a negative thing that you want to minimize) several times starting at 5m30s and again at 8m30s. Reducing grain is one of the four major feature points they tout as the reason to switch to the new film stock!

My favorite app for this is made by the NeatLab folks. They already have best-in-class de-noising for still images in their Neat Image product. And then their Neat Video product adds a temporal component on top of that. Combine that with very attractive pricing and I’ve been using it to clean up digital sensor noise for years. I haven’t tried it with film grain yet, but I’m excited to see what it can do. The results usually feel a little like magic.

I agree that temporal denoising is one of the first and most important steps.

This is getting exciting now that a couple sub-systems are starting to come together. I just finished moving everything from my electronics/assembly bench over to my usual programming workstation. It’s time to get this software doing more interesting things!


Well, there are two different things in play here. If you lose information because you are not able to cover the whole dynamic range of a source, there is no way to recover that lost information. If you lose information because your signal is less finely resolved due to quantization effects, the (coarse) structure of the signal is still there. So in the latter case, you still have something to work with.

For example, the log encoding used in some cameras uses coarser quantization steps in certain intensity ranges, but there is still image content in the shadows and highlights. Also, as long as your image processing endeavours are not too extreme, it’s hard to tell an 8 bit/channel image from a 16 bit/channel one. When I was writing my own image processing algorithms, I used to work in a special floating point format. That was long before, for example, the OpenEXR format came along. Nowadays I’ve gotten lazy: processing is either float or 16 bit, and storage is always 16 bit.
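
A toy numeric illustration of that distinction (purely synthetic numbers, not tied to any particular camera): clipping collapses everything beyond full scale into a single value, while coarse quantization merely roughens a signal that is still there.

```python
import numpy as np

# A synthetic ramp spanning twice the sensor's full scale (full scale = 1.0).
signal = np.linspace(0.0, 2.0, 1000)

clipped = np.clip(signal, 0.0, 1.0)                   # dynamic range exceeded
quantized = np.round(signal / 2.0 * 255) / 255 * 2.0  # coarse 8-bit steps over the full range

# Everything above full scale collapses to one value once clipped: gone for good.
print("distinct levels above 1.0 after clipping:  ",
      np.unique(clipped[signal > 1.0]).size)    # 1
# The quantized ramp still rises, just in coarse steps: structure is recoverable.
print("distinct levels above 1.0 after quantizing:",
      np.unique(quantized[signal > 1.0]).size)  # ~128
```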

The sensor will probably not deliver a real 16 bits of resolution. It’s probably a 12-bit sensor - the current upper limit I am aware of is the 14-bit sensors in high-end DSLRs.

Nice catch! :sunglasses:

There are several other options here. I guess the first one to do these kinds of things in the context of Super-8 scanning was Videofred. He used avisynth scripts, and in the context of avisynth there has been active development over the years on removing grain from existing footage. avisynth has a steep learning curve, but it’s free.

From my experience, it is important to prestabilize the footage before a temporal-degraining operation is attempted. The reason is that the movement estimation of image parts is not trivial and a prestabilization helps here tremendously. Still, there are restoration artifacts in scenes with a lot of movement. Usually they are not that noticeable, and the improved image resolution outweighs these artifacts in most cases.

While I plan to come up with my own software, at the moment I am using daVinci for prestabilization, avisynth + VirtualDub for temporal degraining, and daVinci again for editing and color grading. I haven’t tried daVinci’s denoising since version 16 or so, but since they are now at version 18, I think I will recheck it at some point.
