The ReelSlow8 Film Scanner

Start with a small hole,

and then use a step-bit; these worked great.

For the larger output port, I made a jig to hold the cake mold so it could rotate, while holding the Dremel still with a milling-cutter bit.

Then I used the same setup with a sanding bit to finish it clean.

Finished the edges with a light/fine filing.


Wow, so the side ports were done by hand with the step-bit?! I’m even more impressed. Nice work. :smile:

That jig was another good idea, too!


June/July: Early Direct Drive Transport with Load Cell

The ReelSlow8 has sprouted legs (and more). I wanted to get it off the table so I could run wires and have easier access when building the side arms.

My 7 year old started his summer break from school so my hobby time took another hit. I didn’t have enough to show in June alone, so I’ll do a June/July and July/August update instead.

Even then, instead of working on auto-focus I decided to finally tackle the pain point I experience anytime I need to carefully hold film in front of this thing for a test. It was time this film scanner could actually scan film. :sweat_smile:

I opted for the simplest film path I could imagine.

Direct stepper drive on both reels, two rollers as close as possible to the exit pupil of the integration sphere, and one extra roller for the load cell. The side-arms of the machine were designed with the MakerBeam very specifically to allow easy height adjustment for the steppers.

The rollers are (SLA) 3D printed to have two 693ZZ bearings pressed into the center. Then, you can run them right on an M3 bolt and use the nuts to fine-adjust the height. The curve in the center means they barely touch the edges of the film, as per usual.

The most interesting part is the load cell. If it isn’t the default choice around here for tension sensing, it should be and I’ll advocate for it any chance I get!

  • It’s like $8 for a 1kg load cell.
  • The SparkFun HX711 breakout board seems to include a few more components on the board to ensure good function vs. the boards packed in with the load cell itself (at least according to the reviews on both sites), so I tried that first.
  • You can switch the SparkFun board from 10 Hz updates to 80 Hz just by cutting a jumper on the back of the PCB. (They advertise 80 Hz, but mine does about 92 Hz and I’m not complaining.)

There are complaints in the reviews that the data is especially noisy and you need to do all sorts of averaging over time, but those are use cases that need the accuracy all the way down to the least significant bits coming off the 24-bit ADC. For our purposes of roughly keeping film inside some, say 50g, tension window, we might need five of those bits.

When you plot the full-scale data with no averaging at all–just the raw data–look at how pretty this is:

Essentially zero-latency response and super clean data. The little bump each time the scale reverses direction is actually the static friction of the plastic piston against the cylinder breaking. You can feel it and, apparently, so can the load cell.

That little classroom scale is my plan for calibration. I haven’t gotten this far in the code yet, but having the app ask for a few measurements (maybe every 100g on the 500g scale to build a regression) before each scanning session should give sub-1g accuracy, which is an order of magnitude more than I need.


Nice update, thank you for the information on the load cell, very interesting and something to consider.
I also like the roller design, very nice and simple.

I am only making this comment because your design has been very detailed; for a normal build, I would not even say anything…
One consideration on the current S-path and air gate is that the film bow at the gate will change with tension, and that slight bow will change the focus point.
Additionally, on the pick-up reel side, as the reel fills up, the film angle to the air gate's pick-up-side post will change throughout the reel, making the right side of the bow vary slightly, even when tension is held constant.

Understand that in this case (with auto-focus) that may not be an issue, and the variations introduced by the pickup on the right side may be within the depth-of-field range; nevertheless, keep an eye out for those.

My worst skill is building transports, and I got lots of experience (which is what you get when you don’t get what you want). To avoid the variation introduced by the filling pick-up reel, a simple fix is adding a roller post on the pick-up side, mirroring the one for the load cell, effectively changing from an S-path to a W-path.

Again, only making this comment given the extreme effort you have been doing on every little detail. Hope it helps.


I’ll take that as a compliment. :smiley:

Also, these are great questions!

Regarding the film angle, this is a quick, not-to-scale mock-up of what I think you meant:

The darker blue shows that over the course of an entire reel, the angle will definitely change, introducing a cosine error term on the load cell roller (and one of the “gate” rollers). During calibration with the classroom scale, my plan was to try and hold it along the mid-line between the two extremes, so it’s calibrated on the average, halving the error.

Admittedly, talking about holding something by hand (and using a $3 instrument) probably demonstrates I’m not too worried about the results here. When I did a survey of all the recommended tension values I could find across as many different pro machines and archival reports as I could, the values were all over the place. (Granted, these were mostly for 35mm film, but) after converting all of them to the same units, I found all of the following mentioned as supported/recommended tensions: 200g, 300g, 170-450g, 170-850g, and “very little up to 340g”. So picking a conservative starting point like 200g seemed safe enough, and I probably still wouldn’t have anything to worry about even if it ends up being off by, say, 50%(!).

Also having the least experience with the transport (vs. all of the other relatively more known quantities), my plan was to cross that bridge when I came to it! :sweat_smile: My ace in the hole was that I intentionally used longer rails for the side-arms, so if the cosine error became a problem, I could just slide either the load cell or take-up reel back as far as another 8" or so. That would increase the length of the film path but also drop the cosine error fairly considerably.

Another design requirement I’m willing to bend on: out of the 39 reels I have to scan, 34 of them are 3" reels. If the couple 5" and 7" reels end up introducing too much error, I wouldn’t mind a pre-processing step of transferring them, piecemeal, to 3" reels for scanning. (Granted, I don’t want to do it, but I will if I have to, hehe. I already had that same workaround in the back of my mind in the event that the direct drive motors weren’t strong enough to turn the larger reels.)

As for film bow, this is probably my complete lack of experience showing. I had assumed that once you were wrapped around a roller far enough, the incoming angle wouldn’t matter anymore (given a particular tension level) and you could make assumptions about the behavior of the film from that point forward independent from anything that happened before it. And the “around a roller far enough” here could be modulated again by sliding things as far along the rails as necessary to reduce the cosine error. The “far enough” probably also depends on the diameter of the contact point on the roller. For very large rollers, I’d guess film bow is minimized/eliminated. For something like the LEGO cones in your design–which approach a diameter of zero–the tendency to bow out is probably accentuated.

Really, I’ve probably puzzled over these questions longer than any other part of the project. Some pro scanners have dozens of rollers. Some have as few as (or fewer than!) this design. Your own film path stood out as one that had an above-average number of rollers and I’ve spent time scratching my head over what each one might be for.

I’d dragged my feet on this part for as long as I could (something like five years :grimacing: ) and it was finally time to commit something to paper–good or bad–just to get a move on. I was hoping the first iteration would get close enough, but I’m not so confident that I’d be surprised if I had to tear most of it down and try again.

The eventual addition of a full-immersion wet gate only adds the requirement that the film near the light source be convex (w.r.t. the camera), which the current S-shaped path permits (even if you’d probably have to slide those two elements farther down the rails toward the camera to make it more feasible). But your W-path suggestion also meets that requirement, so adding that extra roller in the event that film bow becomes a problem would be a good workaround.

All of that is a long-winded answer to say I am much less confident about this part of the design and am mostly crossing my fingers, hoping for the best, and am planning to tackle problems as I run into them. :smiley:

Yes, it is :slight_smile:

Here is my totally empirical summary; my point of view below is framed by these points.

  • The angle of bend around a post or roller adds to the force needed to move the film. In my case, I had rubber rollers with bearings (little friction) and loose conical LEGO posts without bearings (more friction). In general, the larger the bend, the more friction.
  • Old film is less flexible. More tension was needed to flatten the film on the air-gate posts; less tension would bow the film. (I thought this could even be used for narrow focusing.) In my case, the space between the gate posts is larger, because I wish to capture almost 3 frames of 16mm (one frame plus more than two halves), so the bowing was more noticeable.
  • Because of old film's lack of flexibility and the narrow LEGO posts, it was necessary to avoid large angles around a post, or risk the film breaking at the perforations (I experienced it) when using more tension.
  • Eliminating the variability of the growing/shrinking spool takes away one more uncertainty. The fewer uncertainties in areas where I have less know-how, the better.
  • The transport is intended to handle both 16mm and 8mm without any changes other than software. This is an aspiration, but the preliminary results were promising.

With that thinking framework, a few comments.

I was less concerned about the load-cell effect of the supply-reel angle changes, mostly because of my lack of know-how on the cell, but you have a point.

I used two kinds of post: the narrow LEGO (conical, only touching the edges) and the wider rubber roller with bearing (flat, full contact). With old film, I saw bowing on both. The rubber one is probably closer to your design, but since yours also only touch the edges, I think the opportunity to bow is there… similar to the LEGO (which likewise only touches the edges). All speculation; testing is king.

Based on past experience (which may not be relevant to your rollers) I would expect some uneven bowing (very exaggerated in the illustration).
(illustration: two_post_deformation)
The angle on the left side does not change; the angle on the right side does, providing an opportunity to bow differently than the left side. This may be so minimal that it doesn't matter; again, I'm just mentioning it given the attention to detail in your work. Just keep an eye out for it and see if it matters.

Same here, with the added camera sensor and sensor software learning curve.

A big difference between the paths is the type of motor used; one probably cannot oversimplify and equate film paths designed for different kinds of motors.

I’ll shed some light on my path design, a path for steppers. In very old posts, I started with an inverted V, with 3 rollers on each side of the gate.

Then came the proof of concept for the transport with ToF.

The first reference to load cells was in the ToF post (square path above) from @friolator, but I did not rethink the initial inverted-V path.

The first path, the inverted V, is nice and symmetrical; the 3 rollers take away any variations from the spool-size changes, and two load cells on the second roller of each side would give perfect feedback to control the steppers. For precision, one of the gate-edge rollers can carry an encoder. As I write, I think this is a neat transport alternative, and one that I may revisit down the road.

The one turn-off was that it was large, in order to make space for the large sphere, plus two large reels virtually in line. That’s when I started playing with ideas of a wrap-around path, to keep the project at a reasonable size for storage.

Then came the ToF path, which is virtually a square, and provided experience on turning old film.

The new path is based on an octagon, and it is virtually symmetrical between the supply side and pick-up side. The pick-up side has the capstan, and the supply side has a roller where the capstan would be, which is the only difference (the middle roller of the three at the bottom).

To eliminate variations induced by spool size, both sides start with an entry post (the right side of the octagon). Both sides have an identical dancing potentiometer; the entry/exit angle of the potentiometer is fixed by the entry post and the first roller (going from right to left). The next sides of the octagon are the capstan (on the top) and the dummy mirror capstan (the three rollers at the bottom). The two left sides of the octagon are the gate.

The large sphere needed for the 16mm target (more than one frame) takes a lot of space. When working on the failed prototype, one of the takeaways was the fragility and rigidity of old film (I have a couple of 16mm reels close to 70 years old). So I was looking for a compact arrangement with a path minimizing sharp bends. The entry posts, with large spools, are the only bends that would be over 90 degrees (the old 16mm film is on smaller spools, where the angle would be far less than 90).

So in short, all those rollers help make the transport plate a bit smaller and lessen the film bending.
Please, don’t follow me, I’m a bit lost… but making good progress (paraphrasing Yogi Berra).

I suggest you change nothing until you test it, and use the information provided as things-to-keep-an-eye-for. Perfection is enemy of the good.


This kind of freely shared experience is the reason this forum is great and how it has become a veritable dragon’s hoard of collected wisdom for these kinds of projects. :smiley:

Film bow wasn’t on my radar before, but it is now. I’ll keep an eye out for any problems while I’m testing things. Thanks again for sharing!


This is not unlike the Imagica 3000V transport. It was a scanner from the late 1990s - very slow - designed for digital intermediates. If you think your scanner will run slow, remember that the 3000V took 30 seconds PER FRAME to scan 35mm at 4k! I had one of these as the basis of my first conversion project, but ultimately moved to a different platform. But the basic transport is not unlike your new layout:


July/August: Direct Drive Transport (part 1)

Last time, the wires for the transport steppers weren’t even connected. Now, the ReelSlow8 can reliably maintain tension while moving.

The picture hasn’t changed much; just fewer loose wires, the classroom scale is still there from calibration, and the E-stop finally found a home where it’s easier to press.

That said, there are some really cool details where things have started to come together in a way that feels like more than the sum of their parts.

Load Cell Linearity

I wanted to get an idea about what sort of curve I was going to need to model in the software to convert raw ADC counts from the load cell into a tension measurement. So, I mounted the classroom scale close to the same position and angle the incoming film would normally be coming from. Connecting the other end of the little snippet of film to the take-up hub, I can pull the scale out to some number and lock it in place with a screw.

Testing at each 100g on the scale, averaging a few hundred readings at each point, these were the results:

That the linear trend-line is almost hidden by the data itself across the entire range is about as good as you could hope for. So, now that I know the slope for this particular load cell, all the app needs to do is “tare” using a single 0g data point. Nice and easy.

Key takeaway: treating the load cell data as perfectly linear seems like a safe assumption.
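In code, the whole conversion then reduces to almost nothing. Here's a sketch of the idea (the struct and names are mine, not the actual app code), assuming the slope from the one-time linear fit and a tare reading averaged at startup:

```cpp
#include <cstdint>

// Hypothetical calibration constants: the slope comes from the one-time
// linear fit against the classroom scale; the tare offset is re-measured
// at each app launch by averaging ~100 readings with nothing on the path.
struct LoadCellCal {
    double countsPerGram;   // slope from the linear fit
    int32_t tareCounts;     // raw ADC reading at 0g
};

// With the data this linear, converting a raw 24-bit HX711 reading to
// grams is a single subtract-and-divide.
double countsToGrams(int32_t raw, const LoadCellCal& cal) {
    return (raw - cal.tareCounts) / cal.countsPerGram;
}
```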

Load Cell Creep

Something you notice when watching the ADC counts rolling in at ~90Hz is that those lower 10 or so bits (of 24) are always walking around all over the place. It’s not just white noise; it’s sort of shimmying around over time. I’d also noticed that just after making a large tension adjustment with the classroom scale (say, more than 100g from where it was before), over the next couple of minutes the random walking behavior in the data seemed to trend more in one direction than the other.

To try and characterize this creeping behavior, I performed the following experiment:

  1. Remove everything from the film path (0g tension) for >24h.
  2. Use the classroom scale to tension the load cell to 200g.
  3. Leave it at 200g for >24h.
  4. Remove everything (0g again).

Then, immediately before/after and 5min before/after each of those steps, I would take an ADC reading.

I was happy to find that the wiggly lower bits were less trouble than they appeared to be. I never saw any deviations larger than 1.7% (of the 200g), so maybe a 3.3g deviation in the worst case (reading immediately after a large change when the load cell has been in a different position for a long period). In most situations, the error was below 0.8%.

My own target when moving the film is to maintain a range of about 15g from my desired set point, so the worst case is already inside my noise allowance. To improve things, I could pre-tension the machine to my typical running set-point the night before I plan to use it.

Key takeaway: don’t plan on splitting grams, but our application never needed to in the first place.

Film Path Safety

So, these reels are precious family relics. I want it to be near-impossible for my buggy code to do any harm to this stuff. Beyond the E-stop button (which physically severs all motor current), I wanted something automatic in-place that could react faster than I can.

I wasn’t quite sure how to get the kind of guarantees I was looking for from a single Arduino. I’ve been avoiding interrupt-driven microcontroller code just for ease of reasoning, so reading from the load cell involves polling and busy waiting. For similar reasons, I’ve opted to “bit bang” the actual step pulses to the stepper controller boards rather than using something like PWM and crossing my fingers that I get the timing/counting right.

So a natural architecture fell out of my (made-up) requirements that ended up working well for the safety I was looking for:

The “controller” Arduino sits and waits for messages, responding to each one in turn. The “sensor” microcontroller just has the fire hose of load cell readings dumping into its serial output. By adding a single command to the sensor board, I was able to get exactly what I wanted.

One of the first operations you perform when launching the desktop app is to tare the load cell by averaging a hundred or so readings when nothing is touching it. Once the app knows the appropriate zero-point (and constant intercept), it sends the sensor board a min/max limit. From that point on, if it ever sees a reading above or below that range, it immediately pulls the reset pin on the other board. The initialization routine on the controller board sets the motors to 0% current, so the tension is dropped to zero.

My favorite part of this scheme is the low latency. No USB round-trips, no queued serial messages, and no I/O delays inside the OS. Microseconds after the load cell knows about the problem, it’s stopped.

I’ve already accidentally tripped that safety mechanism a half-dozen times and it’s worked flawlessly every time. It’ll trip just from tugging on the film with your finger.

The controller board also sends a startup message when it is reset, so the desktop app can interrupt its current operation (if any) and show a big, red warning message that requires user intervention before continuing.
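For what it's worth, the core of that watchdog is tiny. Here's a host-testable sketch of the idea (names and types are my own, not the actual firmware); on the real sensor board, a tripped check would be followed by pulling the controller board's reset pin:

```cpp
#include <cstdint>

// The desktop app sends min/max raw-count limits once after taring;
// until then, the limits are not armed and readings pass unchecked.
struct TensionLimits {
    int32_t minCounts;
    int32_t maxCounts;
    bool armed;
};

// Called on every load cell reading (~90 Hz). Returns true when the
// reading is outside the allowed window and the motors must be killed.
bool readingTripsSafety(int32_t raw, const TensionLimits& lim) {
    return lim.armed && (raw < lim.minCounts || raw > lim.maxCounts);
}
```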

Since I got that up and running, my worries about accidentally destroying these family memories have been allayed.

Key takeaway: it’s better to put the time in on this kind of stuff up front before an accident happens, if only for peace of mind.

Simple(r?) Super-8 Sprocket Detection

I’d been planning to use a scheme like @cpixip’s Simple Super-8 Sprocket Registration as most of the criteria there (sub-pixel registration done in a separate pass, etc.) applies to this machine as well.

My app doesn’t stop on frame boundaries yet, but I had a shower thought the other day about a method that requires many fewer calculations and steps to get a rough sprocket hole position and decided to try it.

It relies on the assumption that the film will be properly exposed so that sprocket holes are the only “burned out” areas in the image.

The shortest explanation: calculate the center-of-mass of only the over-exposed pixels. That’s it. That’s the y-coordinate of your sprocket hole’s center.

Here, red pixels are over-exposed. And the blue line is their center of mass.

If you haven’t seen the calculation before, there’s not much to it:

var moment = 0
var total = 0
for each line y in the image
   var count = number of over-exposed pixels in that line
   moment += count * y
   total += count

var centerY = moment / total

I already needed to walk the entire image for a different process anyway, so I test all the pixels. But, to save effort or if you are zoomed out far enough to see beyond the edges of the film, you could employ the same vertical band technique from cpixip’s post.

And if you’ll be moving close to a frame at a time before checking to see where you ended up, you’ll only ever have one hole visible at a time so, again, you’re already done. Otherwise, you can count the number of lines since the last time you saw any overexposed pixels and after some threshold (20 lines? 40?), start accumulating a new center-of-mass. That will find all of the sprocket holes.

The only weakness is that it gets a little inaccurate when a hole is only partially visible at the top/bottom edge of the sensor frame, but that’s exactly when you need the precision the least.
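For anyone who wants to try it, the pseudocode above translates into a small, self-contained C++ function (the row-major buffer layout and the threshold value here are my choices, not necessarily what the app uses):

```cpp
#include <vector>
#include <cstdint>

// Center-of-mass of over-exposed pixels in a grayscale image stored
// row-major. Returns the y-coordinate of the sprocket hole's center,
// or -1.0 if no over-exposed pixels were found.
double sprocketCenterY(const std::vector<uint8_t>& pixels,
                       int width, int height, uint8_t threshold = 250) {
    long long moment = 0, total = 0;
    for (int y = 0; y < height; ++y) {
        long long count = 0;
        for (int x = 0; x < width; ++x)
            if (pixels[y * width + x] >= threshold) ++count;
        moment += count * y;  // each line's pixel count, weighted by y
        total += count;
    }
    return total ? double(moment) / double(total) : -1.0;
}
```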


More Soon: There is a lot of exciting math involved in direct-driving two steppers to maintain tension (without a capstan and while still getting useful movement done), but it’ll take a couple days to put together the visual aids and record a video. So, I’ll split off this grab-bag of unrelated progress so I can focus on just that part next time.


Great update, thanks for sharing.

From my experience (experience is what you get when you don’t get what you want) with Time-of-Flight sensors, I did try a pure math approach to moving without a capstan, and it worked fairly well, except that one of my goals for the project was to scan film without sprockets (develocorder), thus losing that position reference to work with.
Using 100:1 geared steppers (99+104/2057 to be exact), and measuring each spool and the film thickness with a caliper, the PICO was able to hold fairly steady movement for a few dozen frames without any feedback loop.
You have a better transport setup with tension feedback… but the math says that with large-diameter spools, it will be challenging to hold precision with the 200 steps/turn of a normal stepper, even when using 1/16th micro-steps (what I had available).
A 50 mm diameter reel translates to about 5 stepper steps, or 80 micro-steps, per Super-8 frame. Granted, the error can be corrected digitally by panning from the sprocket reference (when sprockets are available).
After trying, I have a healthy respect for the tolerances involved in the movement of 8mm film, actually seeing some even with a capstan. Godspeed mate.


Thanks for the encouragement; I agree with all of that! :smiley:

I couldn’t imagine attempting it without feedback. (I’m still impressed by your ToF results!) Larger reels would definitely make it more challenging to stay inside my error margins. And 1/16th steps are the largest I’ve ventured for doing anything close to precise. 1/32nd seems to be a sweet spot for coarser moves with the smaller 3" reels.

The 10% or so over-scan shown in that frame above gives enough leeway that only having a mostly-centered sprocket hole should be good enough. That’s one of the reasons I’m not too worried about the precision of the center-of-mass tracking algorithm.

It’s been funny: you and I have been doing some of the same detective work in parallel. While you’ve been posting about the non-linearities in the new SnailScan transport, I’ve been marveling at how similar the sorts of oscillations I’ve been seeing on this side have been, too. Even just these old plastic reels seem to have a bit of lopsidedness to them, not to mention all the parts that I’ve made. :sweat_smile:

The neat, feedback-based, adaptive-style math I’ve got working is just for the tension half of the problem and not for getting frames centered yet. But the way I was able to break down the problem, I did most of the hard part up front, so centering on the frames should be the easier of the two. (Famous last words!) In the meantime, I’m still pretty excited at how well the first half worked out. It ended up being a very simple scheme (after trying to over-complicate it for a week or two) that I hope to explain in detail shortly.


July/August: Direct Drive Transport (part 2)

It took a few weeks of trial and error to come up with the following, but it was a lot of fun. You can see half a dozen blind-alleys, over-complications, and wrong answers in my notes before it all shook out into the simple form in the rest of this post.

Interleaved Steps

Like I mentioned last time, I’ve been trying to write my Arduino code with an emphasis on being able to read it straight through so I won’t have to do so much tricky reasoning about what state things are in or which interrupt should be firing next, etc. So, when it came time to step two motors simultaneously at different rates, rather than attempt to do it with PWM units in hardware (where I’m less confident about stopping them exactly at the right step count), I wondered if there might not be an easy way to do it manually.

I’ve got all the stepper motors’ “step” pins connected to the same AVR “port”, so they can be updated simultaneously just by setting a single 8-bit value in the code. Which motor(s) get stepped depends on which bits are set when you change the value. You end up with something like this:

PORTF = mask;          // rising edge on every selected STEP pin
delayMicroseconds(d);
PORTF = 0;             // falling edge on the same pins
delayMicroseconds(d);

Whichever motors are selected in your bitmask will get stepped, the rest will sit still. You can do this in a loop or with some other timing mechanism to handle acceleration and deceleration, but it always comes back to the decision of which motors to step and when.

So, starting with an example where we’ve determined we need to advance the supply reel 9 steps but we only need to advance the take-up reel 3 steps (say, to decrease our film tension a bit while nudging the center of the frame a little), you could imagine we could choose our bitmask for each successive step like this:

The motor with the larger number always gets stepped every cycle. And, ideally, you find the right spacing for the motor that needs to move fewer steps so that the move is as smooth as possible.

At first, I was worried that writing the code to do that would be full of special cases, keeping track of different sub-patterns, worrying about round-off, and all sorts of other details. I mean, when you look at the rest of the ways to fit every combination of fewer pulses into the same 9 steps, it seems like the code to generate all these (nice, symmetrical) patterns would be tricky:

And that’s to say nothing of the real world cases where you might be moving hundreds of steps at a time. (Every fifteenth pair, there is a group of three steps on the 242 line in this example):

While doodling these step patterns in my notes, I eventually noticed some similarities to modular arithmetic. Adding another multiple of the smaller number each cycle, dividing it by the larger, and producing a step anytime the rounded result increased by one started to produce these lovely shapes without any regard to keeping track of… anything else.

It was an easy answer that worked with floating point math and round-off, but the Arduino is an 8-bit processor that doesn’t like doing floating point math if it can help it (and you always have to worry about numerical stability with those sorts of solutions). So, I played with the idea a little more until I found an all-small-integer solution that ended up even simpler.

Here it is, for posterity:

unsigned short s = floor(larger / 2)
for 1..larger
   s += smaller
   if s >= larger
      step both motors
      s -= larger
   else
      step larger motor

That’s it. One addition, one comparison, and maybe one subtraction per cycle. For any sensible number of steps, all the math fits into 16-bit numbers, which avoids generating an onerous number of AVR instructions. And there are never any large numbers that might overflow.
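(For the curious, this is essentially Bresenham's line algorithm, with the accumulator deciding when the slower motor gets its step.) Here's a host-side version that just records the pattern, which is handy for unit testing; on the Arduino, each iteration would write the appropriate bitmask to PORTF instead:

```cpp
#include <vector>

// Returns one entry per cycle: true when *both* motors step, false when
// only the "larger"-count motor steps. Same accumulator scheme as the
// pseudocode above, just collecting the decisions instead of pulsing pins.
std::vector<bool> interleavePattern(unsigned short larger, unsigned short smaller) {
    std::vector<bool> pattern;
    unsigned short s = larger / 2;  // start mid-range to center the steps
    for (unsigned short i = 0; i < larger; ++i) {
        s += smaller;
        if (s >= larger) {
            pattern.push_back(true);   // step both motors
            s -= larger;
        } else {
            pattern.push_back(false);  // step only the larger-count motor
        }
    }
    return pattern;
}
```

Running it with the 9-and-3 example from above yields three evenly spaced "both" steps among the nine cycles.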

So now we can make arbitrary simultaneous interleaved moves. Now the question is how many steps should we be moving?

Maintaining Tension While Moving

Advancing the supply reel reduces tension. Advancing the take-up reel increases tension. After moving S supply steps and T take-up steps we get some change in tension, Δ𝜏.

To relate these, let’s introduce constants a and b with units of “change in tension per step”, which lets us write:

Δ𝜏 = a·S + b·T    (1)

Really, a and b vary slowly over the course of the entire reel, but are relatively constant across many successive frames. While advancing, a is negative and b is positive. (Again, supplying film reduces the tension and taking it up increases it.)

If we can find good (current) values for a and b, we should be able to move reliably while maintaining (or deliberately changing) the tension on the film. One solution might be to stop scanning every once in a while to do a recalibration: make a small move or two on both reels, one after the other, measure the tension change between moves, finding a and b directly. But, short moves are more susceptible to all the sources of noise.

The good news is that there is an easy way to get much better accuracy without even interrupting the film scanning.

Every time we move, we get another data point {S, T, Δ𝜏} which, ideally, we can use to help narrow down a and b. Because this is a noisy dataset, there will never be an exact solution. Instead, we estimate them by finding a least squares fit across all our recent measurements.

(Geometrically, this boils down to finding the best-fitting plane in three dimensions, with one point per recorded motor move. Each point’s distance from the plane is an “error” term. The error is squared to–among other things–always make it positive. Then, the algorithm finds the best plane that minimizes the combined squared error of all the points.)

Least squares fitting is a built-in operation in OpenCV, so long as we can get our problem into the usual Ax=b form:

[ S₁  T₁ ]             [ Δ𝜏₁ ]
[ S₂  T₂ ]   [ a ]     [ Δ𝜏₂ ]
[  ⋮   ⋮ ] · [ b ]  =  [  ⋮  ]
[ Sₙ  Tₙ ]             [ Δ𝜏ₙ ]

The only mildly interesting part here from a code standpoint is interleaving the S’s and T’s in an OpenCV Mat with two columns. Once you’ve got everything stored in the correct size cv::Mat, you just call:

cv::solve(A, b, x, cv::DECOMP_SVD);

Then, read the a and b coefficients straight out of the x Mat.

Despite a lot of noise in the measurements from all the non-linearities and imperfections in the ReelSlow8’s construction, the best fit against the previous dozen or so moves produces remarkably stable estimates for a and b.
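If you'd rather not pull in OpenCV just for this, the two-unknown case collapses to 2x2 normal equations you can solve directly. A dependency-free sketch (my own, not the app's actual code):

```cpp
#include <vector>
#include <cmath>

struct Move { double S, T, dTau; };  // supply steps, take-up steps, tension change

// Least-squares fit of dTau ~ a*S + b*T over recent moves, via the
// normal equations (A^T A)x = A^T b. Returns false when the system is
// degenerate, e.g. when every recorded move is a multiple of the same one.
bool fitTensionModel(const std::vector<Move>& moves, double& a, double& b) {
    double ss = 0, st = 0, tt = 0, sd = 0, td = 0;
    for (const Move& m : moves) {
        ss += m.S * m.S; st += m.S * m.T; tt += m.T * m.T;
        sd += m.S * m.dTau; td += m.T * m.dTau;
    }
    double det = ss * tt - st * st;
    if (std::fabs(det) < 1e-9) return false;  // singular: no unique plane
    a = (tt * sd - st * td) / det;
    b = (ss * td - st * sd) / det;
    return true;
}
```

(OpenCV's `cv::DECOMP_SVD` is more robust near-degeneracy than raw normal equations, which is one reason to prefer the library call in practice.)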

With those in hand, we can now make the next move. We start with however many desired supply reel steps S (say, our guess for what it’ll take to reach the next frame), the current tension 𝜏0, and the desired tension after the move completes 𝜏1. Using the latter two, we can find the desired change in tension:

Δ𝜏 = 𝜏1 − 𝜏0

In the ideal case (where a move always lands exactly at the same tension as before), this would always be zero. But, in the presence of system noise and integer stepping, there are usually a few grams of under- or over-shoot that we also need to correct for.

Now we can solve equation (1) for T and find the number of take-up reel steps required to move how far we want while maintaining the tension we want:

T = (Δ𝜏 − a·S) / b

In practice this works well over distances up to whole frames with the tension barely fluctuating over the course of the move!
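
The whole per-move computation collapses to a couple of lines. A sketch (the helper name is hypothetical; it assumes b is non-zero, which holds while the take-up reel is actually pulling):

```cpp
#include <cmath>

// Given the current least-squares estimates of a and b, the desired
// supply-reel steps S, and the current/target tensions (in grams),
// solve dTau = a*S + b*T for the take-up steps T.
long planTakeupSteps(double a, double b, long S, double tau0, double tau1)
{
    const double dTau = tau1 - tau0;          // desired tension change
    return std::lround((dTau - a * S) / b);   // nearest whole step
}
```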

That said, there is a danger: the estimates are so stable and the results so consistent that it’s important not to make the same move over and over (say, moving exactly the length of one frame).

An example is illustrative: if it takes 750 supply reel steps and 700 take-up reel steps (in this particular area of the reel) to advance one frame without the tension changing (besides the usual noise sources), once your history buffer wraps around, you end up trying to solve:

750·a + 700·b ≈ 0
750·a + 700·b ≈ 0
    ⋮

Even though you have many equations and only two unknowns, the redundancy between them leads to a degenerate (or singular) matrix. All of those points that are being fit to a plane fall on the same line, which means many different planes could pass through that line equally well.

In this case, the coefficients could be -5.5 and 5.9. Or they could be -11.0 and 11.8. Or any other multiple. There is no way to tell! The coefficient estimates can become suddenly unstable once the initial “bad” points have rolled off the end of the list of historical points.

The solution is easy: just vary something. Instead of moving directly to the next frame, get there in two moves and vary the desired tension between them. If your working tension for image capture is 190g, have the intermediate move target somewhere around 140g, just to get a little “contrast” in the readings. Just about anything different will keep that best-fit plane from spinning around the degenerate line.

Once you take that caveat into account, the estimates provided by this model land the moves (at 1/32nd micro-stepping, directly driving 3" reels) within a few grams of the target tension after any number of steps you like, every time.

The best part is that it’s completely adaptive: over the course of the reel, the coefficients will slowly shift as the amount of film on each reel changes. The only real question is how large the history buffer should be. Too short (3 or 4 samples?) and things will be more susceptible to noise. Too long (60 samples?) and it might adapt more slowly than the coefficients are actually changing. A dozen or two movement history data points has worked well.

One last detail: there is a bit of a chicken-and-egg problem with the coefficients. How do you start gathering move data without knowing how far it’s safe to move? The easiest answer is to just use the weaker direct-measurement scheme mentioned above, once, at the start. While longer moves provide more steps to average the tension changes across, it only takes a couple of very small moves to get a reasonable basis to start the least-squares operation from. I have my app do an { S=5, T=0 } move followed by an { S=0, T=5 } move (disregarding any desired tension, just recording what happened after each). Because those are orthogonal as far as the solutions to the linear equation are concerned, it has worked as a good starting point. From there I use the estimates to walk a dozen supply reel steps, then fifty, and by that point the least-squares solution has locked on pretty closely and you can move however you like without fear.

What’s Next

With good sprocket hole estimates and the ability to move smoothly while maintaining tension, the last step before this machine could be said to actually do something useful is figuring out the number of steps to center the next frame in the camera viewport.

My guess is that an adaptive, best-fit scheme will work for that, too! We’ll find out soon. :smiley:

Thanks for sharing the insights from your project; it is great to see another project’s perspective.
Math is not my strong area, so I may not be getting 100% of your post. Most of what isn’t used frequently becomes hard to access :slight_smile:

I also had to deal with the interleaved steppers, first for the two-stepper transport. At the time, as I shared, the ToF was to provide the reel radius, from which it would calculate the corresponding circumference and the steps needed for one frame length along that circumference.

When ToF did not work, I tested how it would work with math & measurements (no feedback).

The diameter changes are somewhat discrete, and for the supply reel take the form:
Diameter of empty reel + film thickness × 2 × number of turns

It is easy to keep track of which turn one is on, or to calculate it. There are fewer than 100 turns on a typical 50 ft Super 8 reel, so it can even be implemented as a constant array.

There are some complications with the fractions of frames/steps, but it is workable with code and variables.

The take-up side is the horizontal mirror of the supply side.

One key difference I should have mentioned for context is that the load cell does not have the elasticity (implied safety margin) that the dancing potentiometer does (spring or bands).

On the interleave approach, of the 3 steppers (capstan, pickup, and takeup), the key is to find the one with the smallest radius (at any given time), and set the pace of the steps based on that scenario. At first, when using the 50mm encoder capstan, the takeup and supply had the smaller-radius scenario (pickup at the start, and supply at the end).

The loop was set to give those steppers the opportunity to step at the ratio implied by the smallest radius. In the large-capstan scenario, that is two possible steps of takeup or supply for each step of the capstan. Then at every capstan step, the pickup and takeup tension is checked to decide if the corresponding stepper needs to skip its step. It is not elegant, but it works extremely well, and tension is virtually constant (keep in mind all steppers are geared, and the changes are very small).

In your case (fewer steps than the geared setup, and no tension elasticity) I am unsure if the above approach would work. Given the quick reaction time of the load cell, I think it would with microsteps, but your approach is certainly more elegant.

Great work, and thanks for sharing the insights, certainly great reading.

September: First Footage!

Advancing to the next frame was like the tension math but with only one variable instead of two, so it boiled down to keeping a simple running average of the last dozen or so estimates and errors. And… that was the last task before hitting minimum viability.

Right now it only uses a single focus point instead of per-channel focus, it uses an integration sphere instead of Köhler lighting + a wet gate, the software is still rather underdeveloped, and after these three hundred or so frames, a couple of nuts loosened themselves to the point of having to stop the capture early. But now it does something! :smiley:

This is very early and I haven’t pinned down even half of the processing chain yet, but here is some footage of my grandfather showing off a silly birthday gift back in 1976. The comparison on the left is from the 480i MiniDV camcorder-pointing-in-a-mirror-box transfer that a couple of my uncles did back in 2011.

Notably, this footage was quite faded and I’m excited to see how much of that information was still hiding in there:

Again, this isn’t anywhere near final, but that is motion stabilized, color corrected, temporally denoised, and finally motion interpolated.

Temporal Denoising

The real star of the show here is NeatVideo’s temporal denoising (set to use the maximum of all five frames before/after):

That is pulling out details you never would have known were there! It wasn’t even clear that wall had wallpaper and after denoising, the pattern is perfectly legible. The double-quote on the t-shirt went from a fuzzy square to clearly double tick marks. The curtains went from “I think there is a pattern…” to being able to pick out details in it. So cool!

Color Profile

The color correction is 95% just done by eye right now. But, now that I’ve established good exposure times for each channel, I was finally able to dredge out my old Wolf Faust K3 Ektachrome calibration target and try to capture an ICC profile.

The machine is set up for scanning 8mm but this is a 35mm calibration slide… so it took thirty-four(!) separate captures, meticulously moving the helping hand that the slide was clipped into a little each time. Worse, everything I tried failed to auto-align the images, so I ended up doing that by hand, too. :grimacing:

CoCa (alternate link) v1.9.7.4 (using Argyll v2.3.1) is free, can read IT8.7 targets (along with the spectral data file that was included with the target), and can generate a profile that turns raw camera pixels (left) into something calibrated (here, sRGB on the right):

Really, it’s a testament to the Pregius sensor that this isn’t doing a whole lot more than applying the 2.2 gamma ramp I requested at the time of profile creation. The response curves and gamut chart (generated by the also-free DisplayCAL) don’t include any surprises:


(The inset triangle is sRGB.)

Outside of the tiny wobbles (which I don’t know how to distinguish from measurement error), it’s almost a “perfect” sensor response.

In addition to generating those charts, DisplayCAL can also convert the ICC profile into a .cube LUT file for use with video editing software.

Because the response is so close to just adjusting the gamma by itself, I could probably leave this step of the color correction process out, but I am encouraged that anything my “deep red” LED might be bringing in strangely/non-linearly is getting corrected before the rest of the manual color correction process.

(As it turns out, once I took the time to catalogue my family’s footage into a spreadsheet of all my upcoming work, there was a lot more Ektachrome in there–about 30% of the collection–than I realized. So, even if this profile is only useful for that type of film stock, it was hopefully time well spent.)

Next Up

I’ve taken a slight detour to do better pre-registration before the motion stabilization step. At 0:10 in the video you can see some shaking. That was a place where the motion tracking (Deshaker 3.1 at the moment) couldn’t cope with the raw frames jumping all over the place, so it fell back on how they were captured. Right now if you watch the sprocket holes from the raw capture, the frame-to-frame movement can be upwards of 60px, jittering in every direction. (That’s about what I expected from my frame advance code. There is plenty of over-scan to compensate for the inaccuracy.)

So now I’m thinking I should at least try to make it easier on the stabilization software by getting the sprocket holes within a whole pixel of the same point by simply translating (and cropping most of the over-scan) before writing the TIFF to disk. Translating on whole-pixel boundaries (with no rotation or scale) is lossless, so it should be safe. On top of reducing the demand on the stabilization algorithm, it’ll hopefully save a little disk space, too, without the redundant bits of previous/next frame above and below.

So that will be some more fun computer vision algorithm work to beef up my center-of-mass based sprocket detection into sub-pixel accurate sprocket hole edge-finding. (So far it looks like it’s just going to be a few more applications of robust least-squares again. RANSAC is a powerful tool and a one-liner in OpenCV!)

While I’m tracking down sprocket holes, I also want to build a software tool to help me align/calibrate the angle of the film in the holder. Once those nuts started to loosen, the frames started to tilt more and more, so having a little gauge in the UI to watch for that failure condition will be helpful. The two rebuilt roller assemblies right in front of the light source use springs and Nyloc nuts now and can be adjusted easily with a single turn of a screwdriver instead of trying to hold/tighten two nuts (above and below the bearings) simultaneously. It’s a huge quality of life improvement.

Beyond that, I’ve got some fit-and-finish work to do on the machine and the software before it’ll be ready to endure a longer test. Still, things are as exciting as they’ve ever been right now!

5 Likes

Very nice results! Congratulations! :tada:

I am wondering what the rest of you Neat Video settings (I guess spatial denoising in particular) look like? In my own trials I always found that applying temporal denoising looked too smooth and actually made the footage appear less detailed/sharp. Your example on the other hand looks very pleasant and, as you describe, makes the details much more visible.

Hmm, the settings aren’t anything out of the ordinary. I think it’s all default except the 5 frames before/after thing.

The first step is always getting a nice noise profile from some neutral part of the image:

I don’t know much about the app, but I’m guessing that “Quality: 93%” is a good thing. I’ve seen it give worse results when that quality metric was lower.

Then, on the next tab, I have everything set to the defaults. If I go out of my way and turn spatial denoising off, the blue speckles show up a little more:

That’s temporal-only. Enabling spatial makes the last vestiges of blue dots go away (as seen before).

In the comparison in my previous post I may have had a mild Unsharp Mask applied to the final result (I made those side-by-sides a couple weeks ago to show my uncles and have since forgotten), but otherwise it’s almost completely NeatVideo’s temporal stuff doing the heavy lifting here.

2 Likes

Nice, thank you for the quick reply!

I think what this really shows is how important a good noise profile is in Neat Video. I’ve been copying the Neat Video filter from one clip to the next without really paying too much attention to the noise profile, assuming it will be roughly similar from one roll of Kodachrome to the next. Presumably, that’s why my results are not as good. (To be fair, the instructions for Neat Video do repeatedly point out the importance of a good noise profile :upside_down_face:).

1 Like

October: Lots of Software Work

I attempted a full 3" reel of Super8. It took 7 hours. There was a lot of babysitting. The last 10% of the reel in particular seemed to fight against my software and took probably half of the total time. Even when things were going smoothly, I felt like I had to sit perfectly still to avoid shaking the desk. I’ve got all 44GB of the resulting 3,609 frames, but there is some work before it’ll be ready to do the other 39 reels in my collection.

So I did some of that work in October.

I started with code that expected everything to work smoothly, failing poorly whenever it didn’t. The real trick will be to move past merely detecting when things don’t go as expected and instead teach the software how to correct for common problems.

Partially out-of-viewport sprocket holes

The sprocket hole detection (that we were talking about in the SnailScanToo thread) was the first part of that work. I ended up dialing some of that complexity back. Now that I’ve got nice control over the vertical alignment, the problem became a little easier. I still calculate the center-of-mass of the over-exposed pixels. But that falls apart for sprocket holes that are half off the top or bottom of the viewport.

Here is a super simple scheme to make it work, even at the viewport edges:

  1. Once at the start of a reel, find a nice, clean, normal sprocket hole and measure its height.

“Height” here is as simple as finding the first and last (contiguous) image rows that contain over-exposed pixels. When the alignment is very close to vertical, this is fine. I actually measure two or three “nice” holes and have the software use the average height.

  2. With a known-good “prototype” sprocket hole height, we can use sprocket position estimates from several sources and average them all together. Each estimate gets a certain number of votes for where it thinks the y-coordinate of the hole is, and at the end the votes are averaged together.

    • The center-of-mass estimate gets 2 votes.
    • The first over-exposed row + height/2 gets 1 vote.
    • The last over-exposed row - height/2 gets 1 vote.

Then, you just use these simple exceptions:

  • If the hole is falling off the top or bottom of the viewport, throw away the center-of-mass votes.
  • Hole touching the top of the viewport? Throw away the first-row vote.
  • Hole touching the bottom of the viewport? Throw away the last-row vote.

It’s very straightforward to implement. And for normal sprocket holes, all the votes basically land on the same Y value and it’s incredibly consistent. Even the dirty or lightly torn holes I’ve seen so far end up close to their center. And it continues to work (just not quite as robustly) without modification, as holes scroll into/out of the viewport.

Frame height

Part of speeding things up will be advancing an entire frame at a time (instead of in three smaller moves). To do that, I need an estimate of each frame’s height. That should be as easy as measuring the distance between the centers of two adjacent sprocket holes, but until now (because of the zoom and framing), I’ve never had two full Super8 sprocket holes in the viewport at the same time.

With the new out-of-viewport sprocket detection, I don’t need to have them completely on-screen anymore! Now, while I’m grabbing the initial known-good sprocket heights at the beginning of a new reel, I can advance by half a frame and grab the sprocket hole separation (again, averaging two or three of them together to get some of the noise out).

So now I’ve got all the quantities I need for a much faster film transport.

Next I’m adding the notion of a frame ID to the app, which increments each time a sprocket hole goes by. Until now, there has been a rigid sequence of “advance by X to measure tension, advance the hole off the top of the screen, advance as much as you guess it’ll take to center the next sprocket hole, take an exposure”.

Now the plan will be something closer to “here is the list of frames we don’t have images of yet; you know which frame you’re currently looking at; plan moves that will center the frames we don’t have yet, then capture them.” Built into that is the idea of smaller correctional moves if any of our guesses/estimates were wrong (say, if the film wasn’t wound tightly and it slips, etc.) Ideally most moves will be correct on the first try, but having it able to take care of itself without being babysat the entire time is the goal!

Film Tension/Focus/Bow

I had a few minutes one day where I was curious about @PM490’s warning of film bow. I thought I had designed a test to check for it. Now that I’ve seen the results, I’m not so sure it tested what I thought it was going to. :sweat_smile:

Still, the results were interesting, so I thought I’d share. Here’s the set up:

  1. Tension the film to some level, T.
  2. Adjust the camera to be in the best possible focus (using the image’s pixel value standard deviation as a proxy for focus).
  3. Do a sweep across many tension values, measuring the focus at each point (keeping the camera position constant).

Repeat that for several values of T, getting a new curve on this plot each time. Finally, it works best when the curves are all normalized to their individual peaks.
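
For reference, the focus proxy from step 2 is just the standard deviation of the pixel values: a sharper image has more edge contrast, which spreads the intensity histogram out. A minimal sketch (the function name is mine):

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Standard deviation of the pixel values, used as a cheap focus
// metric: higher spread generally means more contrast and better
// focus for the same scene and exposure.
double focusMetric(const std::vector<uint8_t>& pixels)
{
    if (pixels.empty()) return 0.0;
    double mean = 0;
    for (uint8_t p : pixels) mean += p;
    mean /= pixels.size();
    double var = 0;
    for (uint8_t p : pixels) var += (p - mean) * (p - mean);
    return std::sqrt(var / pixels.size());
}
```

It is only meaningful as a relative measure while sweeping one variable (here, tension) with everything else held constant.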

The sanity check here is that the peak of each line falls at the tension it was focused on. That’s a good start. (The purple 180g line is arguable. I think there might have been some PETG creep by the time I got all my ducks in a row for that data series.)

Beyond that, reducing or increasing the tension (away from the value used for calibrating the focus) always impacts the focus negatively. That also makes intuitive sense to me.

My hypothesis for the mostly-linear decline as you increase tension (to the right of each peak) is from the physical deformation of the 3D printed spindle holder. The tension is presumably tilting the bolts closer toward the camera.

The most interesting part is the width of each peak. You can summarize it like “the higher the tension you start with, the more tolerant the focus is to variations in tension”. I don’t know whether that finding directly supports the existence of film bow (like I was hoping the experiment might), but it definitely seems related. At lower tension levels, film bow would play a larger factor. And it’s at the lowest tension levels that we see the focus falling off quickest. At higher tension, you’ve basically stretched all of the bow out of the film path, so there is less difference in that tension neighborhood.

I don’t know that there is much to learn here besides walking the tightrope of targeting the highest tension you can without posing a danger to the film. For my part, I think I’m going to target something around 170g.

What’s Next

Hopefully I’ll have something really cool next time. I just spent the last two weeks designing a circuit for a new peripheral for this machine. I finished the PCB layout and placed the DigiKey+JLCPCB orders last night.

If it ends up working, I would be excited if it eventually saw wider use by the community (at least for stop-motion machine designs). It seems like it has a lot of potential to be helpful. And I kept Raspberry Pi signal voltage compatibility in mind while I designed it. This would be a completely open-source release once I’ve got any bugs worked out.

Still, until I can see it working with my own eyes, I’ll keep it a surprise for now. :smiley:

3 Likes

Nice work, and thanks for sharing @npiegdon. Would like to understand a bit better if you can elaborate…

Not sure I follow how you go from standard deviation to focus.

If I follow, all the curves are normalized to their peak (on the sharpness axis)… and the peak of each curve is at the tension it was focused at. Since the x axis is tension, the curves shift to their focus-tension point. I wonder how these would look if the x axis were instead Tension − Focus Tension + 180g, centering the x position of all the curves at their focus.

Bowing?

Also, the higher the tension, the flatter the film.

One important factor that is not mentioned is the lens aperture. While the smallest depth of field would be at the lowest f-number, that is also typically not the sharpest f-stop.