SnailScanToo or 2

In an effort to confirm that the frame movement is caused by the 50mm silicone encoder, I rigged a capstan by stacking the 15mm coupler and wrapping it with silicone rubber bands.
I was concerned that the smaller contact surface would not provide enough grip on the film; fortunately, the film moves well.

Below is a test sequence with the above rig. Except for the last portion, which is camera stabilized with Resolve, all the other clips are without stabilization.

The earlier test was played at 3.3 times the speed to confirm that the beat is similar to the new smaller coupler.

Given the completely different capstan configurations, yet the persistence of the similar movement, I am leaning toward thinking that the problem may be in the stepper-gearbox combination. The axle has a bit of play. Another hint is that the issue scales with the perimeter of the capstan, as confirmed by speeding up the 50mm sequence by the ratio of its diameter to that of the 15mm sequence.

Still not sure if there may be something else that could cause it, but I am now less confident that the issue is the encoder.

The good bit of news is that the frame is only slipping slightly over a sequence of 300 frames. A good start for a digital detector/feedback of sprocket position.


1 rev. of the geared output shaft is 13.734*200 = 2746 steps

When you state that the frame advance is 2109 steps, I assume you mean microsteps. If so, then the frame advance is 2109/16 ~ 131.81 whole steps. The period of the wobble in steps is 21 frames * 131.81 steps/frame, which is ~2768 steps.

If I understood your posts correctly, the number of steps per wobble period is very close to the number of steps per one revolution of the geared output. 2768 vs 2746 steps.
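For anyone who wants to double-check, the arithmetic from the posts above fits in a few lines of Python:

```python
# Numbers from the posts above: 13.734:1 gearbox, 200 full steps/rev,
# 16 microsteps/step, 2109 microsteps per frame, 21-frame wobble period.
gear_ratio = 13.734
steps_per_output_rev = gear_ratio * 200          # ~2746.8 full steps per output rev
frame_advance_steps = 2109 / 16                  # 131.8125 full steps per frame
wobble_period_steps = 21 * frame_advance_steps   # 2768.0625 full steps per wobble

# The wobble period lands within ~1% of one geared-output revolution
print(steps_per_output_rev, wobble_period_steps)
```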

Microsteps. Will update the post for clarity, thanks for mentioning it.

That is the case. Very close, since there is a fraction of a frame left over as a remainder in the ratio between the frame size and the perimeter of the silicone encoder.

Yes, 2109 microsteps.

Another way of looking at it (maybe clearer to those unfamiliar with the TMC) is 3200 microsteps = 200 whole steps, so 2109/3200 x 200 = 131.81 stepper steps.

2746 = 1 revolution.
2768 = 21 frames.

The perimeter of the encoder is not equal to the length of 21 frames.

2746 (Encoder Perimeter) = 1 revolution

2768 is the length of 21 frames, not the same as the encoder perimeter.
The estimate of the wobble is based on whole frames; the fractional remainder is there, but I cannot measure it… it is a fraction of a mm.

https://www.faulhaber.com/en/know-how/tutorials/stepper-motor-tutorial-microstepping-myths-and-realities/

https://www.monolithicpower.com/why-microstepping-isnt-as-good-as-you-think

The available torque of a stepper motor decreases with increasing speed. The incremental torque also decreases with an increasing number of microsteps. Is it possible you are operating at a speed and microstep setting where the available torque at the encoder wheel is insufficient to overcome the load of transporting the reel? We assume that the stepper motor motion is synchronous with the step commands, but the stepper is still an open-loop system.

Have you tried any other speeds or microstep settings? Also, have you disabled stealthChop or tried using spreadCycle mode?

Another thought, I seem to recall (YouTube videos about 3D printer motion accuracy) that stepper motors operate better at 24V compared to 12V.


Thanks @justin, for the information posted and for taking the time to review.
I assume your hypothesis is that the movement cycle is caused by the stepper-driver settings. Certainly worth exploring, and perhaps doing some testing to confirm.

Agree that is the case; in my first build (a stepper driving a projector gate) I was using 12V, and switching to 24V improved the performance of the stepper significantly.

The setup for the current project uses a dedicated 32V power supply for the steppers.

I am using the TMC in legacy mode, only controlling STEP and DIR, with a fixed microstep configuration on MS1 and MS2. Controlling the settings of the TMC via UART is a project in itself; maybe I will tackle that option in a second revision of the scanner, but not at this time.

In writing the code for the movements, I did, but not in the context of the frame wobble.
Presently the system uses a very conservative speed, mostly to keep the scanner as quiet as possible. The timing of the steps also uses linear speed control for smooth acceleration/deceleration.
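As a sketch of what such linear speed control typically looks like, here is the common AVR446-style delay recurrence (this is illustrative only, not the scanner's actual firmware; all values are made up):

```python
def ramp_delays(total_steps, accel_steps, c0=2000.0):
    """Per-step delay table (in arbitrary time units) for a linear ramp.

    Uses the Taylor-series recurrence from the Atmel AVR446 app note:
    c_n = c_{n-1} - 2*c_{n-1}/(4n+1), which approximates constant
    acceleration. c0 is the initial (slowest) step delay.
    """
    # Acceleration phase: delays shrink step by step
    accel = [c0]
    for n in range(1, accel_steps):
        accel.append(accel[-1] - 2.0 * accel[-1] / (4 * n + 1))
    # Cruise phase at constant speed, then a mirrored deceleration ramp
    cruise = [accel[-1]] * max(0, total_steps - 2 * accel_steps)
    decel = accel[::-1]
    return (accel + cruise + decel)[:total_steps]
```

On the Pico, each entry would become the wait between STEP pulses; longer ramps trade speed for quietness and reduced missed-step risk.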

Everything is possible and worth confirming. However, since the system is using linear speed control and a very conservative speed, that greatly reduces the likelihood of this being the case.

The consistency of the movement over the number of frames corresponding to one capstan turn, with multiple starts and stops while scanning the corresponding frames, and then over multiple turns of the system, leads me to think that the driver is doing its work.

Anything is possible. When able, I will set up some tests in the coming days and confirm.
Thanks again, stay tuned.

Quick update

I put the testing of the steppers on a brief hold after getting stuck with an issue (it looks like a Pico SDK getchar bug; more info in the post): the serial-USB communication would hang randomly on a stuck character in the Pico.
After a lot of testing, and better code with guardrails, the workaround was to use getchar_timeout_us with a zero argument.

I am currently implementing a simple command protocol/parser for the serial-USB communication with the Pico, which will simplify changing stepper/transport parameters while continuing to test what now appears to be an inherent eccentricity issue of the geared steppers.

It is always nice when reality imitates CAD… the HQ mounting piece arrived: nice 3mm aluminum, and a big improvement to the stability of the lens/sensor.


The back mount uses small slots, so the assembly can be adjusted to have the T42 extension resting nicely on the bottom plate.

This reduces the strain and lever arm of the lens on the HQ's plastic mount.

The sensor is held to the back with 2.5mm standoffs (10mm long), and also with its 1/4-inch screw on the side. The mount is now attached to the sliders with the forward mounting screw, a real improvement over mounting from the plastic lens mount of the HQ.

This was worth the price of the laser cutting/bending… the piece was US$14 (not including shipping/taxes/minimum order).

Stay tuned.


In order to continue investigating the root cause of the transport's advancing rhythm, it was necessary to have an objective measurement of the sprocket-hole position.

One of the alternatives in the forum is the method by @cpixip described in Simple Super-8 Sprocket Registration.

Another method was proposed by @npiegdon in Simple(r?) Super-8 Sprocket Detection.

While drinking from the firehose of picamera2, Python, and OpenCV, here is another idea.

Using Successive Approximations for Sprocket Hole Detection
This is a very rough prototype, but the results are impressive. Using the full resolution of the HQ sensor, an area of interest is extracted, and successive approximations are used to determine the top and left edges of the sprocket hole.

The blue lines are the axes chosen for the y and x sampling; green corresponds to the detected top edge, and red corresponds to the detected left edge. The time shown is measured from just before capture to just before adding the text and presentation.

The movement control is via USB serial with a simple protocol to control the Pico. The PuTTY screen echoes what each command translates to.

Intentionally, the sprocket is moved down outside the range of the successive approximation, then moved back into range for illustration. This test sequence also includes a splice that covers the sprocket hole.

While not implemented in the prototype, it would be simple and fast to probe in multiple locations (blue lines).

I will polish it a bit to continue testing the mechanics and transport, but I think this alternative speeds up the frame-location sequence significantly.


Having a hard time drinking from the Python Qt5 hydrant to make a decent GUI.
In the meantime, sharing a bit more detail.

Successive Approximations
This method is widely used in ADCs, where a DAC provides a comparison value against the level one wishes to measure.

If one takes a slice of the film within the area of the sprocket, the idea is that Y becomes the level (the analogy of the DAC output), and the pixel content (black or white) becomes the result of the comparison (whether the DAC is higher or lower than the level, in this case the edge). For this to work without additional processing, the requirement is that the film edge above/below the sprocket hole is darker than the hole.

    # im0: full-resolution capture from the HQ sensor (e.g., picamera2 array)

    # Detection area (region of interest containing the sprocket hole)
    im3 = im0[:1023, :511]  # y:yw, x:xw

    # Detection parameters
    edge_level = 127     # threshold separating film edge (dark) from hole (bright)
    sampling_axis = 255  # x coordinate of the vertical sampling line

    # Detect top edge of hole: binary search from bit 8 down to bit 0,
    # keeping each step that still lands on a dark (film) pixel
    tedge = 0
    for yt in range(8, -1, -1):
        if (im3[tedge + (2**yt)][sampling_axis][2] < edge_level and
            im3[tedge + (2**yt)][sampling_axis][1] < edge_level and
            im3[tedge + (2**yt)][sampling_axis][0] < edge_level):
            tedge = tedge + 2**yt

    # Detect bottom edge of hole: same search, starting at the top edge,
    # keeping each step that still lands on a bright (hole) pixel
    bedge = 0
    for yt in range(8, -1, -1):
        if (im3[bedge + (2**yt) + tedge][sampling_axis][2] > edge_level and
            im3[bedge + (2**yt) + tedge][sampling_axis][1] > edge_level and
            im3[bedge + (2**yt) + tedge][sampling_axis][0] > edge_level):
            bedge = bedge + 2**yt
    bedge = bedge + tedge

    sprocket_height = bedge - tedge

    # Detect left side of hole along the horizontal line at mid-hole height
    xedge = 0
    for xt in range(7, -1, -1):
        if (im3[tedge + int(sprocket_height/2)][xedge + 2**xt][2] < edge_level and
            im3[tedge + int(sprocket_height/2)][xedge + 2**xt][1] < edge_level and
            im3[tedge + int(sprocket_height/2)][xedge + 2**xt][0] < edge_level):
            xedge = xedge + 2**xt
In the above code, for the bottom edge all three channels (RGB) have to be above the edge_level value; for the top and left edges, all three have to be below it.

Simple Waveform
For testing, I also coded a simple Python/OpenCV waveform, trying to keep the processing reasonable. The code is quite simple.

    # requires: import cv2; import numpy as np; piCam is a started Picamera2 instance

    # capture from camera (full resolution, 4056x3040)
    im0 = piCam.capture_array()

    # create 1/8-size copy for presentation
    # 4056x3040 -> 2028x1520 -> 1014x760 -> 507x380
    im1 = cv2.resize(im0, (0, 0), fx=0.125, fy=0.125)

    # compress vertically to ~10 rows of samples, grayscale for the waveform
    wvfim = cv2.cvtColor(cv2.resize(im1, (0, 0), fx=1, fy=0.025), cv2.COLOR_BGR2GRAY)

    wvfm = np.zeros((256, 504), dtype="uint8")
    # horizontal level markers
    for gy in range(0, 255, 26):
        wvfm = cv2.line(wvfm, (0, gy), (503, gy), 64, 1)

    # waveform: plot each sampled row's intensity per column
    for gx in range(0, 503, 1):
        for gy in range(0, 9, 1):
            wvfm[255 - wvfim[gy][gx]][gx] = 192

im0 is the full-resolution capture.
im1 is the 1/8-resolution image that is presented.
wvfim is compressed vertically to reduce the number of vertical samples to 10.

Here is how the above looks on a Raspberry Pi 4, all presented in OpenCV windows.
Commands are typed in PuTTY to the Pico controller, which handles the light and transport (steppers and tension).


That’s looking great so far. I especially like the waveform view.

Reading the code, I was trying to understand what sampling_axis is. Are you just looking at a single vertical strip of pixels for the detection? (Is that what the blue line is showing?) Does that explain why the frame that appears at 1:28 (with the little aberration at the bottom, on the blue line) has the bottom edge detected a little higher than the other frames?

I’ve coincidentally been working on similar detection (to extend the simple center-of-mass scheme) and been wondering what to do about those little flecks and other bits that appear in the hole area. So far I’ve had some success with robust least-squares (M-estimators, RANSAC, etc.) because they throw those sorts of outliers away without affecting the resulting edge fit.

To find the left edge, I scan from the center of the image over to the side and record the first point (of each row) that exceeds some threshold. Then, it’s just a call to OpenCV’s fitLine against that list of points using anything except the DIST_L2 method (so you get the “robust” part of the line fitting).

This was a low-tech visualization, but here is the result after plotting the points and the line it found in Excel:

All of those outliers on the rounded corners were ignored automatically! You end up with this great sub-pixel fit. (I picked this frame because it was particularly crooked. I’m also using the angle of the detected line as an alignment aid.)

I haven’t gotten this far yet, but my plan for the top/bottom lines was to do the same thing: scan from above and below, looking for the first pixels beyond a threshold and fit two more lines.

But we don’t have to stop there. Our problem domain gives us more information/constraints than that: we know the lines for the left and top/bottom should be right angles. And we can keep a running average height of all the sprocket holes we’ve seen in this reel to compare against the distance between the top/bottom lines. (E.g., if they’re very far off from the running average we know we either have a damaged sprocket hole or our detection was bad.)

Combining those extra constraints with the fit lines (somehow, using a lot of hand-waving), we should be able to get very close to ground truth. At least, that’s the plan. It’s been a fun exercise so far.

(All of that said, it looks like the vertical alignment on your machine is so good that you can dispense with half the problem I’ve been dealing with.) :smiley:


It is the x coordinate corresponding to the vertical sampling axis (blue line from top to bottom).

Yes, only one vertical line (currently). Although two instances of successive approximation, one for the top edge, another for the bottom edge.

Yes, that is a perfect example that although simple, one axis is not perfect.

Also, the horizontal blue line indicates the location where the third successive approximation is applied to find the left edge of the sprocket.

Keep in mind a significant constraint: everything is performed within the confines of the Raspberry Pi 4/Python processing power.

Interesting. When you indicate that this particular frame is crooked… does that mean that the angle varies from frame to frame?

An improvement to the successive-approximation algorithm would be to perform a couple of additional passes: after the initial results for the 3 locations, use these as a baseline sprocket-hole location and perform additional approximations (which would need fewer bits) at other strategic locations. The results would increase the measurement confidence and provide points to extrapolate a line.

Thanks! (A lot of work went into aligning the camera; still in progress.) Also, without a gate, the location of the sprocket does move (left-right). I am considering building some geared guide rollers with vertical limits (8mm and 16mm), but for now I would like to better understand the peculiarities of the transport.

Let me add a little bit to the discussion

That was one of the last frames I’d taken before I’d noticed the nut had become loose on one of my rollers. Over the course of 300 frames it had begun tilting out of alignment more and more. Normally it doesn’t vary from frame to frame, but that problem did highlight that if I’m already finding the sprocket holes, it wouldn’t hurt to add the angle information someplace in the UI as a simple alignment tool.

Hopefully I’ll only need to use it once to calibrate things, but if something loosens again, it’ll be there.

That’s a good reminder; I’m used to thinking from the perspective of having an overpowered desktop computer.

That said, just your 1/8 down-sample is more expensive than scanning through the rows once to find that line on the full resolution image. (Well, unless you’re using nearest-neighbor filtering or something that is actually skipping pixels). The method I described should be possible to do in real-time, even on the RPi.

That should solve the problem. That would make our methods even more similar. I can tune the runtime of my algorithm by skipping rows (or columns when checking for the top/bottom lines). And you can tune yours by adding more pixel slices. At some point we’d meet in the middle and it would practically be the same solution. :smiley:

Those are good example images. That’ll be a good “test suite” to keep around to run my algorithm against. Even that really damaged one in the middle should be mostly salvageable using the extra constraints I mentioned before. There could be a metric that “trusts” a particular top/bottom line more when it’s closer to being at 90 degrees from the vertical line, with more trust added to the line based on how many pixels of the detected boundary pixels fall near it.

For that damaged middle sprocket hole, the bottom line would be shorter, at a worse angle, and substantially taller than the average sprocket hole. At a minimum, a frame like that could be flagged for manual alignment later.


Thank you @cpixip; I had seen the set, and with it in mind there was this caveat:

Additionally:

  • Notice that for the horizontal sprocket position, it uses the hole edge away from the image, which in the sample images is cropped and appears to be obstructed.

  • The above is also applied to a slice where the hole is expected; it is not really intended to find the hole across the entire frame (at least not without some additional work to position the successive approximation in the region where the hole is, as is done with the bottom edge).

Setting aside the above, it should work if the illumination is flat, although the set level would be a bit more critical.

I found something consistently odd in most sprocket holes in your sample: with the exception of the first and the last, where it may not be visible, the levels above/below the hole are brighter than through the hole… not sure why that would be, or how it could be. Automatic lens-shading correction?
It is particularly noticeable in numbers 2 and 3 from left to right, but also in number 4, where the markings are brighter than the hole.


If it wasn’t for that reverse-shading, and the left missing portion of the sprocket, the above would work… the correct level would be more critical for items 2 and 3 (from left to right), but it would work, and on the missing portion of the sprocket on item 5 should work too.

Interestingly, even with the reverse-shading, the blue channel looks like it will work for all.

I’m guessing that’s the local contrast introduced by Mertens fusion.

To make a test suite usable with my app, I just broke them apart into five separate images, made them grayscale (using the blue channel), and adjusted the levels until just the hole was over-exposed. These 16-bit TIFFs are most like the input my app is expecting, but just in case it’s useful to anyone else:

Challenging Sprockets.zip (326.5 KB)

(They are mirrored horizontally, which is required due to a quirk of my machine. But that should be easy to adjust.)

… could very well be. These are old images, and I recently abandoned Mertens for a fixed exposure and raw files. That would make it easier to set a fixed threshold to single out the area of the sprocket hole, which has an intensity slightly above any blank image area. With that, a center-of-mass approach could be quite fast and precise.

One could also introduce early consistency checks. For example, the height and width of a normal sprocket stay within close bounds. Similarly, the detected lengths of the straight lines at the top and bottom of the sprocket should exceed a certain limit for the border to be considered “non-corrupted”.
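A sketch of how such checks might combine into a simple trust value (all names and thresholds here are illustrative, not from any actual implementation):

```python
def sprocket_trust(height, width, top_run, bottom_run,
                   expected_height, expected_width, min_edge_run):
    """Early consistency checks, combined into a trust score in [0, 1].

    expected_height/expected_width would come from a running average over
    previous frames; min_edge_run is the minimum length (in pixels) of
    straight edge required to call a border non-corrupted. The 5% bound
    is an illustrative assumption.
    """
    checks = [
        abs(height - expected_height) <= 0.05 * expected_height,
        abs(width - expected_width) <= 0.05 * expected_width,
        top_run >= min_edge_run,
        bottom_run >= min_edge_run,
    ]
    return sum(checks) / len(checks)

# trust == 1.0 -> sprocket looks normal;
# trust == 0.0 -> "no sprocket found, you're on your own"
```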

All of your algorithms should run fine on an RPi 4 - my own algorithm does as well and is used to fine-adjust the sprocket position during scanning.

Here is a quick test, using the successive approximations, and the blue channel of each testing slice. The threshold is set from the maxVal using minMaxLoc on the slice. The waveform corresponds to the center of the slice, which is the axis used for the successive approximations.

As expected, on the third one the broken sprocket hole measures larger; however, the top edge is quite nice.

This has been an interesting test; normally I would not have considered the blue channel as the best across the board until after creating the waveforms.

That is interesting, and something I did not know. Would Debevec also have similar effects?

Thank you both for the insightful exchange.

Very nice results!

No, as Mertens employs a Laplacian pyramid behind the scenes and Debevec does not. The latter is a purely local algorithm, operating on each pixel individually. Mertens is non-local and could produce such a behaviour. However, these scans are quite old - I am not even sure what the illumination and the camera were at that time.


Here’s a neat idea I’ve been tinkering with most of the day. Start with the same two steps as before:

  1. Center-of-mass on pixels above a certain threshold to find a rough center Y position. (I talked about that part in my August p.1 ReelSlow8 update.)

  2. Use a robust linear regression to find a nice sub-pixel left edge. (I showed that a few posts up.)

To pick some concrete point for frame registration you could just use the center-of-mass Y and find where it intersects the left edge line. But, that is sensitive to sprocket damage and dirt: there may be fewer pixels above the threshold value on one side of the hole and that would incorrectly shift the center of mass.

So, getting some better idea of the top/bottom edges would help… but trying to detect those lines would be sensitive to the kind of torn sprocket hole damage in @cpixip’s example images!

We do have some handy “constants” though: the height of a sprocket hole shouldn’t change much from frame to frame. And (one I haven’t heard mentioned yet): the radius of the rounded inner corners should also be consistent!

Check this out:
example1

To make the geometry easier, I’m assuming my machine will be calibrated to within, say, half a degree of vertical by the time this step matters.

What’s going on here is that I’m fitting two arcs (critically, of the same radius) to the rounded corner points. (I’m drawing the whole circle just for the visualization; the actual fit is only being done for quadrant II and III for the top and bottom corners, respectively.)

The small green circle in the middle is simply the midpoint of the two circles and should be a more accurate frame registration point than just relying on the center-of-mass.

This is using the downhill simplex method again, an algorithm that keeps trying different points in the search space to find the minimum value of some error function.

The constants (for a given frame) are the “left edge” points (like those plotted above in that Excel graph a few posts ago), the center of the left edge line (using the center-of-mass Y), and the running average of the sprocket heights and corner radii from the last dozen or so frames.

The parameters being optimized are the actual center Y value, the sprocket height, and the corner radii.

The error function (so far) used to find the best fit boils down to:

totalError = cornerFitError + heightError + radiusError.

Those last two use the historical running averages and end up being something like (height - averageHeight)^2.
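Spelled out, the error function might look something like this (my reconstruction from the description above, not the actual code; the arcs share one radius and are assumed tangent to the left-edge line at x = left_x, so their centers sit at x = left_x + radius):

```python
import math

def total_error(params, left_x, corner_pts_top, corner_pts_bottom,
                avg_height, avg_radius, w_h=1.0, w_r=1.0):
    """totalError = cornerFitError + heightError + radiusError.

    params is the vector the downhill simplex optimizes:
    (center_y, height, radius). The weights w_h/w_r are assumptions.
    """
    center_y, height, radius = params
    cx = left_x + radius  # arc centers, tangent to the left-edge line

    def arc_error(points, cy):
        # squared radial distance of each corner point from the arc
        return sum((math.hypot(px - cx, py - cy) - radius) ** 2
                   for px, py in points)

    corner_fit = (arc_error(corner_pts_top, center_y - height / 2 + radius) +
                  arc_error(corner_pts_bottom, center_y + height / 2 - radius))
    height_err = w_h * (height - avg_height) ** 2
    radius_err = w_r * (radius - avg_radius) ** 2
    return corner_fit + height_err + radius_err
```

Feeding this to any simplex minimizer over (center_y, height, radius) reproduces the fit described above: perfect data yields zero error, and deviations from the running averages are penalized quadratically.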

Now I don’t have this pulled all the way through my system yet, so the frame-to-frame averaging is running continuously instead of only once per capture. If that weren’t the case, this second example would have been even closer:

example2

But, by the time I took the screenshot it already had a chance to “train” itself on its own input for a hundred or so frames. So it had the time to find a height and radius that were farther from the values established in the previous frames (because of that bit of junk at the top left corner of the hole).

Still, this is already a better result than the center of mass would have given. That shred(?) of film stock would have materially moved the detected sprocket center down a few pixels.

It’s a little tricky to run this against the test suite of bad sprocket holes because the dimensions aren’t the same and I don’t have a bunch of previous frames to build up a running history of heights/radii, but so far this is quite promising.


I'd just like to mention that the typical way to detect lines and circles is the Hough transform. In fact, if I remember correctly, this was what Matthew used in the software of the first Kinograph.

Introducing early consistency checks during the algorithm could be used to drop a broken section of a sprocket from further consideration. So a simple kind of trust value could be calculated for each detected sprocket. In the extreme case, the sprocket algorithm should be able to signal “no sprocket found, you’re on your own”…

The Hough-transform detects lots of arbitrary lines. I just want a single, specific line that we already know a lot about. And for the post above, we’re not detecting arbitrary circles; we’re detecting exactly two π/2 arcs that are tangent to a known line and symmetric about an unknown point. I don’t know of any off-the-shelf algorithms that do that. So using argmin with an error function that evaluates all of those constraints at once seemed like a nice fit that is already working well in practice. The error function is the trust value you described.

Since writing “… and I don’t have a bunch of previous frames to build up a running history…” yesterday, I’ve reconsidered: why stop at just using frames from the past when we could also use frames from the future?! :rofl:

Instead of running this detection as the frames come in, now I want to try using an offline, two-pass version of the algorithm once the full, uncropped frames are already on disk.

The first pass would detect these sprocket parameters (height and corner radius) for every frame in the reel without any regard to the other frames. Once we’ve got that pile of data, we can throw away the outliers and come up with some definitive answer for height & radius of the sprockets in that reel using the average (or median) of everything we looked at.

Then, the second pass would go through the detection process again, this time forcing the height/radius parameters, in order to find the best fit for just the sprocket position.

By making it offline and after-the-fact, it could flag those frames where no fit or a bad fit could be found. Then, that last 1% could be tagged by hand, interactively. Just sort the list of frames by the error function value, descending, and you’ll be presented with all of the worst fits.
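The two-pass plan could be sketched as follows; detect and detect_with are hypothetical stand-ins for the per-frame fits described above:

```python
import statistics

def two_pass_registration(frames, detect, detect_with):
    """Offline two-pass sprocket registration over a folder of frames.

    detect(frame)            -> {'height': ..., 'radius': ...} (free fit)
    detect_with(frame, h, r) -> {'frame': ..., 'error': ...}  (forced fit)
    Both are hypothetical; names/signatures are illustrative only.
    """
    # Pass 1: unconstrained fit on every frame, then a robust consensus
    first = [detect(f) for f in frames]
    height = statistics.median(r["height"] for r in first)
    radius = statistics.median(r["radius"] for r in first)

    # Pass 2: re-fit each frame with the consensus parameters forced
    final = [detect_with(f, height, radius) for f in frames]

    # Worst fits first, ready for interactive review of "that last 1%"
    final.sort(key=lambda r: r["error"], reverse=True)
    return final
```

Using the median for the consensus means a few badly torn sprocket holes in pass 1 cannot drag the reel-wide height/radius estimate off target.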

As an offline process that just operates on a folder full of frames, this could be a stand-alone tool that might be useful to a larger audience than just me, too. If it ends up working out well, I’ll try to share it with the community. :smiley: