SnailScanToo or 2

Quick update

Put the testing of the steppers on a brief hold. Got stuck with an issue (it looks like a PICO SDK getchar bug; more info in the post). The serial-USB communication would hang randomly, choking on a character in the PICO.
After a lot of testing, and better code with guardrails, the workaround was to use getchar_timeout_us with a zero timeout argument.

Currently implementing a simple command protocol/parser for the serial-USB communication with the PICO, which should simplify changing parameters for the steppers/transport while continuing to test what now appear to be inherent eccentricity issues of the geared steppers.
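For illustration, the host side could look something like this rough pyserial sketch; the port name, baud rate, command letters, and echo behavior are placeholders, not the actual protocol.

    # Hypothetical host-side helper for sending one-letter commands with a
    # numeric argument to the PICO over USB serial; format is illustrative only.
    import serial

    pico = serial.Serial("/dev/ttyACM0", 115200, timeout=1)

    def send_command(letter, value):
        """Send a command like "F 120\n" and return whatever the PICO echoes back."""
        pico.write(f"{letter} {value}\n".encode("ascii"))
        return pico.readline().decode("ascii").strip()

    # e.g. a hypothetical "advance the film stepper 120 steps" command
    print(send_command("F", 120))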

It is always nice when reality imitates CAD… the HQ mounting piece arrived, nice 3mm aluminum, and a big improvement to the stability of the lens/sensor.


The back mount uses small slots, so the assembly is adjusted to have the T42 extension nicely resting on the bottom plate.

It reduces the strain and lever arm of the lens on the HQ's plastic mount.

The sensor is held to the back with 2.5mm (10mm long) standoffs, and also with its 1/4-inch screw to the side. The mount is now attached to the sliders with the forward mounting screw, a real improvement over mounting from the plastic lens mount of the HQ.

This was worth the price of the laser cutting/bending… the piece was US$14 (not including shipping/taxes/minimum order).

Stay tuned.

2 Likes

To continue investigating the root cause of the transport's advancing rhythm, it was necessary to have some objective measurement of the sprocket-hole position.

One of the alternatives in the forum is the method by @cpixit described in Simple Super-8 Sprocket Registration.

Another method was proposed by @npiegdon in Simple(r?) Super-8 Sprocket Detection.

In drinking from the firehose of the picamera2, python, and opencv, here is another idea.

Using Successive Approximations for Sprocket Hole Detection
This is a very rough prototype, but the results are impressive. Using the full resolution of the HQ sensor, an area of interest is extracted, and successive approximations used to determine the top and left edges of the sprocket hole.

The blue lines are the axes chosen for the y and x sampling; green corresponds to the detected top edge, and red to the detected left edge. The time presented is measured from just before capture to just before adding the text and presentation.
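As a rough guess at how such an overlay could be drawn with OpenCV (BGR colors): `im1` is a 1/8-scale preview, `sampling_axis`, `tedge`, and `xedge` come from the detection code shown further below, and the /8 mapping from full-resolution coordinates onto the preview is an assumption.

    # Sketch of the overlay only; coordinate scaling is assumed.
    import cv2

    h, w = im1.shape[:2]
    cv2.line(im1, (sampling_axis // 8, 0), (sampling_axis // 8, h), (255, 0, 0), 1)  # blue sampling axis
    cv2.line(im1, (0, tedge // 8), (w, tedge // 8), (0, 255, 0), 1)                  # green top edge
    cv2.line(im1, (xedge // 8, 0), (xedge // 8, h), (0, 0, 255), 1)                  # red left edge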

The movement control is via USB serial, with a simple protocol to control the PICO. The PuTTY screen echoes what each command translates to.

Intentionally, the sprocket is moved down outside the range of the successive approximation, and moved back into the range for illustration. Also in this test sequence is a splice that covers the sprocket hole.

While not implemented in the prototype, it would be simple and fast to probe in multiple locations (blue lines).

I will polish it a bit and continue testing the mechanics and transport, but I think this alternative speeds up the frame-location sequence significantly.

2 Likes

Having a hard time drinking from the Python Qt5 hydrant to make a decent GUI.
In the meantime, sharing a bit more detail.

Successive Approximations
This method is widely used in ADCs, in the form of a DAC providing a comparison value against the level that one wishes to measure.

If one takes a slice of the film within the area of the sprocket, the idea is that Y becomes the level (the analogy of the DAC), and the pixel content (black or white) becomes the result of the comparison (whether the DAC is above or below the level, in this case the edge). For this to work without additional processing, the requirement is that the edge of the film above/below the sprocket hole is darker than the hole itself.

    # Detection Area
    im3 = im0[:1023, :511]  # y:yw, x:xw

    # Detection Parameters
    edge_level = 127      # threshold between film (dark) and hole (bright)
    sampling_axis = 255   # x position of the vertical sampling line

    # Detect Top Edge of Hole (successive approximation, 9 bits)
    tedge = 0
    for yt in range(8, -1, -1):
        if (im3[tedge+(2**yt)][sampling_axis][2] < edge_level and
            im3[tedge+(2**yt)][sampling_axis][1] < edge_level and
            im3[tedge+(2**yt)][sampling_axis][0] < edge_level):
            tedge = tedge + 2**yt

    # Detect Bottom Edge of Hole (measured downward from the top edge)
    bedge = 0
    for yt in range(8, -1, -1):
        if (im3[bedge+(2**yt)+tedge][sampling_axis][2] > edge_level and
            im3[bedge+(2**yt)+tedge][sampling_axis][1] > edge_level and
            im3[bedge+(2**yt)+tedge][sampling_axis][0] > edge_level):
            bedge = bedge + 2**yt
    bedge = bedge + tedge

    sprocket_height = bedge - tedge

    # Detect Left Side of Hole (along the horizontal mid-line of the hole)
    xedge = 0
    for xt in range(7, -1, -1):
        if (im3[tedge+int(sprocket_height/2)][xedge+2**xt][2] < edge_level and
            im3[tedge+int(sprocket_height/2)][xedge+2**xt][1] < edge_level and
            im3[tedge+int(sprocket_height/2)][xedge+2**xt][0] < edge_level):
            xedge = xedge + 2**xt
In the above code, all 3 channels (RGB) have to agree: all below edge_level for a pixel to count as film, or all above edge_level to count as the hole.

Simple Waveform
For testing, I also coded a simple Python/OpenCV waveform monitor, trying to keep the processing reasonable. The code is quite simple.

    # capture from camera (full resolution)
    im0 = piCam.capture_array()

    # create 1/8-size image for presentation
    im1 = cv2.resize(im0, (0, 0), fx=0.125, fy=0.125)  # 4056x3040 -> 507x380
    # 3040,4056 - 1520,2028 - 760,1014 - 380,507

    # compress vertically to ~10 rows and convert to grayscale for the waveform
    wvfim = cv2.cvtColor(cv2.resize(im1, (0, 0), fx=1, fy=0.025), cv2.COLOR_BGR2GRAY)

    wvfm = np.zeros((256, 504), dtype="uint8")
    # level markers (horizontal graticule lines)
    for gy in range(0, 255, 26):
        wvfm = cv2.line(wvfm, (0, gy), (503, gy), 64, 1)

    # waveform: plot each sampled row's brightness against its x position
    for gx in range(0, 503, 1):
        for gy in range(0, 9, 1):
            wvfm[255 - wvfim[gy][gx]][gx] = 192

im0 is the full-resolution capture.
im1 is the 1/8-resolution image that is presented.
wvfim is compressed vertically to reduce the number of vertical samples to 10.

Here is how the above looks on a Raspberry Pi4, all presented with opencv windows.
Commands are typed on Putty to the Pico controller, which handles light and transport (steppers and tension).

2 Likes

That’s looking great so far. I especially like the waveform view.

Reading the code, I was trying to understand what sampling_axis is. Are you just looking at a single vertical strip of pixels for the detection? (Is that what the blue line is showing?) Does that explain why the frame that appears at 1:28 (with the little aberration at the bottom, on the blue line) has the bottom edge detected a little higher than the other frames?

I’ve coincidentally been working on similar detection (to extend the simple center-of-mass scheme) and have been wondering what to do about those little flecks and other bits that appear in the hole area. So far I’ve had some success with robust least-squares (M-estimators, RANSAC, etc.) because they throw those sorts of outliers away without affecting the resulting edge fit.

To find the left edge, I scan from the center of the image over to the side and record the first point (of each row) that exceeds some threshold. Then, it’s just a call to OpenCV’s fitLine against that list of points using anything except the DIST_L2 method (so you get the “robust” part of the line fitting).
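For concreteness, here is a rough Python/OpenCV sketch of that procedure (not the actual implementation; the orientation, threshold, and scan start are assumptions):

    # Sketch only: collect per-row boundary points by scanning from the image
    # center toward the side, then fit a robust line through them.
    # `img` is a grayscale numpy array with the sprocket hole on the left half.
    import cv2
    import numpy as np

    def fit_left_edge(img, threshold):
        x_start = img.shape[1] // 2          # start the scan at the image center
        points = []
        for y in range(img.shape[0]):
            row = img[y, x_start::-1]        # scan leftward, toward the hole
            hits = np.nonzero(row > threshold)[0]
            if hits.size:                    # first pixel brighter than the threshold
                points.append((x_start - hits[0], y))
        pts = np.array(points, dtype=np.float32)
        # any distance type except DIST_L2 gives the robust, outlier-tolerant fit
        vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_HUBER, 0, 0.01, 0.01).ravel()
        return (vx, vy), (x0, y0)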

This was a low-tech visualization, but here is the result after plotting the points and the line it found in Excel:

All of those outliers on the rounded corners were ignored automatically! You end up with this great sub-pixel fit. (I picked this frame because it was particularly crooked. I’m also using the angle of the detected line as an alignment aid.)

I haven’t gotten this far yet, but my plan for the top/bottom lines was to do the same thing: scan from above and below, looking for the first pixels beyond a threshold and fit two more lines.

But we don’t have to stop there. Our problem domain gives us more information/constraints than that: we know the lines for the left and top/bottom should be right angles. And we can keep a running average height of all the sprocket holes we’ve seen in this reel to compare against the distance between the top/bottom lines. (E.g., if they’re very far off from the running average we know we either have a damaged sprocket hole or our detection was bad.)

Combining those extra constraints with the fit lines (somehow, using a lot of hand-waving), we should be able to get very close to ground truth. At least, that’s the plan. It’s been a fun exercise so far.

(All of that said, it looks like the vertical alignment on your machine is so good that you can dispense with half the problem I’ve been dealing with.) :smiley:

1 Like

It is the x coordinate corresponding to the vertical sampling axis (blue line from top to bottom).

Yes, only one vertical line (currently), although there are two instances of successive approximation: one for the top edge, another for the bottom edge.

Yes, that is a perfect example that, although simple, a single axis is not perfect.

Also, the horizontal blue line indicates the location where the third successive approximation is applied to find the left edge of the sprocket.

Keep in mind that a significant constraint is that this is performed within the confines of the Raspberry Pi 4/Python processing power.

Interesting. When you indicate that this particular frame is crooked… does that mean that the angle varies from frame to frame?

An improvement to the successive-approximation algorithm would be to perform a couple of additional passes: after the initial results for the 3 locations, use these as the baseline sprocket-hole location and perform additional approximations (which would need fewer bits) at other strategic locations. The results would increase measurement confidence and provide points from which to extrapolate a line.
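As a rough sketch of that idea (the helper and probe positions are illustrative, not the actual code), the same successive-approximation loop could be run at a few extra x positions and a line fitted through the detected top edges:

    # Illustrative sketch; `im3` and `edge_level` are as in the earlier code.
    import numpy as np

    def top_edge_at(im3, x, edge_level, bits=8):
        edge = 0
        for b in range(bits, -1, -1):
            y = edge + 2**b
            if (im3[y][x] < edge_level).all():   # still on the dark film above the hole
                edge = y
        return edge

    probe_axes = [191, 255, 319]                 # example x positions
    edges = [top_edge_at(im3, x, edge_level) for x in probe_axes]
    slope, intercept = np.polyfit(probe_axes, edges, 1)   # top edge as y = slope*x + intercept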

Thanks! (a lot of work went into aligning the camera, still in progress). Also, without a gate, the location of the sprocket does move (left-right); I am considering building some geared guide rollers with vertical limits (8mm and 16mm), but for now I would like to better understand the peculiarities of the transport.

Let me add a little bit to the discussion

That was one of the last frames I’d taken before I’d noticed the nut had become loose on one of my rollers. Over the course of 300 frames it had begun tilting out of alignment more and more. Normally it doesn’t vary from frame to frame, but that problem did highlight that if I’m already finding the sprocket holes, it wouldn’t hurt to add the angle information someplace in the UI as a simple alignment tool.

Hopefully I’ll only need to use it once to calibrate things, but if something loosens again, it’ll be there.

That’s a good reminder; I’m used to thinking from the perspective of having an overpowered desktop computer.

That said, just your 1/8 down-sample is more expensive than scanning through the rows once to find that line on the full resolution image. (Well, unless you’re using nearest-neighbor filtering or something that is actually skipping pixels). The method I described should be possible to do in real-time, even on the RPi.
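For example (assuming the preview is built with cv2.resize as shown earlier), asking OpenCV for nearest-neighbor interpolation makes the down-sample skip pixels rather than average them:

    # Nearest-neighbor sampling skips pixels, so the 1/8 preview becomes much
    # cheaper (at the cost of a rougher image). Just an assumption about intent.
    import cv2

    im1 = cv2.resize(im0, (0, 0), fx=0.125, fy=0.125,
                     interpolation=cv2.INTER_NEAREST)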

That should solve the problem. That would make our methods even more similar. I can tune the runtime of my algorithm by skipping rows (or columns when checking for the top/bottom lines). And you can tune yours by adding more pixel slices. At some point we’d meet in the middle and it would practically be the same solution. :smiley:

Those are good example images. That’ll be a good “test suite” to keep around to run my algorithm against. Even that really damaged one in the middle should be mostly salvageable using the extra constraints I mentioned before. There could be a metric that “trusts” a particular top/bottom line more when it’s closer to being at 90 degrees from the vertical line, with more trust added to the line based on how many of the detected boundary pixels fall near it.

For that damaged middle sprocket hole, the bottom line would be shorter, at a worse angle, and substantially taller than the average sprocket hole. At a minimum, a frame like that could be flagged for manual alignment later.

1 Like

Thank you @cpixip, I had seen the set, and with it in mind there was this caveat

Additionally:

  • Notice that for the horizontal sprocket position, it uses the hole edge away from the image, which in the sample images is cropped and appears to be obstructed.

  • The above is also used in a slice where the hole is expected; it is not really intended to find the hole across the entire frame (at least not without some additional work to position the successive approximation in the region where the hole is, as is done with the bottom edge).

Setting aside the above, it should work if the illumination is flat, although the chosen level would be a bit more critical.

I found something consistently odd in most sprocket holes in your sample: with the exception of the first and the last, where it may not be visible, the levels above/below the hole are brighter than through the hole… not sure why that would be, or how it could be. Automatic lens shading correction?
It is particularly noticeable in numbers 2 and 3 (from left to right), but also in number 4, where the markings are brighter than the hole.


If it weren’t for that reverse shading, and the missing left portion of the sprocket, the above would work… the correct level would be more critical for items 2 and 3 (from left to right), but it would work, and it should work on the missing portion of the sprocket in item 5 too.

Interestingly, even with the reverse-shading, the blue channel looks like it will work for all.

I’m guessing that’s the local contrast introduced by Mertens fusion.

To make a test suite usable with my app, I just broke them apart into five separate images, made them grayscale (using the blue channel), and adjusted the levels until just the hole was over-exposed. These 16-bit TIFFs are most like the input my app is expecting, but just in case it’s useful to anyone else:

Challenging Sprockets.zip (326.5 KB)

(They are mirrored horizontally, which is required due to a quirk of my machine. But that should be easy to adjust.)

… could very well be. These are old images, and I abandoned Mertens recently for a fixed exposure and raw files. That would make it easier to set a fixed threshold to single out the area of the sprocket hole. It has an intensity slightly above any blank image area. With that, a center-of-mass approach could be quite fast and precise.
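As a minimal sketch of that idea (the threshold value is an assumption, not the actual scanner code): threshold the image so that only the hole survives, then take the centroid of the resulting mask.

    # `gray` is a grayscale crop around the expected sprocket position; 200 is
    # an assumed threshold sitting just above the blank image areas.
    import cv2

    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] > 0:
        cx = m["m10"] / m["m00"]    # x of the hole's center of mass
        cy = m["m01"] / m["m00"]    # y of the hole's center of mass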

One could also introduce early consistency checks. For example, the height and width of a normal sprocket stay within close bounds. Similarly, the detected lengths of the straight lines at the top and bottom of the sprocket should exceed a certain limit for the border to be considered “non-corrupted”.

All of your algorithms should run fine on a RP4 - my own algorithm does this as well and is used to fine-adjust sprocket position during scanning.

Here is a quick test using the successive approximations and the blue channel of each test slice. The threshold is set from the maxVal obtained with minMaxLoc on the slice. The waveform corresponds to the center of the slice, which is the axis used for the successive approximations.
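Roughly, the threshold selection looks like this; the 0.5 factor is an assumption, only the use of maxVal is from the test above.

    # Sketch of deriving the threshold from the slice itself.
    import cv2

    blue = test_slice[:, :, 0]                  # blue channel of the test slice (BGR order)
    minVal, maxVal, _, _ = cv2.minMaxLoc(blue)
    edge_level = 0.5 * maxVal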

As expected, on the third one the broken sprocket hole measures larger; however, the top edge is quite nice.

This has been an interesting test, since normally I would not have considered the blue channel as the best across all samples until after creating the waveforms.

That is interesting, and something I did not know. Would Debevec also have similar effects?

Thank you both for the insightful exchange.

Very nice results!

No, since Mertens employs a Laplacian pyramid behind the scenes and Debevec does not. The latter is a purely local algorithm, operating on each pixel individually. Mertens is non-local and could produce such a behaviour. However, these scans are quite old - I am not even sure what the illumination and the camera were at that time.

1 Like

Here’s a neat idea I’ve been tinkering with most of the day. Start with the same two steps as before:

  1. Center-of-mass on pixels above a certain threshold to find a rough center Y position. (I talked about that part in my August p.1 ReelSlow8 update.)

  2. Use a robust linear regression to find a nice sub-pixel left edge. (I showed that a few posts up.)

To pick some concrete point for frame registration you could just use the center-of-mass Y and find where it intersects the left edge line. But, that is sensitive to sprocket damage and dirt: there may be fewer pixels above the threshold value on one side of the hole and that would incorrectly shift the center of mass.
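Concretely, that intersection is one line of algebra, using the point/direction form that fitLine returns and the center-of-mass Y (the names here are placeholders):

    # Intersect the horizontal line y = cy with the fitted left-edge line, which
    # fitLine returns as a point (x0, y0) and a unit direction (vx, vy).
    t = (cy - y0) / vy                 # vy is non-zero for a near-vertical edge
    registration_point = (x0 + t * vx, cy)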

So, getting some better idea of the top/bottom edges would help… but trying to detect those lines would be sensitive to the kind of torn sprocket hole damage in @cpixip’s example images!

We do have some handy “constants” though: the height of a sprocket hole shouldn’t change much from frame to frame. And (one I haven’t heard mentioned yet): the radius of the rounded inner corners should also be consistent!

Check this out:
example1

To make the geometry easier, I’m assuming my machine will be calibrated to within, say, half a degree of vertical by the time this step matters.

What’s going on here is that I’m fitting two arcs (critically, of the same radius) to the rounded corner points. (I’m drawing the whole circle just for the visualization; the actual fit is only being done for quadrant II and III for the top and bottom corners, respectively.)

The small green circle in the middle is simply the midpoint of the two circles and should be a more accurate frame registration point than just relying on the center-of-mass.

This is using the downhill simplex method again, which is the algorithm that keeps trying different points in the search space to find the minimum value of some error function.

The constants (for a given frame) are the “left edge” points (like those plotted above in that Excel graph a few posts ago), the center of the left edge line (using the center-of-mass Y), and the running average of the sprocket heights and corner radii from the last dozen or so frames.

The parameters being optimized are the actual center Y value, the sprocket height, and the corner radii.

The error function (so far) used to find the best fit boils down to:

totalError = cornerFitError + heightError + radiusError.

Those last two use the historical running averages and end up being something like (height - averageHeight)^2.
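A sketch of that search in SciPy terms; `corner_fit_error`, `left_edge_points`, `center_y_0`, `avg_height`, and `avg_radius` are placeholders standing in for the pieces described above, not actual code.

    # Downhill simplex (Nelder-Mead) over the three parameters being optimized.
    import numpy as np
    from scipy.optimize import minimize

    def total_error(params):
        center_y, height, radius = params
        err = corner_fit_error(left_edge_points, center_y, height, radius)
        err += (height - avg_height) ** 2     # penalize drifting from the running-average height
        err += (radius - avg_radius) ** 2     # ...and from the running-average corner radius
        return err

    x0 = np.array([center_y_0, avg_height, avg_radius])   # start from the rough estimates
    result = minimize(total_error, x0, method="Nelder-Mead")
    center_y, height, radius = result.x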

Now I don’t have this pulled all the way through my system yet, so the frame-to-frame averaging is running continuously instead of only once per capture. If that weren’t the case, this second example would have been even closer:

example2

But, by the time I took the screenshot it already had a chance to “train” itself on its own input for a hundred or so frames. So it had the time to find a height and radius that were farther from the values established in the previous frames (because of that bit of junk at the top left corner of the hole).

Still, this is already a better result than the center of mass would have given. That shred(?) of film stock would have materially moved the detected sprocket center down a few pixels.

It’s a little tricky to run this against the test suite of bad sprocket holes because the dimensions aren’t the same and I don’t have a bunch of previous frames to build up a running history of heights/radii, but so far this is quite promising.

1 Like

I’d just like to mention that the typical way to detect lines and circles is to use the Hough transform. In fact, if I remember correctly, this was what Matthew used in the software of the first Kinograph.

Early consistency checks during the algorithm could be used to drop a broken section of a sprocket from further consideration, so a simple kind of trust value could be calculated for each detected sprocket. In the extreme case, the sprocket algorithm should be able to signal “no sprocket found, you’re on your own”…

The Hough-transform detects lots of arbitrary lines. I just want a single, specific line that we already know a lot about. And for the post above, we’re not detecting arbitrary circles; we’re detecting exactly two π/2 arcs that are tangent to a known line and symmetric about an unknown point. I don’t know of any off-the-shelf algorithms that do that. So using argmin with an error function that evaluates all of those constraints at once seemed like a nice fit that is already working well in practice. The error function is the trust value you described.

Since writing “… and I don’t have a bunch of previous frames to build up a running history…” yesterday, I’ve reconsidered: why stop at just using frames from the past when we could also use frames from the future?! :rofl:

Instead of running this detection as the frames come in, now I want to try using an offline, two-pass version of the algorithm once the full, uncropped frames are already on disk.

The first pass would detect these sprocket parameters (height and corner radius) for every frame in the reel without any regard to the other frames. Once we’ve got that pile of data, we can throw away the outliers and come up with some definitive answer for height & radius of the sprockets in that reel using the average (or median) of everything we looked at.

Then, the second pass would go through the detection process again, this time forcing the height/radius parameters, in order to find the best fit for just the sprocket position.
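In rough, placeholder Python (detect_sprocket stands in for the per-frame fit described above and is not a real function), the two passes would look something like this:

    # Offline two-pass sketch over a folder of already-captured frames.
    import numpy as np

    heights, radii = [], []
    for frame in frames:                       # pass 1: independent per-frame fits
        h, r, _ = detect_sprocket(frame)
        heights.append(h)
        radii.append(r)

    reel_height = np.median(heights)           # reel-wide values, outliers ignored
    reel_radius = np.median(radii)

    centers = []
    for frame in frames:                       # pass 2: fit only the position
        _, _, c = detect_sprocket(frame, fixed_height=reel_height,
                                  fixed_radius=reel_radius)
        centers.append(c)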

By making it offline and after-the-fact, it could flag those frames where no fit or a bad fit could be found. Then, that last 1% could be tagged by hand, interactively. Just sort the list of frames by the error function value, descending, and you’ll be presented with all of the worst fits.

As an offline process that just operates on a folder full of frames, this could be a stand-alone tool that might be useful to a larger audience than just me, too. If it ends up working out well, I’ll try to share it with the community. :smiley:

… this statement would not hold in the case of our sprocket images, given a properly tuned algorithm.

With respect to your error function: of course it gives you an estimate of how good your fit is. What I meant is an indication from the algorithm like “I think the upper sprocket line is corrupted,” plus a solution: “…but I can estimate the sprocket center from the less corrupted lower edge plus half of the known sprocket height.”

In a way a robust fallback when a non-standard sprocket is detected. Well, just an idea…

Finally, another comment: the more variables your error function has, the longer the parameter search is going to take, and the risk of encountering local extrema will increase.

However, most sprocket holes are well-behaved anyway, so the few outliers could easily be interpolated from neighbouring correct values.

1 Like

I’ve been enjoying the conversation about sprocket hole identification.
@PM490 listed their motivation:

What is the motivation for others (@npiegdon @cpixip) in identifying a sprocket? Alignment of successive frames? If so, what accuracy is required? Won’t a video stabilization step correct any jitter caused by errors in sprocket detection?

1 Like

I mentioned my own motivation at the end of my September post. The short version is:

  1. In extreme cases, the lack of frame-to-frame consistency made it harder for the stabilization plugin to do its job. (You can see some shaking around 0:09 in this video where it was too much.)

  2. Knowing where the sprocket holes are gives an angle that can be used as a hint while calibrating the straightness of the film transport.

  3. If you know the sprocket hole location and dimensions, you can infer the frame position and dimensions. This can be used as a safety check to make sure the entire frame is in the camera’s viewport before capturing an exposure. And while you’re at it, you can use the same dimensions to crop out any excessive over-scan (leaving, say, 5% instead of 20%) to save a little disk space and give the stabilizer (and other processes) fewer pixels to chew on.

For my part, the first one is the real reason. The other two are just nice things that you get for free once you complete the first one. :smiley:

1 Like

I really admire the work done on this sprocket-hole detection; I admit that I wouldn’t be able to do it.

But my opinion is different:

  • A perfectly centered sprocket hole is not a guarantee of a stable image, because the camera could also have adjustment faults. Sometimes we see stable sprocket holes and moving image frames.
  • Anecdotally, I had imprecise sprocket holes on a film; they moved on the horizontal plane (probably due to imprecise perforation of the film).
  • On amateur films, the cameraman moved much more than the misalignment of the film :slight_smile:
  • The projector also added some extraneous movements.

I also find that the video stabilization does a very honorable and very fast job which suits me.
Here is a small example (not perfect) of a very damaged film.

Thank you for your exchanges which I read with great pleasure.

1 Like

With my scanner, I have experienced the same phenomenon as Pablo @PM490. I was never able to identify the root cause. Instead, I developed a fast enough sprocket registration algorithm and the issue was no longer relevant.