Sasquatch 70mm scanner

I’m also using a capstan on my machine.

At first I thought adding the capstan would fix all my tension issues, but it didn’t :frowning: so I had to add sensors to regulate it (load cells, as you mentioned in another thread :slight_smile:). The capstan did regulate the speed perfectly though, and that is really great.

The oDrive is a LOT cheaper than the Teknic motors, but if they can regulate tension without sensors I’m really in and will spend the $ for them.

The promise is that they will do that. However, even if they can’t do it directly, you should be able to use the HLFB data to derive tension and act accordingly. Right now, I can put my finger on the capstan and stop the motion you see in that video and the tension on the film doesn’t seem even remotely high. So I’m optimistic the capstan being engaged will finish the job.

I have mixed feelings about these motors, to be honest. They’re a bit expensive, very well built, and have lots of capabilities. But they have also clearly thought about the most common modes these will be used in and made presets for those. Those presets tend to sacrifice one feature for another to optimize for whatever that function is. For example, you can only have two speed modes on the capstan motor in the profile we’re currently using (which allows us to track the positioning with a high level of accuracy). But this limits you if you want to use the capstan to drive ff/rw functionality.

In theory we could probably disable the capstan and then create an imbalance in the tension on the feed/takeup motor to do ff/rw, but in doing so we lose the ability to know where we are in the film, because the disabled capstan is where the positioning data comes from.

And you can’t use the feed/takeup motors in positioning mode and have them do what they’re doing now. So there are a lot of tradeoffs with these.

That’s a lot of cons, but probably no deal breakers. The oDrive is a pain to configure, but when it’s working, it works well. I’ll follow your updates and see how it works for you. Looks like you’re getting really close to scanning right now! Your machine is looking real good.

Can you use this data to trigger the camera / light or will you use a specific encoder for that?

We’re not using a trigger pulse for the camera. It’s all done from software. The camera is connected via CoaXPress, which allows for software triggering. The front end app is orchestrating everything. Nothing happens on the hardware until the software tells it to. So we send a command to turn on an LED channel at a certain brightness, then after getting confirmation we grab the frame, which gets dumped to a memory buffer in our frame grabber. In a background thread that image gets copied from the buffer and processed on the CPU (aligned, cropped, whatever needs to happen). While that’s happening, we turn on the next LED color channel and do the same with that color.

After the third channel, we tell the software to do the final image processing (for B/W this basically means writing to a file. For color, it means combining the three into a composite color image and writing that out). While that’s happening in the background, we’re advancing to the next frame.
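
The sequence described above can be sketched roughly like this. To be clear, every function name here (`set_led`, `grab_frame`, and so on) is a hypothetical stand-in for the real camera/LED calls, not their actual API; only the overall pipeline shape comes from the posts.

```python
import threading

# Hypothetical sketch of the capture loop: capture three monochrome
# exposures (one per LED channel), process each in the background while
# the next LED turns on, then combine into one color frame.
def scan_frame(set_led, grab_frame, process, combine):
    processed = {}
    workers = []
    for channel in ("red", "green", "blue"):
        set_led(channel)            # turn on one LED channel
        raw = grab_frame()          # software-triggered grab into the buffer
        # copy/process the grab in a background thread while we move on
        t = threading.Thread(
            target=lambda c=channel, r=raw: processed.__setitem__(c, process(r)))
        t.start()
        workers.append(t)
    for t in workers:
        t.join()
    # combine the three processed channels into one composite color image
    return combine(processed["red"], processed["green"], processed["blue"])
```

For B/W, `combine` would simply write a single channel to disk instead of compositing.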


So, it’s been a minute.

We moved into a new office in January, and we are finally settled back in and I’m working on Sasquatch again. Sadly, our firmware developer became very ill last fall and was unable to complete the project. There were some bugs, and we had to ditch all the backend code he wrote and start over from scratch. And we have come full circle at this point: when I first started this project, my initial idea was to have the front end software control everything: lights, camera, motors, sensors, all of it. But for a variety of reasons that didn’t happen, and more and more stuff kept getting rolled into the backend code. As we learned when our developer could no longer work on this, that left us in a precarious position.

So we got a referral for a guy who has extensive experience with the ClearCore controller that we’re using, as well as the motors. After consulting with him and getting a price, we decided to move forward with the project in the following way:

  1. The controller firmware would no longer have much logic of its own; it would simply be the relay between my software and the lights, motors, and any sensors.
  2. The method we’d use is tried and true: Modbus. This protocol is old school. Like, really old school. But it works, and it’s reliable and fast.
  3. He’d use code he had already developed, to speed up the process and minimize testing.
  4. Most of the logic would be moved to the front end.

Because of this, once he was able to start work on the project it only took about a week to deliver the firmware, the first version of which I got this afternoon. And I have to say, this is super cool.

Modbus basically reads and writes registers on the controller, and the firmware responds to changes in these registers accordingly. When I want to move the capstan motor 10,000 steps forward, I simply set the capstan position register to the current position + 10,000. When the register is updated, the firmware handles sending the motor to that location. It constantly reports its position, and I can poll the registers at any time to know where we are.
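
A toy version of that relative move, with a plain dict standing in for the controller’s Modbus holding registers (the register address here is made up for illustration, not the real map):

```python
# Assumed register address for the capstan target position (illustrative)
CAPSTAN_POS = 0

# Pretend the capstan is currently at step 25,000
registers = {CAPSTAN_POS: 25_000}

def advance_capstan(regs, steps):
    """Move the capstan relative to its current position."""
    current = regs[CAPSTAN_POS]          # poll the position register
    regs[CAPSTAN_POS] = current + steps  # writing it commands the move

advance_capstan(registers, 10_000)
registers[CAPSTAN_POS]  # now 35000
```

On the real machine the read and write would go over Modbus to the ClearCore, and the firmware would drive the motor to the new target.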

This is what it looks like in a test application called Modbus Poll, which is reading the current values from the controller:

There are 60 registers total, some are unused.

We have changed the way we handle the feed and takeup motors. Instead of the feed and takeup motors reporting their torque, the capstan motor reports torque in the form of a signed integer (so it can be positive or negative: if negative, the capstan is feeling more tension on the left; if positive, more tension on the right). The front end software will constantly monitor this and adjust the feed and takeup motor torque values accordingly, to try to get the torque on the capstan to zero.
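
One wrinkle worth noting: Modbus registers are 16-bit unsigned, so a signed torque reading has to be reinterpreted on the front end. Here is a minimal sketch of that conversion plus one iteration of the balancing idea; the gain and the exact adjustment scheme are assumptions for illustration, not the real control loop.

```python
def to_signed16(raw):
    """Modbus holding registers are 16-bit unsigned; reinterpret as signed."""
    return raw - 0x10000 if raw >= 0x8000 else raw

def balance_step(capstan_torque_raw, feed_torque, takeup_torque, gain=1):
    """One control-loop iteration: nudge the feed/takeup torque setpoints
    so the capstan's measured torque is driven toward zero."""
    error = to_signed16(capstan_torque_raw)
    # negative error: more tension on the feed (left) side, so relax feed;
    # positive error: more tension on the takeup (right) side, so relax takeup
    feed_torque += gain * error
    takeup_torque -= gain * error
    return feed_torque, takeup_torque
```

The front end would poll the torque register, run something like `balance_step`, and write the new setpoints back each cycle.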

In the past few weeks I got libmodbus working in Xojo, my front end development environment. In Xojo I instantiate a class for the controller, and then I can read and write the registers on that controller. For example, in Xojo, if I want to tell the motor to move to position 60,000 (steps of the motor):


But wait, there’s more! The way we were handling the camera and lens motors before was sub-optimal, and there was the potential that we might not have landed where we wanted with those motors. Now we’re using the dedicated motor ports on the ClearCore to run the generic stepper motors for the lens/camera stages, and this is much more reliable. The thing is, there aren’t enough of these ports on the ClearCore for the 5 motors we’re using, so we just added another ClearCore. In Xojo, I have two instances of the libmodbus class, one for each controller. ClearCore1 handles the transport only. ClearCore2 handles the LEDs, the camera and lens steppers, and some generic ClearCore ports we can use in the future for turning on relays or reading from a sensor.

> ClearCore1.Write_Register(0,60000)  //move the capstan to position 60000
> ClearCore2.Write_Register(20,65535) //turn the Red1 LED on at 100%

As long as we don’t get an error when setting a value, the register we updated now holds what we want, and we can move on to the next step. We don’t need to check that the thing we set was actually set, because Modbus will tell us if there was a problem. This is so much cleaner than the crazy message parsing system we came up with for the last iteration, which was not entirely reliable and needed a lot more work.

I’ll hopefully be updating a bunch here soon. The next few days are going to be about testing all the registers to make sure they do what they need to do, and that things like homing the motors work properly (one of the only functions left inside the firmware that’s not controlled by the front end software).

Once that’s done, we will be loading film of various gauges on the machine and figuring out what the optimal settings are for tension for each one. The goal in the next few weeks is to make the transport rock solid and to reliably be able to get to a location, and to be able to repeat that.

After that, we move onto grabbing images!


Another day, another app. Things are progressing nicely. We have pretty much tested all the firmware at this point and it all seems to work as we’d expect. There are a few more things to do on that end still, including adding a couple of minor additional features.

I’ve been testing in an application called Modbus Poll, which you see in the screen shots above. That’s a tool designed for communicating with devices that use the Modbus protocol, but it’s obviously not ideal for this. So I’ve been using a bunch of small apps that each do very specific things, and now I’m ready to put all those together and really start digging into things next week. This afternoon I built this simple application that will consolidate certain functions, like loading the film, which entails setting half a dozen registers on the controller (torque for feed and takeup, enabling all three motors, etc.). Now that will be one button.

Baby steps, yes. But this will at least make testing a lot easier. The next steps will involve dialing in a lot of motor related stuff, so being able to test various tensions with film on the machine at the touch of a button is key.

This app will also eventually allow me to export the settings for a gauge (default torque settings, speeds, camera and lens positions) to an XML file for the real front end app to use.
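
A rough idea of what that per-gauge export might look like, using Python’s standard-library XML writer; the element and attribute names, and the settings themselves, are invented for illustration, since the real schema isn’t described in the posts.

```python
import xml.etree.ElementTree as ET

def export_gauge(gauge, settings):
    """Serialize a gauge's default settings to an XML string."""
    root = ET.Element("gauge", name=gauge)
    for key, value in settings.items():
        ET.SubElement(root, "setting", name=key).text = str(value)
    return ET.tostring(root, encoding="unicode")

# Hypothetical settings for a 35mm profile
xml = export_gauge("35mm", {"feed_torque": -120,
                            "takeup_torque": 120,
                            "lens_position": 4200})
```

The front end app would then load this file and write the values straight into the controller registers when a gauge is selected.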

We have also abandoned the ClearPath MCPV motor for the capstan. Their motors are super cool, but these positioning motors are designed for different applications and we couldn’t really get them to do everything we wanted. So we’ve switched to ClearPath SDSK motors, which are basically hybrid stepper/servo motors. The nice thing is that the acceleration and profiling we could do with the MCPV motors is still there, and we can still measure the torque, but it behaves like a simple stepper motor, which gives us more flexibility. It took a week for the new one to arrive, but it showed up this week and I got it installed on Thursday. It’s the same form factor, so a drop-in replacement, and it works great.


Oops. Forgot to set the feed side tension to a negative value (negative means CCW rotation). Thankfully the failsafes kicked in, but this happened fast! Need to tighten things up a bit so this can’t happen again.


It happens to the best of us!


Movement! For the first time since we started this project, we have the transport moving properly. I spent about an hour on the phone with Teknic just now, remotely tuning the capstan motor.

Before, it was aggressively trying to correct its position whenever there was a move. The feed and takeup sides were over-reacting, causing the capstan to over-react, basically setting up a 3-way feedback loop that got progressively worse. Now that we’ve calmed the capstan down, we have the motor moving smoothly.

What you see in this video is the motor’s setup application, which has scopes you can use to view various parameters. The red shows velocity and the blue shows the error rate. The goal is to get the error rate as smooth as possible, and to get it to settle by the end of each move, so we don’t have to wait to snap the image. This is not moving super fast, but it’s enough to start dialing in the settings on our front end application. The combination of torque adjustments, increased speed, and possibly some additional tuning on the feed and takeup motors should get this all moving nicely. We don’t need to go super fast, but we need it to move quickly between frames because the camera is pretty slow. A half second here and there really adds up, so the goal would be to move to the next frame in under 200ms.

Also today, we have a new “Load Film” function enabled that auto-balances the feed and takeup motors to settle into equilibrium regardless of how unbalanced the load is. That is, a full feed reel and an empty takeup reel will now settle into a non-moving state within a few seconds. Then we can enable the capstan motor, without risk of scratching the film. Some more tweaking has to happen on that, to try to make it go faster.


Very nice! - how do you stop on a frame and make sure you are framed before you trigger your exposures?


The gate is oversized. So we move the amount we expect to have to move to get into frame and then take an image. When we do the perf detection we’ll know how far off from the expected position we were (if at all), and adjust for the next move.
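
That move-then-correct idea can be sketched as a simple loop. Note that `move` and `detect_offset` are stand-ins here; the real offset measurement comes from the OpenCV perf detection mentioned below, and the step counts are illustrative.

```python
def advance_loop(move, detect_offset, pitch_steps, frames):
    """Advance frame by frame, folding each measured perf offset
    into the next move so errors don't accumulate."""
    correction = 0
    for _ in range(frames):
        move(pitch_steps + correction)  # nominal pitch plus last error
        offset = detect_offset()        # steps off from expected position
        correction = -offset            # compensate on the next move
```

Because the gate is oversized, a few steps of error on any one frame still leaves the whole frame visible; the correction just keeps the image centered over time.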


Will you be using OpenCV for the perf detection as well, or a hardware trigger? Just curious, no need to explain if it’s your secret sauce :slight_smile: Great progress!


OpenCV for perf detection, registration, autofocus, any scaling and cropping, and probably for color processing when we get there.

The camera is triggered from software through the frame grabber driver, via the Camera Link connection.


Big push over the next two weeks. We had to put Sasquatch in a corner for a bit because the landlord replaced our HVAC system with one that’s just for our office (the big one for the building is old and failing). But today I started working on getting the positions correct for the camera and lens stages, for 35mm and 15-perf IMAX (this is 10-perf film, but we can pretend for now!)

Here’s what the IMAX footage (shot on a modified 10-perf military camera, of the Aurora Borealis in Norway) looks like in monochrome:

And some cool leader, a bit overexposed:

And the gate we’ll use for some specialized 35mm-like archival film, loaded up with standard 35mm:

Right now the images are being displayed in the frame grabber’s software. I have the DLL for that working in my software but haven’t fully implemented it, so I’m using their software to test for now. Similarly, I’m still using the Modbus Poll application to move the camera and lens stages. All that will be moved into my app soon.

There are some terrible light leaks on the left and right. That’s because the back of the camera enclosure is still open, and there are some issues with the integrating sphere. Once we get this reliably moving film and snapping images, I’ll be modifying the integrating sphere design and reprinting. This version is just the white PLA too, and isn’t painted inside. Good enough for testing, but the final version will be printed in black PLA, with a white/barium sulfate paint mix on the inside, and a holographic diffuser on top with some glass on it to protect the inside of the sphere.

Inching forward though!


I have had some time to get back to work on the software and as of this afternoon the Xojo class that interacts with the Euresys frame grabber we’re using is functional. Here’s a simple test app I built in about 20 minutes for the purpose of testing the frame grabber code. Once everything is working here, it’ll be moved over to the main application, and then I can integrate this into some code that sequences the movement and frame capture. Nice to see a picture finally though!


A bit more progress today. Something that I’ve been hung up on was setting various parameters on the frame grabber. I’m using a Coaxlink frame grabber that uses the GenICam standard. That’s a generic interface standard for machine vision cameras. So there are a million parameters you can get/set on the frame grabber, the camera, and the output stream from the camera. They vary from one camera or frame grabber manufacturer to another, and what you can set really depends on the capabilities of the hardware you’re using.

These can be anything from the exposure time, to bit depth, to how the camera expects to be triggered (in my case, all triggering is happening via the driver and my software, but you could set it to listen on the trigger input on the camera).
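
To give a flavor of the get/set-by-name model GenICam uses: the feature names below (`ExposureTime`, `PixelFormat`, `TriggerSource`) are standard GenICam SFNC names, but the `FeatureStore` class itself is just a toy stand-in for a real device node map, not the Euresys API.

```python
class FeatureStore:
    """Toy stand-in for a GenICam device node map: features are
    read and written by name, and unknown names are rejected."""
    def __init__(self):
        self.features = {
            "ExposureTime": 10000.0,    # microseconds
            "PixelFormat": "Mono12",
            "TriggerSource": "Software",
        }

    def get(self, name):
        return self.features[name]

    def set(self, name, value):
        if name not in self.features:
            raise KeyError(f"unknown feature: {name}")
        self.features[name] = value

cam = FeatureStore()
cam.set("ExposureTime", 25000.0)  # e.g. lengthen the exposure
```

A real grabber driver exposes the same shape of interface, except the available features and their allowed values depend on what the hardware reports.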

In any case, here’s the same app as above, but modified slightly to allow me to test the most important camera settings. Ignore that most of the fields are blank; I don’t yet have the code in place to retrieve the saved settings from the camera and populate them when the window opens.

Here’s a screen shot of the GenICam Browser application:
