Super8 and standard projector model to modify

Howdy, I’ve managed to build a super8/standard8 film scanner based on Manuel Angel’s efforts. It is a frame-by-frame scanner and does an excellent job, but I have some longer 400ft reels with only small bits and pieces that I want to scan. Even with its continuous advance feature, it is quite slow at advancing the film.

Instead of scanning an entire 400ft reel, which would take an enormous amount of time, I was thinking there must be some existing projectors where you could remove the film from its current position, manually advance it (maybe with a hand crank, or even on a different projector) to the approximate location of the next ‘scene’, and then reinsert the film into the scanner projector.

Ideally, I would have an altogether different scanner design from the get-go, but this is what I currently have, and I don’t have 3D printing capabilities at the moment.

So, with that being said, is there a super8/standard8 projector (to be modified into a scanner) that would allow a partially viewed film to be reinserted into the projector?

I looked on eBay and saw that some vintage Bell & Howells initially look promising, but without a hands-on look it’s kind of hard to tell. Any ideas would be helpful, thank you!

On most of the projectors I own, you can very easily remove one of the side panels and gain open access to the entire film path. It shouldn’t be much of a problem to then just wind the film on an editing viewer and transfer it over to the projector.

Two projectors where I am quite sure this would work are the Bolex 18-3 TC and the Bolex 18-3 Duo (technically they are almost identical :grimacing:).

I think it would be better, though, to run the stepper motor (or whatever else you drive your projector with) continuously at a high speed with the projector in continuous advance mode.


Whether you can remove the film and put it back depends on the projector; on mine it is very difficult without risking damage to the film. This is one of the advantages of Kinograph-type solutions, which however require 3D printing.
With my application, similar to Manuel’s, I capture frame by frame in HDR (3 exposures) at less than 2 s per frame.
For a 400ft reel this gives about a dozen hours. My application is reliable, and if the film is in good condition I start in the evening, come back the next day, and it’s done! This is what I did for all my 400ft reels.
Then, if you have a stepper, you can fast-forward to the desired location as suggested by @jankaiser, for example at 10 fps.


It is true that with the stepper motor we are not going to beat any speed records.

In my case, I decided at the time to use 32 microsteps per step. With full steps I could have made the motor run 32 times faster, but with the serious drawbacks of very noisy operation and lots of vibration, which would affect lens and camera stability.

With 32 microsteps, very smooth and precise operation is achieved, at the cost of slow continuous movement.

On the other hand, if we advance the film from the software, we have the advantage of tracking which frame is in the capture position. You can go back and forth to return to a previous frame, even if it is hundreds or thousands of frames away.

This advantage is lost if we remove the film and reinsert it somewhere else.
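That frame tracking boils down to a simple counter. A minimal Python sketch, assuming a hypothetical fixed gearing of motor steps per frame (the value below corresponds to half a revolution at 200 steps × 32 microsteps, as described later in this thread; the real number depends on your motor-projector coupling):

```python
# Sketch of software frame tracking for a stepper-driven scanner.
# STEPS_PER_FRAME is an assumed value: half a motor revolution per
# frame at 200 full steps/rev with 32 microsteps per step.
STEPS_PER_FRAME = 3200

class FilmPosition:
    """Tracks which frame is currently at the capture gate."""

    def __init__(self):
        self.frame = 0

    def steps_to(self, target_frame):
        """Signed number of motor steps needed to reach target_frame."""
        return (target_frame - self.frame) * STEPS_PER_FRAME

    def move_to(self, target_frame):
        steps = self.steps_to(target_frame)
        # ...real code would pulse the driver abs(steps) times here,
        # with the direction pin set from the sign of steps...
        self.frame = target_frame
        return steps

pos = FilmPosition()
pos.move_to(1500)          # jump forward 1500 frames
back = pos.move_to(1200)   # go back 300 frames (negative step count)
```

Because the counter lives in software, any removal and reinsertion of the film invalidates it, which is exactly the disadvantage described above.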


Yes, microstepping can improve the smoothness of the motor, but 32 microsteps per step seems unnecessarily high to me. In any case, it is not the microstep value that determines the speed of the motor but the frequency of the pulse train. Whatever microstep value you choose, you should be able to make the motor run at the desired speed; you can absolutely have 32 microsteps and a high speed in continuous motion. For example, in my case with 4 microsteps I run the motor at 5 rev/s in frame-by-frame capture and 12 rev/s in fast forward. However, to avoid stalling the motor, it is essential to increase the speed gradually, especially at high speeds. It is this gradual acceleration, rather than microstepping, that ensures smooth and quiet operation.

@dgalland Indeed, the frequency of sending the pulses is decisive for the speed of rotation of the motor.

If you use 4 microsteps in your device and I use 32 microsteps, to achieve the same speed of rotation I need a frequency 8 times higher.

In my software I send 25 µs pulses in a loop that runs at the maximum speed of the Raspberry Pi 3. I haven’t measured it, but it doesn’t get anywhere near the rotation speeds you mention.

It is also necessary to take into account the motor-projector mechanical coupling. In my case, the projector advances one frame with only half a revolution of the motor.

In my device, the issue of vibrations is very important, since I have the motor and the camera mounted on the same projector chassis.


I think that’s what’s wrong with your code. You set the frequency and increase the microstep count to decrease the speed. You have less vibration not because of the microstepping but simply because of the reduced speed.

microstep_per_sec = rev_per_sec * step_per_rev * microstep_per_step
step_per_rev = 200 for most steppers
pwm_micros = 500000 / microstep_per_sec

If I do the calculation with your values, your motor runs at 3 rev_per_sec. I could get the same speed with microstep_per_step = 4 and pwm_micros = 200.

To set and vary the motor speed, it is better to fix the microstep value and vary the frequency. This lets you have one speed for frame-by-frame capture and a higher speed for fast forward or rewind.
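Put as code, the relations above look like this. The 500000 (rather than 1000000) comes from splitting each pulse period into equal high and low halves, i.e. a 50% duty square wave; 200 steps/rev is the typical 1.8° stepper assumed in the formulas:

```python
# Pulse timing for a stepper driven by a step/dir driver (e.g. TB6600).
STEPS_PER_REV = 200  # typical 1.8-degree stepper

def pulse_half_period_us(rev_per_sec, microsteps):
    """Half-period in microseconds of the step pulse train (50% duty)."""
    microsteps_per_sec = rev_per_sec * STEPS_PER_REV * microsteps
    return 500_000 / microsteps_per_sec

# 5 rev/s at 4 microsteps -> 125 us half-period (a 4 kHz pulse rate)
print(pulse_half_period_us(5, 4))
```

With the microstep value fixed, changing speed is just a matter of recomputing this half-period for the capture and fast-forward speeds.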

Moreover, to reduce vibration and motor stalling, you need to apply a progressive acceleration up to the desired speed (search for “motor ramping” or see my code).
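A minimal ramping sketch, assuming a simple linear frequency ramp (the start/target frequencies and pulse count below are placeholder values, not from any particular scanner):

```python
def ramp_delays(f_start, f_target, n_ramp_pulses):
    """Half-period delays (seconds) for a linear frequency ramp.

    The pulse frequency rises linearly from f_start to f_target over
    n_ramp_pulses pulses; after the ramp, the last delay is reused.
    """
    delays = []
    for i in range(n_ramp_pulses):
        f = f_start + (f_target - f_start) * (i + 1) / n_ramp_pulses
        delays.append(0.5 / f)  # 50% duty: half period high, half low
    return delays

# Ramp from 500 Hz up to 4 kHz over 200 pulses, then run at 4 kHz.
delays = ramp_delays(500.0, 4000.0, 200)
# Each pulse: set STEP pin high, sleep(delay), set it low, sleep(delay).
```

Real drivers often use an S-curve or constant-acceleration profile instead, but even this linear ramp avoids asking the motor for full speed from a standstill, which is what causes stalls.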

Finally, a Python loop to generate the pulses is not the best solution, even if, in your code as in Joe Hermann’s, it is executed in a separate process. That’s why I advise using the pigpio library, in which the PWM is generated in a daemon process from a hardware clock.

First of all, I should say that I am not an expert in stepper motor control. Before building my device I had never used one.

I’ve been doing some more research on the following page:

I highlight the following paragraph that I copy verbatim:

The most dramatic effect of implementing a microstepping control scheme is the reduction in noise and vibration, and therefore also a reduction in mid-range instability. When first introduced, microstepping was a real breakthrough and helped breathe new life into the step motor marketplace. Today it is a common feature even in relatively low-cost step motor control ICs and off-the-shelf integrated drives.

I think that in your device you use a TB6600 controller like me.
You refer to the PWM (Pulse Width Modulation) technique, but I don’t understand what function it performs with the TB6600 controller. When this controller receives a pulse, it simply advances one step, regardless of the shape and width of the pulse.
Indeed, the PWM technique is used in stepper motor control, but with more sophisticated controllers than the TB6600.

In capture mode, my device advances one frame with only half a revolution of the motor, which it completes in a short amount of time. Increasing the speed would in principle only make sense during continuous movement, to advance or rewind the film.

Maybe I’m wrong, but these are my ideas about it.


@dgalland I have been reading the documentation of the pigpio library and now I see things clearer.

When you use the term PWM you are using nomenclature from the pigpio library that I did not know.

I think it is a good idea to use pigpio to control the stepper motor instead of the traditional Python functions, and I will try to incorporate this library into my software.

Thanks for the pointers.

@dgalland I have adapted my software to use the pigpio library.

I wanted to thank you for the information about this library that I was completely unaware of. I have always used the RPi.GPIO library.

Undoubtedly, for the control of stepper motors, the pigpio library is clearly superior.

Experimentally, I determined the maximum frequency for sending pulses to the motor driver, which in my case turned out to be 8 kHz with 32 microsteps per step. At higher frequencies it still works, but steps start to be lost, and with them precision.

The motor runs faster and at the same time more smoothly and consistently.

However, there is no improvement in capture mode. In continuous operation, for fast forward and rewind, the speed increase is very noticeable.



Yes, pigpio really is an excellent library.
Normally with pigpio you don’t need a separate Python process for the motor.
I really don’t understand why you are trying to determine a maximum frequency, though.
For me, the normal approach to controlling a stepper is:

  • Fix the microstep value; 4 should be enough, 32 will bring absolutely nothing more.
  • Set the motor speed via the graphical interface, for example 5 rev/s in capture and 10 rev/s in forward/rewind.

Then calculate the frequency:
pwm_micros = 500000 / (rev_per_sec * 200 * microstep)
Even better, do some ramping.

I don’t know what frame rate your camera can deliver, but I wonder if a stepper motor is even necessary.
A simple DC motor (sometimes the projector’s own motor already does the trick) with a variable-voltage supply fulfills the function perfectly. The image capture is then triggered on each rotation by a hall-effect sensor or an optical sensor.
It’s easier to implement, and you can always adapt the speed of the projector to the rate at which your camera can deliver images, no?
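The sensor-triggered capture described here amounts to detecting one rising edge per projector revolution. A sketch of that edge detection on a sampled sensor line, kept deliberately library-agnostic (a hypothetical polled digital input, not any particular GPIO API):

```python
def rising_edges(samples):
    """Indices where a digital sensor signal goes 0 -> 1.

    In a real scanner, each rising edge (one per projector revolution,
    from a hall-effect or optical sensor) would trigger one capture.
    """
    edges = []
    prev = samples[0]
    for i, s in enumerate(samples[1:], start=1):
        if prev == 0 and s == 1:
            edges.append(i)
        prev = s
    return edges

# A sampled trace with three sensor pulses, i.e. three frames:
trace = [0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
print(rising_edges(trace))  # [2, 6, 8]
```

In practice you would use a hardware interrupt or edge callback rather than polling, but the logic is the same: capture on the 0 → 1 transition, ignore the level while it stays high.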

Hi Roland,

Your approach is perfectly valid. In this same forum you can see solutions that use this method.

In my particular case, the original motor of my projector was AC and its speed could only be varied slightly. Once I decided to replace it, I opted from the start for a stepper motor.

From my point of view, the stepper offers the great advantage of controlling the rotation of the motor via software with great ease and flexibility.

On my device, for example, it’s perfectly possible to position the film we’re digitizing at any frame we like, or move an arbitrary number of frames forward or backward from the current frame, all from the GUI.

On the other hand, a hall-effect sensor (or another type of sensor) is not necessary to ensure the correct position of the frame being digitized. I have never used one, but I imagine the adjustment of the sensor mechanism must be quite critical to obtain good results.


I respect your choice; the main thing is to be comfortable with the method you use.
At how many frames/second do you capture your films?


My device is based on the use of a Raspberry Pi together with a Raspberry HQ camera, following the client-server model.

In order to obtain HDR images, I usually take 6 images of each frame, which are then merged using the Mertens algorithm.

Under these conditions, the capture time is about 4 s per frame.
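For reference, the idea behind that fusion step can be sketched with a much-simplified, pure-Python weighted average. This is only the “well-exposedness” term of the Mertens algorithm; the real method (available in OpenCV as cv2.createMergeMertens) also uses contrast and saturation weights and blends over a multiresolution pyramid:

```python
import math

def well_exposedness(v, sigma=0.2):
    """Weight a pixel value in [0, 1] by its closeness to mid-grey."""
    return math.exp(-((v - 0.5) ** 2) / (2 * sigma ** 2))

def naive_exposure_fusion(exposures):
    """Toy per-pixel exposure fusion.

    exposures: list of images, each a flat list of values in [0, 1].
    Pixels near mid-grey dominate; blown-out and crushed pixels
    contribute little, which is the core idea of exposure fusion.
    """
    n_pixels = len(exposures[0])
    fused = []
    for i in range(n_pixels):
        values = [img[i] for img in exposures]
        weights = [well_exposedness(v) for v in values]
        total = sum(weights)
        fused.append(sum(w * v for w, v in zip(weights, values)) / total)
    return fused

# Dark, mid and bright captures of the same (flat) test frame:
fused = naive_exposure_fusion([[0.1] * 4, [0.5] * 4, [0.9] * 4])
```

This is only a sketch to show why multiple exposures help; for real scans the OpenCV implementation is the sensible choice.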

I’ve just discovered your build; congratulations.
Nevertheless, for me there is a problem with the capture speed.
A Super8 film has about 240 frames per meter of film, so it takes 4 hours for 15 m and 16 hours for 60 m…
I have just been given 7 reels of 120 m, which is more than 9 days of non-stop, round-the-clock scanning. In my opinion that is hard to imagine. But a great achievement nonetheless.
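That arithmetic is easy to generalize (240 frames per meter of Super8, as stated above):

```python
FRAMES_PER_METER = 240  # Super8 (frame pitch ~4.23 mm)

def scan_hours(meters, seconds_per_frame):
    """Total capture time in hours for a given length of film."""
    return meters * FRAMES_PER_METER * seconds_per_frame / 3600

print(scan_hours(15, 4))        # 15 m at 4 s/frame -> 4.0 hours
print(scan_hours(7 * 120, 4))   # seven 120 m reels -> 224 hours, ~9.3 days
```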

I agree with @Manuel_Angel on why you might choose a stepper motor or a DC/AC motor. I believe in the end it’s mostly personal preference. I enjoy the level of control you get from a stepper motor and really didn’t find it too difficult to use in the end.

That said, I find in my setup that the motor really isn’t what is limiting the speed. My scanner is very similar to @Manuel_Angel’s in that it is a converted projector with a HQ Camera, a stepper motor and a Pi. My machine runs a little faster than 0.7 fps. That’s less than 1.5 hours for 15m of film and I think perfectly acceptable. Furthermore, the frame advance using the stepper motor only takes 0.4 seconds of the 1.4 seconds per frame. The rest of the time is needed by the Pi to capture the image. Note that I only capture one RAW image per frame, which is presumably the main reason my machine runs faster.

I also chose a stepper, and recommend the TMC2208 (or later) drivers. As mentioned before, the speed limit is not the stepper, particularly if driven with this kind of driver.

In general, regarding speed: some of the films I handle suffer from deterioration. Consequently, I chose to go for the best quality I could afford, regardless of speed. In my case that is about 20 frames per minute when capturing 12-bit RAW at 24 MP.

The other consideration in my case is storage.

Depending on the content and budget, everyone makes a trade-off between speed and quality. So there is no right or wrong; it’s whatever you can afford and have the patience for.

I am working on another setup for incremental improvement of the films already scanned. It will be based on the Pi HQ camera, with the intention of doing multi-exposure capture similar to @Manuel_Angel’s project.

With Raspberry Pi cameras, the multiple exposures taken to obtain HDR images greatly slow down the capture process.

The problem is that the camera actually behaves more like a video camera than a still camera.
In practice, you set a certain exposure time in software, but the camera does not use it immediately: several frames have to be captured before the requested exposure time takes effect.

In the first versions of my software, the function that controlled the exposure time and captured the images was incorrect. The captured images did not correspond to the calculated exposure time, due to my ignorance of the camera’s behavior.

Currently, once the required exposure time has been sent to the camera, the program takes two throwaway images that are discarded immediately. The third image is assumed to have the new exposure and is used in the Mertens fusion.

So when I say the software takes 6 bracketed images, it has actually captured 18 images of each frame. You can imagine the time savings of taking a single image. In my case, the stepper motor takes 400 ms to advance one frame.
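The discard-until-settled workaround can be sketched with a hypothetical camera interface (`set_exposure`/`grab_frame` are placeholders, not a real camera API; the two discarded frames are the empirical value from this post):

```python
SETTLE_FRAMES = 2  # frames discarded before the new exposure takes effect

def capture_with_exposure(camera, exposure_us):
    """Set an exposure, then discard frames still using the old one.

    `camera` is any object with set_exposure() and grab_frame();
    it stands in for the real camera library.
    """
    camera.set_exposure(exposure_us)
    for _ in range(SETTLE_FRAMES):
        camera.grab_frame()      # still exposed with the previous setting
    return camera.grab_frame()   # first frame with the new exposure

class FakeCamera:
    """Simulates the in-flight frame pipeline of the real sensor."""
    def __init__(self):
        self.pipeline = [0, 0]   # exposures of frames already in flight
        self.exposure = 0
    def set_exposure(self, e):
        self.exposure = e
    def grab_frame(self):
        self.pipeline.append(self.exposure)
        return self.pipeline.pop(0)

cam = FakeCamera()
frame = capture_with_exposure(cam, 5000)  # frame exposed at 5000 us
```

With 6 bracketed exposures per frame this gives the 18 captures mentioned above, which is where most of the per-frame time goes.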

I am in favor of quality rather than speed, and I have no financial interest in this work either; I only digitize my own family films and those of some friends.

One way to improve this is to leave the camera at a fixed exposure and change the light intensity to achieve the different exposures. I am planning to go that route; hopefully it will make the multi-capture faster. The LED current driver is done, but a faster change of intensities would require a DAC. Work in progress.