[v2 dev] Milestone 2 - Frame Detection

Summary
A series of tests to determine the best method for detecting and triggering the capture of a frame. The outcome should point to the design most likely to give consistent results across a variety of film stocks.

The most important mechanism in Kinograph is determining when to take a photograph. There is more than one way to determine the right moment: we could use the film’s perforations, or we could use a camera to detect when a new frame is ready to be photographed. This phase will test those two approaches to see which is more effective.

Required

  • Variety of film stocks in 35mm and 16mm with varying levels of perf damage
  • Trip to NYC to install camera system with BKR
  • Purchase of a computer system for controlling the camera and its software
  • New gate design

Deliverables:

  • Design specifications for a frame detection system.
  • Weekly updates from BKR to be shared with community

Hey all, a quick video here describing a possible approach to frame detection. Please let me know what you think!

One thing I forgot to say in the video: the value of L should be updated after each successful frame extraction so that it is the average of all the L2 adjustments made in the sequence so far. Something like L ← (N·L + L2) / (N + 1), where N is the number of frames extracted so far.
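For concreteness, here’s a minimal sketch of that running average (the function and variable names are mine, not from the video):

```python
def update_L(L, L2, N):
    """Fold the latest adjustment L2 into the running average L.

    L  -- current average of the N adjustments made so far
    L2 -- adjustment measured on the frame just extracted
    N  -- number of adjustments already folded into L
    """
    return (N * L + L2) / (N + 1)
```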

Also, I’m searching for 35mm and 16mm reels we can test with. If you have film we can use (and not give back to you) please contact me: info@kinograph.cc.

Also putting this here as a new option. Thanks to Martín Piñeyro for sending me this. His blog can be found here: http://www.blog.mpineyro.com/

OpenCV has quite a few algorithms (optical flow, trackers) that might be able to estimate the shifts you are describing in your video. Some are also reasonably fast.

So one could just do the capture with a mostly free-running camera, store the resulting images on disk, and align them (calculate your “L”) in a later processing step.

One needs a sufficiently large overlap between neighbouring captures, slightly more disk space (because of the overlap required between frames), and some additional processing time (in my experience, anything between 300 ms and 1 s per frame).
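To make that concrete, here is a rough sketch of the alignment step using OpenCV’s phase correlation (my own example, assuming consecutive captures overlap and have the same size):

```python
import cv2
import numpy as np

def estimate_shift(prev_gray, curr_gray):
    """Estimate the (dx, dy) translation between two overlapping
    grayscale captures using phase correlation."""
    a = np.float32(prev_gray)
    b = np.float32(curr_gray)
    (dx, dy), response = cv2.phaseCorrelate(a, b)
    return dx, dy, response  # response ~ confidence of the match

# dy is the film advance between captures (the "L" from the video),
# which is all you need to register neighbouring frames on disk.
```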


Here are notes from a meeting yesterday with the contractors. We talked about how to make design decisions and our testing approach for frame detection. Commenting is turned on so feel free to leave a comment on the doc.


If you use a closed-loop stepper motor/driver, it will track where you are with a high degree of precision. No need to add a separate encoder. We’re using one like this. The encoder is built into the motor, so if there’s slippage or if a step didn’t happen, the motor will automatically compensate. There’s a decent explanation of this here.

So with a reliable drive mechanism, you can know how many steps you need to take for an optimal (non-shrunken) film, and know that when the move operation is complete, you moved the desired distance, even if there were issues. If your gate is a bit oversized, you have a built-in margin on either side of the frame. With each frame, measure the position of one or more perfs against where you expect them to be (this would require per-scanner calibration to measure these positions, so a bit of setup). If they’re a bit high or low, you change the number of steps for the next frame accordingly.

You most likely don’t need to do this for every frame, which saves some compute power. If you’re doing frame registration with CV algorithms anyway, then as long as you’ve captured the full film frame anywhere in the digital frame, you can handle a few frames of drift. So it’s possible to test the position, say, every 2, 3, or 5 frames, and make adjustments if necessary.
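A minimal sketch of that feedback loop, with hypothetical hooks for the motor, camera, and perf measurement (the gain and interval values are illustrative, not tested):

```python
def run_transport(move, capture, perf_offset_steps, total_frames,
                  nominal_steps=1000, check_every=3, gain=0.5):
    """Advance film frame by frame, periodically correcting the
    step count from the measured perf position.

    move(steps)            -- closed-loop stepper move (hypothetical)
    capture()              -- grab an image at the gate (hypothetical)
    perf_offset_steps(img) -- signed perf error in motor steps,
                              positive = film advanced too far (hypothetical)
    """
    steps_per_frame = nominal_steps
    for i in range(total_frames):
        move(steps_per_frame)
        img = capture()
        if i % check_every == 0:
            error = perf_offset_steps(img)
            # Integrate a fraction of the error so steps_per_frame
            # settles at the film's true pitch (handles shrinkage).
            steps_per_frame -= int(gain * error)
```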

With OpenCV I’m pretty sure this stuff can be offloaded to a GPU as well. Some of its modules are ported to CUDA, so it would be blindingly fast to figure this out in real time, if you can send the image to the GPU to do the math.
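For example, with an OpenCV build compiled with CUDA support (the cv2.cuda module only exists in such builds), perf detection by template matching can be pushed to the GPU, roughly like this (file names are placeholders):

```python
import cv2

img = cv2.imread("capture.png", cv2.IMREAD_GRAYSCALE)          # placeholder
templ = cv2.imread("perf_template.png", cv2.IMREAD_GRAYSCALE)  # placeholder

gpu_img, gpu_templ = cv2.cuda_GpuMat(), cv2.cuda_GpuMat()
gpu_img.upload(img)
gpu_templ.upload(templ)

matcher = cv2.cuda.createTemplateMatching(cv2.CV_8UC1, cv2.TM_CCOEFF_NORMED)
result = matcher.match(gpu_img, gpu_templ).download()

_, score, _, perf_location = cv2.minMaxLoc(result)  # best match = perf position
```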


We are considering using an NVIDIA Jetson, which has an onboard GPU, as the “brain”, and this could be a viable solution. I’m passing this on to the contractors for their feedback. Thanks @friolator.

Hi all. Quick update here. We are working with the 2K camera right now to get frame detection working, and using what we find to calculate our needs for handling 4K (see the thread regarding PC requirements for more on that).

Our approach for frame detection will be a combo of reflective sensors (more on that below) and a closed-loop stepper motor that will give us the rotary encoder readings.

In plain English: we’re going to track the average perforation spacing over time and distance to determine when to take a picture. This means that if holes are missing or damaged, we will still capture frames based on the average and therefore not miss any frames.
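Here is a toy sketch of that averaging logic (encoder units and the 4-perfs-per-frame figure are for 35mm; this is illustrative, not our actual firmware):

```python
def frame_triggers(perf_positions, perfs_per_frame=4, frame_count=None):
    """Predict capture positions from the average perf spacing, so a
    missing or damaged perf doesn't cause a skipped frame.

    perf_positions -- encoder readings where perfs were detected
    """
    gaps = [b - a for a, b in zip(perf_positions, perf_positions[1:])]
    # A real version would reject outlier gaps caused by damaged perfs.
    avg_pitch = sum(gaps) / len(gaps)            # average perf spacing
    frame_pitch = avg_pitch * perfs_per_frame    # 4 perfs/frame on 35mm
    if frame_count is None:
        frame_count = len(perf_positions) // perfs_per_frame
    start = perf_positions[0]
    return [start + i * frame_pitch for i in range(frame_count)]
```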

Regarding the reflective sensors:
Pros: cheap, they work
Cons: they must be repositioned for each change of film gauge (though we may have a solution for that)

The alternatives to the reflective sensor are:

  • a pointed light source (laser, LED, etc) similar to MovieStuff and other scanners
    • similar in cost or slightly more expensive than our approach, and still needs adjusting
    • we just decided we had more confidence in the reflective sensors (for now)
  • a separate camera trained to identify sprocket holes
    • expensive and something we can do later as an upgrade.
    • could be anything from a webcam to the Firefly DL by Flir.

We are aiming to have test results by the end of the month. I’ll post results here.

The GoldenEye scanner took this approach: a monochrome camera ahead of the “taking” camera would image the film, so the system knew what to expect and how to adjust before the film reached the gate. It seems like a decent solution, but it adds a hell of a lot more complexity. You could probably do it with a cheap high-res board-level camera, too. Those cost almost nothing and are likely good enough for this. OpenCV can easily interface with webcams and similar devices, so it’s probably an avenue worth exploring. I don’t see why you’d need anything particularly expensive for this, but again, complexity…
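If anyone wants to experiment with that, OpenCV’s webcam interface really is only a few lines. A rough sketch (the threshold and area values are guesses, standing in for a real perf detector):

```python
import cv2

cap = cv2.VideoCapture(0)  # first attached webcam / board-level camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Backlit perfs show up as bright blobs: threshold, then keep
    # contours of roughly perf-like area.
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    perfs = [c for c in contours if 500 < cv2.contourArea(c) < 5000]
    cv2.imshow("perf mask", mask)
    if cv2.waitKey(1) == 27:   # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```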

I just found and ordered this today. I hope I have time to play with it soon and post the results: the OpenMV Cam H7 R1 (MicroPython embedded vision board with the OV7725 image sensor, Adafruit ID 4478, $84.95).

PS @friolator I’m taking your suggestion: the DC motor + encoder on the capstan is being replaced with a closed-loop stepper. Thanks for that!

Hi all,

I spoke with the contractors this week and as you may have noticed, we have not hit our first major deadline (Feb 1st). Here are the things we are working on and which we hope to have done in a couple of weeks (forgive the cross-posting for Milestone 1):

Arduino Shield:

Code To Do:

    Finalize protocol

    Integrate all functionality

PCB To Do:

    Include LEDs on the front of the shield for visual feedback

    Include longer male headers in the BOM

Motor Controller:

Code To Do:

    Finalize protocol

PCB To Do:

    Fix reverse polarity issues

    Fix resettable fuse issues

    Change the FAULT LED

    Break out unused pins

Front Panel:

Code To Do:

    Finalize protocol

PCB To Do:

    Separate mains power switch

    Fix 7-segment display issues

Capstan Motor Controller:

Code To Do:

    Finalize protocol

PCB To Do:

    Design board for the next phase (if necessary)

Reflective Sensors:

Physical To Do:

    Attach mount to system (once the new gate is installed)

Gate Light:

Physical To Do:

    Finalize mount system for sourced LEDs

    Source different LEDs if necessary

Camera:

To Do:

    Capture tests with new gate

    Reflective sensor tests with new gate, mount, and speed control

    USB throughput tests

    OpenCV analysis on captured images/video