Scanning single row of pixels using picamera HQ?

Hey! So I have a few thoughts on using the Pi HQ camera. For setup, I grabbed a Nikkor 55mm f/1.2 macro lens that does 1:2 magnification, which seems just about right for scaling the Pi camera sensor to a Super 16 frame. These can be had relatively cheap with known-good optical performance. I then grabbed a C-mount to Nikon F conversion adapter.

So I see two options more or less to get the captures:

  1. Capture a full frame. With enough light the shutter speed should be quick enough, but the rolling-shutter effect will likely slightly compress/enlarge the image. Another option, if this proves troublesome, is to stop the film transport with a stepper motor.
  2. Use the sensor as a line sensor at a high rate. Digging into the firmware, it appears the sensor can be instructed to capture one row of pixels. The firmware doesn’t obviously support a user-specified pixel array, but a single-pixel-row configuration in the camera registers might enable this to be done relatively rapidly.

Has anyone explored single-pixel-row operation? Unfortunately the datasheet for this new sensor isn’t public, but one for a similar model (the Pi Camera v2 sensor, IIRC) is available. It might be a bit tricky software-wise to trigger at the desired frequency and ascertain the actual time taken to read a row out of the sensor. Rough math for 2 ft/min: one frame every 750 ms, and at 3000 px/frame that is one row capture every 0.25 ms.
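To make that rough math explicit, here’s a minimal sketch of the arithmetic (assuming the standard 16mm frame pitch of 7.62 mm):

```python
# Back-of-the-envelope timing for single-row readout at 2 ft/min.
FEET_PER_MIN = 2
FRAME_PITCH_MM = 7.62            # standard 16mm frame pitch (assumed)
ROWS_PER_FRAME = 3000            # rough vertical resolution per frame

film_speed_mm_s = FEET_PER_MIN * 304.8 / 60        # ~10.16 mm/s
frame_period_ms = FRAME_PITCH_MM / film_speed_mm_s * 1000
row_period_ms = frame_period_ms / ROWS_PER_FRAME

print(f"frame period: {frame_period_ms:.0f} ms")   # ~750 ms
print(f"row period:   {row_period_ms:.3f} ms")     # ~0.25 ms
```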

Hi @nickandre, and welcome to the club!

The usual setup with the HQ camera is an intermittent film transport, either by repurposing an old Super-8 projector or by building your own mechanism for transporting and stopping the film precisely.

If the mechanical precision is not good enough, a computer-vision algorithm is usually employed which uses the film’s sprocket holes for frame alignment. Since the sprocket hole which defines a specific frame position is, by the standard, two film frames away from the film frame currently scanned, that is only an approximation - but it works OK.
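For illustration, a minimal detection sketch along those lines (OpenCV; the ROI and expected hole position are placeholder values you would tune for your transport):

```python
import cv2

def sprocket_offset(frame_gray, roi=(0, 0, 300, 3040), expected_cy=1520.0):
    """Return the vertical offset (px) of the sprocket hole from where we
    expect it, or None if no hole is found. roi and expected_cy are
    placeholders to tune for a specific transport."""
    x, y, w, h = roi
    strip = frame_gray[y:y + h, x:x + w]
    # The backlit sprocket hole is the brightest blob in the strip.
    _, mask = cv2.threshold(strip, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hole = max(contours, key=cv2.contourArea)
    m = cv2.moments(hole)
    if m["m00"] == 0:
        return None
    return m["m01"] / m["m00"] - expected_cy
```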

With this intermittent film motion, it is possible to scan several different exposures of the same film frame. This enables various approaches to extending the dynamic range of your camera sensor - the color-reversal nature of old Super-8 film stock requires a dynamic range which exceeds that of most available camera sensors. The HQ camera features at most 12 bits per color channel, which leaves you with quite some sensor noise in the dark areas of the film.
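As one concrete example of such an approach, here is a quick exposure-fusion sketch using OpenCV’s Mertens merge (file names are hypothetical; any bracketed set of scans of the same frame will do):

```python
import cv2
import numpy as np

# Hypothetical file names: three exposures of the same (stopped) frame.
exposures = [cv2.imread(name) for name in
             ("frame_low.png", "frame_mid.png", "frame_high.png")]

# Mertens exposure fusion blends the bracket without needing exposure
# times or a camera response curve; it returns floats roughly in [0, 1].
fused = cv2.createMergeMertens().process(exposures)
cv2.imwrite("frame_fused.png", np.clip(fused * 255, 0, 255).astype("uint8"))
```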

Another scanning approach uses continuous film motion, with a short enough exposure/flash time to freeze the image of the film frame sufficiently sharply on the sensor despite the film movement. This type of scanning avoids the constant move/stop cycling of the film material and is potentially slightly safer for old material. You will most certainly need a computer-vision algorithm again to align the scans of the film frames with each other.

With continuous motion, there is not really a chance to do any multi-exposure easily.

As far as I know, a film scanner based on a line sensor has not been described here on the forum. The only one which comes close might be the Nordmende Colorvision CCS (here is the service manual, and here is one which is currently on sale). Well, that’s actually a flying-spot scanner, not a line-based one.

This kind of machine was used because at that time there were no viable area image sensors available for the task. So the approach you are suggesting certainly has a kind of retro touch to me. Could it be realized?

Probably, with quite some effort. The current RP software stack (libcamera) is not tuned for such a use case. Very close to what you want to do is the work of HermannSW at the RP camera forum on high-speed captures utilizing a cropped area of the image sensor - you might want to have a look at his various posts.

Be aware that the HQ sensor is a rolling-shutter sensor - this might create issues if the scan direction is not chosen appropriately. Also, you trade the precise 2D pixel alignment of the sensor for a 1D line scan in which the alignment in the orthogonal direction depends on the mechanics of your film transport system. Pushing two subsystems of the envisioned design - the libcamera software interface and the mechanical design of your scanner - to their limits is a pair of challenges one can simply avoid by using the HQ sensor in its normal mode, that is: always taking a full-frame image.

Note that there is a global-shutter camera available from the RP foundation, albeit with a lower resolution than the HQ sensor. Have a look around this forum - you will find a lot of different approaches to digitizing film, including scan results from the different approaches.

2 Likes

Ah awesome! Yeah, peeking around the RP firmware I saw that there was no support for cropped areas save the specific video-mode crops they use for 1080p with binning etc., so it would require some custom code. I’m still a bit curious about the feasible rate for single-line readout. The rolling shutter operates line by line, so it wouldn’t apply if the readout were a single line. I think the main issue is that it would rely on measurable, continuous film transport, so any irregularities in the motion of the transport would manifest as distortion on the vertical axis. In theory, though, line readout could obtain higher vertical resolution.

My intended application was for ECN-2 (e.g. a scanner on the end of a home continuous processor/dryer), so presumably the multiple-exposure issue isn’t as important, since the contrast of the negative is much lower.

@nickandre - there have been some attempts to achieve faster frame rates with various RP cameras, mostly v1 and v2 stuff. Do not use the v1 camera, as its built-in noise reduction does not give you the best scanning results. The v2 camera is better in this respect and has a somewhat higher resolution (I forget the precise numbers).

Here’s a post by HermannSW claiming to achieve 206 fps with the v2 camera. Here’s another one from the same author quoting 120 fps for the HQ camera. Finally, here’s a post concerning the global-shutter camera at 300x200@293fps. You might try a PM to HermannSW on the RP camera forum to find out whether a line-scanner-like crop is feasible at all.

As remarked before, misusing a 2D camera chip as a line sensor might be quite an adventure. Certainly, cropping the image area transmitted from the sensor to the RP reduces the bandwidth requirement, leading in fact (as the above posts show) to an increased fps score. Whether this is worth the trouble you might only find out at the end of the journey - which might be all the fun.

I’d probably use a fast flash to freeze the continuously moving film and work with the normal image capture mode. The HQ camera should run fast enough for this at half of the sensor’s resolution - note that the image quality in this mode is somewhat reduced compared to a scaled-down full-resolution image (search the forum here for discussions about that; it used to be Mode 2 vs Mode 3 in the old times when the ISP was Broadcom-based, not libcamera-based).
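A minimal capture sketch, assuming the Picamera2 library and the HQ sensor’s 2028x1520 binned mode as the half-resolution mode (exposure and gain values are placeholders to tune against your flash):

```python
from picamera2 import Picamera2

picam2 = Picamera2()
# Request the HQ sensor's 2x2-binned 2028x1520 mode via the raw stream.
config = picam2.create_still_configuration(
    main={"size": (2028, 1520)},
    raw={"size": (2028, 1520)},
)
picam2.configure(config)
# Fixed short exposure (microseconds) - the flash does the motion freezing.
picam2.set_controls({"AeEnable": False,
                     "ExposureTime": 200,
                     "AnalogueGain": 1.0})
picam2.start()
frame = picam2.capture_array()   # one frame as a numpy array
picam2.stop()
```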

Sprocket detection can be done (search the forum here again for that phrase) with laser sensors, which can trigger both flash and camera (the HQ camera features a trigger input; some soldering required).
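A rough sketch of such a trigger chain with gpiozero - the GPIO pins and pulse width are assumptions for illustration. Note that a Python callback adds timing jitter, so for tight timing the camera’s hardware trigger input (or a microcontroller) is the safer route:

```python
from signal import pause
from time import sleep

from gpiozero import Button, DigitalOutputDevice

# Assumed wiring: a laser/photo-interrupter pulls GPIO17 low when a
# sprocket hole passes; the flash driver input sits on GPIO27.
sensor = Button(17, pull_up=True)
flash = DigitalOutputDevice(27)

def fire_flash():
    flash.on()
    sleep(0.0002)                  # ~200 us pulse - tune for your LED driver
    flash.off()

sensor.when_pressed = fire_flash   # runs each time the beam is broken
pause()                            # keep the script alive
```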

Yeah, after due pondering during the late-night hours, it seems this thought experiment is quite interesting, but there are probably better fish to fry by just stopping the transport and relying on the extant methods. I was a bit concerned that if the film is coming through a linear transport processor with remjet removal, stop/start wouldn’t be the best, but I think worst case one could use two stepper motors and add some sort of tension buffer to enable the scan stage to stop.

However, now that you mention it, I do wonder if high-speed video of the portion containing the sprocket would be helpful in lieu of an additional sensor to accurately stop the transport, i.e. to attempt to stop the transport in a similar/close pixel region each time.

I now have my HQ camera and hilarious lens contraption so I’ll do a few tests.

For general philosophizing, allow me the thought experiment of calculating exposure timing for continuous transport at 2 ft/min for Super 16mm film.

So I have a Pi HQ camera, which is 7.564 (H) x 5.476 (V) mm; using a 1:2 macro lens gives a 15.128 mm x 10.952 mm field, which provides a reasonable overscan for Super 16 at 12.52 mm × 7.41 mm and enough to image the sprocket hole for alignment.

At a vertical field of 10.952 mm over 3040 px, each px = 0.0036 mm. IIRC 2 ft/min = 750 ms for a full frame of 7.41 mm to transit the area, so the time for the film to move one pixel is 750 ms / 7.41 mm × 0.0036 mm ≈ 365 microseconds.

The minimum exposure time per this post is 28.57 microseconds per line, with a total sensor readout of 571 microseconds, so at its fastest rate it sounds like the smearing will be under 10% of the pixel area and the total rolling-shutter skew will be roughly a pixel and a half top to bottom.
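Putting those numbers into a small script makes the chain of arithmetic easy to check (the exposure and readout figures are simply quoted from the linked post):

```python
SENSOR_V_MM = 5.476        # HQ sensor height
MAG = 0.5                  # 1:2 macro -> film-side field is 2x the sensor
ROWS = 3040
FRAME_H_MM = 7.41          # Super 16 frame height
FRAME_PERIOD_MS = 750      # one frame transits in ~750 ms at 2 ft/min
EXPOSURE_US = 28.57        # minimum line exposure, quoted from the post
READOUT_US = 571           # total readout time, quoted from the post

px_on_film_mm = (SENSOR_V_MM / MAG) / ROWS           # ~0.0036 mm
film_speed_mm_per_ms = FRAME_H_MM / FRAME_PERIOD_MS  # ~0.0099 mm/ms
px_transit_us = px_on_film_mm / film_speed_mm_per_ms * 1000

print(f"pixel pitch on film:  {px_on_film_mm:.4f} mm")
print(f"pixel transit time:   {px_transit_us:.0f} us")              # ~365 us
print(f"exposure smear:       {EXPOSURE_US / px_transit_us:.1%}")   # ~7.8%
print(f"rolling-shutter skew: {READOUT_US / px_transit_us:.1f} px") # ~1.6 px
```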

I think the main issue here is whether the light necessary to achieve this exposure (if I hypothetically used an LED or something) would be enough to fry the film. I suppose we could use a high/low set of LEDs: use the low LEDs for framing with the high-speed crop video, and only crank the big boys for the time needed to expose the actual frame.
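Sketching that high/low idea with gpiozero (pins, levels, and pulse width are all hypothetical):

```python
from time import sleep

from gpiozero import PWMLED

framing = PWMLED(22)     # dim bank for the high-speed framing crop
exposure = PWMLED(23)    # bright bank, pulsed only for the real capture

framing.value = 0.1      # low level while positioning the frame

def expose_frame(pulse_s=0.001):
    """Pulse the bright bank just long enough for the exposure."""
    exposure.on()
    # ...trigger the still capture here...
    sleep(pulse_s)
    exposure.off()
```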

Interestingly, this post also states that the unit of readout is 20 lines; it’s unclear whether the sensor can read out less than that. However, it also states a readout rate of 840 MPix/second, which would imply that a much faster readout could occur for small swaths of the image like this.
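Taking those two figures at face value, even a 20-line minimum unit would fit comfortably inside the 0.25 ms per-row budget worked out earlier:

```python
# If the sensor sustains 840 MPix/s and reads in 20-line units,
# one full-width 20-line swath takes:
PIX_RATE = 840e6         # pixels per second, quoted above
WIDTH = 4056             # HQ sensor active width
LINES_PER_UNIT = 20

swath_s = WIDTH * LINES_PER_UNIT / PIX_RATE
print(f"{swath_s * 1e6:.0f} us per 20-line readout")   # ~97 us
```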

Alright random rambling complete.

1 Like

This reminds me of the way film is moved both intermittently (for the picture) and continuously (for sound) in a projector. This video (right at 6:00) is the gold standard for seeing that really cool bit of engineering. That’s at least an existence proof that something similar is possible.

1 Like

Continuous motion with a line scanner is problematic when there are splices. The result is a kind of warping at the splice. This is a common problem with all the line-scan-based machines like the Spirit, Shadow, Scanity, and GoldenEye. In fact, it’s so bad in the GoldenEye that DigitalVision includes a copy of their restoration tools with a plugin that fixes splice bumps in software. Kind of crazy. And as someone mentioned above, there’s no possibility of multiple exposures per frame.

The Northlight used a different approach: the film is moved into a gate and held firmly in position, then the whole gate is moved past a line sensor. This eliminates the splice bumping that you get with continuous motion, but it’s a much slower process. That particular scanner took seconds per frame, even the faster Northlight 2 model. Great scanner, beautiful piece of hardware, but painfully slow. Modern hardware could probably be made to run faster but there’s a lot of extra engineering that goes into that.

A camera with a rolling shutter kind of works like this though; it’s just all done inside the camera. We use one on Sasquatch (the film is held still, then separate red, green, and blue images are taken of that frame and processed in software into a color image).

3 Likes

I agree with what Perry said. :slight_smile:

Line sensors were used because 1. they were CCD, not CMOS, and 2. there didn’t exist area-scan global-shutter CCD imagers of equal quality. So you had a 6K or 8K CCD line sensor for 4K scanning. Yes, you read that right: 8,000 pixels across, not 4,096.

There were two approaches, as mentioned above by Perry. The first and most common is a fixed line sensor with the film moving through the gate over it - basically a traditional telecine design. The second is what Northlight and Imagica did: they have a “projector loop”, the film is held stationary in the gate, and the line sensor is moved up and down the frame as if it were a rolling shutter in an area-sensor camera (or the gate itself is moved with the line sensor fixed; same difference).

The problem with the first approach is that it does not perform well on shrunken film. ALL film now over 40 years old has shrunk to at least a slight degree, and that results in a wobble when the film is scanned; it’s present in many, many films released on Blu-ray that were scanned on a Scanity or a Spirit or a (god-awful) GoldenEye. From what I understand, the scanners themselves develop that wobble over time even when perfectly maintained, even on brand-new lab-processed film, aka dailies.

The tl;dr is that line-sensors are anachronistic and were used due to the technological limitations of the time.

I would not be using a Raspberry Pi to build a film scanner; there are much better ways to build a scanner than starting with a Pi! The wobble I’ve just described is present in scanners that originally cost seven figures when new. Don’t get me wrong: creating a DIY scanner with a line sensor would make for a very interesting project, but it’s not what you want if your aim is, at the very least, a stable scanned image. If, however, your aim is to recreate the wibbly-wobbly warping experience of the Scanity and the Spirit, then of course a line-sensor scanner makes perfect sense, since an area scanner can’t do that for you.