Image Sensor / Optical Components


What is their reasoning behind that? I can appreciate that doing it with a modified projector is a bit questionable, but if the pull-down mechanism is designed to be tolerant of damaged sprocket holes, and uses a gentle motion at low speed, what can be the objection? I think the pull-down method has got to be the most accurate, repeatable, and reliable answer, and makes very few demands on the trigger system and optical capture device. It succeeds because the mechanism that advances the frames is cyclic; in other words the same components are used for each frame advance so, even if they are not perfect, the delta-error between frames will be virtually zero. I refer of course to mechanical systems, but the same cannot be said for those driven by stepper-motors, even if they are judged to be cyclic. This is because they are digital, and are subject to step-resolution problems which, even if they do not accumulate, are hard to correct.

Another thing - with pull-down it is possible to make several captures of each frame, then average them to reduce grain and colour noise generated by the camera’s imaging chip.
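The averaging trick is easy to sketch numerically. This little simulation (the frame size and noise level are made up for illustration) shows the noise dropping by roughly the square root of the number of captures:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "true" film frame, plus several noisy captures of that same frame
true_frame = rng.uniform(0, 255, size=(480, 640))
captures = [true_frame + rng.normal(0, 10, size=true_frame.shape)
            for _ in range(8)]

# Averaging N captures reduces uncorrelated sensor noise by roughly sqrt(N)
averaged = np.mean(captures, axis=0)

noise_single = np.std(captures[0] - true_frame)
noise_averaged = np.std(averaged - true_frame)
print(noise_single / noise_averaged)  # close to sqrt(8), i.e. about 2.8
```

This only works with pull-down (or any stop-motion transport) because the frame must be perfectly stationary across all the captures being averaged.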

In any case, these films must have been through projectors many times before, so what’s the problem?



The reason is that some films are in a very fragile and badly shrunken state; the sprocket-hole spacing varies enough that no single mechanism could really cater for it, and the film certainly would not run through a projector in its current state.
Some of the film we get is incredibly fragile; you can check out some of our work here:
Jump to the 35-second mark and you can see what kind of shape the film is sometimes in, and this is just from the early 70s. Some of the pre-war stuff we get in is in really bad shape and would not work in any sprocket-based feed system.
There are real advantages to a pull-down based system. The one in the Imagica scanners we also use is very gentle on film, and you can triple-flash each frame to get better dynamic range, as well as capture a damage matte. But some film will just not run through it, or more to the point, the film archive or owner will not allow it to be run in a sprocket or pull-down based scanner.

If the client won’t allow it as a matter of policy, then all the arguments in the world don’t help.

For most people though, a gentle pulldown system will work fine with film in reasonable shape.

I’d really strongly advise against webcams: while they are cheap, the dynamic range on even the best ones is really poor, and the results are so far below even the $300 cameras with better sensors and better electronics design that I would only consider using them when working out the kinks in one’s scanning rig.
$295 gets you a true 720P camera with extremely low noise, true 12-bit processing, fast enough to run at 25fps without a struggle. $495 gets you better than HD resolution. In the scheme of things it is cheap, and you get results that genuinely challenge $30,000 scanners in image quality, and it saves you a ton of time, even on small jobs.
The ability to scan 2 hours’ worth of film in 2 hours instead of 16 hours at 3fps, and then to capture the soundtrack in real time during the same pass, is a massive timesaver, and the camera cost pays for itself very quickly in better quality, saved time and excellent stability. Plus you are tied to the scanner for less time; you really don’t want to leave film unattended, even in the best scanners.
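The time arithmetic is easy to verify (assuming 24fps film scanned at projection speed):

```python
film_fps = 24                    # projection rate of the film
frames = 2 * 3600 * film_fps     # 2 hours of film = 172,800 frames

hours_realtime = frames / film_fps / 3600  # scan at projection speed: 2.0 h
hours_at_3fps = frames / 3 / 3600          # scan at 3fps: 16.0 h
print(hours_realtime, hours_at_3fps)
```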


My opinion is that both approaches have merit. If I’m a home user and on a budget, then the intermittent drive system may be more economical and the time isn’t a factor, especially since I can make my own judgement call regarding the film’s condition and suitability for the intermittent drive. I might be better served by investing in a better light source and a cheaper (slower) camera that does multiple exposures.

If I’m renting out my equipment or doing transfers for profit, then the time savings and quality upgrade might make sense in this scenario, as it would if I know I have poor quality stock that can’t be risked in an intermittent drive system.

All that said, this thread is specifically about the image sensor, and to me, the benefit of Kinograph would be the ability to incorporate many (any?) types of sensor as budget and needs allow. To that end, what difficulties in the software and physical design of the machine would need to be solved in order to do this?


You could use almost any sensor, the main constraint is the physical size of the sensor and the container it is in. A Canon 6D is a very different size to a PGR Flea or a webcam for instance.
So mounting the sensor will be a different design depending on the sensor chosen, as the mounting and spacing of the sensor really is critical.
The theory is the same though, you need to be able to adjust the camera in multiple axes with quite fine precision to be able to frame and focus the film correctly, so the design would be similar regardless of the sensor, but would vary depending on size and lens choice.

I agree, that for most people, the intermittent design would be the best fit, but it may work out cheaper or less complex to have continuous motion, or it may not. Both need to be pursued.


I did not appreciate that. The films that I am working with, although up to 90 years old (fortunately none are nitrate-based), amazingly appear to be in good shape, but there’s no way that I would put them through a projector now. That’s why I have been working on a gentle pull-down mechanism of my own. However, I agree that something working along the lines of the Kinograph would be preferable, but I was initially concerned about the instability of the image framing, and the fact that I did not think I had a suitable optical capture device for continuous film motion.

Over the last few days, I think I have come up with something that may stabilise the frame triggering (I have emailed M about it for his opinion, but too early I guess to get a reply). The main problem then was to look at the digital camera aspect, the technology of which is something I knew least about, mainly because my past photographic experience has been with mechanical still film cameras.

The last few days of research on the subject have been enlightening, and turned up a few facts that surprised me, but answered a lot of questions that have been in my mind for a long time.

For example, I wondered how an image chip could capture a flash image, when the pixels on the chip must take a while to be scanned. I was relating this in my mind to the way a flat-bed scanner works, which was totally wrong. I now know that the image is captured globally as a charge stored in each pixel; then, once the shutter is closed, the whole image chip is scanned and the charges are converted to voltages. So the whole mechanism is very similar to that of conventional film cameras. So I then trawled around looking for an imaging camera that I could afford, that had most of the attributes required for telecine, but without a shutter and all the peripheral functions and glitz that go to make up an expensive DSLR with remote triggering.

I won’t bore you with the details of my journey, but I have finally found something that looks very promising, and for which I have found full technical details of the image chip it contains from its well respected manufacturer. With a wry smile, I can see how the marketing people who sell the camera have taken these details and dressed up their product to appear better than it probably is, but I have taken little notice of that - facts are what interest me. Here is the camera - MC500 <$100

It is advertised as being a microscope eyepiece camera, which put me off at first, but I guess that is one of their strongest markets, together with telescopes.

Before you throw your hands up in horror, download and read this chip spec. Yes, I know its imaging performance does not compare well with a high-end 22MP DSLR, but so what? By all accounts, it should be more than adequate to capture stills from up to 16mm, and at a decent frame rate too, >5fps. If the data sheet is anything to go by, it promises a dynamic range of just over 11 stops, which I think I would be happy with. It also has a standard C-mount lens port, which is a big bonus. There is some confusion, however, over the pixel resolution, because I think the quoted figure must include the three colours, which brings the real resolution nearer to 1.7MP. However, there are much more expensive cameras of a similar type out there with similar real resolutions which quote 1080p performance (see the Grasshopper3 from Point Grey - the GS3-U3-23S6C-C is the cheapest at $995), so I am not overly concerned.

Anyway, it’s too late - I’ve bought one to quell my curiosity, and it should be here tomorrow. I’ll let you know what I think.


I’ll be keen to see how it performs. With a 2.2um pixel size, I would expect its low-light performance (i.e. in shadow areas) to be quite poor, and noise to be a problem, but I haven’t tested anything from this supplier, so it will be good to see how it goes.

Do you have any calibration slides or SMPTE test film strips to test it with?
If not, it might be worth picking one up from here:
They are made for transparency scanners but are a great test for sensors and are quite affordable.


I’d be interested to see how it performs as well.

You might get a bit less than the quoted frame rate if you are triggering it, and it is a rolling shutter so you’ll either have to flash it or hold the frame still.


Hi Folks,

I just took delivery of the MC500 a few hours ago, and I thought I’d report on my first impressions right out of the box. I have not plugged it in yet, or installed the software - that will have to wait until tomorrow, but I have checked out the construction, and worked my way through the software user manual, and from what I see at the moment, I’m impressed.

For the low price, and the illustration I posted, I was expecting a lightweight plastic housing, with no visible screws, and an insubstantial C-mount. But not a bit of it - the body is all metal, and assembled with camera-quality screws, both countersunk for body parts and hex-socket set screws for locating and adjusting the C-mount. There is also a set of C-mount sliding tubes, again of apparently high-precision metal construction incorporating friction o-rings, presumably intended to interface with standard microscope/telescope eyepieces. These will be ideal to interface with various M42 camera lenses that I have after I have made a suitable adapter on my 3D printer. The sliding aspect will be perfect for adjusting macro-zoom.

I was also expecting just a slim driver file, but there is a pretty comprehensive capture program included as well that seems to have wrapped up the complete functionality of the capture chip, but appears not to have precluded the use of external frame-grabbing software if required.

I cross my fingers that I do not come down to earth when I test its performance tomorrow!

Frame rate does not really concern me as long as it’s above 3fps, and yes, I realise that the lack of a global shutter has the constraints you mention, but most DSLRs are in the same boat, and have to rely on focal-plane shutters to truncate exposure after flash, unless used in very subdued light, which is what I would do if using Kinograph with the MC500.

Will report progress very soon



It’s taken a couple of days, but I am now able to make my second report on this intriguing sensor. Rather than setting up elaborate resolution assessment experiments, I jumped in feet-first, and cobbled up something to quickly capture a single frame of an 8mm film that was shot over 50 years ago, and which is typical of most of the films in my late father’s collection (shot with a Bolex B8 - the best available in its day). I needed to know if it was worth digging in deeper to put numbers on all aspects of its performance, something that I know could take some time.

Firstly, I had to decide on the optics to use with the sensor. A few experiments showed me that one of my best classic still-camera lenses, reversed and used with extension tubes, should give the super-macro performance needed for this initial exercise. Then I had to design and make a couple of components to hold everything securely together, yet allow some degree of adjustment. I used my 3D printer to do this. Here is what I came up with, which fits a vintage Eumig P26 8mm projector which I have used before because of its bare simplicity coupled with superb quality engineering.

I just put a short length of film in the gate, and started playing. Here is one of the results, that was captured at 720p resolution.

It is immediately obvious that the image is ‘soft’ by modern standards, but that is all down to the image on the film, which is typical of all the 8mm stuff I have from that era. There may be a slight performance hit from my optics train, but I see well defined scratches that suggest it is probably not that significant. BTW, I purposely did not clean the film, as I would normally do.

This image has not been processed in any way, except for cropping, and shows how much the colour has drained over the years. So this test does not give much indication of the MC500 colour performance, except to say it looks just like this in an 8mm viewer/editor.

The main thing I wanted to know was its resolution capability. The picture below goes a long way to providing that.

It is an enlargement of the area in the first picture, near the bottom left, of the heads of two old ladies walking by, and includes a small tree branch. It is immediately obvious that the resolution of the sensor is several times better than this image needs.

Draw your own conclusions from this, but I am inspired to perform much deeper technical assessments of this sensor not only for telecine, but also for other things I am involved in which normally need a DSLR, mainly for lens interchangeability.

Finally (please don’t laugh!) I’m going to try dispensing with the lens altogether, and use a pinhole approach instead. You never know…


Which machine vision camera and lens?

Maybe you should reverse the film, so the emulsion side faces the other way. Some of the dirt in the image is sharp and some is unsharp, and I think the unsharp dirt is on the other side of the film.


Yes, I tried this originally, but the image focus was not visibly improved. However, the scratches were not so well defined, and I wanted to have something sharp to focus on.


The grain should show up much sharper than it does in the image. I understand that the camera that originally took the film images must have been out of focus, but the grain should still look sharp in your captured image.


The grain is visible, but I agree it could have been sharper. This is because my camera setup was across the room from my PC at the time, and I could not see that degree of detail easily as I adjusted focus. I was concentrating on the image itself, which I could not seem to make any sharper. I had some difficulty adjusting focus on the film scratches, which I do not think were on the emulsion side (where the grain is), but I wanted to see them to help assess the lens/sensor capability.

I have trawled over the internet for several months now looking for a definitive technical assessment of the equivalent digital resolution of Std8 film using the best consumer-grade camera lenses of the day. Many sources say that it is unlikely that these images could even match the performance of SD TV (or DVD), whilst other assessments, based on microscopic measurements, indicate the range 640x480 - 720p. From my own experience, I think I would agree with all of this.

We are talking here about the capture resolution, not the final production resolution, which may be much higher, with some justification as it allows post-processing to enhance the perceived sharpness and quality of the movie. A modern example of this is the way in which DVDs can be upscaled to HD, and Blu-Ray movies can be upscaled to 4K, with a perceived improvement in quality - which common-sense tells us is technically impossible as its original information bandwidth cannot be increased.

Another factor to bear in mind is the difference between scanning still and moving images. With still images, oversampling can produce an enhanced definition of film grain, which is stationary and can improve the visual impact of the result sometimes. With video the converse is true, because the grain is not static and can be quite distracting. I do a lot of video editing of digital camera footage, and it is quite common to have to add low-pass filtering to suppress this kind of artefact (OK, this is image noise, not film grain, but the argument is similar in that neither carries original image information).
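As a minimal sketch of that kind of low-pass filtering, here is a plain 3x3 box blur applied to simulated noise (real denoisers in editing software are temporal and edge-aware, but the principle of trading fine detail for reduced noise is the same):

```python
import numpy as np

rng = np.random.default_rng(1)

# A flat grey frame with simulated grain/noise on top
frame = 128 + rng.normal(0, 12, size=(240, 320))

def box_blur(img):
    """3x3 box-filter low-pass: average each pixel with its 8 neighbours."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

smoothed = box_blur(frame)
print(np.std(frame), np.std(smoothed))  # noise std drops to roughly a third
```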

Believe me this is a complex subject, and not easily equated with intuition.



There is a lot of misinformation on the internet about film stocks and resolution.
We have tested various stocks directly, shooting resolution test cards.
Super8 easily out-resolves 720P, and can resolve 1080P detail, but the parallel-line test is slightly obscured by grain in lower light.
Grain is interesting, as unlike sensor noise the grain typically also carries some colour information that is true information and not noise.

Again, all of this depends on your goal. If it is just to digitise some home movies for friends or for yourself, then a 720P capture from Regular8 or Super8 looks great, and scales well to 1080P for modern televisions. Also, a lot of amateur footage is out of focus anyway, or was shot with cheap optics, so in many cases 720P would be the upper end of its resolution; but film shot and processed well can certainly out-resolve 720P - grab a SMPTE test film on Super8 and you can verify this yourself.

If you are dealing with film archives, then you really want to scan Super8 at around 3K minimum, and downsample the final result to 1080P or 2K depending on their requirements.

Scanning at 4K even for Super8 has clear advantages in restoration; a large benefit is with stabilisation. If you scan at 4K then the stabilisation pass can be kept to whole-pixel values, which keeps the image sharp and isn’t interpolating information. It also avoids a problem called ‘snapping’, where frames that have been stabilised at a sub-pixel level are slightly softer than frames that were stabilised on a pixel boundary, causing the moving image to appear to snap in and out of sharp focus when in motion.
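A one-dimensional toy example shows why whole-pixel shifts keep things sharp while sub-pixel interpolation softens them (the data here is made up, and real stabilisers work in 2-D, but the effect is the same):

```python
import numpy as np

# A 1-D "scanline" with a hard edge, standing in for fine image detail
line = np.array([0., 0., 0., 0., 255., 255., 255., 255.])

# Whole-pixel stabilisation: just re-index; pixel values are untouched
whole = np.roll(line, 1)

# Sub-pixel (0.5 px) stabilisation via linear interpolation:
# each output sample is the mean of two neighbours, softening the edge
sub = 0.5 * (line[:-1] + line[1:])

print(np.max(np.diff(whole)))  # 255.0 - edge contrast fully preserved
print(np.max(np.diff(sub)))    # 127.5 - edge contrast halved
```

Mixing the two frame-to-frame is exactly the ‘snapping’ effect described above: sharpness varies depending on whether each frame happened to land on a pixel boundary.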

The same is true when interpolating information to replace what was lost to a scratch or similar: you get fewer artefacts working at higher resolution and retain more genuine information.
The final result, downscaled to 2K, then looks significantly better than scanning at 2K to begin with (and miles better than scanning at 720P).
We have done tests at 6K and 8K and have found no difference in the final result with Super8 or Regular8; the benefits appear to top out at around 3.5K.
Of course I’d rather have a larger dynamic range than extra resolution in most cases, but it is possible to get both now if you use a large sensor, with well designed electronics.

For general use, a 2K scan still allows good stabilisation and detail for 720P final delivery. And for just enjoying old family home movies, a 720P scan on a sensor with great dynamic range will look better than a higher-resolution sensor with poor dynamic range: the footage is most likely soft and poorly lit, so you will want to extract as much information out of the shadows and highlights as possible without losing it to noise.


Speaking of cameras in the under-$100 class, I am starting with a Leopard Imaging camera based on an OmniVision OV5640.
The data sheet says 5MP, adjustable focus down to 5mm. I don’t have full specs, but I decided to make a start, learning along the way. It does have a rolling shutter; the format is 1/4", with 2592 (H) x 1944 (V) pixels at 1.4um x 1.4um, and a focal length of 2.8mm.
The unit has an M12 lens, which means it is compatible with other lenses available on eBay. However, for Super8 film, the magnification at a reasonable distance from the film is not adequate to provide a large image in the software image display. The lens has an FOV (H) of 72 degrees.
I am using AMCap software (free demo).
A few questions if you can spare me the time;

  1. How can I calculate the proper lens parameters to get more magnification? I’m not sure what FOV means, but assume it is Field Of View. Is there a graphical solution?
  2. With this camera and similar units, is it normally possible to trigger the shutter from a remote switch, as Matthew did with Kinograph?


But isn’t this the case with all RGB sensors (with Bayer filter) - also the 5MP Blackfly? As far as my limited understanding goes, if the manufacturer states that the camera has a 10MP color sensor, in reality it will have 2.5M red, 2.5M blue and 5M green pixels.
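Yes - for a standard RGGB Bayer mosaic the per-channel arithmetic works out exactly like that:

```python
# On a Bayer (RGGB) mosaic, half the photosites are green and a quarter
# each are red and blue; a "10MP colour sensor" therefore breaks down as:
total = 10_000_000
green = total // 2   # 5,000,000
red   = total // 4   # 2,500,000
blue  = total // 4   # 2,500,000
print(red, green, blue)
```

The demosaicing step then interpolates the two missing colour values at every photosite, which is why the "real" luminance resolution is lower than the headline pixel count.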

I’m very interested in your gate/claw research and in finding an affordable camera that could be controlled by a computer for the purpose of digitizing 8mm / Super8 film. Have you had time to do any more testing on the MC500? It would be interesting to see more samples from it. Can you get full-resolution captures from it to your computer? Do you see any way to programmatically trigger it and download the full-resolution image to a computer via USB?

Will drop you an e-mail regarding your claw mechanism excel sheet!



You didn’t mention the exact board you have, but I’m going to guess that it is this one: ?

This sensor is very similar to the Raspberry Pi camera’s sensor (OV5647).

You would want a lens with a smaller field of view, or longer focal length to get a higher magnification. You could also move the lens further away from the sensor (extension tube) to increase the mag, but that will affect your focusing distance and depth of field.
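A rough thin-lens calculation sketches the numbers involved (the frame width, sensor width, and working distance below are assumed for illustration, not taken from your board’s data sheet):

```python
# Assumed dimensions, in mm
frame_w  = 5.79   # Super8 camera aperture width
sensor_w = 3.6    # approximate active width of a 1/4"-format sensor

# To fill the sensor width with the frame, the required magnification is:
m = sensor_w / frame_w                 # about 0.62

# Thin-lens relation: 1/f = 1/d_o + 1/d_i with d_i = m * d_o, so for a
# chosen object (working) distance d_o the focal length works out to:
d_o = 50.0                             # mm, an example working distance
f = d_o * m / (1 + m)                  # about 19 mm
print(round(m, 2), round(f, 1))
```

The stock 2.8mm lens is a long way short of this, which matches what you are seeing: at any reasonable working distance it projects far too wide a field of view onto the sensor.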

Here is a calculator along with some explanation of the optics:

there is this also:

  1. You could find a way to trigger it over USB. I think whatever library you’re using for image acquisition could handle that for you, but it would probably require a little programming. OpenCV maybe?


Just a question for you all:

Some of the challenge seems to be finding appropriate lenses, extension tubes, etc to get a really good view of the film. Has anyone looked into custom lenses? I just ask because with all the cell phone camera lenses on the market, they seem to be quite cheap. I know putting one of those on a high-end sensor wouldn’t be a good idea, but just wondering if having a lens for this specific use would be feasible if produced in some decent amount of quantity.

A custom lens might be more expensive than buying used versions of ones already manufactured, but it seems going that route still has its own set of pitfalls, and isn’t really a good long-term solution.

Anyone know of companies that do custom optics, or even who these companies are that create the lenses for cell phones or even the OEM lens that comes with the cameras we’ve been discussing?


Good thinking, but unfortunately it is too expensive.
Cell phone lenses are high quality and cheap as they are literally manufactured in the millions or hundreds of thousands, and they are tiny.
The larger the lens, the higher the expense.
For a small project you will always get a higher quality lens buying one off the shelf, and the optics will be better for the price.

There are high definition C-Mount lenses available from Fuji, ThorLabs and many other machine-vision manufacturers that would be suitable if you wanted to standardise on something with guaranteed availability.


@digitap. We have a factory in The Netherlands. They print custom lenses.