[Help Request] calculating PC requirements

Hi all,

I’m trying to figure out the minimum requirements for the PC that will control Kinograph. Does anyone have experience calculating that kind of thing?

Ideally, I’d like to know what our requirements would be in the following areas:

  • CPU
  • GPU
  • file write speed (which determines our frames-per-second speed)

I can provide specs on the cameras I intend to test with (bit depth, resolution, etc) and some notes about how I think the system will work as a whole.
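Here's my rough understanding of how the file write speed falls out of the camera specs — a sketch with placeholder numbers (not actual Kinograph targets), which is exactly the kind of thing I'd like someone to sanity-check:

```python
# Back-of-the-envelope write-speed estimate.
# All camera numbers below are placeholders, not real Kinograph specs.
width, height = 4096, 3000   # sensor resolution (assumed)
bits_per_pixel = 12 * 3      # 12-bit RGB (assumed)
fps = 15                     # target capture rate (assumed)

bytes_per_frame = width * height * bits_per_pixel // 8
mb_per_second = bytes_per_frame * fps / 1e6
print(f"{bytes_per_frame / 1e6:.1f} MB/frame, {mb_per_second:.0f} MB/s sustained write")
```

With those placeholder numbers, the disk (and the USB link) would need to sustain roughly 830 MB/s — which is why the storage question matters so much.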

Thanks in advance for any help or connections to people who might be able to help.


I don’t know how to calculate this, but I can say what I’m using.

My computer has:

CPU: i7 4790
GPU: GTX 1050ti (4gb)

The GTX 1050 Ti is the “cheapest” card that can handle HDR10, H.264, H.265… with the NVENC/CUDA feature.

But I think the most important hardware component is a good USB host controller. I use one that comes from FLIR (ACC-01-1201: Renesas uPD720202, 2-port USB 3.0 host controller card), which can handle the bandwidth needed for the data stream.

An SSD is obviously fast, but if you need lots of TB that gets pricey. Maybe set up HDDs in a RAID? That can boost speed.
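To give a feeling for "lots of TB", here's a rough sketch — the scan format below is an assumption, not a measurement:

```python
# Rough storage estimate for one feature film -- format assumptions only.
frames = 90 * 60 * 24                  # 90 min at 24 fps
frame_mb = 4096 * 3112 * 6 / 1e6       # 4K frame, 16-bit RGB, uncompressed
total_tb = frames * frame_mb / 1e6
print(f"~{total_tb:.0f} TB per film")
```

Roughly 10 TB per film at those settings — so a RAID of big HDDs starts to look attractive for anything beyond a few reels.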


Hi @matthewepler. I have some experience with calculating file sizing/transfer requirements. I would suggest splitting your requirements into 3 areas:

  1. From capture to storage - this would give you some key requirements on the throughput the PC would have to handle (once a frame is captured).
  2. Capture processing power - depending on the camera and the demands it places on the PC.
  3. Transport control - what you need to maintain accurate, real-time coordination. I assume you would be using some hardware controller (Arduino?) to keep some of those tasks off the PC, but maybe not.
    I may be able to help (primarily on item 1).

Thank you @PM490. This is helpful!

Just my two cents on this, in a somewhat broader perspective:

In my view, there are basically three different hardware architectures available which might be combined to cover different aspects (hardware control/image acquisition/post-production) of a telecine:

  1. A high-end PC running Win10 or Linux: this hardware is ideal for post-processing the captured images. You want to register each frame perfectly, cut out the area of interest and maybe apply some initial color or gamma correction. Also, the final editing would presumably be done on PC hardware. The advantages of a high-end PC are:
  • You can connect or disconnect large disk space to these machines and they handle this reasonably well. However, from my experience, a Linux box has fewer problems moving or deleting 50,000 image files comprising hundreds of gigabytes; Win10 sometimes takes much longer at such a task.
  • The computing power which comes with today's machines is immense. Still, processing thousands of 4k images takes its time. If (unattended) computing time is no issue, lower-end or even, by today's standards, obsolete hardware can still be used. However, a good gaming PC is probably not a bad start.
  • High-end PCs equipped with good graphics hardware open up the possibility of GPU-accelerated processing, which would speed things up substantially. However, it is generally no easy task to port image algorithms to GPUs. Also, there exist extensively optimized computer vision libraries for standard CPUs (OpenCV).
  • More modern machine vision cameras offer USB3 connectivity. A modern enough PC is probably the only available hardware that can harvest the advantages of the USB3 interface. However, software support is currently (end of 2019) rather limited. With most machine vision cameras, you have to use either the supplied software or a black-box library to capture image data. You might not be able to control certain variables (like white balance) the way you would want to.
  2. Small-format Linux computers, most notably the Raspberry Pi. They run Linux and are, at least in their latest version, computationally quite powerful.
  • There exist cameras which can be directly attached to these units. These cameras are based on mobile phone hardware and are in principle able to give you high data rates because they use a quite fast direct interface. There exists an easy-to-use library for this kind of camera (picamera). However, these cameras have their quirks - some features you might want to use do not work as advertised. Several telecine approaches make use of this combination.
  • These computers with intermediate processing power feature general IO pins. So they can also be used to drive stepper motors and work with other hardware, especially if combined with appropriate intermediate hardware. One problem with the Linux operating system is that it is not a real-time operating system, so anything which needs precise timing might be asking too much of it.
  • The USB-connectivity of the Raspberry Pi series is not great. So I doubt it would be much fun to connect a USB3-powered machine vision camera to a Raspberry Pi.
  3. Arduino and similar hardware. These are microcontrollers and as such much more suitable for timing-critical processing. These units feature several standard hardware buses like I2C and SPI, plus a varying set of general-purpose IO pins. Also, most have some sort of analog-to-digital input (needed, for example, if you want to read the position of a potentiometer). USB connectivity is for all practical purposes nonexistent. There exist several hardware levels:
  • The normal Arduino. This is an 8-bit machine with a rather reduced amount of program and variable memory. The best buy is probably the Arduino Nano, as this unit has a few more pins available than the standard Arduino, as well as a smaller form factor.
  • The Arduino Mega: as the name indicates, just a bigger machine. Often used as the basis of 3D printers and cheap CNC machines. Basically “more pins”.
  • The Teensy: very similar to the Arduino boards, with nearly the same software support, but featuring a 32-bit processor with higher speed compared to standard Arduinos. Probably overkill if you just need to drive a few steppers.

From my own experience in film production, I can say that, at least for the intermediate and final production steps, you want the fastest machine your money can buy, with lots of fast disk space.

Thanks for the write-up @cpixip. Here are what I’m currently using/considering:

Machine Control
Arduino Mega (easy to find, plenty of pins) with a custom shield (unused pins are available via screw terminals)

Image Capture and Processing

  1. Bring-your-own PC. We provide the recommended specs.

PRO: the user can spend as little or as much as they can afford on the PC of their choice.
CON: they may not get the performance they want, and the overall cost may become too high

  2. Onboard Linux, bring your own storage/post-processing
    We might use something like the NVIDIA Xavier (for GPU acceleration) to make a machine that is super robust and can handle machine control, frame detection, and post-processing. We get you to a finished product (image files and video files), and if you want to edit, color correct, whatever, then you take the files to another machine and do it there.

PRO: increased performance, can swap out the hardware as faster options become available, fast post-processing
CON: increased price, still doesn’t provide you with an OS where you can run your Windows software for color correction or editing. (Although DaVinci Resolve does support CentOS)

  3. Windows NUC sold with Kinograph
    An off-the-shelf Windows machine or perhaps something like a UDOO Bolt. We load the software, we test with it, and provide storage and anything else you need to use Kinograph. An all-in-one package that you could use to scan and process (color, editing, whatever).

PROS: cheaper than the NVIDIA Xavier, a flexible OS
CONS: may not be fast enough for some people, would need to write all software to be Windows compatible

What are your thoughts?

I think an Arduino Mega is a good choice for machine control. This board certainly has sufficient input and output pins and close to real-time response.

Note in that context that within the CNC, laser engraver and 3D printer communities, there are already very cheap add-on boards available for the Mega which are able to drive several steppers.

Slowly, the 3D printing community is drifting toward faster 32-bit processors, mostly to be able to calculate more evolved speed profiles. But currently, the Mega is still the main option. Also, lately, the stepper driver ICs have gained more intelligence (“silent step mode”).

A little bit on my background with respect to the following comments: I have done image processing on anything from FPGAs to PCs and have also spent a lot of my time developing neural network algorithms.

The most universal option from a user perspective is certainly a PC. Most people already have one. It might not be a fast machine, but most of the time it’s upgradable. If not, you can always wait longer for the results or speed things up by lowering the resolution. You do not always need to work in 4k or even higher resolutions. From my experience, old Super-8 material does not warrant the use of 4k - unless you want to record and show the viewer not only the content, but also every single film grain… :slight_smile:

(Archivists might opt for that film grain. However, for Super-8 material, the AviSynth scripts of Fred Van de Putte are famous. But these scripts do everything to get rid of film grain and camera shake to enhance the viewer experience.)

Continuing: you might want to base software development on a (small) community of freelancers - if so, you need a sufficiently broad base of more or less interchangeable hardware. Which brings me to the next point:

Utilizing a special Linux machine like the NVIDIA Xavier or the like risks that this hardware combo might be rather short-lived (such things have happened before) and that the supporting community is rather limited in size. So it might be difficult to find continuing support for the hardware chosen. In any case, I think these “special” computers are just a host processor (based on a standard ARM design) and a more or less standard GPU for “AI” and “image processing” - however, if you bought your desktop computer with video editing in mind, you probably already have a much better suited machine at your desk… (the same comment applies to any small-form-factor PC like a Windows NUC).

Trying out another perspective: if one defines “the machine” as just the core mechanical setup for advancing the film material, issuing trigger signals for the camera, and maybe controlling the lights, then a combination of an Arduino-like microcontroller with a tiny host which can talk LAN and/or USB is sufficient as a hardware setup. Probably the cheapest option around at this time would be a Raspberry Pi (Linux host) + Arduino Mega + host shield with stepper drivers.
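As a sketch of that division of labor, the Linux host could talk to the Mega over a simple serial protocol. The opcodes and frame layout below are invented for illustration; real transmission would go through pyserial, which is omitted here so the framing logic stands alone:

```python
# Hypothetical host-side framing for a Pi <-> Arduino serial link.
# Frame layout (invented): [0xAA start byte, opcode, argument, checksum].

def encode_command(opcode: int, arg: int) -> bytes:
    """Frame a one-byte command plus argument with a simple checksum."""
    checksum = (opcode + arg) & 0xFF
    return bytes([0xAA, opcode, arg, checksum])

def decode_command(frame: bytes):
    """Validate a frame and return (opcode, arg); raise on corruption."""
    if len(frame) != 4 or frame[0] != 0xAA:
        raise ValueError("bad frame")
    if (frame[1] + frame[2]) & 0xFF != frame[3]:
        raise ValueError("checksum mismatch")
    return frame[1], frame[2]

ADVANCE_FRAME = 0x01  # invented opcode
print(decode_command(encode_command(ADVANCE_FRAME, 1)))
```

On the Pi side, sending a command would then be something like `port.write(encode_command(ADVANCE_FRAME, 1))` on a pyserial port, with the Mega parsing the same four-byte frames in its loop().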

Explicitly taken out of the setup sketched above is the “picture taking”. The reason is the following: how the hardware/software for frame capture needs to be realized depends highly on the camera chosen.

Currently, most machine vision cameras feature at least C or C++ libraries for Windows-based machines. So the straightforward path would be to write capture software within the C or C++ environment. Or you would have to use whatever software is available for that camera.

Specifically, general libraries such as OpenCV currently lack sufficient support for USB3 cameras. I recently tried to get a See3CAM_CU135 with a USB3 interface working within the OpenCV environment. The basic functionality is there, no question, but the white balance of the camera, for example, cannot currently be set via the OpenCV lib.
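One possible workaround when the capture library will not expose white balance: correct it in software after capture. A minimal gray-world sketch in NumPy — a post-hoc approximation, not a replacement for the camera's own white balance:

```python
import numpy as np

def gray_world(frame: np.ndarray) -> np.ndarray:
    """Scale each channel so all channel means match the overall mean."""
    means = frame.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means
    balanced = frame.astype(np.float64) * gains
    return np.clip(balanced, 0, 255).astype(np.uint8)

# Tiny synthetic frame with a strong blue cast, to show the correction.
frame = np.zeros((2, 2, 3), np.uint8)
frame[..., 0] = 80
frame[..., 1] = 80
frame[..., 2] = 160
print(gray_world(frame)[0, 0])  # prints [106 106 106]
```

Gray-world is crude (it assumes the scene averages to neutral gray), but it shows that some camera controls can be recovered in post when the driver won't cooperate.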

Ok, let me throw yet another perspective into the discussion: who will be the people using the next generation of Kinograph? Librarians and archivists, just wanting to preserve important material? They would probably not bother with, or even shy away from, most post-processing steps. Or the experimental filmmaker using old analog technology in the digital age? This person would certainly rather see a lot of post-processing options available on the system. Or the hobbyist, using the Kinograph design as a base for their own ideas? They might only be interested in the mechanical setup. Or, probably, all of the above?

I have no idea about the target group, but my gut feeling is that one could draw a line between capture (picture taking/hardware control/storing to disk) and post-processing (frame detection, color grading, etc.).

Being only software, the whole post-processing could certainly be written in such a way that the software runs on all major operating systems.

Options for programming languages which come to my mind are C/C++ (not so easy to support multiple OSs), Java (in its “Processing” disguise rather easy to use; it is even possible to utilize shaders there to speed up certain image-processing tasks) or Python.

I must confess that Python is where I ended up after a long journey starting with Basic, Pascal, Fortran 77, PL/1, …, and years of C/C++ programming.

One additional note in this context: I know that you can write Python interfaces to existing C/C++ libraries, but so far I have never done this myself. But that might be an (easy?) option for integrating existing camera libraries.
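For what it's worth, the standard-library ctypes module makes such an interface fairly easy to sketch. Below, the C math library stands in for a vendor's shared camera library — just to show the mechanics, not any real camera API:

```python
import ctypes
import ctypes.util

# Locate and load a shared C library.
# libm here stands in for a camera vendor's .so/.dll.
path = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(path)

# Declare the C signature so ctypes converts arguments correctly;
# a vendor SDK would need the same treatment for each function used.
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(2.0))
```

The tedious part is transcribing the vendor header's structs and signatures, but no C compiler is needed on the user's machine, which matters for a cross-platform tool.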


A couple of comments regarding the perspective presented by @cpixip:
I have been scanning 8 and Super 8 with a frame-by-frame DIY and I would disagree that 4K is not warranted. A higher capture resolution actually allows for much better results at HD. I do agree that many may not wish for the extra work/storage/processing, but it is certainly worth it. And while we may have different perspectives and perhaps disagree on 8, there is no question that for 16 or 35, going for 4K is more than justified.

I fully agree with your perspective on the Mega, and in regard to using silent controllers for steppers. I am presently rebuilding the DIY frame-by-frame and have been using one of those (TMC2208), and it is a great improvement for controlling the steppers. I also came across this paper from ATMEL (maker of the Mega's processor) which provides amazing insight into stepper control.

@matthewepler in regards to your options, and from prior lives: I think cameras will continue to improve, and keeping the machine control as self-contained as possible will give the Kinograph the opportunity to be upgraded at the sensor as cameras become better/cheaper. As for whether it should include the compute hardware or not… maybe both. Structuring the machine as suggested by @cpixip can easily open the door to a simple version for access, or a complex version with all the bells for archival correction.

@cpixip thank you for your thoughts on processing. You make a lot of great points. I think your arguments about the tight coupling between the machine and a specific computing device (ala NVIDIA Jetson) are very strong. I have been waffling between developing a cross-platform solution vs the baked-in Linux version and I think you’ve pushed me to the former.

In that direction, I’ve been thinking of two different paths: Electron app that calls Python files as needed, or a full python app. Any opinions there? I can write both Javascript and Python but have never built a cross-platform app with either.

As for the libraries, I did not know that OpenCV has poor support for USB. The SDK we would likely be using is the one offered by the camera manufacturer (FLIR), called Spinnaker. They have versions of the SDK for C++, C#, .NET and Python. So far I have not been able to find documentation of the Python support, although a rep from the company told me it exists. I’ll email them again this morning to track it down. Some cursory information is available on the product page here. I would very much prefer to write in Python.

@PM490 I very much like the idea of keeping the machine control separate from any software that runs imaging functions on the results of the camera. This will only help us in the long run. I like the idea of offering both - we can provide the PC for you, or you can bring your own. Flexibility for the win!


Excellent write up cpixip! You made the exact point I wanted to make: It doesn’t exactly need to be completely one or the other. There’s something to be said for a combination, especially the one raspberry pi + arduino: Gets rid of the realtime disadvantage of the raspberry pi. The pi as a host computer could do the high level functions while the arduino controls all the things that require precise timing. And having a common host computer among all the adopters would definitely have the benefit of better software support.

As far as storage goes, the SD card on RPI could definitely turn into a challenge. I’d rather use a USB connected SSD. USB 2.0 is much slower than most SSDs but should be fast enough for uncompressed frame captures. Or the images could be shipped to a host computer over the network getting rid of storage needs altogether.
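A quick sanity check on that claim — the ~35 MB/s effective USB 2.0 throughput and the frame format below are assumptions:

```python
# How many uncompressed frames per second fit through USB 2.0?
usb2_mb_s = 35                        # realistic USB 2.0 bulk throughput (assumed)
frame_mb = 2048 * 1536 * 3 / 1e6      # 2K frame, 8-bit RGB, uncompressed (assumed)
print(f"~{usb2_mb_s / frame_mb:.1f} uncompressed frames/s over USB 2.0")
```

Call it 3-4 fps at 2K over USB 2.0 — fine for a frame-by-frame scanner that advances the film between captures, but nowhere near real-time playback rates.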

Here are the Spinnaker SDK download links for all platforms (MacOS, Windows, Linux): https://flir.app.boxcn.net/v/SpinnakerSDK

The rep got back to me and said that the Python library and examples are included in the download under a directory titled PySpin.

@pm490 @cpixip We’re getting dangerously close to a whole discussion about the virtue of Nyquist frequency in spatial resolution here :smiley: :smiley:

Oversampling is really good for the front half of the process (detecting frames) and really bad for the back half of the process (why did I just burn through 300+ GB of disk space for my 10-minute reel of exquisitely resolved grain?) - and coming from the digital cinema world where everyone is tripping over themselves to shoot 6K or 8K for 4K (or lower!) deliverables, I can tell you it makes a difference.
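That 300+ GB figure is easy to reproduce from first principles (the frame rate and scan format here are assumptions):

```python
# Storage for one 10-minute reel at 4K -- format assumptions only.
frames = 10 * 60 * 18                  # 10-minute Super-8 reel at 18 fps
frame_gb = 4096 * 3072 * 3 / 1e9       # 4K scan, 8-bit RGB, uncompressed
print(f"~{frames * frame_gb:.0f} GB per reel")
```

And that's at 8 bits per channel; at 16 bits it roughly doubles, which is how a short reel eats most of a terabyte.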

Just curious, do you plan to archive/deliver your R/S8mm in 4K? I also realize that some S8 is in really good shape and shot on really nice fine-grain stock, so if any flavor of 8mm warranted 4K, that probably would.

@johnarthurkelly agree, maybe not all 8 / Super 8 is worth the trouble. I’ve scanned about 100 reels of 8 and Super 8 with a frame-by-frame 24MP 12 bit sensor, and file handling is no fun. But before getting into the weeds on Nyquist, here is someone’s Super 8 @ 4K… https://www.youtube.com/watch?v=JNMd_jsoOKI
PS. the link is not my scan, these guys do a great job.


@matthewepler: … to just add some personal taste/comment from a guy who started working with computers in the 70s of the last century using punch cards and line printers: as far as I know, Electron apps carry a lot of wasted RAM with them (100+ MB just for the Chrome stuff). That probably carries over to quite some computational overhead as well, as a lot of interface libraries need to be called to achieve a simple task. But I might just be old-fashioned…

With respect to the OpenCV-interface: the last time I looked at it, it seemed that the interface code for Windows was just using one of the old interfaces available there, like the DirectShow-interface, for example. While you could force the use of alternative interfaces, OpenCV out of the box does not support camera-specific options. Which you most certainly want to use in an application. So using a vendor-supplied interface like FLIRs Spinnaker SDK is certainly the better option. By the way, I had a really quick look at the interface specs of the Python interface of the Spinnaker SDK and it looks quite complete.

@PM490 - I think the example you posted is from a negative scan. “Ocho y Pico” are certainly doing a great job with such material.

However, old color-reversal Super-8 stock usually gives much less definition. Here are some additional thoughts:

  • If it's not Kodachrome 25, old film stock is going to have quite noticeable grain even at rather low scan resolutions. Kodachrome used a totally different technology compared to other companies like Agfa, for example. From my experience, Kodachrome grain is going to show up very slightly at scan resolutions of about 2-2.5k and above.
  • In the old days, during normal projection, grain got averaged spatially (by deficits of the optical system during projection) and temporally (by your visual system). The original film grain is quite a noisy thing, displaying in a fixed spot rather dark values immediately followed by rather bright ones. What the viewer experienced, however (due to the spatio-temporal averaging mentioned above), is an average color, maybe with a slight overlay of color/intensity randomness. That was in a sense the “typical movie experience”. Now, if you want to recreate that original experience, you certainly want a scan resolution high enough to image the grain structure (or: add it artificially in post :slight_smile: ). Seriously, if you are aiming at archival copies, you might want to have that grain defined precisely. On the other hand, if you just want to share a few family movies via YouTube or WhatsApp, you might not want any grain at all: the fine structure of the grain increases your bandwidth requirements substantially with standard encoding schemes like .mp4. You might even opt to deshake the original material (which reduces bandwidth as well), recut it and finally adjust the color balance. That would be as far from an archive copy as you can imagine. Well, it all depends on your goal, I guess.
  • Coming back to the grain structure: a specific image spot randomly alternates between rather dark and rather bright values, which poses another challenge: the camera needs to be able to resolve these brightness variations faithfully. That is not so easy with a standard camera. The illumination dynamics a good Kodachrome 25 was able to capture and display during a normal projection are quite challenging to capture with modern digital equipment (negative stock is much easier here!). The point I want to make: resolution is only one of several important properties your digitizing device needs to have.
  • An additional point with respect to Super-8 material I noticed: the qualities of the cameras used for filming are quite different. There were some high end cameras around which really require at least a 2.5k resolution to capture all detail recorded. Even so, you might notice already at that scan resolution a bad color-compensation of the camera lens, especially in the corners of the frame. On the other hand, the not-so-expensive camera I used in the 80s is so blurry that even a 2k format is sufficient to capture all image details recorded. Which brings me to my last comment:
  • The amount of processing time and disk space required goes up dramatically if you increase your scan resolution. @johnarthurkelly is right here: oversampling is great for frame registration, for example. After all, sprocket holes are not subject to any film grain noise. However, if you want to register grainy image areas precisely, you probably even want to smooth out the film grain before registration, because the grain noise spoils your registration computations! Anyway, later on you just want to work in a resolution high enough for your final medium. From my experience, twice the final resolution normally gives you enough headroom for most post-processing steps. Never work in 24-bit mode - always use at least 48-bit or, even better, HDR image formats like OpenEXR.
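To illustrate the "smooth first, then register" point: frame registration can be done by FFT phase correlation, with a blur applied beforehand to suppress the grain. Here is a pure-NumPy sketch on a clean synthetic image; a real pipeline would more likely use OpenCV's phaseCorrelate or template matching on the sprocket holes:

```python
import numpy as np

def register(ref: np.ndarray, frame: np.ndarray):
    """Return the (dy, dx) shift that maps `frame` onto `ref`,
    estimated via FFT phase correlation."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame)))
    dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    dy, dx = int(dy), int(dx)
    h, w = ref.shape
    # unwrap the circular peak position into signed offsets
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
frame = np.roll(ref, (3, -5), axis=(0, 1))   # simulate a misregistered frame
print(register(ref, frame))                  # recovers the (-3, 5) correction
```

On real scans, grain noise flattens the correlation peak; pre-blurring both frames (or correlating only on the noise-free sprocket-hole region) restores a sharp, reliable maximum.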

… - to illustrate the above comments a little bit.

Below is a raw scan of a 42 year old Agfachrome Super-8 movie. Camera was on a tripod, focussed on the center portion of the Japanese transistor radio shown in the scan. Scan resolution is 2.5k via a Raspberry Pi v1-camera.

Film grain is clearly visible and becomes more pronounced when any sharpening step is involved in the post-processing pipeline.

It seems that the camera has about the same resolution as the coarse film grain of this specific Super-8 stock.

The camera was actually not so bad - a typical Super-8 movie camera of the late 70s, a REVUE S8 De Luxe Power Zoom, produced by Chinon, Japan. However, it was no match for the high-end cameras of that time.

Also, while Kodachrome 25 offered a much better resolution, it was more expensive than Agfachrome and had only about half of the ISO sensitivity.

For comparison, here’s a Kodachrome 25 frame from a 1988 movie, with the same 2.5k scan resolution and a presumably better camera. However, I do not have any information on the camera used:

The film grain of the Kodachrome 25 is obviously much finer, but it also starts to get noticeable for that material at that resolution (again, this is an unprocessed, raw scan).

Note: if you right-click on the images, you can download the full resolution.

Thanks for the comparison shots and advice, @cpixip!

@cpixip Thanks for the write-up and comparison shots. I already did about 120 rolls of 8 and Super 8 in Raw on a DIY (about 12 TB), and completely agree that it is a pain to handle. But after looking at the results and some areas of improvements on the DIY, I am going to re-scan a few of those with a V2 of the DIY scanner. If anything, I am interested in better capturing the grain. And you are completely correct on your analysis, but I am going for a second round and staying well above 2K.
Now while we may debate if it is worth it for 8/Super 8… since the target is 35mm and the subject is the computer power, I think the target should be 4K resolution… unless @matthewepler is targeting the work primarily for access, which is a valid approach too.

That’s awesome, I was blown away by how good that looks and then I saw it was Vision3 50D stock :laughing: Even in 8mm it looks less grainy than some of the 16mm stock I shot in the 90’s and early 00’s!

While you are correct in assuming that access is my primary goal, I also plan to support 4K.

How open does the code for desktop app need to be? One thing you might consider is using a cross platform programming environment like Xojo - you can easily compile for Mac, Windows or Linux and it’s fairly straightforward to interface with external libraries or DLLs.

There’s a lot of added complexity with making something cross platform, so I’d suggest building it for the most common platform to get it out the door, but do it with cross platform compatibility in mind, so that once it’s shipped, you can add in another platform, one at a time.

Even if you do the programming in Xojo (which is a closed application) there’s no reason you can’t make the source code open source on github. It is more limiting because it’s not a freely available development platform, but it really is nice for rapid development. I’ve been using it for years to build applications we use in-house and we’re using it to build the GUI for our scanner.
