[Help Request] calculating PC requirements

Hi all,

I’m trying to figure out the minimum requirements for the PC that will control Kinograph. Does anyone have experience calculating that kind of thing?

Ideally, I’d like to know what our requirements would be in the following areas:

  • CPU
  • GPU
  • file write speed (which determines our frames-per-second speed)

I can provide specs on the cameras I intend to test with (bit depth, resolution, etc) and some notes about how I think the system will work as a whole.

Thanks in advance for any help or connections to people who might be able to help.

Matthew

I don’t know how to calculate this, but I can say what I’m using.

My computer has:

CPU: i7 4790
GPU: GTX 1050 Ti (4 GB)

The GTX 1050 Ti is the “cheapest” card that can handle HDR10, H.264, H.265… with the NVENC/CUDA feature.

But I think the most important hardware component is a good USB host controller. I use one that comes from FLIR (ACC-01-1201: Renesas uPD720202, 2-port USB 3.0 host controller card). It can handle the bandwidth needed for the data stream.

An SSD has the high-speed capability you need, but if you need lots of TB it can get pricey. Maybe set up HDDs in a RAID? That can boost speed too.


Hi @matthewepler. I have some experience with calculating file sizing/transfer requirements. I would suggest splitting your requirements into 3 areas:

  1. From capture to storage - this would give you some key requirements on the throughput the PC would have to handle (once a frame is captured).
  2. Capture processing power - depending on the camera and the demands it places on the PC.
  3. Transport control - what you need to maintain accurate and real-time coordination. I assume you would be using some hardware controller (Arduino?) to keep some of those tasks off the PC, but maybe not.

I may be able to help (primarily on item 1).
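For item 1, the throughput requirement falls out of simple arithmetic once the camera specs are known. A sketch with placeholder values (the 4096×3000 / 12-bit / 10 fps numbers below are assumptions for illustration, not Kinograph specs):

```python
# Back-of-envelope capture-to-storage estimate.
# Camera specs below are placeholders -- substitute the real sensor values.

def throughput_mb_s(width, height, bits_per_pixel, fps):
    """Sustained write rate (MB/s) for uncompressed frames."""
    bytes_per_frame = width * height * bits_per_pixel / 8
    return bytes_per_frame * fps / 1e6

def reel_size_gb(width, height, bits_per_pixel, frames):
    """Total storage (GB) for a reel of uncompressed frames."""
    return width * height * bits_per_pixel / 8 * frames / 1e9

# Example: 4096x3000 sensor, 12-bit raw, capturing 10 frames per second.
rate = throughput_mb_s(4096, 3000, 12, 10)
# 15 minutes of material at 24 fps original speed = 21600 frames.
total = reel_size_gb(4096, 3000, 12, 21600)
print(f"{rate:.0f} MB/s sustained, {total:.0f} GB per reel")
```

The sustained-rate number is what the disk subsystem has to keep up with; the per-reel number drives the storage budget.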

Thank you @PM490. This is helpful!

Just my two cents on this, in a somewhat broader perspective:

In my view, there are basically three different hardware architectures available, which might be combined to address different aspects (hardware control/image acquisition/post-production) of a telecine:

  1. A high-end PC running Win10 or Linux: this hardware is ideal for post-processing the captured images. You want to register each frame perfectly, cut out the area of interest and maybe apply some initial color or gamma correction. Also, the final editing would presumably be done on PC hardware. The advantages of a high-end PC are:
  • You can connect or disconnect large disk space to these machines and they handle this reasonably well. However, from my experience, a Linux box has fewer problems moving or deleting 50,000 image files comprising hundreds of gigabytes. Win10 sometimes takes much longer on such a task.
  • The computing power which comes with today's machines is immense. Still, processing thousands of 4K images takes its time. If (unattended) computing time is no issue, lower-end or even, by today's standards, obsolete hardware can still be used. However, a good gaming PC is probably not a bad start.
  • High-end PCs equipped with good graphics hardware open up the possibility of GPU-accelerated processing. That would speed up processing substantially. However, it is generally no easy task to port image algorithms to GPUs. Also, there exist extensively optimized computer vision libraries for standard CPUs (OpenCV).
  • More modern machine vision cameras offer USB3 connectivity. A modern enough PC is probably the only available hardware to harvest the advantages of the USB3 interface. However, software support is currently (end of 2019) rather limited. With most machine vision cameras, you have to use either the supplied software or a black-box library to capture image data. You might not be able to control certain variables (like white balance) the way you would want to.
  2. Small-format Linux computers, most notably the Raspberry Pi. They run Linux and are, at least in their latest version, computationally quite powerful.
  • There exist cameras which can be directly attached to these units. These cameras are based on mobile phone hardware and are in principle able to give you high data rates because they use a quite fast direct interface. There exists an easy-to-use library for this kind of camera (picamera). However, these cameras have their quirks - some features you might want to use do not work as advertised. Several telecine approaches make use of this combination.
  • These computers with intermediate processing power feature general IO pins. So they can also be used to drive stepper motors and work with other hardware, especially if combined with appropriate intermediate hardware. One problem with the Linux operating system is that it is not a real-time operating system. So anything which needs precise timing might ask for too much.
  • The USB-connectivity of the Raspberry Pi series is not great. So I doubt it would be much fun to connect a USB3-powered machine vision camera to a Raspberry Pi.
  3. Arduino and similar hardware. These are microcontrollers and as such much more suitable for timing-critical processing. These units feature several standard hardware buses like I2C and SPI, plus a varying set of general-purpose IO pins. Also, most have some sort of analog-to-digital input (needed, for example, if you want to read the position of a potentiometer). USB connectivity is for all practical purposes nonexistent. There exist several hardware levels:
  • The normal Arduino. This is an 8-bit machine with a rather reduced amount of program and variable memory. The best buy is probably the Arduino Nano, as this unit has a few more pins available than the standard Arduino, as well as a smaller form factor.
  • The Arduino Mega: as the name indicates, just a bigger machine. Often used as the basis of 3D printers and cheap CNC machines. Basically “more pins”.
  • The Teensy: very similar to the Arduino boards, with nearly the same software support, but featuring a 32-bit processor with higher speed compared to standard Arduinos. Probably already overkill if you just need to drive a few steppers.
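As a rough feasibility check for any of these boards, the step rate the controller must generate is just capture speed times steps per film frame. The mechanics below (200-step motor, 16× microstepping, one frame advance per 1/8 sprocket revolution) are invented example values, not Kinograph measurements:

```python
# Rough step-rate check for a film transport stepper.
# All mechanics here are assumptions -- plug in the real numbers.

def step_rate_hz(frames_per_second, steps_per_frame):
    """Step pulses per second the controller must generate."""
    return frames_per_second * steps_per_frame

# Assume: 200 full-steps/rev motor, 16x microstepping,
# and one film-frame advance = 1/8 of a sprocket revolution.
steps_per_frame = 200 * 16 / 8   # 400 microsteps per frame
rate = step_rate_hz(5, steps_per_frame)  # capturing 5 frames per second
print(f"{rate:.0f} steps/s")
```

A few thousand steps per second is comfortably within reach of even an 8-bit 16 MHz AVR, which supports the point that a standard Arduino suffices for transport control.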

From my own experience in film production, I can say that, at least for the intermediate and final production steps, you want the fastest machine your money can buy, with lots of fast disk space.

Thanks for the write-up @cpixip. Here is what I’m currently using/considering:

Machine Control
Arduino Mega (easy to find, plenty of pins) with a custom shield (unused pins are available via screw terminals)

Image Capture and Processing

  1. Bring-your-own PC. We provide the recommended specs.

PRO: the user can spend as little or as much as they can afford on the PC of their choice.
CON: they may not get the performance they want, and it may make the overall cost too high

  2. Onboard Linux, bring your own storage/post-processing
    We might use something like the NVIDIA Xavier (for GPU acceleration) to make a machine that is super robust and can handle machine control, frame detection, and post-processing. We get you to a finished product (image files and video files), and if you want to edit, color correct, whatever, then you take the files to another machine and do it there.

PRO: increased performance, can swap out the hardware as faster options become available, fast post-processing
CON: increased price; still doesn’t provide you with an OS where you can run your Windows software for color correction or editing. (Although DaVinci Resolve does support CentOS.)

  3. Windows NUC sold with Kinograph
    An off-the-shelf Windows NUC or perhaps something like a UDOO Bolt. We load the software, we test with it, and provide storage and anything else you need to use Kinograph. An all-in-one package that you could use to scan and process (color, editing, whatever).

PROS: cheaper than the NVIDIA Xavier, a flexible OS
CONS: may not be fast enough for some people, would need to write all software to be Windows compatible

What are your thoughts?

I think an Arduino Mega is a good choice for machine control. This board certainly has sufficient input and output pins and close to real-time response.

Note in that context that within the CNC, laser engraver and 3D printer communities, there are already very cheap add-on boards available for the Mega which are able to drive several steppers.

Slowly, the 3D printing community is drifting toward faster 32-bit processors, mostly to be able to calculate more elaborate speed profiles. But currently, the Mega is still the main option. Also, lately, the stepper driver ICs have gained more intelligence (“silent step mode”).

A little bit on my background with respect to the following comments: I have done image processing on anything from FPGAs to PCs and have also spent a lot of my time developing neural network algorithms.

The most universal option from a user perspective is certainly a PC. Most people already have one. It might not be a fast machine, but most of the time it's upgradable. If not, you can always wait longer for the results or speed things up by lowering the resolution. You do not always need to work in 4K or even higher resolutions. From my experience, old Super-8 material does not warrant the use of 4K - only if you want to record and show the viewer not only the content, but also every single bit of film grain… :slight_smile:
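To quantify that speed lever: per-pixel processing work scales with pixel count, so dropping the working resolution from UHD to HD cuts the workload by roughly 4×. A trivial sketch:

```python
# Per-pixel processing cost scales with pixel count, so lowering the
# working resolution is a simple way to trade quality for speed.

def relative_cost(src_wh, dst_wh):
    """How many times more work the source resolution is vs. the target."""
    return (src_wh[0] * src_wh[1]) / (dst_wh[0] * dst_wh[1])

print(relative_cost((3840, 2160), (1920, 1080)))  # 4.0
```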

(Archivists might opt for that film grain. However, for Super-8 material, the AviSynth scripts of Fred Van de Putte are famous. But these scripts do everything to get rid of film grain and camera shake to enhance the viewing experience.)

Continuing: you might want to base software development on a (small) community of freelancers - if so, you need a sufficiently broad base of more or less interchangeable hardware. Which brings me to the next point:

Utilizing a special Linux machine like the NVIDIA Xavier or the like risks that this hardware combo might be rather short-lived (such things have happened before) and that the supporting community is rather limited in size. So it might be difficult to find continuing support for the hardware chosen. In any case, I think these “special” computers are just a host processor (based on a standard ARM design) and a more or less standard GPU for “AI” and “image processing” - however, if you bought your desktop computer with video editing in mind, you probably already have a much better suited machine at your desk… (the same comment applies to any small form factor PC like a Windows NUC).

Trying out another perspective: if one defines “the machine” as just the core mechanical setup for advancing the film material, issuing trigger signals for the camera, and maybe controlling the lights, then a combination of an Arduino-like microcontroller with a tiny host which can talk LAN and/or USB is sufficient as a hardware setup. Probably the cheapest option around at this time would be a Raspberry Pi (Linux host) + Arduino Mega + host shield with stepper drivers.
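Since this split puts high-level logic on the Linux host and timing on the Arduino, the glue between them is naturally a simple line-based serial protocol. Here is a hypothetical host-side sketch in Python - the command names (ADVANCE etc.) and the OK/ERR reply format are invented for illustration, not taken from any Kinograph code:

```python
# Hypothetical host-side encoding for a Pi <-> Arduino serial link.
# Command names and reply format are invented for illustration.

def frame_command(name, *args):
    """Encode a command as a newline-terminated ASCII line."""
    parts = [name] + [str(a) for a in args]
    return (" ".join(parts) + "\n").encode("ascii")

def parse_reply(line):
    """Parse an 'OK <detail>' / 'ERR <detail>' reply from the controller."""
    text = line.decode("ascii").strip()
    status, _, detail = text.partition(" ")
    return status == "OK", detail

# On real hardware these bytes would travel over a serial port (e.g. via
# pyserial); here we only show the round-trip encoding.
cmd = frame_command("ADVANCE", 1)          # b'ADVANCE 1\n'
ok, detail = parse_reply(b"OK frame 1234\n")
```

Keeping the protocol human-readable like this makes it easy to debug with nothing but a serial terminal.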

Explicitly excluded from the setup sketched above is the “picture taking”. The reason is the following: how the hardware/software for frame capture needs to be realized depends highly on the camera chosen.

Currently, most machine vision cameras feature at least C or C++ libraries for Windows-based machines. So in a straightforward way, you would end up writing capture software within the C or C++ software environment. Or you would have to use whatever software is available for that camera.

Specifically, general libraries such as OpenCV currently lack sufficient support for USB3. I recently tried to get a See3CAM_CU135 with a USB3 interface working within the OpenCV environment. The basic functionality is there, no question, but the white balance of the camera, for example, cannot currently be set via the OpenCV lib.

Ok, let me throw yet another perspective into the discussion: who will be the people using the next generation of Kinograph? Librarians and archivists, just wanting to preserve important material? They would probably not bother with, or even shy away from, most post-processing steps. Or the experimental film maker using old analog technology in the digital age? That person would certainly rather see a lot of post-processing options available on the system. Or the hobbyist, using the Kinograph design as a base for their own ideas? They might only be interested in the mechanical setup. Or, probably, all of the above?

I have no idea about the target group, but my gut feeling is that one could make a cut between capture (picture-taking/hardware control/storing to disk) and post-processing (frame detection, color grading, etc.).

Being only software, the whole post-processing could certainly be written in such a way that the software runs on all major operating systems.

Options for programming languages that come to mind are certainly C/C++ (not so easy to support multiple OSs), Java (in its “Processing” disguise rather easy to use; it is even possible to utilize shaders there to speed up certain image processing tasks) or Python.

I must confess that Python is where I ended up after a long journey starting with Basic, Pascal, Fortran 77, PL/1, …, and years of C/C++ programming.

One additional note in this context: I know that you can write Python interfaces to existing C/C++ libraries, but so far I have never done this myself. But that might be an (easy?) option for integrating existing camera libraries.
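For what it's worth, Python's standard library ships ctypes, which can call into an existing C library without writing any C glue at all. A minimal sketch, calling cos() from the system math library (the library lookup shown is for Linux/macOS; a vendor camera library would be loaded the same way but with its own function signatures):

```python
import ctypes
import ctypes.util

# Locate the C math library; the name differs per OS, so we let
# ctypes resolve it (works on Linux/macOS).
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature of cos() so ctypes converts values correctly.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0
```

The same pattern (load the shared library, declare argtypes/restype, call) is how one would wrap a camera vendor's C API from Python, though real SDKs with structs and callbacks take more declaration work.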


A couple of comments regarding the perspective presented by @cpixip
I have been scanning 8 and Super 8 with a frame-by-frame DIY rig and I would disagree that 4K is not warranted. Having a higher target resolution actually allows for much better results at HD. I do agree that many may not want the extra work/storage/processing, but it is certainly worth it. And while we may have different perspectives and perhaps disagree on 8, there is no question that for 16 or 35, going to 4K is more than justified.

I fully agree with your perspective on the Mega, and on using silent controllers for steppers. I am presently rebuilding my DIY frame-by-frame rig and have been using one of those (TMC2208), and it is a great improvement for controlling the steppers. I also came across this paper from ATMEL (maker of the Mega’s processor), which provides amazing insight into stepper control.

@matthewepler in regards to your options, and from prior lives: I think cameras will continue to improve, and keeping the machine control as self-contained as possible gives the Kinograph the opportunity to be improved on the sensor side as cameras become better/cheaper. In regards to whether it should include the hardware or not… maybe both. Structuring the machine as suggested by @cpixip can easily open the door to a simple version for access, or a complex version with all the bells for archival correction.

@cpixip thank you for your thoughts on processing. You make a lot of great points. I think your arguments about the tight coupling between the machine and a specific computing device (ala NVIDIA Jetson) are very strong. I have been waffling between developing a cross-platform solution vs the baked-in Linux version and I think you’ve pushed me to the former.

In that direction, I’ve been thinking of two different paths: Electron app that calls Python files as needed, or a full python app. Any opinions there? I can write both Javascript and Python but have never built a cross-platform app with either.

As for the libraries, I did not know that OpenCV has poor support for USB3. The SDK we would likely be using is the one offered by the camera manufacturer (FLIR), called Spinnaker. They have versions of the SDK for C++, C#, .NET and Python. So far I have not been able to find documentation of the Python support, although a rep from the company told me it exists. I’ll email them again this morning to track it down. Some cursory information is available on the product page here. I would very much prefer to write in Python.

@PM490 I very much like the idea of keeping the machine control separate from any software that runs imaging functions on the results of the camera. This will only help us in the long run. I like the idea of offering both - we can provide the PC for you, or you can bring your own. Flexibility for the win!


Excellent write-up cpixip! You made the exact point I wanted to make: it doesn’t need to be completely one or the other. There’s something to be said for a combination, especially Raspberry Pi + Arduino: it gets rid of the real-time disadvantage of the Raspberry Pi. The Pi as a host computer could do the high-level functions while the Arduino controls all the things that require precise timing. And having a common host computer among all the adopters would definitely have the benefit of better software support.

As far as storage goes, the SD card on the RPi could definitely turn into a challenge. I’d rather use a USB-connected SSD. USB 2.0 is much slower than most SSDs, but it should be fast enough for uncompressed frame captures. Or the images could be shipped to a host computer over the network, getting rid of local storage needs altogether.
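To put numbers on "fast enough": here is a quick check of what each link can sustain for uncompressed captures. The effective throughputs below are rough rules of thumb rather than measurements, and the 2048×1536 / 12-bit sensor is just an example:

```python
# Will the link keep up with uncompressed frame captures?
# Effective payload rates are rough rules of thumb, not measurements.

LINK_MB_S = {"usb2": 35, "gige": 110}  # approx. usable throughput

def max_fps(width, height, bits_per_pixel, link):
    """Frames per second the link can sustain, uncompressed."""
    bytes_per_frame = width * height * bits_per_pixel / 8
    return LINK_MB_S[link] * 1e6 / bytes_per_frame

# Example: a hypothetical 3 MP sensor delivering 12-bit raw frames.
print(f"USB 2.0: {max_fps(2048, 1536, 12, 'usb2'):.1f} fps")
print(f"GigE:    {max_fps(2048, 1536, 12, 'gige'):.1f} fps")
```

So for a frame-by-frame scanner running at a few frames per second, even USB 2.0 has headroom at this example resolution; higher-resolution sensors shift the balance toward gigabit Ethernet or USB3.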

Here are the Spinnaker SDK download links for all platforms (MacOS, Windows, Linux): https://flir.app.boxcn.net/v/SpinnakerSDK

The rep got back to me and said that the Python library and examples are included in the download under a directory titled PySpin.

@pm490 @cpixip We’re getting dangerously close to a whole discussion about the virtue of Nyquist frequency in spatial resolution here :smiley: :smiley:

Oversampling is really good for the front half of the process (detecting frames) and really bad for the back half of the process (why did I just burn through 300+ GB of disk space for my 10-minute reel of exquisitely resolved grain?) - and coming from the digital cinema world, where everyone is tripping over themselves to shoot 6K or 8K for 4K (or lower!) deliverables, I can tell you it makes a difference.
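That 300+ GB figure is easy to reproduce with simple arithmetic. A sketch with assumed values (10 minutes of Super 8 at 18 fps, scanned to plain 8-bit RGB at 4K; deeper bit depths or higher frame rates push it well past that):

```python
# Sanity check on "300+ GB for a 10 minute reel" at 4K.
# Assumed: Super 8 at 18 fps, uncompressed 8-bit RGB frames --
# adjust for your film gauge, bit depth, and file format.

def reel_gb(minutes, fps, width, height, channels, bytes_per_sample):
    """Uncompressed storage (GB) for a scanned reel."""
    frames = minutes * 60 * fps
    return frames * width * height * channels * bytes_per_sample / 1e9

print(f"{reel_gb(10, 18, 4096, 2160, 3, 1):.0f} GB")
```

Even the most modest uncompressed format lands near 290 GB, and a 16-bit-per-channel scan of the same reel would double that again.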

Just curious, do you plan to archive/deliver your R/S8mm in 4K? I also realize that some S8 is in really good shape and shot on really nice fine-grain stock, so if any flavor of 8mm warranted 4K, that probably would.

@johnarthurkelly agree, maybe not all 8 / Super 8 is worth the trouble. I’ve scanned about 100 reels of 8 and Super 8 with a frame-by-frame 24MP 12 bit sensor, and file handling is no fun. But before getting into the weeds on Nyquist, here is someone’s Super 8 @ 4K… https://www.youtube.com/watch?v=JNMd_jsoOKI
PS. the link is not my scan, these guys do a great job.