New camera options?

Sigma FP!

Thanks for the recommendation! I tried finding on the website what kind of shutter the camera has: global or rolling. I’m assuming it’s rolling. Do you know?

The advertised speeds are for 8-bit output, so cameras can run a LOT slower when you’re doing 12-bit output. The manufacturers will let you know about speeds (Emergent has a camera selector spreadsheet that shows the speeds - direct link). 4K is the highest resolution at which you’ll get fast speeds over USB3 at 12-bit.
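To see why 12-bit output is so much slower, it helps to do the arithmetic on the link bandwidth. A minimal sketch, assuming a round ~350 MB/s of usable USB3 bandwidth and a hypothetical 4096 x 2160 sensor (real cameras vary with pixel packing, protocol overhead, and host controller):

```python
# Rough frame-rate ceiling for 8-bit vs 12-bit output over a fixed link.
# The ~350 MB/s usable-USB3 figure and the 4096 x 2160 sensor are
# assumptions for illustration, not specs of any particular camera.

def max_fps(width, height, bits_per_pixel, bandwidth_mb_s=350.0):
    """Upper bound on frames per second for a given link bandwidth."""
    bytes_per_frame = width * height * bits_per_pixel / 8
    return bandwidth_mb_s * 1_000_000 / bytes_per_frame

print(round(max_fps(4096, 2160, 8), 1))   # 8-bit output
print(round(max_fps(4096, 2160, 12), 1))  # 12-bit: only 2/3 the rate
```

The 12-bit rate is exactly two-thirds of the 8-bit rate, before any sensor readout limits are even considered, which is why the advertised (8-bit) numbers are optimistic for raw capture.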

The Sony Pregius sensors are the best, with Pregius S being the best choice. As far as cameras go, heat produces sensor noise, and Basler cameras have more issues with heat. So if going with USB3, the Flir cameras (Blackfly S) are a good choice and can produce clean captures without cooling. There’s a good range available at different prices, e.g.: 2K IMX252 $765, 2.5K IMX250 $1,149, 3.2K IMX428 $1,879, 4K IMX253 $2,935, and 5.3K IMX540 $2,500. The 5.3K Blackfly S is an absolute bargain at that price too, but 12-bit raw capture will be slow, so it will suit those doing slower captures. For the others, note that these cameras have been around for a while now and you can often pick them up used for half the retail price.

For faster captures at higher resolutions you need a 10GigE connection, which runs hotter than USB, making cooling necessary to eliminate sensor noise. Good choices here are Flir or Emergent. At minimum you want a fan at 45 degrees to the camera body; I’m informed that’s usually sufficient. If noise is still present, you could then think about attaching a heatsink as well. You can then go for the 5.3K or 6.5K resolutions and capture at real-time speed. The 5.3K cameras represent excellent value (the Flir, at $4,250, is only $100 more than the 4K one), and it’s highly unlikely you will need to capture at 6.5K to get all the detail in your films.


Great to hear that the Flir cameras are a good choice for the money. I’m liking our options with them. In other threads, folks have mentioned that there’s no easy/direct way of getting raw images from their software. I hope this will be solved if/when we write our own software based on their SDK for developers. Other than that, I see no reason not to go with Flir moving forward for high-res, high-range captures. Not everyone will want to spend that amount of money, but for those who do we can say with confidence that it’s a good use of their precious dollars!

One thing worth looking at (eventually), is decoupling the capture software from the camera entirely. That is, instead of specifying a camera or even a brand of camera, specify the frame grabber (or frame grabber protocol), and use a PCIe frame grabber board. The reason I suggest this is that when you interface with an intermediate layer like this, you don’t have to care about the camera. That opens up the possibility of using different cameras from different manufacturers without having to rewrite software.

We’re using a Euresys CoaxPress frame grabber for the Sasquatch scanner, but their (free and extensive) API is the same for CameraLink or CoaxPress, and is based on the GenICam standard. I think many of the lower-end cameras are GenICam compliant, so this is worth looking into.

At least with Euresys, we got a C API from them that gives us all the functionality we need to get/set parameters on the camera, and to grab frames from the frame buffer. If we switch to another camera at some point, it’s mostly a matter of plugging it in and tweaking a few things.


@friolator this is a great suggestion. In our case, the camera is triggered with a single high digital signal via the GPIO connector on the camera. The camera then performs its transfer of data via its software.

Are you suggesting that we put an abstracted layer on top of the manufacturer’s software? So that as we add other cameras/software one would need to write a plugin for the Kinograph capture software that acts as a wrapper around the manufacturer’s SDK?

It’s even simpler than that. Don’t use the manufacturer’s software at all; instead, look for cameras that conform to the GenICam standard. This is your abstraction layer, and there are tools to access it on multiple platforms. In our case, Euresys has a simple C API, which we can easily call from our preferred programming environment, but we could just as easily interface with the GenICam libraries directly.
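The decoupling idea above can be sketched in a few lines: capture code talks to a generic camera interface, never to a vendor SDK. Everything here is hypothetical illustration (the class names, parameter names, and the mock camera are invented); a real implementation would sit on a GenICam/GenTL library.

```python
# Sketch of a camera-agnostic capture layer, in the spirit of GenICam's
# generic parameter access. All names here are hypothetical; a real
# version would delegate to a GenICam/GenTL implementation.
from abc import ABC, abstractmethod

class Camera(ABC):
    @abstractmethod
    def set_param(self, name, value): ...
    @abstractmethod
    def get_param(self, name): ...
    @abstractmethod
    def grab_frame(self): ...

class MockGenICamCamera(Camera):
    """Stand-in camera so the sketch runs without hardware."""
    def __init__(self):
        self._params = {"Width": 4096, "Height": 2160, "PixelFormat": "Mono12"}
    def set_param(self, name, value):
        self._params[name] = value
    def get_param(self, name):
        return self._params[name]
    def grab_frame(self):
        return bytes(16)  # placeholder for real image data

# Capture code written against Camera never names a vendor:
cam = MockGenICamCamera()
cam.set_param("PixelFormat", "Mono8")  # e.g. for a fast autofocus pass
print(cam.get_param("PixelFormat"))
```

Swapping cameras then means swapping the one concrete class (or GenTL producer file), not rewriting the capture software.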

In our case, there’s a lot more that we’re doing than just triggering the camera. For example, the auto-focus routine will be done by taking images rapid-fire of a small ROI of the overall frame, while stepping the lens position between images, and then comparing the processed frames to find the sharpest. If we did that at full res, it would take quite a while on a 3fps camera, but with an ROI it’s reasonably quick. With the GenICam layer, we can tell it to switch to 8-bit mode and set an ROI, which speeds it up immensely. Then when we’re ready to shoot actual images, we switch back to full-frame 16-bit.
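The sweep-and-compare loop described above can be sketched without hardware by faking the ROI capture. The blur simulation and the gradient-energy sharpness metric below are illustrative stand-ins (a real system would score frames grabbed from the camera at each lens step), but the control flow is the same: step, score, keep the sharpest.

```python
# Sketch of an ROI autofocus sweep: score a small ROI at each lens
# position and return the position with the sharpest image. The
# simulated "ROI" is a single scanline with a step edge that gets
# more smeared the further the lens is from the (assumed) focus point.

def sharpness(row):
    """Gradient energy: sharp edges give large adjacent-pixel differences."""
    return sum((b - a) ** 2 for a, b in zip(row, row[1:]))

def simulate_roi(lens_pos, best_pos=5):
    """Fake ROI scanline, blurred once per step away from focus."""
    edge = [0] * 16 + [255] * 16
    for _ in range(abs(lens_pos - best_pos)):
        edge = [(a + b) // 2 for a, b in zip([edge[0]] + edge[:-1], edge)]
    return edge

def autofocus(positions):
    """Step through lens positions, keep the one with the sharpest ROI."""
    return max(positions, key=lambda p: sharpness(simulate_roi(p)))

print(autofocus(range(11)))  # lands on the simulated focus point
```

Because only a small ROI is scored, each step is cheap, which is exactly why dropping to 8-bit and a small region makes the sweep fast.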

Similarly, we’re going to use different regions of interest for different film gauges: 35mm alone would have 8, 4, 3 and 2 perf. The camera is in the same position for all of these, but we can run the machine faster in 2-perf than 8-perf, because we can set (in software) the region we’re interested in.
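The speedup from a smaller ROI follows from sensor readout scaling roughly with the number of rows read. A back-of-envelope sketch, where the row counts per perf format and the 3 fps full-frame rate are assumptions for illustration:

```python
# Rough scaling of capture rate with ROI height: if readout time is
# proportional to rows read, a 2-perf ROI runs ~4x faster than 8-perf.
# Row counts and the 3.0 fps full-frame baseline are assumed values.

FULL_FRAME_ROWS = 4000
FULL_FRAME_FPS = 3.0

def roi_fps(roi_rows):
    """Approximate rate if readout time is proportional to rows read."""
    return FULL_FRAME_FPS * FULL_FRAME_ROWS / roi_rows

perf_rows = {8: 4000, 4: 2000, 3: 1500, 2: 1000}  # hypothetical ROI heights
for perfs, rows in sorted(perf_rows.items()):
    print(f"{perfs}-perf: ~{roi_fps(rows):.1f} fps")
```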

Basically, any parameter you can set on the camera you can set from software. But using the camera manufacturer’s software ties you to that manufacturer, whereas if you use a more generic standard, you can swap in any camera you want as long as it complies with GenICam (which covers most modern cameras). If someone wants 2K or 5K, it wouldn’t matter so much, because they all use the same basic comm protocol.

More here: GenICam – EMVA


Thanks for the clarification. It seems Flir is a member. I will contact them to see if they actually support the standards on their hardware and report back.

Separately, what license will you be using on your software, and what are you writing it in? Is it something that the Kinograph community could fork and build to their own needs?

It’s going to be completely proprietary. We’re building the app in Xojo, the programming environment I prefer. But it’s also going to be very specific to our scanner, and I don’t think it would be useful for Kinograph, simply because we’re doing things very differently. All the motion control stuff is through an off-the-shelf controller, but we’re writing custom firmware for that box, and the LEDs are sequential RGB for intermittent scan, not white light. We’re not using any camera/light triggering from hardware; it’s all done from the control software through a TCP (or maybe serial) layer to the microcontroller.
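The control-software-to-microcontroller channel described above can be sketched with plain sockets. The `TRIG`/`OK` text protocol and the loopback stub standing in for the controller are invented for illustration; the point is only that the control app sends commands over TCP rather than toggling hardware lines itself.

```python
# Sketch of a control-software -> motion-controller command channel
# over TCP. The "TRIG\n" / "OK\n" protocol is hypothetical; a loopback
# thread stands in for the controller so the sketch runs without hardware.
import socket
import threading

def fake_controller(server_sock):
    """Accept one connection and acknowledge a TRIG command."""
    conn, _ = server_sock.accept()
    with conn:
        if conn.recv(64).strip() == b"TRIG":
            conn.sendall(b"OK\n")

server = socket.socket()
server.bind(("127.0.0.1", 0))  # OS picks a free port
server.listen(1)
threading.Thread(target=fake_controller, args=(server,), daemon=True).start()

with socket.create_connection(server.getsockname()) as link:
    link.sendall(b"TRIG\n")    # e.g. advance one frame / fire the LEDs
    reply = link.recv(64)
print(reply.decode().strip())
```

Swapping TCP for serial would change only the transport layer; the command protocol on top stays the same.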

That being said, we commissioned a couple of programmers to port a good chunk of OpenCV to a plain C API, called OpenCV-C. The reason is that Xojo can only interface with external dynamic libraries written in plain C (or on the Mac, either C or Objective-C). OpenCV is C++, and they’ve deprecated their C API. OpenCV-C will be open source, on GitHub. Soonish. We need to tidy some stuff up.

Along with this, I’ve been working with a couple of Xojo regulars to port OpenCV-C to Xojo, so that all the functions are available as Xojo classes and methods, which makes them much easier to interface with than the raw C API. That will also be open source. We’re about a third of the way done with that.


I see now that this is all in C++. I understand why they would choose that, but this will be an obstacle for many, I presume. Then again, I could be wrong in assuming that the people who would want to do their own programming for Kinograph are a separate group from the people who know C++. I was hoping to make everything in Python so that folks would only need to know one language, to lower the barrier to entry.

An interesting idea, though, and I’m glad you brought it up. Interoperability will be something very important to think about as we progress.

Best of luck with all your valiant porting efforts!

And for OpenCV, the Python support is extremely robust out of the box.

I like Xojo because it makes building a complex GUI application pretty simple. It’s essentially a cross-platform version of Visual Basic, but fully object-oriented. It can be free to use if you don’t mind running your app from the compiler, or you can buy platform-specific licenses that let you compile standalone apps for $99/platform.

The nice thing is that you can run it on Mac, Windows or Linux, and compile for all three from any platform. So I’m doing some of the dev work on my mac laptop when I’m away from the office, and some from the Windows workstation where the system actually runs.

But it’s not an open-source programming environment, so I can understand wanting to keep things in Python.

Thanks for those links! There was no mention of that on the GenICam documentation pages. Poor documentation is one of my pet peeves.

Thanks for the tip on Xojo. I’ve been looking at Qt as a possible framework since it has a license for open-source projects.

Not sure when we’ll get to that stage but it’s good to be aware of all the options. Keep the suggestions coming!


That Python implementation isn’t part of GenICam, as far as I’m aware. The EMVA basically just sets the standard for the protocol; the implementations are left to others to put together. I think the C++ version may come from the EMVA, but there are others out there for various languages as well.


I lost it temporarily, but someone posted or sent me a link to a Python library for GenICam, so I think we’re good on that front. I’ll know more when I get to that stage of the development.

Just to be clear - the Python GenICam implementation you linked to above is the official Python implementation, I think. The EMVA doesn’t make one; they make the spec, and it’s up to others to implement it in the language of their choice. So I think that’s the one you want.


Hi all,

Please check this out as well: Camera Selector - find your cameras quick and easy!

Dear all,

I have been following your project since the beginning and I really admire your work. Unfortunately I don’t have the means to construct the Kinograph, but I am trying to use the transport mechanics of a Siemens 2000 projector. I have already programmed an Arduino to control a Nema 23 stepper motor. I am now searching for a decent camera that can be controlled by the Arduino. By “controlled”, I mean just sending a command to the camera to acquire a frame. I know this is feasible with a regular digital camera (for example by using the camera’s remote), but I would like to use an industrial camera instead (because I fear that the shutter of a digital camera will wear out).

Do you have any suggestions for me?
If I have violated the rules of this forum with my question, which is not 100% related to the Kinograph project, please let me know and I will delete the post.

Hi @Marco_Leoncino. Welcome to the Kinograph community. Members have posted about cameras in several different threads. If you search for camera or C-mount or other terms related to camera hardware, you will find them. Here is the wiki page for the camera stuff I’m working with right now. It includes a link to a spreadsheet that compares a few different cameras available. Oh, and one other popular choice is the Raspberry Pi Camera v2. Of course, for that you’d have to move your code over to the Raspberry Pi and use its GPIO pins like an Arduino’s.

I hope that helps.

Hi @Marco_Leoncino, welcome. I don’t have specifics on cameras, but I can share my experience with DSLRs. The downfall of the DSLR is what you mentioned about the shutter. I’ve used a DSLR successfully and have already far exceeded the manufacturer’s rated shutter life. So depending on the amount of film you need to digitize (in my case it was about 120 reels of 8/S8), that may be an option.
By the same token, some digital cameras provide full HDMI output, and if 1080 is sufficient resolution, you can capture the HDMI live and not wear the shutter. I am not trying to convince you to use one, but if you already have one, it may do the trick.
I have a very similar setup with an Arduino and a stepper, and if you are on Windows, you can also consider using digiCamControl. It has an Arduino plugin that allows you to send commands from the Arduino to the Windows computer to take the shot. What I like about it is that if you decide to go with RAW, the files are stored directly on the computer. Similarly, on Linux you can use Entangle (no serial Arduino plugin, but you can use the Arduino as a keyboard to trigger the shot - hope this makes sense).
This links to the description of the project, with additional information.
I am overdue to document some of the changes and will try to clean up the docs. In the meantime, I’ll share info and answer whatever questions you have about my setup. Welcome - you’ve come to the right place.

Dear Matthew and Pablo,

thank you for your warm welcome! I also think this is a very nice place to discuss digitizing film at home.

I forgot to mention that I started this new project not because I have thousands of movies to digitize in my cellar, but because I shoot with a 16 mm camera and develop the film myself. The big bottleneck for me is that, to digitize, I need to ship the reels to a scanning service, wait, and pay quite a lot (even if I recognize that it is fair for a scanning service to be expensive: manpower, expensive machines, etc.). Therefore I wish I could do this step at home too!

I will read your posts and links carefully and try to get a better idea of my options!

Thank you so much!

Marco
