RobinoScan RT 16/35 Scanner

The 5.3K Pregius S IMX530 is the best one (is that the one you mean? It's the one you mention in the OP). If you're finding FLIR difficult to deal with, this one will also work. I think it costs about $5K, which isn't that much more than FLIR's price.

I'm very interested to see results from your new rig - it looks beautiful and very promising!


Yes, sorry - I meant the 5.3K, that was a typo. I'll get the camera next week.

I did get a quote from Emergent, but it was a bit more expensive and would also need extra hardware (PCIe card and expensive cables). It is 25GigE though, which is very fast. Wondering if the 10GigE interface is going to be fast enough to push 12-bit full res at 24fps to a hard disk?

I was not able to ask a sales engineer about it - can't get through to them at all, which has been frustrating. I got a reply from Graftek (a FLIR reseller I didn't even ask for) and they were vague.

FLIR has a 30-day return policy, so I'll use that if all else fails.

Are you using a mono camera with sequential RGB lighting, or a Bayer camera? If Bayer, you could always capture the raw image, which should be pretty lightweight. 10G Ethernet should be plenty of bandwidth.

Yes, Bayer / color camera.

Thanks for the info - each frame will be 5320 x 4600, so it's good to know 10GigE should be able to handle this in raw.
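For anyone wanting to sanity-check that, here's the back-of-envelope arithmetic. This assumes packed 12-bit pixels; if the camera pads each pixel to a 16-bit word, the figure grows to roughly 1175 MB/s, which still fits under 10GigE's ~1250 MB/s line rate but with much less headroom:

```python
# Back-of-envelope bandwidth check for raw Bayer capture over 10GigE.
width, height, bit_depth, fps = 5320, 4600, 12, 24

bytes_per_frame = width * height * bit_depth / 8       # packed 12-bit
throughput_mb_s = bytes_per_frame * fps / 1e6

print(f"{bytes_per_frame / 1e6:.1f} MB per frame")     # ~36.7 MB
print(f"{throughput_mb_s:.0f} MB/s sustained")         # ~881 MB/s, under 10GigE's ~1250 MB/s
```

So the link itself should cope; the harder part is a disk array that can sustain that write rate for the length of a roll.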

So I'm a little confused: Bayer-mask cameras are always color, and sequential RGB setups require a monochrome camera (not a Bayer camera in mono mode). So it can only be both if you're swapping cameras.

Sorry for the confusion - I understand the differences; I've been tired lately and wrote dumb stuff by mistake.


Post Xmas update - making scanners is hard!

The camera is installed - not final, but with a cheap rail and a 3D-printed dovetail. Once everything is working I'll splurge on a good rail and Y stage.

Ran into issues with camera smearing. The original plan was to keep the light on and not bother flashing, but that didn't work with my camera - I have a feeling it's not just mine; I guess we'll know when some of you start setting up your cameras. I had to synchronize the LED flashes with the camera exposures. I purchased a 2-channel oscilloscope - without that tool I would never have been able to figure it out.

The pulse system is a retro-reflective sensor (Keyence) sending pulses to the camera's TTL trigger line. The camera then sends pulses back via its opto-isolated GPIO line out to an Arduino Mega, which sends a PWM signal to a DAC board (DC2197A), which in turn sends an analog voltage to the PicoBuck LED driver. The timing is done using in-camera delays and counters (for 4-perf capture), and I also have control over the pulse length in the Arduino code.

With that solved, it was time to scan something, but I was having a hard time getting started with the Spinnaker SDK. It's not friendly at all, to me at least. Thanks to @matthewepler for helping with Python - we almost got it to work.

Until I found this software called “StreamPix 8” from NorPix.

They make machine-vision image/video capture software that is compatible with a LOT of cameras and packed with features - some of which I was dreading having to figure out on my own, like video scopes (especially the RGB parade), which are essential when calibrating your LED lights before a scan.
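If anyone ends up rolling their own capture tool instead, a rough RGB parade isn't too hard to compute yourself. Here's a minimal sketch (not how StreamPix does it, just the general idea): for each image column, histogram each channel's values, then stack the three waveform panels side by side:

```python
import numpy as np

# Synthetic test frame; substitute a captured frame of your backlight.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)

H = 256  # parade height: one row per possible 8-bit value

# Build one waveform panel per channel: for each image column, count
# how many pixels hold each intensity, then stack the panels side by side.
panels = []
for ch in range(3):                      # B, G, R in OpenCV channel order
    panel = np.zeros((H, frame.shape[1]), dtype=np.float32)
    for x in range(frame.shape[1]):
        panel[:, x] = np.bincount(frame[:, x, ch], minlength=256)
    panel = np.flipud(panel)             # bright values at the top
    panels.append(panel / panel.max())   # normalize for display
parade = np.hstack(panels)
```

Displaying `parade` (log-scaled and tinted per panel) gets you most of the way to a usable scope for checking LED balance.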

It's pricey - $1K for the software. I'm on a 2-week trial, using it for capture and debayering. So far so good, and honestly I might buy it to save myself some headaches and get a head start on scanning. I'd really like to get back to shooting and forget about making scanners for a bit…

Here’s a first scan (done last night) of random shots from an old S35mm negative test roll.


You can watch in 4K if you click the Vimeo icon… I think in the embedded player you can't even go full screen…


https://vimeo.com/660515988

Also made a 16mm gate and platter risers (“a la Northlight”)

16mm scans are also going well.


NEXT STEPS ARE:

  1. MORE LIGHT. I need more light; pulsing really lowered the intensity, so now I have to figure out how to "overdrive" the LEDs. The PicoBuck is not going to cut it - it's limited to 36V and 1A per channel, and I had to replace two tiny resistors per channel just to get to 1A… they are TINY. I got 3 new drivers from TI.

https://www.ti.com/lit/ug/slvuar5/slvuar5.pdf

Now the issue with these drivers is that I can't turn them off! At 0 (low duty cycle) the DAC still outputs about 2.5mV, which I guess is enough for the LED driver to turn on the LED, albeit at very low intensity… this was fine with the PicoBuck since its dimming range is 25%-100% or so, but these drivers are full range. Need help with that.

  2. Faster debayering. StreamPix is nice, but post-capture debayering runs at around 1.5-2 fps when saving a sequence to 10-bit DPX. It defeats the purpose of real-time scanning if you have to wait that long to save a sequence. I'll be talking to them to see if it can be made faster.

  3. Perf stabilization. Right now I load the DPXs into Resolve and do the stabilization there, as well as the negative inversion and grading. (I also use PhotoLine for creating base inversion LUTs - its levels tools are the best I've found in any image editing software.)

So when points 1, 2, and 3 are solved, I'll consider this a working machine and start designing the final frame.

Right now it works, and pretty well for what it is. I really like the camera and resolution - with more light it will be so much better! I currently have to gain the camera up to around 11dB, which I don't like. Can't wait to be able to blast those LED flashes at high intensity.


OpenCV has debayering built in and is likely as fast as you will find. It’s worth figuring out where your slowdown is though. I would be surprised if it’s in the debayer itself, and not in something else like I/O, especially if it’s high resolution. The actual act of debayering is not especially intensive, so it should be fast. If you can, build timers into your code at the beginning and before/after each individual operation. Then you can display the duration for each task and see where things are getting bogged down.

This would also be doable in OpenCV. One method is to create some template B/W images of just the perforations for each type of film you're scanning, then have OpenCV use its pattern-matching tools to locate the perfs in your image. Then you can determine where that perf is in the frame and simply move the whole image on the X,Y axes to align it to a predefined position. It should be very fast - we did some tests on 14k images a while back and the time it took to do the pattern match was measured in microseconds. One thing with OpenCV: you generally want to work on a monochrome copy of your full-color image so that processing is faster, then apply whatever transform you want to the full-color image before outputting.

OpenCV has really good python integration, and most of the documentation and example videos you’ll find will include both C++ and Python code.


That's great to know - I'm really new to Python and OpenCV and can't find many examples out there. OpenCV doesn't read raw files natively; you have to use NumPy or something. I got a snippet of code that reads and saves an image, but the image comes out all garbled. Will continue hammering at it. As far as I/O, I have a 24TB Pegasus 3 RAID - it's pretty fast.

When testing other solutions I tried debayering with FFmpeg, which worked, but it was SUPER slow even on my Mac Pro's NVMe SSD. I think OpenCV will solve that.

Nice photos! I see you’ve mounted a fan - someone else with one of those cameras I think is attaching an actual heatsink to it - so if you have any noise issues don’t be afraid to experiment with more cooling. You should be able to get captures that are completely free from digital noise with that camera.

The video you posted looks pretty good.

@robinojones What lens are you using in the setup? And what RGB chip is that?
It looks just like the 100W version of a 10W RGB LED module I found somewhere,
something like this? Epileds chips led 100 W RGB led module

I don't use Python much for OpenCV, but the way OpenCV works in general is that images aren't really images - they're a kind of array (a Mat in OpenCV speak). You need to get the image into a Mat, and then you can manipulate the pixel data directly. This makes things like debayering very fast. And you're correct that for raw images, NumPy is needed along with a few extra lines of code… https://newbedev.com/opencv-python-display-raw-image – this would probably need to be modified to avoid hitting the disk, if you can, which will speed things up. If you can get the data from the camera's frame buffer somehow, you would pass that to NumPy instead of sending it to disk and reading back a file. Should speed things up a lot.

You can quickly copy Mats to make duplicate images (so you'd read in your main image, then maybe copy that to a new Mat that's B/W to make it even faster for things like perf detection, then take the information you learned from that perf-detection pass and apply it to the main image). It takes a little practice to get the idea, but it's incredibly powerful.

NumPy is separate from OpenCV, but it lets you do all kinds of math on OpenCV Mats, so it's good to know if you're going to work in Python in general.

I had a heatsink with two fans mounted on it, but removed it as it wasn't really improving things and was vibrating the camera. I found that the external fan pointing at the camera keeps it at a constant 34°C. The camera is mounted on an aluminum plate, which is secured to the temporary 3D-printed Y-stage dovetail.

Temporarily using a Tokina 100mm f/2.8 set to f/5.6; the final version will use one of my Printing-Nikkor 105mm lenses, but I might research something else so the camera doesn't have to be so far from the gate.

Exactly that

Robin, have you posted your files for the lighting sphere and the other bits and bobs that are part of your lighting unit? I'd like to try to re-create it and compare it to what I'm using.

I was planning on making a dedicated post for my light solution with schematics, parts, the 3D model, and the Arduino code - so if anyone would like to participate in improving it, that would be great.

I should have everything posted this week.

In the meantime you can download the prototype model in .STEP below:

ROBINOSCAN RT_BACKLIGHT v1.step.zip (46.8 KB)

Edit: wanted to add, I printed it at 15% infill and coated the inside with Golden flat titanium white mixed with 10% barium sulfate powder.

I'm really interested in the RGB COB, and the potential for a driver made for it with the ability to strobe from a trigger input.

@Andyw and anyone else interested - I just made a dedicated thread for the light system here:

Mini update.

I was out of a camera for the last 3 weeks - had to do an RMA for an issue I was seeing. It's gone now.

Designed a PCB to clear up the mess of hand-made boards and cables.

I went from this…

…to this!

Here it is fully assembled.

I added header pins around both microcontrollers so I can continue developing and change things if needed - it's very much an in-progress development board, but I learned a lot doing this. The next version will be a lot cleaner and include the light system on the board. The drive and camera are both powered from a single 24V PSU; the light is on its own PSU.

Next step is refining the speed-regulation code; it works well as is, but I know it can be better. Then it's calibrating the sensor, and hopefully, if everything checks out, I'll order the final frame and put the machine to good use.


This is so cool! Just found this place, and am deeply inspired by what you are doing. I have a 35mm cartoon that I want to scan so badly. How did you get into building this? Does it do sound too?