New HQ Raspberry Pi Camera with C/CS-Mount

@peaceman: very interesting results! Indeed, your comparison clearly shows the heavy image processing which is nowadays so common, and so unavoidable, with consumer sensors.

By the way - even the raw image of the HQCam already has some image processing applied. I have seen noise measurements of raw HQCam captures which hint at some spatial filtering applied at the sensor level. Also, there are certainly dead-pixel algorithms at work before the HQCam ships out its “raw” frame.

Usually it is hard to get such information, as chip manufacturers tend to keep their secrets, and it is nearly always impossible to switch this processing off.

Anyway - what would be very interesting to test (if you have the time and setup) is how the 12bpp of the raw image file compares against the dynamic range of a standard Kodachrome color-reversal frame with large shadow areas.

To elaborate: color-reversal film is usually exposed so that the highlights don’t burn out. During projection, detail loss in the shadows is less annoying than burned-out highlights. Also, our eyes, as the “consumers” of projected film frames, surpass the capabilities of current sensor chips by a wide margin.

In order to reproduce highlight details faithfully in a scan, the exposure value for a single-frame capture would usually be chosen so that the brightest image parts are still within the dynamic range of the full digital image path.

For example, I am working with 8bpp (and HDR later in my processing pipeline). So my exposure is set in such a way that the sprocket hole of the film (which is pure white by definition) maps to something in the 240-250 value range (the maximum value representable with 8bpp is 255). With the 12bpp of a raw HQCam image, you could probably go up to values of 4000+ or so for pure white.

Now it would be interesting to see how well such an exposure setting performs not in the highlights, but in the darker image areas of a frame. Specifically, how far can you go toward darkness?

There are two interesting points here. First, there will be more noise in the lower bits of a capture, which raises the noise floor in dark, dense image areas compared to brighter ones. Second, only a limited number of code values is available in these dense film areas - just a few bits of the image data change here. This could lead to banding and quantization artifacts (or it might not - that would be an interesting result in itself).
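
Just to make the quantization point concrete, here is a rough back-of-the-envelope sketch (pure illustration on my part - the white reference of 3900 is just an assumed value for an ideal, linear 12-bit capture):

```python
# Assumed: ideal linear 12-bit raw, white reference placed at ~3900 (hypothetical).
# How many distinct code values are left for content sitting N stops below white?

white_ref = 3900

for stops_below in range(8):
    top = white_ref / 2 ** stops_below            # upper bound of that stop
    bottom = white_ref / 2 ** (stops_below + 1)   # lower bound of that stop
    print(f"{stops_below} EV below white: ~{int(top - bottom)} code values in this stop")

# Around 6-7 stops down only a few tens of code values remain, which is where
# banding and a raised relative noise floor would start to show.
```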

From my experience, a normal Super-8 frame basically never shows really dense areas, even in very dark shadows. Usually, the black mask around the frame is the densest part you will encounter (and you would probably not care to scan these areas in high fidelity). However, in a fade you might encounter really dark areas within the frame itself.

Well, it would be interesting to see the limits of a raw capture here, given the challenges of this approach - you have to cope with large file sizes, low transfer rates, and the need to “develop” the raw frame into something displayable on today’s 8bpp displays… (granted, this last point is about to change with HDR displays :smirk:)


I’ll definitely run some Kodachrome tests once I’ve got my light source sorted… currently trying to coat a ping-pong ball from the inside with barium sulfate to make a mini “Ulbricht” integrating sphere. Lots of moving parts, as you can see :slight_smile:

Your HDR results (like above) look really amazing, especially considering that you use output-referred 8bpc files. I’ve been remotely contributing to an HDR algorithm at my job and remember well that scene-referred input files (DNG from an iPhone in this case) helped a lot with proper tonemapping - so that one could darken e.g. the bride’s white dress without making the sun go gray. May I ask how you are compositing your output files? Is that a manual or an automatic process? And doesn’t taking 5 brackets + merging + tonemapping take a substantial amount of time and processing, too?
[Edit: You answered this already earlier — I had to re-read the thread, sorry]

Regarding the “developing” of the Raw files into something eye-pleasing, I hope I’ll be able to create a camera profile for ACR, and if I can’t, I’ll try to bribe some colleagues to do so. (Disclaimer: I am a Product Manager for Adobe Lightroom, and ACR is built “next door”).

But we’ll see — first step now is the light source and getting going with some python scaffolding.

Well, the short answer is that it is an automatic process.

But here are some more details about my approach. Some background information first: my first image processing algorithms were coded in FORTRAN 77, I later switched for decades to “C” (and occasionally C++/Java/Matlab/VHDL), but I have finally ended up doing most of my work in Python. Shows my age, I guess… :sunglasses:

Initially, I actually implemented a real HDR capture pipeline based on the Raspberry Pi v1-camera. That is, appropriate gain curves were estimated and a real HDR image of scene radiances was created from different exposures of a single frame. However, my hope of finding an appropriate, generic tone-mapping algorithm for the very different material I had at hand never materialized.

Having also spent some time researching the human visual system, it occurred to me, around the time Mertens et al. published their exposure fusion approach, that this algorithm has a lot in common with the way our visual system views the world.

Here’s what I wrote in another thread about this:

The above-mentioned OpenCV implementation is quite fast and usable, but I opted to write my own version. It is a little bit slower, but offers more ways to modify and tweak the basic algorithm. I am still in the process of optimizing this, but I am fine with the current results - especially because the process is fully automatic. No manual adjustments needed!
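
For reference, here is what the plain OpenCV variant of the Mertens algorithm looks like - a minimal sketch, not my tweaked version; the file names are placeholders:

```python
# Minimal sketch of the plain OpenCV exposure fusion (Mertens et al.),
# not the tweaked implementation described above. File names are placeholders.
import cv2
import numpy as np

# the five bracketed captures of one film frame, darkest to brightest
paths = [f"frame_exposure_{i}.png" for i in range(5)]
stack = [cv2.imread(p) for p in paths]

merger = cv2.createMergeMertens()   # default contrast/saturation/exposedness weights
fused = merger.process(stack)       # float32 output, roughly in the range [0, 1]

out = np.clip(fused * 255, 0, 255).astype(np.uint8)
cv2.imwrite("frame_fused.png", out)
```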

Here is my current workflow, with approximate timings included:

  1. Insert the film into the scanner, fix the camera exposure at 1/32 sec, set the resolution to 2028 x 1520, and set the red and blue gains to fixed values.
  2. Check for all 5 exposure values that the sprocket area is pure white; set the analog and digital gain in such a way that in the lowest exposure setting the bright sprocket area shows values about 10 units below maximum brightness. This ensures that any bright image areas of the film frame are properly recorded.
  3. Start scanning. Scanning consists of advancing the film one frame, waiting a fraction of a second for the film and mechanics to settle, and then taking 5 different exposures in short succession by dimming the light source over a range of 5 exposure stops. This takes about 1.5-2 seconds for the 5 exposures.
  4. The images are transferred from the HQCam to a Pi 4 which acts as a server on my LAN. Depending on the speed of the LAN, the transfer might take an additional 2-5 seconds per film frame.
  5. The images are received by a client software which stores them to disk. Currently, the client is also responsible for triggering the next frame capture, which introduces an additional delay. All in all, I usually calculate that for every second of Super 8 stock (at 18 fps) I need about 40 seconds to a minute for scanning the material. So this approach is quite slow on the capture side.
  6. Once the capture is done, the set of five images is combined into a single output frame. The exposure fusion algorithm which does that is usually run not on the source resolution of 2028 x 1520 px, but on a smaller 1200 x 900 px resolution. At that resolution, it takes on my PC between 800-1100 msec to process a single frame. At the full resolution of 2028 x 1520, the computing time increases to 1900-2100 msec per frame. The 1200 x 900 px images are stored on disk as 16bpp RGB images, with an average size of about 5 MB.
  7. A final post-processing program takes the 16bpp images and crops them to 80% for a final output image size of 960 x 720 px. At this stage, sprocket detection and alignment as well as color correction and grading are done. A slight sharpening step is also included here. This pipeline needs less than a second per frame (but I have not really timed it). It outputs the final frame as an 8bpp RGB image with a size of 960 x 720 px (a simplified sketch of this step follows below).
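
To make step 7 a bit more tangible, here is a much simplified sketch of only the central crop and the bit-depth conversion; sprocket alignment, color grading and sharpening are left out, and the file names are placeholders:

```python
# Simplified sketch of the final step: central 80% crop of the 16 bpp fusion
# result (1200 x 900 px) and conversion to an 8 bpp output frame.
import cv2
import numpy as np

img16 = cv2.imread("frame_fused_16bit.png", cv2.IMREAD_UNCHANGED)  # uint16, 1200 x 900

h, w = img16.shape[:2]
ch, cw = int(h * 0.8), int(w * 0.8)            # 80% central crop -> 960 x 720
y0, x0 = (h - ch) // 2, (w - cw) // 2
crop = img16[y0:y0 + ch, x0:x0 + cw]

out8 = (crop / 257).astype(np.uint8)           # map 0..65535 -> 0..255
cv2.imwrite("frame_final.png", out8)
```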

Now, for me, 960 x 720 px is a reasonable output format for my purposes (my old “Revue” camera combined with Agfa stock is far from reaching even that resolution… :upside_down_face:)

As capturing and processing are slow, lowering the resolution at appropriate points in the pipeline gives me some gains in processing speed. Actually, as capturing and post-processing run unattended (a whole film is processed with the same settings), I usually do the processing overnight and just check the results in the morning.

Anyway, I am still in the process of optimizing the whole thing. As this is the fourth (or so) version of my film scanner, I might end up with something totally different in a few months’ time…

Hope this wasn’t too long a response to read. Good luck with your light source - it will be interesting to see what raw captures the HQCam can achieve!


This was definitely one of the most exciting reads for me here, yet. Thank you so much for all the detail. I’ll keep you posted about my results…

Super interesting, definitely something to learn from - very helpful!

Do I read this correctly: using multiple exposures and the exposure fusion algorithm allows you to essentially set the exposure once (say, on the first frame) and then leave it, giving “good enough” results even if later scenes are not exposed quite the same?
I have actually been bouncing around the question of what to do about exposure (automatic vs. setting it manually to a fixed value once) for a while now. I never quite got around to it, though, because I’m still figuring out some of the mechanics on my build.

Yes, that is basically the case - within limits, of course. Let me give four further examples to illustrate that. The captures below are from the same run as the example above, with all settings unchanged. Here’s the first, rather extreme example:

Again, the center image in the lower row is the output of the processing pipeline, slightly color-corrected and cropped with respect to the input images. As one can see, the central region of the film frame is most faithfully recorded in the -5EV image, which is the leftmost thumbnail in the lower row. The same region is totally burned out in the +0EV image, which is the rightmost one in the lower row.

Let’s focus a little bit more on this central region - the areas near the columns were grossly overexposed already during filming. There is no structure visible here; the Kodachrome film stock was pushed to its limits by the camera’s autoexposure. There is, however, still color information (a yellow tint) available. You can see this if you compare these overexposed areas with the sprocket area of the -5EV capture, where the full white of the light source shines through. If you use a color picker, you will see that the sprocket area sits at about 92% of the maximal RGB values. That is the exposure reference point for the -5EV capture and for all the other captures (-4EV to +0EV), which are referenced to this one.

In any case, the minimal image information available in the central region of the frame (which is just some structureless yellow color) is detected by the exposure fusion and transferred to the output picture.

Turning now to the right side of the frame, you notice a really dark area. Here, image structure can be seen only in the -1EV (top-right) and +0EV (bottom-right) captures. However, the little structure that is there shows up in the output image as well, albeit with rather low contrast. Granted, you might get a slightly better result by manual tuning, but for me the automatically calculated result is usable (and less work).

Remember that the wide contrast range of this image has to be packed convincingly by the algorithm into the limited 8-bit range used by standard digital video formats.

Another important point in this respect is how the algorithm performs when the source is actually a low-contrast image - will the algorithm in this case just produce an output image with even lower contrast?

Well, it is actually difficult to find a low-contrast scene in my test movie. Here’s the closest example I could find:

Because of the low contrast, the capture at -4EV (top-left) or -5EV (bottom-left) is actually a perfectly valid scan of this frame. You would need only one of these single captures to arrive (with an appropriately chosen mapping) at a nice output image. Some of the other exposures are practically worthless, for example the +0EV image (bottom-right), which is mostly burned out in the upper part and can in no way contribute to the output image in this area.

Now, as this example shows, the output of the exposure fusion turns out fine again. There is no noticeable contrast reduction here. This is an intrinsic property of the algorithm.

To complement the above, here’s another example, again with the same unchanged scanning/processing settings:

This example frame was taken indoors without enough light available, so it is quite underexposed in the original source. Because of that, the -5EV image (bottom-left), which in normally exposed scenes carries most of the image information, is way too dark and more or less useless. However, the +0EV scan (bottom-right) is fine. Because the exposure fusion automatically selects the “most interesting” parts of the exposure stack, the output image is again fine.

(One question I would have at this point if I had read this far - what about fades to black in the film source? The answer is that they work fine. :wink:)

Well, Super-8 home movies seldom feature a finely balanced exposure; usually they were shot with whatever light was available, mostly direct sunlight. So here, in closing, is a typical frame a Super-8 scanner will have to work with: dark shadows mixed with nearly burned-out highlights.


Thank you for the great examples!

One question that comes to mind: What happens if the frame you expose for (say the first frame of a roll?) is not exposed correctly? Does this cause problems later down the roll? Or is this not an issue at all in practice?

It is no issue at all with the process described. The important thing is that the whole exposure sequence is referenced to the light source of the scanner itself. In setting it, I do not care about the frame content at all.

The exposure is set with a measurement of the sprocket area, where there is no film at all. Actually, it was measured once and I have never changed it between scans.

As the full intensity of the light source shines through the sprocket hole, even the clearest film stock will have a lower exposure value anywhere else in the frame.

This trick ensures that any structure in the highlights of the film source does not overwhelm the capture device (at least in the -5EV exposure).
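
In code, such a check could look roughly like this - not my actual implementation, and the sprocket ROI coordinates are made up for illustration:

```python
# Verify the exposure reference: in the darkest capture, the sprocket hole
# (pure light source) should sit just below clipping. ROI is hypothetical.
import cv2

frame = cv2.imread("darkest_exposure.png")      # the -5EV capture
sprocket_roi = frame[40:140, 20:120]            # region inside the sprocket hole

mean_val = sprocket_roi.mean()
print(f"sprocket mean: {mean_val:.1f} / 255 ({100 * mean_val / 255:.1f}% of full scale)")
if mean_val > 250:
    print("too close to clipping - lower the gain or the brightest light level")
```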

Now, any other structure present in the scanned film will be darker, actually much darker. Remember that color-reversal film is exposed for the highlights, not the shadows.

Capturing the shadow details is what the other exposures are for. You need to go into the darkness as far as possible.

The most challenging film stock I encountered so far is Kodachrome 40; here, the 5EV-exposure range is really needed for most of the film frames.

The darkest part of a scan is always the unexposed frame border, and if you use a color picker on one of the examples above, you will notice that the unexposed frame border is only slightly above the background level even in the brightest exposure (bottom-right).

In fact, my illumination cannot go much beyond the 5EV range, for various reasons. Ideally, I would like to push it a little bit further than is currently possible. This has to do with the gain curves of the sensor.

Let me elaborate this a little bit.

I do not have any gain curves measured for the HQCam (yet), so let me use the gain curves of the Raspberry Pi v1-camera:

[Image: GainCurvesV1 - gain curves of the Raspberry Pi v1-camera]

The y-axis shows different EVs, and the x-axis the range from 0x00 (0) to 0xFF (255) of 8bpp pixel values these EVs are mapped to. There is a more or less linear segment from approximately 6EV to 8EV. That is the “beauty range” of this camera.

Clearly, there is another range above 8EV, mapping to values of 230 or higher, which behaves differently from the main transfer range (from about 20 to 230). This is the range where the camera compresses the highlights.

Ideally, you would set the exposure in a scanner in such a way that the bright sprocket area (the light source itself) is mapped to 230 or less, not using the range where the highlights are compressed. If you take only a single exposure per frame, you actually cannot afford this, especially because at the other, darker end of the range, shadows are compressed as well (in this example in the range of 6EV down to 3EV).

The exposure fusion algorithm in fact ignores for each intermediate exposure the compressed regions in the highlights and in the shadows. The compressed highlight region is only used in the darkest exposure, and the compressed shadow region only in the brightest exposure.

I am setting the exposure reference in the darkest frame a little bit higher than 230; experiments have shown that this does not affect the visual quality of the scan at all, and I gain a little more structure in the dark shadows this way. But I would love to have some slight additional headroom here, say 0.5 EV or so. I can’t realize that with my light source, as the DACs have only 12-bit resolution, and the LED color changes at different illumination levels pose an additional challenge as well.
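
To illustrate the DAC limitation, here is a small sketch - assuming, purely for illustration, a linear LED response where dimming by one stop corresponds to halving the DAC code:

```python
# Illustration only: 12-bit DAC (full scale 4095), assumed linear LED response,
# one stop of dimming = halving the DAC code.
full_scale = 4095

for stop in range(8):
    code = full_scale / 2 ** stop
    print(f"-{stop} EV: DAC code ~{code:.0f}")

# Five stops down only ~128 codes are left; pushing much further quickly runs
# into the DAC resolution and the LED colour shift at low drive levels.
```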

I hope the above was not too technical. To summarize: the exposure in my scanner is never set or changed with respect to the film source. It is set only once, with respect to the light source, and is normally not changed from film to film.


ok, out of curiosity, I did a quick rerun of gain-curve estimations for the cameras and source materials I have at hand. Nothing precise, but interesting to look at, I think.

Here are the recovered film response curves (using an algorithm similar to the one described in the 1997 paper by Debevec & Malik, “Recovering High Dynamic Range Radiance Maps from Photographs”) for the 3 camera sensors I have access to:

(I have swapped the axes here with respect to a previous post so that the film response curves have a more familiar shape. “V1” is the response curve of the Raspberry Pi v1-camera, “see3” that of the See3CAM_CU135, and “HQ” that of the new Raspberry Pi HQCam.)

The x-axis of each graph shows the different exposure values (or scene radiances). The y-axis shows the full dynamic range an 8 bits per pixel format can work with (JPEG, most movie formats, or a usual display: from 0x00 (0) to 0xFF (255)).

The S-shape of these curves is the practical standard for mapping the huge dynamic range found in natural scenes onto the limited dynamic range available for display. If you look at any of the curves, you find an intermediate, more or less linear section, flanked by flatter sections in the shadows (low EVs) and highlights, which compress the information.
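
For anyone who wants to try this at home: OpenCV ships a Debevec & Malik style calibration which can serve as a starting point. A minimal sketch (not my own implementation), with placeholder file names and relative exposure values:

```python
# Sketch using OpenCV's built-in Debevec & Malik calibration. File names and
# the relative exposure values are placeholders.
import cv2
import numpy as np

paths = ["exp_0.png", "exp_1.png", "exp_2.png", "exp_3.png", "exp_4.png"]
times = np.array([1.0, 0.5, 0.25, 0.125, 0.0625], dtype=np.float32)  # relative exposures

stack = [cv2.imread(p) for p in paths]

calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(stack, times)   # shape (256, 1, 3): pixel value -> relative radiance

# Plotting pixel value (0..255) against log(response) - with the axes swapped
# as described above - gives S-shaped curves like the ones shown here.
```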

Actually, the problem of huge dynamic range is, and always was, present right from the start of analog photography. In those days, the chemical process of developing film stock led by itself to an S-shaped curve, and the expertise of the photographer consisted in choosing exposure, development times etc. to optimize this transfer function for a given scene. Ansel Adams was very influential in this respect (and put some science into the subject) by developing his zone system.

In the electronic age, this whole process is handled by the image processing pipeline, which transforms the raw sensor data (digitized to 10 to 14 bits) into the 8 bits available in standard image/video formats and displays. If you work directly with raw files, you have to come up with a suitable transfer function on your own (“developing the raw”); if you work with JPEGs or similar image formats as output, the transfer function is a given, fixed by the camera manufacturer.

The exposure fusion algorithm I described above works with 8-bit data (“JPEGs”), but intrinsically, mostly data from the middle section of the film response curve is used. This is because appropriate weighting functions are built into the process which drop the information from pixels that are too dark or too bright. For such outlying pixels, the algorithm simply takes information from an exposure which is better exposed.
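
For illustration, the “well-exposedness” weight from the original Mertens et al. paper is simply a Gaussian centred on mid-grey, so pixels near black or near white contribute almost nothing:

```python
# Well-exposedness weight as defined by Mertens et al. (sigma = 0.2 in the paper):
# a Gaussian around mid-grey, so crushed shadows and clipped highlights get
# almost no weight.
import numpy as np

def well_exposedness(pixel, sigma=0.2):
    """pixel normalized to [0, 1]; returns a weight in (0, 1]."""
    return np.exp(-((pixel - 0.5) ** 2) / (2 * sigma ** 2))

for v in (0.05, 0.25, 0.5, 0.75, 0.95):
    print(f"value {v:.2f} -> weight {well_exposedness(v):.3f}")
```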

Only for the brightest highlights is there no better-exposed capture available in the sequence; here, the data has to come from the darkest exposure.

That is why in my scanner the darkest exposure is the reference for all the other exposure settings.

In fact, I do not care about the film itself; I only require that in the darkest exposure, the light source itself is mapped into a suitable digital range.

Looking at the response curves, one can see that they flatten out at around an 8-bit value of 240 or so. As I do not want the highlights of the film frame to be compressed, that is the value I am aiming at for the full illumination intensity. As anything in the film frame will certainly be darker, I can be sure that I am using only the linear section of my transfer curve, as intended.


… continuing the testing of the Raspberry Pi HQCam further.

Here’s some information on the behaviour of the sensor + processing pipeline with respect to the quality setting.

The sensor used in the HQCam is primarily designed for video applications, not really for taking still images. That is probably why it has more video modes (4) than still image modes (2).

Well, that matches my use case, namely capturing massive amounts of data which are only later combined into an image representing the scan of a film frame; as camera output I use neither raw nor any RGB format, but straight video.

The camera outputs video as compressed MJPEG frames. The JPEG/MJPEG format has a quality setting, and it might be interesting to see how the camera performs at various settings.
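
For the record, a rough sketch of how such an MJPEG recording with a chosen quality can be started with the legacy, Buster-era picamera library; the resolution and exposure values simply mirror the numbers mentioned in this post:

```python
# Rough sketch using the legacy picamera library (Buster era); resolution,
# shutter speed and quality mirror the values discussed here.
import picamera

with picamera.PiCamera(resolution=(3456, 1944), framerate=10) as cam:
    cam.shutter_speed = 31250                    # ~1/32 s, in microseconds
    cam.start_recording("capture.mjpeg", format="mjpeg", quality=60)
    cam.wait_recording(5)                        # record for a few seconds
    cam.stop_recording()
```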

Here’s a small cutout of a 3456 x 1944 px frame at a quality setting of 60, which I will be using as the reference:

It was taken with the standard Raspberry Pi 16mm lens. This lens performs quite well here (the f-stop was at 5.6 in this experiment). By the way, forget about the alternative 6mm lens for scanning purposes - the quality of that wide-angle lens is inferior.

Let’s compare the capture above (q=60) to one with a quality reduced to q=5:

Clearly, there are JPEG artifacts showing up here, but it is still an amazingly good result. The file size of such a single frame at q=5 is so low that my image capture pipeline can run the 3456 x 1944 px resolution at the maximum speed the sensor chip is able to deliver: 10 fps. For comparison, with the q=60 setting, the achievable frame rate is currently about 2 fps. That is mainly caused by my decision to use a client-server setup which transfers the images via LAN - storing the images directly on the Raspberry Pi might be a better approach here. I am currently not doing this, as the Raspberry Pi tends to get quite hot during capturing: 81°C CPU temperature is easily reached if the Raspberry is not actively cooled.

Well, the Raspberry Pi used in the film scanner is actively cooled and always stays below 50°C.

The effect of the quality setting can be seen more easily by looking at the pure color information of the frames. Here’s this color signal extracted from the q=5 frame:

One clearly sees the beloved blocking artifacts of heavy JPEG compression. Raising the q-factor to 10, the results improve:

One step further, at q=20, the color information

improves further. This is especially noticeable in the top-right corner of the image. But already at this low q-factor, a usable frame is obtained. Comparing with the q=60 image,

one sees some further improvement, especially in the dark area of the trees. However, returning to the real image data, the difference between a q-factor of 20 (left image) and 60 (right image) is visually rather small:


The JPEG encoder of the new HQCam seems to perform much better than the encoders of the old v1- and v2-cameras; there, such low quality settings did not perform well at all. I am currently using a q=10 setting (7 fps) for focus adjustments, and a setting of q=60 for scanning. The exposure fusion examples posted above were actually captured with the low quality setting of 10.
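
(In case anyone wants to reproduce the chroma views above: one simple way is to convert the frame to YCrCb and look only at the chroma planes, where the blocking shows up most clearly. This is a guess at an equivalent method, with a placeholder file name, not necessarily the exact one used here.)

```python
# Extract the chroma planes of a capture to make JPEG blocking visible.
import cv2

frame = cv2.imread("capture_q5.jpg")
ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
y, cr, cb = cv2.split(ycrcb)

cv2.imwrite("chroma_cr.png", cr)   # Cr plane: blocking artifacts are easy to spot
cv2.imwrite("chroma_cb.png", cb)
```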

Finally, here’s a full frame of the scene at 3456 x 1944 px resolution and a q-setting of 99 (close to the maximum), for reference (actually, the original resolution was lost - the following image is only 1728 x 972 px):

… to complement my above experiments with MJPEG-output of the HQCam:

The " Strolls with my Dog" blog has two excellent and detailed investigations concerning raw image capture with the HQCam:

and

Very informative for the technically minded, and right to the point - actually, so much so that the blog author got banned from the Raspberry Pi Camera forum for asking too detailed questions… :sunglasses:


Hi there
Rare poster here, but eager reader :blush:
And please bear with me if I ask stupid questions.
I was just wondering if any of your knowledge might be applicable the other way round: by inserting an image sensor (RasPi HQ cam or Arducam) behind the film gate of a Super 8 camera and having the Raspberry take an image every time the shutter opens, i.e. when dark changes to light.
Then writing it to a card to get an image sequence.
I would love to convert a Super 8 cam to digital while retaining the original triggering method.
So just using a video camera would not do the trick; the rolling shutter usually gets really nasty.

Arducam has a global-shutter color cam, which could avoid this. Still, I wonder whether the HQ cam would be able to capture without the rolling-shutter effect if it were single-image-triggered by the Super 8 cam’s shutter at 12-24 fps? After all, the specs speak in favor of the HQ cam, sensitivity- and resolution-wise.
Sorry if this is a little vague, ask me anything you like if you need to.
Also if I’m in the wrong forum…

You should not pair a mechanical shutter with an electronic sensor if it is not necessary. If you do, you definitely need to use a sensor with a global shutter. With a rolling shutter, you will get funny results, but probably not what you are expecting. The RasPi HQ camera is, like every original Raspberry Pi sensor, a rolling-shutter type and will not work with a mechanical shutter in front of the sensor.

Also probably important: you cannot cancel the rolling-shutter effect by adding a mechanical shutter. The rolling-shutter effect is baked into the way the sensor functions; there is no way to change that.

If your goal is to convert an old Super-8 camera into an electronic one, there have been various attempts to do so. Look for an example here. Google might bring up a few newer attempts at such a conversion.

Thanks for taking the time to respond :blush:
Yeah, of course I’ve combed the net. There’s a lot of vaporware, like the good old Nolab cartridge, which still shows up regularly in various blogs. Then there are numerous contraptions with external gear and processors, stuff sticking out of the camera as in your example, and people just gutting a camera and shooting through the hole where the lens was… not that I dismiss these things; I think the builders went through a lot of tinkering and mostly did a good job. There’s one promising digital Super 8 cartridge by some guys in the Netherlands, though that one too relies on external hardware and has kind of stalled for some time now.
I’m aware of the rolling shutter of the HQ cam, but thought that capturing single images triggered by the light intensity of the opening shutter could avoid it.
On the other hand, there’s an Arducam with 2 MP that takes color imagery and has a global shutter. The sensor is also pretty close to a Super 8 frame size-wise. What I don’t know is whether a Raspberry Pi would be fast enough to react to the light, shoot an image, write it to memory and be ready for the next one in time. Using the shutter light as a trigger would be convenient, since it lets you use any selectable FPS of the camera. Plus, it would be closer to the roots of film making - just like your scans. I also thought about using the claw mechanism as a trigger, but I guess that has to be removed in order to get the sensor onto the focal plane.
My main goal, as probably that of a bunch of other people, is to have a device I can drop into any stock Super 8 camera and shoot video with that good old tactile feeling and handling I love about those cameras.
Plus I like the fact that there are many cameras available that have great optics at a great price.

That is simply not the case. On a sensor with a rolling shutter, different horizontal lines of the sensor are exposed at different times. Therefore, really fast-moving objects get distorted. This is just how rolling-shutter sensors work, and there is no workaround for it.

Instead of ripping out all the wonderful mechanics of an old Super-8 camera to create something in disguise, may I suggest looking at a more modern approach to a cinematic Raspberry Pi camera?

Just followed your link, that’s quite some impressive background info you put there!
See, that camera may be great and feature-rich, but to me 1500x1080 would be OK; I could also live with a lower resolution. And definitely with a more compact case, like e.g. a stock Super 8 one :wink:
In case you didn’t notice, “ripping out all the wonderful mechanics of an old Super-8 camera to create something in disguise” isn’t exactly what I want. As I wrote, I want to keep as much of the original camera intact as possible. Thus the need to trigger exposure via the light level variation caused by the original shutter. Removing the film gate to get the chip onto the focal plane (it can be reinserted again, at least with some Nizos I know of) and popping in a digital replacement instead of a film roll doesn’t seem too cruel to me.
And if it would help those cameras to get used again, instead of lurking in boxes in collections not doing what they were intended for - to me that would be a great plus.
Just my opinion.

Ah, now I understand your plan slightly better. Well, first: the difference between global and rolling shutter is barely noticeable for usual subjects. Your mileage will vary, depending on your intention. Sensors with a global shutter are more complex and thus more expensive than rolling-shutter sensors with otherwise similar specifications.

While a lot of available sensors have sizes similar to the Super-8 format, the surrounding support circuits need noticeably more space. Either you will need to rip out the complete film gate, or you will need to design a new circuit board for the sensor with an appropriate footprint.

Not all sensors/cameras feature an external trigger input. You would need something like that to make your sensor run in sync with the mechanical trigger. The Raspberry Pi HQ camera actually has one, but it is at the moment largely undocumented, runs at a rather low voltage and needs additional soldering to be usable. Some Arducam clones at least have solder holes instead of small pads, which makes operations like this slightly easier. One of the first pinned topics on the Raspberry Pi camera forum discusses the external sync.

Actually, I think it would be possible to design a system which would fit completely into an old Super-8 cassette, so no modifications would be needed to the camera. You would need a transfer optic to get the sensor away from the film gate and one of the smaller form-factor Raspberry Pi compute units. You could actually derive the frame trigger directly from the film transport pin with an appropriate sensor. A global-shutter sensor with external trigger would be essential. It would be quite a challenging project, with optical design challenges, the design and handling of special surface-mount PCBs, and quite some software challenges…

Wow! I love this setup!
Can I ask where you got it from (the models to 3D print), or if you’d be willing to share them (if you made them yourself)?

Hi @oldSkool - welcome to the Kinograph forums!

I do all my designs myself, from the optics through the software to the 3D models. However, I only share what I consider to be a somewhat workable solution. So I won’t release (at this point in time) the design files of my xyz-slider, as I do not consider it a workable design. But at the bottom of this specific post you can find at least the .stl file for the camera body of the Raspberry Pi HQ cam, in combination with a Schneider Componon-S.

As far as I remember, I did post two core designs of my scanner, namely the integrating sphere and the 3D-printed film gate. I was able to find the film gate (including stl-files). If you scroll down a little bit in this thread, you will also find links to the stl-files for the integrating sphere.


I was preparing a similar post! There are very important changes to the camera software with the Bullseye release. With this release, a new stack based on libcamera is installed, along with applications equivalent to raspistill, raspivid, etc. The old applications based on MMAL, including the Python picamera library, are no longer available (but it might be possible to reactivate them?). A new officially supported Python library is said to be under development. So for picamera users it is absolutely necessary to stay with Buster until all this stabilizes.
