The Missing Link? -- Sound Capture with Hardware

Are we the only ones interested?
I also eagerly await its release.

Time is the main issue at the moment.
The software has a name now: OSIRes, for Optical Soundtrack Image Processing Restoration.
And I have started to develop some functions for Dolby Stereo soundtracks.
But my time is very limited at the moment, so I can’t really say when OSIRes will be
finished.
I hope it will be before summer, but no guarantee.

If we raised some money, would you be able to devote some time to working on it? Like a freelance project?

Matthew

I don’t think so.
It is a turbulent time for me now. And being a freelancer is also risky.
Give me a little more time, until summer.
If OSIRes isn’t finished then, I will publish the source code, so others can work on it too.

Hi Andreas. Summer has come. Do you have news for us?

Hi dan74,
yes, I have some news. I posted it a month ago in another thread, so here is a quote:
[quote=“Andreas, post:4, topic:176, full:true”]
Indeed, I have some updates.
I am in talks with the Film University Babelsberg in Potsdam.
In September I am going to present the current state of OSIRes (followed by a paper) at the IASA convention (IASA: International Association of Sound and Audiovisual Archives) in Berlin.
OSIRes is soon to become a joint project between the Film University and the HTW Berlin, maybe also with the IASA as an investor (uncertain at the moment).
I know it is not moving as fast as everybody hoped (including myself), but with access to various film material, investment, and time, I hope it will become a better solution than originally planned.
Meanwhile I have concentrated on other video/audio software that may also be interesting for the Kinograph community.
I have written a hybrid median/average plugin for AviSynth, for the restoration of VHS cassettes.
The motivation was to reduce noise and salt-and-pepper artefacts by combining several digital captures of the same analogue source.
Another piece of software I am currently developing reduces clicks and pops in stereo recordings by interpolation within the stereo field, using a kind of Dolby Pro Logic.
[/quote]
I guess more news will follow in one or two months.
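For anyone curious what that median/average combining could look like in code, here is a minimal NumPy sketch. It is not Andreas’s AviSynth plugin, just an illustration of the idea, and it assumes the captures are already spatially aligned: the per-pixel median rejects salt-and-pepper outliers, and the remaining samples are averaged to reduce random noise.

```python
import numpy as np

def combine_captures(captures, outlier_threshold=30):
    """Combine several aligned captures of the same analogue source.

    captures: list of 2-D uint8 arrays of identical shape (one per capture pass).
    The per-pixel median rejects salt-and-pepper outliers (dropouts, dust),
    then the samples close to the median are averaged to reduce random noise.
    """
    stack = np.stack([c.astype(np.float32) for c in captures])  # shape (N, H, W)
    median = np.median(stack, axis=0)                           # outlier-robust estimate
    keep = np.abs(stack - median) <= outlier_threshold          # drop samples far from the median
    kept_sum = np.where(keep, stack, 0.0).sum(axis=0)
    kept_cnt = keep.sum(axis=0)
    # Average the kept samples; fall back to the plain median where everything was rejected.
    out = np.where(kept_cnt > 0, kept_sum / np.maximum(kept_cnt, 1), median)
    return np.clip(out, 0, 255).astype(np.uint8)
```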


Andreas, we look forward to it!

Waiting eagerly. Congratulations.

Andreas, any success? News on the program? Don’t leave us, we are waiting for you.

Hello,
I’m not leaving you.
I was at this year’s IASA conference, but somehow it was too late to show OSIRes there in a presentation. But I had a few very interesting conversations and took the opportunity to present OSIRes on a small scale and make connections. Maybe it will be shown at the next IASA conference.
I am currently working at the Film University Babelsberg on an archival digitisation project, and OSIRes shall become part of it. And I am still making connections with some companies to finance that project.
I know that ARRI has also developed sound extraction software, as have some Polish developers (obviously not as open source).
And it is really hard to find a company for a subsidised project whose result will be open source in the end.
They want to co-operate, but they also want me to sign a confidentiality agreement and make a business out of it.

I also haven’t had the time to do further work on OSIRes since starting my job at the Film University.
I am working there, but still haven’t found an affordable flat to live in.
So I travel 510 km by train twice a week and stay in a guesthouse there.
I am working hard to get OSIRes started as soon as possible.

But there are also so many other things to work on at the moment (like developing long-term archiving strategies and the right digitisation workflows), and I guess that will also help Kinograph.


@Andreas! Great to hear from you. Sorry to hear about your housing troubles. If it were up to us, you’d live in a luxurious home with all the time you needed to work on OSIRes as much as you wanted.

Sounds like you are doing great work at the university. If Kinograph can help you crowdfund, we would certainly be willing to do so. Just let me know!

Still working on V2 of the hardware. I haven’t announced it yet but I just got a small residency with a small amount of funding for the next 3 months. It should help cover the cost of imaging components. So that is good news.

Please let us know if there is anything we can do to help you, including making connections at conferences or funding sources. There may be interested people here in New York. If you know someone here that you’d like to talk to I might be able to reach out to them through my contacts.

Matthew

Hope that we will see the completed OSIRes soon.

I also hope to see…

Actually I have some news.
During the last month I did a lot of tests with an ARRIscan (not only audio) and adapted OSIRes to newer stereo soundtracks like Dolby and Dolby SR.
I have found, I believe, the right partners to finish the software, namely MWA Nova and Cube-Tec.
So it is in the making to start as a ZIM project between MWA Nova, Cube-Tec, and the Film University.
I hope it will be confirmed in the next months, but I have already started improving the software.
Maybe I can post some comparison material of Dolby SR and OSIRes vs. a classic photocell in the next weeks.


Glad to hear about the progress. Congratulations.

Any update on the progress?

@Andreas, I would also be curious to hear your thoughts on how CV might be used to capture and process sound at the same time as the image, using the same camera.

I’ve purchased an NVIDIA Jetson TX2 to be paired with this Blackfly S model. If there’s a way to process the sound while the capture is in progress, we would be able to significantly speed up the process. Do you think that’s possible?

I can put you in touch with the person I’m speaking with about CV algorithms. I hope to contract him to do some work in a couple of months. I have to finish the mechanics and lighting first.

What made you change the camera?

I realized that if I was using OpenCV to determine when to trigger the camera, I would need a much higher frame rate than just 24fps. So I paid for the extra speed of the Blackfly S.
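To illustrate the kind of trigger loop Matthew describes, here is a rough OpenCV sketch. The ROI, threshold, and fill ratio are made-up values, the generic VideoCapture stands in for the Blackfly S preview stream, and the print call stands in for the actual hardware trigger; a real setup would go through the camera SDK and GPIO instead.

```python
import cv2

# Hypothetical region of interest over the perforation track (x, y, w, h),
# with a backlight so the hole shows up as a bright patch.
ROI = (10, 200, 40, 40)
HOLE_THRESHOLD = 200   # 8-bit grey level separating the hole from the film base
TRIGGER_FILL = 0.6     # fire when the hole covers 60 % of the ROI

def perforation_present(frame_gray):
    """Return True when a perforation hole fills the ROI enough to trigger a capture."""
    x, y, w, h = ROI
    roi = frame_gray[y:y + h, x:x + w]
    bright = cv2.threshold(roi, HOLE_THRESHOLD, 255, cv2.THRESH_BINARY)[1]
    return (bright.mean() / 255.0) >= TRIGGER_FILL

cap = cv2.VideoCapture(0)  # stand-in for the machine-vision camera's preview stream
was_present = False
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    present = perforation_present(gray)
    if present and not was_present:
        # Rising edge: a new perforation arrived, so trigger the full-resolution capture here.
        print("trigger frame capture")
    was_present = present
cap.release()
```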

Sorry, I have lost my sense of time a little bit.
For a ZIM project in the approval procedure it is not that good to have an open-source code base to start with.
But it is possible to release the results as open source. The project is still in the approval procedure.
So we all have to wait a little bit until it is over.
And at the moment I am spending my time on other things in archival work, for example modifying a magnetic tape machine to capture the perforations of sepmag and commag tapes as a reference for the shrinkage.
So the audio can be dynamically resampled afterwards to keep it in sync with the image.
Shrinkage of magnetic audio tape is a major problem in our archive.
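As a rough sketch of that dynamic resampling idea, assuming the perforation positions have already been detected and expressed as sample indices in the digitised audio (the function and parameter names are illustrative only): the measured perforation positions are mapped onto their nominal, unshrunken spacing, and the audio is read back through that piecewise-linear time warp.

```python
import numpy as np

def compensate_shrinkage(audio, perf_samples, nominal_spacing):
    """Resample audio so the recorded perforations land on their nominal spacing.

    audio:           1-D float array, the digitised soundtrack.
    perf_samples:    sample indices where perforations were detected on the shrunken tape.
    nominal_spacing: samples between perforations on unshrunken film,
                     e.g. sample_rate / (frame_rate * perfs_per_frame).
    """
    perf_samples = np.asarray(perf_samples, dtype=np.float64)
    nominal = np.arange(len(perf_samples)) * nominal_spacing + perf_samples[0]
    out_pos = np.arange(int(round(nominal[-1] - nominal[0])) + 1) + nominal[0]
    # Map each output position back to a fractional position on the shrunken tape...
    src_pos = np.interp(out_pos, nominal, perf_samples)
    # ...and read the audio there with linear interpolation.
    return np.interp(src_pos, np.arange(len(audio)), audio)
```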

So you want to use a colour sensor to capture the image.
That is not the highest-quality approach for image and audio. De-Bayering artefacts can be made less visible in the image when you scale a larger source resolution down to a smaller target resolution, but they will also be audible/noticeable in the soundtrack.
It would be better to use a monochrome sensor and intermittent film transport with several colour illumination passes to get the best possible quality.
Yes, it would be possible to implement the extraction and clean-up in real time, but in that case it would be better to use a separate line sensor to capture the audio image. Shrinkage compensation would not be possible directly, unless you also capture the perforation to use it as a reference for resampling.
But splices and defective perforations would then have to be detected somehow.
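To make the extraction step itself concrete: on a variable-area track the instantaneous amplitude is essentially the transparent width of the track on each scanline, so a first-order extraction is just a per-line sum of brightness over the soundtrack area. The sketch below only illustrates that principle, not OSIRes; real software additionally has to deal with azimuth/skew, dust and scratches, non-uniform illumination, and the two channels of a stereo track.

```python
import numpy as np

def extract_variable_area(track_image, dc_block=True):
    """Naive audio extraction from a variable-area optical soundtrack scan.

    track_image: 2-D array (scanlines x pixels) covering only the soundtrack area,
                 one scanline per output sample, brighter = more transparent.
    """
    lines = track_image.astype(np.float64)
    audio = lines.sum(axis=1)      # transparent width per scanline ~ instantaneous amplitude
    if dc_block:
        audio -= audio.mean()      # crude removal of the DC offset from the bias
    peak = np.max(np.abs(audio))
    return audio / peak if peak > 0 else audio
```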