Yes, I have some news. I posted it a month ago in another thread, so here is a quote: [quote=“Andreas, post:4, topic:176, full:true”]
Indeed, I have some updates.
I am in talks with the Film University Babelsberg in Potsdam.
In September I am going to present the current state of OSIRes (followed by a paper) at the IASA convention (IASA: International Association of Sound and Audiovisual Archives) in Berlin.
OSIRes is soon to become a joint project between the Film University and the HTW Berlin, maybe also with the IASA as an investor (uncertain at the moment).
I know it is not as fast as everybody hoped (including myself), but with access to various film material, investment and time, I hope it will become a better solution than originally planned.
Meanwhile I have concentrated on other video/audio software that may also be interesting for the kinograph community.
So I have written a hybrid median/average plugin for AviSynth, for the restoration of VHS cassettes.
The motivation was to reduce noise and salt-and-pepper artefacts by combining several digital captures of the same analogue source.
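The core idea can be sketched in Python/NumPy (the actual plugin is written for AviSynth; the function name and the spread threshold below are illustrative assumptions, not taken from the plugin):

```python
import numpy as np

def combine_captures(frames, spread_threshold=12):
    """Combine multiple captures of the same analogue frame.

    frames: list of 2-D uint8 arrays (grayscale captures of one frame).
    Where the captures agree (small spread across captures), average
    them to reduce random noise; where one capture disagrees strongly
    (salt-and-pepper dropouts), take the median instead, which rejects
    the outlier.
    """
    stack = np.stack(frames).astype(np.float64)   # shape (N, H, W)
    mean = stack.mean(axis=0)
    median = np.median(stack, axis=0)
    spread = stack.max(axis=0) - stack.min(axis=0)
    out = np.where(spread > spread_threshold, median, mean)
    return np.clip(out, 0, 255).astype(np.uint8)
```

With three or more captures, a single dropout in one capture is fully removed by the median branch, while well-behaved pixels still get the noise reduction of averaging.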
Another piece of software I am currently developing reduces clicks and pops in stereo recordings by interpolation within the stereo field, using a kind of Dolby Pro Logic.
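A very rough sketch of that idea, assuming the click affects only one channel (the real software uses Pro-Logic-style matrixing; the threshold, window size, and local-gain estimate here are simplified stand-ins, not the actual algorithm):

```python
import numpy as np

def repair_clicks(left, right, jump_threshold=0.5, win=32):
    """Crude sketch: detect a click in the left channel by a large
    sample-to-sample jump, then rebuild the damaged samples from the
    right channel, scaled by the locally estimated inter-channel gain
    (a stand-in for re-placing the signal in the stereo field)."""
    left = left.copy()
    clicks = np.where(np.abs(np.diff(left)) > jump_threshold)[0]
    for i in clicks:
        lo, hi = max(0, i - win), min(len(left), i + win)
        # least-squares gain between channels around the click
        denom = np.dot(right[lo:hi], right[lo:hi]) + 1e-12
        gain = np.dot(left[lo:hi], right[lo:hi]) / denom
        left[i] = gain * right[i]
        if i + 1 < len(left):
            left[i + 1] = gain * right[i + 1]
    return left
```

A production version would exclude the damaged samples from the gain estimate and handle clicks present in both channels; this only shows the stereo-field substitution principle.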
I guess more news will follow in one or two months.
Andreas, we look forward to it!
Waiting eagerly. Congratulations.
Andreas, any success? Any news on the program? Do not leave us, we are waiting for you.
I am not leaving you.
I was at this year’s IASA conference, but somehow it was too late to show OSIRes there as a presentation. But I had a few very interesting conversations and took the opportunity to present OSIRes on a small scale and make connections. Maybe it will be shown at the next IASA conference.
So, I am currently working at the Film University Babelsberg on an archival digitisation project, and OSIRes shall become part of it. And I am still making connections with some companies to finance that project.
I know that ARRI has also developed sound extraction software, as have some Polish developers (obviously not as open source).
And it is really hard to find a company for a subsidised project that ends up being open source.
They want to co-operate, but they also want me to sign a confidentiality agreement and make a business out of it.
I also haven’t had the time to do further work on OSIRes since starting my job at the Film University.
I am working there, but still haven’t found an affordable flat to live in.
So I travel 510 km by train twice a week and stay in a guesthouse there.
I am working hard to get OSIRes started as soon as possible.
But there are also so many other things at the moment to work on (like developing long term archiving strategies and the right digitisation workflows), and it will also help kinograph, I guess.
@Andreas! Great to hear from you. Sorry to hear about your housing troubles. If it was up to us, you’d live in a luxurious home with all the time you needed to work on OSIRes as much as you wanted.
Sounds like you are doing great work at the university. If Kinograph can help you crowdfund, we would certainly be willing to do so. Just let me know!
Still working on V2 of the hardware. I haven’t announced it yet but I just got a small residency with a small amount of funding for the next 3 months. It should help cover the cost of imaging components. So that is good news.
Please let us know if there is anything we can do to help you, including making connections at conferences or funding sources. There may be interested people here in New York. If you know someone here that you’d like to talk to I might be able to reach out to them through my contacts.
Hope that we will see the completed OSIRes soon.
I also hope to see…
Actually I have some news.
During the last months I did a lot of tests with an ARRIscan (not only audio) and adapted OSIRes to newer stereo soundtracks like Dolby and Dolby SR.
I have found what I believe are the right partners to finish the software, namely MWA Nova and Cube-Tec.
So it is in the making to start as a ZIM project between MWA Nova, Cube-Tec and the Film University.
I hope it will be certain in the next months, but I have already started improving the software.
Maybe there will be some comparison material of Dolby SR via OSIRes vs. a classic photocell in the next weeks.
Glad to hear about the progress. Congratulations.
Any update on the progress?
@Andreas would also be curious to hear your thoughts on how CV might be used to capture and process sound at the same time as image using the same camera.
I’ve purchased an NVIDIA Jetson TX2 to be paired with this Blackfly S model. If there’s a way to process the sound while the capture is in progress, we would be able to significantly speed up the process. Do you think that’s possible?
I can put you in touch with the person I’m speaking with about CV algorithms. I hope to contract him to do some work in a couple of months. I have to finish the mechanics and lighting first.
What made you change the camera?
I realized that if I was using OpenCV to determine when to trigger the camera, I would need a much higher frame rate than just 24fps. So I paid for the extra speed of the Blackfly S.
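The triggering idea described here might be sketched like this (this is not the actual capture code; the ROI placement, threshold, and class names are illustrative assumptions). The sensor watches a small window where the backlit sprocket hole passes and fires once per dark-to-bright transition:

```python
import numpy as np

def perforation_visible(frame, roi, threshold=200):
    """True when the bright (backlit) sprocket hole fills the ROI.
    frame: 2-D uint8 grayscale array; roi: (y0, y1, x0, x1), a small
    window placed on the path of the perforation."""
    y0, y1, x0, x1 = roi
    return frame[y0:y1, x0:x1].mean() > threshold

class EdgeTrigger:
    """Fire the camera once per perforation: only on the
    dark-to-bright transition, not while the hole stays visible."""
    def __init__(self):
        self.prev = False

    def update(self, hole_visible):
        fire = hole_visible and not self.prev
        self.prev = hole_visible
        return fire
```

Because the trigger must catch the transition within a fraction of a frame period, the monitoring loop has to run several times faster than the target 24 fps, which is the motivation for the faster sensor.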
Sorry, I have lost my sense of time a little bit.
For a ZIM project in the approval procedure, it is not ideal to have an open-source code base to start from.
But it is possible to release the results as open source. The project is still in the approval procedure.
So we all have to wait a little bit until it is over.
And at the moment I am spending my time on other archival work, for example modifying a magnetic tape machine to capture the perforations of sepmag and commag tapes as a reference for shrinkage.
So the audio can be dynamically resampled afterwards to keep it in sync with the image.
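One way such perforation-referenced resampling could be sketched (a minimal assumption-laden version: the detected perforation positions are given as sample indices, and the warp between anchors is piecewise linear; the real workflow may differ):

```python
import numpy as np

def resample_for_shrinkage(audio, perf_samples, nominal_pitch_samples):
    """Resample audio so the measured perforation spacing is stretched
    back to the nominal pitch, restoring sync with the image.

    audio: 1-D float array of audio samples.
    perf_samples: sample indices where perforations were detected on
                  the shrunken tape.
    nominal_pitch_samples: samples one perforation interval should
                  span on unshrunken stock.
    """
    perf_samples = np.asarray(perf_samples, dtype=np.float64)
    # Target anchor positions: evenly spaced at the nominal pitch.
    target = np.arange(len(perf_samples)) * nominal_pitch_samples
    n_out = int(target[-1])
    # Map each output sample back to a source position by piecewise-
    # linear interpolation between perforation anchors, then sample
    # the audio at those (fractional) positions.
    src_pos = np.interp(np.arange(n_out), target, perf_samples)
    return np.interp(src_pos, np.arange(len(audio)), audio)
```

For a uniformly shrunken tape this is just a constant-rate stretch; the per-perforation anchors matter when shrinkage varies along the reel.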
Shrinkage of magnetic audio tape is a major problem in our archive.
So you want to use a colour sensor to capture the image.
That is not the quality approach to image and audio. De-bayering artefacts can be made less visible in the image when you scale a larger source resolution down to a smaller target resolution, but they will also be audible/noticeable in the soundtrack.
It would be better to use a monochrome sensor and intermittent film transport with several colour illumination passes to get the best possible quality.
Yes, it would be possible to implement the extraction and clean-up in real time, but in that case it would be better to use a separate line sensor to capture the audio image. Shrinkage compensation would not be possible directly, unless you capture the perforation and use it as a reference for resampling.
But if there are splices and defective perforations, those have to be detected somehow.
Glad to know there is progress. I based my design on an intermittent film transport with a monochrome camera. The options to capture sound were limited to AEO and real time capture through a projector. Then came the good news about OSIRes. I hope that we will see a completed version during 2018. Good Luck.
I have sent an email to Point Grey asking about the availability and expected delivery time of the Chameleon3 3.2 MP Mono USB3 Vision camera. Unfortunately I have not received any reply from them to date. Did you receive any response from them?
@Udayarangi would you be interested in having a Skype call about our builds? I’m interested in learning more about how you’re handling intermittent motion.
I did not enquire about the Chameleon. Only the Blackfly and Grasshopper.
No problem with Skype. But all this assembly work is being done at a machine shop. First I will send you video footage of the machine. Then it will be easier to understand.
What I wanted to know was whether they respond to emails. It seems they are not interested in replying to emails. (Still no reply to my second email.)