[v2 dev] Milestone 4 - Machine UX

Summary
Improvements to the machine's interface to make it more usable. At a minimum, this will include rewind, visual feedback on scanned frames, and lighting adjustment controls.

Using a Kinograph should be easy and intuitive. The current design is full of switches and knobs that were added ad hoc to accommodate development. We want to replace this system with one that is user-friendly and easy to understand. In this phase we will make a list of all the basic actions a user may want to perform with a Kinograph and come up with a physical interface on the machine to accomplish those tasks.

Required:

  • list of desired improvements

Deliverables:

  • Blog post with video showcasing the new improvements

Hi @matthewepler, I don't know where else to put this, but I have two questions:

  1. Could you make a Shapeways account? That way you could set the right material for each part, and people could simply buy the parts straight away without worrying about whether the dimensions are correct. That would also help those who don't work with 3D printing at all.

  2. And will version 2 get a "cleaning" sprocket like the Filmfabriek machine?

Thanks.

@Gunther_Weygers great idea about Shapeways. I’ll do that!

V2 doesn’t have film cleaning as part of the design…yet. I did come across a couple of good, cheap ideas (which I can’t seem to find right now) for how to approach this, but I haven’t incorporated them into the design. I am, however, leaving room in the film path for additions like this, which I hope the community will take advantage of as they build their own features.

If you find any designs you like, please add them! I found this old thread: Wetgate Solution? - #3 by tguinan

More of an implementation suggestion than a UI design suggestion:
Implement the UI through a web interface, allowing end-user devices to attach and control the machine. This would enable remote monitoring and a host of other advantages.
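To sketch the idea, here's a minimal example of what a web control layer could look like, assuming a Python back-end. Flask is just one possible choice, and the endpoints and motor hooks here are hypothetical placeholders, not actual Kinograph code:

```python
# Minimal sketch of a web control interface for the scanner.
# The endpoints and transport behavior are hypothetical.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/transport/rewind", methods=["POST"])
def rewind():
    # In a real build this would signal the motor controller.
    return jsonify(status="rewinding")

@app.route("/status")
def status():
    # Any browser or device on the network could poll this
    # for remote monitoring.
    return jsonify(frame=1234, state="idle")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)  # reachable from other devices
```

Anything with a browser could then drive the machine, which is where the remote-monitoring advantage comes from.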

@Ric, it’s still unclear how much UI we really need. I had originally hoped users would be able to bring their own PC, plug it into the Kinograph, and then run some kind of web-based software, as you mentioned.

However, if we go with a computer vision solution, it may become necessary to use something like an NVIDIA Jetson to handle the rapid processing it requires.

In either case, I agree that a web-based platform would be the best to develop on. I’m hoping Electron will be able to support the back-end Python scripts required to run OpenCV algorithms, and also display the recorded images fast enough.
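For example, the Python side of that hand-off might look something like this rough sketch. The script name, the file-based hand-off, and the edge-detection step are all placeholders, not a settled design; the point is just that Electron can spawn a script like this per frame and read its stdout:

```python
# process_frame.py - hypothetical per-frame OpenCV script that an
# Electron front-end could invoke via child_process.spawn().
import sys
import cv2

def process(in_path: str, out_path: str) -> None:
    img = cv2.imread(in_path)
    if img is None:
        sys.exit(f"could not read {in_path}")
    # Placeholder for real analysis: detect edges so the UI has
    # something to overlay on the scanned frame.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    cv2.imwrite(out_path, edges)
    print("ok")  # Electron reads stdout to know the frame is done

if __name__ == "__main__":
    process(sys.argv[1], sys.argv[2])
```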

I’m open to suggestions!!

One approach that might make sense is to do it like the Northlight scanner did: everything is done with command-line applications, but Filmlight provided a basic GUI. All the GUI does is call the command-line apps, one frame at a time. Obviously it would be a little different if you’re doing it in real time, but the basic idea is not a bad one. It would allow people to build their own UI. All you’d be doing is exposing an API, pretty much. The application could be web based, command line, or a more traditional platform-specific GUI.
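As a rough sketch of the pattern (the `scan-frame` CLI and its flags are made up for illustration, since Kinograph doesn't have these tools yet), any UI is then just a thin loop over the command-line tool:

```python
# Sketch of the "thin GUI over command-line tools" pattern.
# The `scan-frame` CLI and its flags are hypothetical stand-ins.
import subprocess

def scan_frame(frame_number: int, out_dir: str) -> bool:
    """Invoke the command-line scanner for a single frame.

    Any UI (web, desktop, or another script) can call this;
    the CLI itself is the real API surface.
    """
    result = subprocess.run(
        ["scan-frame", "--frame", str(frame_number), "--out", out_dir],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0

# A batch run is then just a loop, one frame at a time:
for n in range(100):
    if not scan_frame(n, "./scans"):
        print(f"frame {n} failed")
        break
```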

This is a great idea that I like very much. I had considered something like this a while back, and your mentioning it again has me thinking it warrants a revisit.

Sometimes all you need is another voice of reason. Thanks @friolator!