Now that I have a little more time, here is a quick rundown with some comments and pointers.
I suggest you consider the Raspberry Pi Pico instead of an Arduino. It has many advantages; one disadvantage is that it has no NV memory for settings, but those can be stored on the RPi or PC driving the transport controller (the Pico).
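As a rough illustration of that split (the port name, file name, and JSON keys below are placeholders, not from any particular build), the host keeps the settings in a file and pushes them to the Pico over USB serial at start-up:

```python
import json
import serial  # pyserial, running on the RPi or PC

SETTINGS_FILE = "scanner_settings.json"   # hypothetical settings file kept on the host

def load_settings(path=SETTINGS_FILE):
    with open(path) as f:
        return json.load(f)  # e.g. {"steps_per_frame": 120, "tension_setpoint": 32768}

def push_settings(port="/dev/ttyACM0"):
    cfg = load_settings()
    with serial.Serial(port, 115200, timeout=1) as link:
        # the Pico firmware is assumed to parse one JSON line as its configuration
        link.write((json.dumps(cfg) + "\n").encode())
```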
Normally, when talking about separate RGB, the typical setup is a monochrome sensor with an R, G, and B illuminant: one exposure is taken per color, every pixel contributes to every channel, and the sensor's pixel count determines the resolution.
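Conceptually it looks like this (illustration only; set_led and capture_mono stand in for whatever LED and camera control you actually use):

```python
import numpy as np

def capture_separate_rgb(set_led, capture_mono):
    """One monochrome exposure per illuminant color, stacked into an RGB image."""
    channels = []
    for color in ("red", "green", "blue"):
        set_led(color)                   # switch the illuminant
        channels.append(capture_mono())  # 2D monochrome frame at full sensor resolution
    return np.dstack(channels)           # full-resolution RGB result
```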
I have been experimenting with separate RGB exposure using a somewhat novel approach: a white LED illuminant and a color (Bayer) sensor (the HQ for now), capturing RAW (each channel with a different light intensity) and binning (combining the Bayer-filtered pixels into a 1/2-resolution RGB picture). This approach requires being able to adjust the intensity or the exposure if one wishes different levels for different color channels.
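A minimal sketch of the binning step, assuming an RGGB mosaic order (the actual layout depends on the sensor, so adjust the offsets to match yours):

```python
import numpy as np

def bayer_to_half_res_rgb(raw: np.ndarray) -> np.ndarray:
    """Bin an RGGB Bayer mosaic (2D raw array) into a half-resolution RGB image."""
    r  = raw[0::2, 0::2].astype(np.float32)   # top-left site of each 2x2 cell
    g1 = raw[0::2, 1::2].astype(np.float32)   # the two green sites per cell
    g2 = raw[1::2, 0::2].astype(np.float32)
    b  = raw[1::2, 1::2].astype(np.float32)
    g = (g1 + g2) / 2.0                       # average the greens
    return np.dstack([r, g, b])               # one RGB value per 2x2 Bayer cell
```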
I am not sure how well warped film will work. Handling it is a similar goal to what I have been building, and for that reason my design presently has no gate; sprocket and edge detection are done from the captured images.
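One simple way to do that kind of detection (not necessarily what my firmware does) is to threshold a narrow strip of the image where the backlit sprocket holes sit and take the centroid of the bright rows:

```python
import cv2
import numpy as np

def find_sprocket_y(gray: np.ndarray, roi_x=(0, 120), thresh=200):
    """Return the vertical center of the sprocket hole in a grayscale frame, or None."""
    strip = gray[:, roi_x[0]:roi_x[1]]                  # strip covering the perforation area
    _, mask = cv2.threshold(strip, thresh, 255, cv2.THRESH_BINARY)
    rows = np.where(mask.max(axis=1) > 0)[0]            # rows containing bright (hole) pixels
    if rows.size == 0:
        return None                                     # no sprocket visible in the strip
    return int(rows.mean())                             # use this to register the frame
```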
Not an issue. I am attempting to do 16mm on the same machine as well.
The issue to consider with very large reels and steppers is that the step resolution decreases as the reel radius increases (arc length = angle in radians × radius). I settled for 400ft, with steppers driving the supply and take-up reels directly. 800ft is certainly possible, but it may require other approaches. It would not be an issue for a design with a gate + pinch roller, like the Tscann.
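A quick back-of-the-envelope, assuming a 1.8° motor with 16x microstepping and the Super 8 frame pitch (placeholders to adapt to your setup):

```python
import math

STEPS_PER_REV = 200 * 16   # 1.8-degree stepper, 16x microstepping (assumed)
FRAME_PITCH_MM = 4.23      # Super 8 frame pitch

def mm_per_step(radius_mm: float) -> float:
    """Film advanced per microstep when the reel is driven directly."""
    return 2 * math.pi * radius_mm / STEPS_PER_REV

for r in (15, 45, 90):     # empty core ... roughly a full 400ft reel
    print(f"r={r} mm: {mm_per_step(r):.3f} mm/step, "
          f"{FRAME_PITCH_MM / mm_per_step(r):.0f} steps/frame")
```

Near the core you get well over a hundred steps per frame, but near the outside of a full reel only around two dozen, which is why going much beyond 400ft gets delicate.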
Budget vs Speed. The faster, the more expensive.
It depends. For claw systems and pinch-roller designs (like the Tscann) it is not as critical. For something like what I am doing, with an open gate, it is.
There are multiple approaches to controlling tension, some very simple and loose (like the Tscann's on-off switch), others very accurate (like Nick's ReelSlow8 load cell), and some commonly used, like the potentiometer arm or dancing arm (which I use).
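For the potentiometer (dancing) arm, the control loop can be as simple as a proportional correction of the take-up speed. A MicroPython-style sketch with placeholder pins, gains, and sign conventions, not my actual firmware:

```python
from machine import ADC, Pin
import time

dancer = ADC(Pin(26))          # potentiometer on the dancing arm (GP26 / ADC0 on the Pico)
step_pin = Pin(15, Pin.OUT)    # STEP input of the take-up stepper driver

SETPOINT = 32768               # arm at mid travel (16-bit ADC reading)
BASE_DELAY_US = 800            # nominal time between steps
KP = 0.01                      # proportional gain; tune (and flip sign) per machine

while True:
    error = SETPOINT - dancer.read_u16()        # how far the arm is from mid travel
    delay = int(BASE_DELAY_US + KP * error)     # slack arm -> step faster, taut arm -> slower
    delay = min(max(delay, 200), 5000)          # clamp to sane step rates
    step_pin.value(1)
    time.sleep_us(5)                            # step pulse width
    step_pin.value(0)
    time.sleep_us(delay)
```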
For the optics, first decide what sensor you wish to use. Many here are using enlarger lenses, many of them the Schneider Componon-S 2.8/50mm. A similar alternative is the EL-Nikkor 2.8/50mm. For 8mm and the HQ sensor, the lens-to-sensor distance is around 75mm (depending on how tightly you crop or whether you use the full frame), and it will be approximately 90mm from the lens to the film for focus.
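For a rough sanity check of those distances, the thin-lens conjugate relations are enough; the figures above are practical measurements from the lens body, so they will not match the principal-plane distances exactly:

```python
def conjugate_distances(focal_mm: float, magnification: float):
    """Thin-lens distances from the principal planes for a given magnification."""
    image_dist = focal_mm * (1 + magnification)        # principal plane to sensor
    object_dist = focal_mm * (1 + 1 / magnification)   # principal plane to film
    return image_dist, object_dist

# A Super 8 frame (~5.8 x 4.0 mm) filling the HQ sensor (~6.3 x 4.7 mm) needs m of about 1.1
print(conjugate_distances(50.0, 1.1))   # roughly (105.0, 95.5) mm
```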
On the subject of light, first decide if you are going with a monochrome or color sensor.
For diffuse, even illumination many have been using an integrating sphere, which for 8mm only does not need to be that big. In my case (8 and 16mm) I am using a sphere with a 100mm diameter. If using white light, look for a very high CRI (98+).
In addition to the above, there are great tools shared in the forum. Manuel Angel has made nice software for the Raspberry Pi.
Having access to 3D printing and laser cutting (as you mentioned), the Tscann is quite feasible, and an impressive number of builds have been completed.
One area for improving the Tscann is the extensive contact between the active film area and the 3D-printed components (the rollers are flat). I would prefer a roller design that limits contact to the edges of the film and, where contact is unavoidable, makes it on the side opposite the emulsion with a silicone or similar surface.
My present design is still being tweaked, especially the transport control firmware. I hope to get it good enough to make all components available as open source (laser-cut templates, STLs for 3D parts, and PCBs).
There is extensive know-how in the forum that you can search for, and many long-time participants will give you great feedback, especially if the post provides good context and specifics… as yours does.