Well, that sounds to me like you are putting 8 bit/channel data (“np.uint8” in OpenCV/NumPy terms) into a 16 bit/channel image (“np.uint16”). Check the minimum and maximum values you are getting at the various stages of your image pipeline, with a simple print statement like this one:
print(img.min(), img.max())
For a well-exposed image, the minimum should be somewhere around 0-10 units. The maximum should be around 220-255 for an 8 bit/channel image and around 3900-4095 for a 12 bit image. A 16 bit/channel image that uses the full dynamic range of the format would need a maximum going up to 60000-65535.
Therefore, if you push an 8 bit/channel image into a 16 bit/channel image, you need to multiply all values by “256” to use the full 16 bit depth. For a 12 bit/channel image, try “16” as the multiplier. That should brighten up your image.
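As a minimal sketch (assuming the raw values arrived in an array called img, which is just a hypothetical name here), the scaling could look like this:

import numpy as np

# hypothetical 8 bit/channel data that was stored without rescaling
img = np.array([[0, 128, 255]], dtype=np.uint8)
img16 = img.astype(np.uint16) * 256        # 255 -> 65280, near the full 16 bit range
print(img16.min(), img16.max())

# hypothetical 12 bit/channel data already held in a uint16 array
img12 = np.array([[0, 2048, 4095]], dtype=np.uint16)
img16 = img12 * 16                         # 4095 -> 65520
print(img16.min(), img16.max())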
In my own software, I generally avoid such issues by always converting images to float at the input stage, like in this code segment:
import numpy as np

# normalize integer input to float32 in the range 0.0 - 1.0
if self.inputImage.dtype == np.uint8:
    self.inputImage = (self.inputImage / float(0xff)).astype(np.float32)
elif self.inputImage.dtype == np.uint16:
    self.inputImage = (self.inputImage / float(0xffff)).astype(np.float32)
then do all processing in “float”-space, and only convert back to an appropriate format when writing the processed image out. Sadly, some OpenCV functions do not accept “np.float32” or even “np.uint16” input, so this route cannot always be followed. Nothing is perfect, but it works for most purposes.
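When writing out, the reverse scaling applies. A minimal sketch, assuming the processed float image is called result (a hypothetical name) and holds values in the 0.0-1.0 range:

import numpy as np

# hypothetical processed image in float space, values between 0.0 and 1.0
result = np.array([[0.0, 0.5, 1.0]], dtype=np.float32)

# convert back to an 8 bit/channel or 16 bit/channel output format
out8 = np.clip(result * 0xff, 0, 0xff).astype(np.uint8)
out16 = np.clip(result * 0xffff, 0, 0xffff).astype(np.uint16)
print(out8.max(), out16.max())    # 255 65535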