opencv reduces red luminance when writing jpeg - python

I'm running into an interesting issue with OpenCV in Python. I noticed that the luminance in the red channel is significantly reduced when I load an image using OpenCV and immediately save it without doing any further processing.
I know that JPEG is a lossy format, and that I should expect quality to degrade if I read/write many times. However, if I open the image in Preview (macOS) and export from there, I don't see a difference, even if I export at the lowest possible quality. If I use OpenCV to do the exact same thing, the difference in the red channel is enormous (much more than you would expect from the compression alone).
I tried changing all the flags (IMWRITE_JPEG_QUALITY, IMWRITE_JPEG_LUMA_QUALITY, IMREAD_ANYCOLOR, IMREAD_UNCHANGED, etc.). I could not find any combination that prevents this effect.
To test whether it was just the red channel, I tried this with a picture of a green apple (no difference) and a picture of an orange (red reduction noticeable by eye). To make sure it had nothing to do with the image file itself (corruption?), I also took a screenshot of the orange, exported it as a new JPEG file, and read/wrote that with Python; again the color was changed.
I attached the images before/after of the orange. The difference becomes very clear if you open both images in their own tab and switch back and forth.
If I read an already-processed image (with reduced red) using OpenCV and export it a second time, the red is not reduced any further. It only happens the first time the image is processed by OpenCV.
This is my code:
import cv2

loc = '/imagetest/'
# load without any conversion, then save immediately at maximum JPEG quality
img = cv2.imread(loc + 'orange.jpg', cv2.IMREAD_UNCHANGED)
cv2.imwrite(loc + 'orange_processed.jpg', img, [cv2.IMWRITE_JPEG_QUALITY, 100])
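For reference, one way to quantify the effect is to compare per-channel means before and after the round trip. A minimal sketch reusing the paths above; the sampling-factor flag at the end is an assumption that only holds on newer OpenCV builds (4.5+), and it is not among the flags listed earlier:

import cv2

loc = '/imagetest/'
# assumes plain 3-channel BGR images
before = cv2.imread(loc + 'orange.jpg', cv2.IMREAD_UNCHANGED)
after = cv2.imread(loc + 'orange_processed.jpg', cv2.IMREAD_UNCHANGED)

# per-channel (B, G, R) means; a drop in the third number is the red loss
print('before:', before.reshape(-1, 3).mean(axis=0))
print('after: ', after.reshape(-1, 3).mean(axis=0))

# assumption: newer builds expose a chroma sampling-factor flag; 4:4:4 disables
# the chroma subsampling that JPEG encoders typically apply by default
if hasattr(cv2, 'IMWRITE_JPEG_SAMPLING_FACTOR'):
    cv2.imwrite(loc + 'orange_444.jpg', before,
                [cv2.IMWRITE_JPEG_QUALITY, 100,
                 cv2.IMWRITE_JPEG_SAMPLING_FACTOR,
                 cv2.IMWRITE_JPEG_SAMPLING_FACTOR_444])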

Related

How to merge red channel with gray scale image with opencv python?

I have two microscope images: one in grayscale, one in red. I can merge these in Photoshop, ImageJ, etc. I want to merge them in OpenCV so I can perform this operation on many samples.
So far I've done the following (where dia = grayscale and epi = red).
import cv2

# Load images (input_dia and input_epi are the file paths)
img_dia = cv2.imread(input_dia)
img_epi = cv2.imread(input_epi)
# Slice red channel from fluorescence image
b_epi, g_epi, r_epi = cv2.split(img_epi)
# Now I want to merge the grey scale and red image
There are no error messages. I could not find any documentation or other Stack Exchange pages that resolve this issue. Any help would be appreciated!
There are many ways of overlaying and blending images, but I think the nearest result to yours is probably screen-mode blending.
You can get what you want quite simply with ImageMagick which is included in most Linux distros and is available for macOS and Windows. So, just in Terminal (or Command Prompt on Windows), you can run the following without needing to write any code:
magick grey.tif red.tif -compose screen -composite result.png
If that is not the exact blend mode you want, there are 69 modes available in ImageMagick, and you can see them all listed if you run:
magick -list compose
So, if you want Hard Light blending, use:
magick grey.tif red.tif -compose HardLight -composite result.png
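Since the original question asks about OpenCV, a rough NumPy equivalent of screen blending may help; a minimal sketch, assuming 8-bit images of equal size and the same filenames as the ImageMagick example:

import cv2
import numpy as np

grey = cv2.imread('grey.tif').astype(np.float32)
red = cv2.imread('red.tif').astype(np.float32)
# screen blend: 255 - (255 - a) * (255 - b) / 255, computed in float to avoid overflow
screen = 255.0 - (255.0 - grey) * (255.0 - red) / 255.0
cv2.imwrite('result.png', screen.astype(np.uint8))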
You alluded to having lots of images to process; if so, you can get thousands done in parallel just by using GNU Parallel. You can search for answers on Stack Overflow that use ImageMagick and GNU Parallel by putting the following in the Stack Overflow search box, including the square brackets:
[imagemagick] [gnu-parallel]
Or, provide some more detail on how your files are named and stored and I can help you further.
Considering input_epi is an RGB image (3 channels): when you load an image into OpenCV, it is loaded as BGR by default.
import cv2
import numpy as np

# colour image
img_epi = cv2.imread('input_epi.jpg')
# red channel (index 2, since OpenCV stores channels as BGR)
red_epi = img_epi[:, :, 2]
# gray image (flag 0 = cv2.IMREAD_GRAYSCALE)
img_dia = cv2.imread('input_dia.jpg', 0)
# creating a resultant image for combining only two channels
resultant_image = np.ones((img_dia.shape[0], img_dia.shape[1], 2), np.uint8)
# merge two channels into a single two-channel array
# first channel: red
resultant_image[:, :, 0] = red_epi
# second channel: gray
resultant_image[:, :, 1] = img_dia
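Note that a two-channel array like this cannot be displayed or saved as an ordinary image. If something viewable is the goal, one option (a sketch only, since the exact blend the asker wants is unspecified) is to place both signals into a three-channel BGR image:

import cv2
import numpy as np

img_epi = cv2.imread('input_epi.jpg')
red_epi = img_epi[:, :, 2]
img_dia = cv2.imread('input_dia.jpg', 0)

# grayscale signal in blue/green; the red channel carries both signals
viewable = np.zeros((img_dia.shape[0], img_dia.shape[1], 3), np.uint8)
viewable[:, :, 0] = img_dia
viewable[:, :, 1] = img_dia
viewable[:, :, 2] = np.maximum(img_dia, red_epi)
cv2.imwrite('merged.png', viewable)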
Splitting and Merging Image Channels (from the OpenCV documentation):
Warning: cv2.split() is a costly operation (in terms of time), so do it only if you need it. Otherwise go for NumPy indexing.
Actual red colour is tricky to pinpoint in an RGB image; it is better to use HSV to extract a particular range of colour from the image.
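For example, a minimal sketch of extracting a red range via HSV with cv2.inRange; the threshold values are illustrative assumptions, and red wraps around hue 0 on OpenCV's 0-179 hue scale, so two ranges are combined:

import cv2

img = cv2.imread('input_epi.jpg')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# low-hue reds and high-hue reds, combined into one mask
lower = cv2.inRange(hsv, (0, 70, 50), (10, 255, 255))
upper = cv2.inRange(hsv, (170, 70, 50), (179, 255, 255))
mask = cv2.bitwise_or(lower, upper)
red_only = cv2.bitwise_and(img, img, mask=mask)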

RGB in OpenCV. What does it mean?

Assume we read and load an image using OpenCV from a specific location on our drive, then read some pixel values and colors; let's assume this is a scanned image.
Usually, if we open a scanned image, we will notice some differences between the printed image (before scanning) and the image as it appears on the display screen.
The question is:
Are the pixel values we get from OpenCV in our display screen's color space, or do we get exactly the same colors as in the scanned image (printed version)?
I am not sure what you want to achieve, but here's one thing to mention about color profiles.
The most common color profile for cameras, screens, and printers is sRGB, a limited color spectrum that does not cover the whole RGB range (because cheap hardware can't visualize it anyway).
Some cameras (and probably scanners) allow different color profiles such as AdobeRGB, which increases the color space and "allows" more colors.
The problem is, if you capture (e.g. scan) an image with the AdobeRGB profile but the system (browser/screen/printer) interprets it as sRGB, you'll probably get washed-out colors purely because of the wrong interpretation (just as you'll get blue faces in your image if you interpret BGR images as RGB images).
OpenCV and many browsers, printers, etc. always interpret images as sRGB images, according to http://fotovideotec.de/adobe_rgb/
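The BGR-as-RGB effect mentioned above is easy to demonstrate; a small sketch, assuming a hypothetical scan.jpg and matplotlib for display:

import cv2
import matplotlib.pyplot as plt

img = cv2.imread('scan.jpg')  # OpenCV loads images as BGR
plt.imshow(img)               # wrong: matplotlib assumes RGB, so colours swap
plt.figure()
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))  # correct interpretation
plt.show()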
As long as you don't alter the image file itself, the pixel values don't change: they are what is stored in the file. Your display or printer only determines the way you see the image, and you often don't get the same thing because it depends on the technology and on the different filters applied to your image before it is displayed or printed.
The pixel values are the ones you read in with imread.
It depends on the flags you set for it. The original image may have a greater bit depth (depending on your scanner) than the one you loaded.
Also, the actual file format is determined from the first bytes of the file, not from the file name extension.
So the values may not match the pixel values of the scanned image if the bit depths differ.
Please have a look at the imread documentation.
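For example, a quick way to check whether imread kept the file's bit depth (a sketch, assuming a hypothetical 16-bit scan.png):

import cv2

img_default = cv2.imread('scan.png')                    # converted to 8-bit BGR
img_raw = cv2.imread('scan.png', cv2.IMREAD_UNCHANGED)  # file's own bit depth
print(img_default.dtype, img_raw.dtype)                 # e.g. uint8 vs uint16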

image rendering issue in psychopy

I am a long-time PsychoPy user, and I just upgraded to 1.81.03 (from 1.78.x). In one experiment, I present images (.jpgs) to the user and ask for a rating-scale response. The code worked fine before the update, but now I am getting weird artifacts on some images. For example, here is one image I want to show:
But here is what shows up [screencapped]:
You can see that one border is missing. This occurs for many of my images, though it is not always the same border, and sometimes two or three borders are missing.
Does anyone have an idea about what might be going on?
I received this information from the psychopy-users group (Michael MacAskill):
As a general point, you should avoid using .jpgs for line art: they aren't designed for it (if you zoom in on the internal corners of your square, you'll see the typical compression artefacts that a compression algorithm optimised for natural images introduces when applied to line art). The .png format is optimal for line art: it is lossless, and for this sort of image the file size will still be very small.
Graphics cards sometimes do scaling-up and then down-scaling of bitmaps, which can lead to issues like this with single-pixel width lines. Perhaps this is particularly the issue here because (I think) this image was supposed to be 255 × 255 pixels, and cards will sometimes scale up to the nearest power-of-two size (256 × 256) and then down again, so easy to see how the border might be trimmed.
I grabbed your image off SO, it seemed to have a surrounding border around the black line to make it 321 × 321 in total. I made that surround transparent and saved it as .png (another benefit of png vs jpg). It displays without problems (although a version cropped to just the precise dimensions of the black line did show the error you mentioned). (Also, the compression artefacts are still there, as I just made this png directly from the jpg). See attached file.
If this is the sort of simple stimulus you are showing, you might want to use ShapeStim/Polygon stimuli instead of bitmaps. They will always be drawn precisely, without any scaling issues, and there wouldn't be the need for any jiggery pokery.
Why this changed from 1.78 I'm not sure. The issue is also there in 1.82.00
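For illustration, drawing the square as a vector stimulus instead of a bitmap might look like the sketch below; it assumes the stimulus is just a plain 255 × 255 px outline:

from psychopy import visual, core

win = visual.Window(size=(800, 600), units='pix')
# drawn precisely at any size, with no bitmap scaling involved
square = visual.Rect(win, width=255, height=255, lineColor='black', fillColor=None)
square.draw()
win.flip()
core.wait(2.0)
win.close()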

Reading a .JPG Image and Saving it without file size change

I want to write Python code that reads a .jpg picture, alters some of its RGB components, and saves it again without changing the picture size.
I tried loading the picture using OpenCV and PyGame; however, when I tried a simple load/save code using three different functions, the resulting images were greater in size than the initial image. This is the code I used.
>>> import cv, pygame # Importing OpenCV & PyGame libraries.
>>> image_opencv = cv.LoadImage('lena.jpg')
>>> image_opencv_matrix = cv.LoadImageM('lena.jpg')
>>> image_pygame = pygame.image.load('lena.jpg')
>>> cv.SaveImage('lena_opencv.jpg', image_opencv)
>>> cv.SaveImage('lena_opencv_matrix.jpg', image_opencv_matrix)
>>> pygame.image.save(image_pygame, 'lena_pygame.jpg')
The original size was 48.3K, and the resulting sizes are 75.5K, 75.5K, and 49.9K.
So, am I missing something that makes the picture's size change, even though I only did a load/save?
And is there a better library to use than OpenCV or PyGame?
JPEG is a lossy image format. When you open and save one, you’re encoding the entire image again. You can adjust the quality settings to approximate the original file size, but you’re going to lose some image quality regardless. There’s no general way to know what the original quality setting was, but if the file size is important, you could guess until you get it close.
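In that spirit, a minimal sketch of guessing the quality from file size (hypothetical filenames, and using the modern cv2 API rather than the old cv module):

import os
import cv2

original = 'lena.jpg'
target = os.path.getsize(original)
img = cv2.imread(original)

# try a range of quality settings and keep the one closest in file size
best_q, best_diff = None, None
for q in range(50, 101):
    cv2.imwrite('probe.jpg', img, [cv2.IMWRITE_JPEG_QUALITY, q])
    diff = abs(os.path.getsize('probe.jpg') - target)
    if best_diff is None or diff < best_diff:
        best_q, best_diff = q, diff
print('closest quality:', best_q)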
The size of a JPEG output depends on 3 things:
1. The dimensions of the original image. In your case these are the same for all three examples.
2. The color complexity within the image. An image with a lot of detail will be bigger than one that is totally blank.
3. The quality setting used in the encoder. In your case you used the defaults, which appear to be higher for OpenCV than for PyGame. A higher quality setting will generate a file that's closer to the original (less lossy) but larger.
Because of the lossy nature of JPEG some of this is slightly unpredictable. You can save an image with a particular quality setting, open that new image and save it again at the exact same quality setting, and it will probably be slightly different in size because of the changes introduced when you saved it the first time.

How to scale an image without occasionally inverting it (with the Python Imaging Library)

When resizing images along the lines shown in this question, occasionally the resulting image is inverted. About 1% of the images I resize are inverted; the rest are fine. So far I have been unable to find out what is different about these images.
See resized example and original image for examples.
Any suggestions on how to track down that problem?
I was finally able to find someone experienced in JPEG, and with that additional knowledge I was able to find a solution:
JPEG is a very underspecified format.
The second image is a valid JPEG, but it is in the CMYK color space, not the RGB color space.
Design-minded tools (read: things from Apple) can process CMYK JPEGs; other software (Firefox, IE) can't.
CMYK JPEG is very underspecified, and the way Adobe Photoshop writes it to disk is borderline buggy.
Best of all, there is a patch to fix the issue.
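If patching is not an option, a common workaround is to normalise the colour space before resizing; a sketch using Pillow, with a hypothetical photo.jpg:

from PIL import Image

img = Image.open('photo.jpg')
# convert CMYK scans to RGB before any resizing
if img.mode == 'CMYK':
    img = img.convert('RGB')
img.thumbnail((300, 300), Image.LANCZOS)
img.save('photo_small.jpg')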
Your original image won't display for me; Firefox says:
The image “http://images.hudora.de/o/NIRV2MRR3XJGR52JATL6BOVMQMFSV54I01.jpeg” cannot be displayed, because it contains errors.
This suggests that the problem arises when you attempt to resize a corrupted JPEG, and indeed your resized example shows what looks like JPEG corruption to my eye (Ever cracked open a JPEG image and twiddled a few bits to see what it does to the output? I have, and a few of my abominable creations looked like that). There are a few JPEG repair tools out there, but I've never seriously tried any of them and don't know if they might be able to help you out.
