Colorspace Difference in OpenCV and MATLAB - python

I am aware that OpenCV reads a colored image as BGR by default while MATLAB reads it as RGB. I converted the same image to the L*a*b* colorspace in both, using the code below.
OpenCV Python
img=cv2.imread("sample.jpg")
lab=cv2.cvtColor(img,cv2.COLOR_BGR2LAB)
cv2.imshow("Sample",lab)
MATLAB
img=imread("sample.jpg")
D = makecform('srgb2lab');
lab=applycform(img,D);
imshow(lab)
However, the two displayed images do not match. But when I swap the L* and b* channels in OpenCV, it displays the same image as MATLAB does.
img=cv2.imread("sample.jpg")
lab=cv2.cvtColor(img,cv2.COLOR_BGR2LAB)
l,a,b=cv2.split(lab)
lab=cv2.merge((b,a,l))
cv2.imshow("Sample",lab)
Which leads me to the question: do I always have to swap the first and third channels of a 3-channel image to display it correctly in OpenCV? Not just for L*a*b*, but for HSV and other colorspaces too? Since BGR is displayed as RGB, does that mean L*a*b* is displayed as b*a*L*, so I have to swap it again to get the correct displayed output?

Related

Python Pillow: how to produce 3-channel image from 1-channel image?

A Python package that I'm trying to use only works with 3-channel images. If I have a grayscale PNG image, Pillow's Image.open() naturally reads it as a single-channel image. How can I use Pillow to transform the 1-channel image into a 3-channel RGB image?
The simplest method to convert a single-channel greyscale image into a 3-channel RGB image with PIL is probably like this:
RGB = Image.open('image.png').convert('RGB')
Further discussion and explanation is available here.
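As a quick check (using a synthetic array in place of image.png, which is just a placeholder name here), the converted image has three identical channels:

```python
import numpy as np
from PIL import Image

# synthetic stand-in for 'image.png'
grey = Image.fromarray(np.arange(256, dtype=np.uint8).reshape(16, 16), mode='L')

rgb = grey.convert('RGB')   # duplicate the single channel into R, G and B
arr = np.asarray(rgb)
print(arr.shape)            # (16, 16, 3)
```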

Use different colormap with one channel image - cv2.imshow()

When I use this method with a 1-channel image, it's shown in grayscale. Is there a way to use other color maps (not grayscale) when showing a 1-channel image? If so, does that mean grayscale is the default for imshow when working with a 1-channel image? Thanks in advance
As far as I know, you cannot apply a color map in Python/OpenCV cv2.imshow like you can with Matplotlib pyplot. But you can create a color map and apply it to your grayscale image using cv2.LUT to change your grayscale image into a colored image that you can display with cv2.imshow.
Please see the documentation for cv2.imshow at https://docs.opencv.org/4.1.1/d7/dfc/group__highgui.html#ga453d42fe4cb60e5723281a89973ee563

How to convert a grayscale image to heatmap image with Python OpenCV

I have a (540, 960, 1) shaped image with values ranging from [0..255] which is black and white. I need to convert it to a "heatmap" representation. As an example, pixels with 255 should be of most heat and pixels with 0 should be with least heat. Others in-between. I also need to return the heat maps as Numpy arrays so I can later merge them to a video. Is there a way to achieve this?
Here are two methods, one using Matplotlib and one using only OpenCV
Method #1: OpenCV + matplotlib.pyplot.get_cmap
To implement a grayscale (1-channel) -> heatmap (3-channel) conversion, we first load in the image as grayscale. By default, OpenCV reads in an image as 3-channel, 8-bit BGR.
We can load an image directly as grayscale using cv2.imread() with the cv2.IMREAD_GRAYSCALE flag, or use cv2.cvtColor() with the cv2.COLOR_BGR2GRAY flag to convert a BGR image to grayscale. Once we load the image, we throw this grayscale image into Matplotlib to obtain our heatmap image. Matplotlib returns an RGB format, so we must convert back to a Numpy array and switch to BGR colorspace for use with OpenCV. Here's an example using a scientific infrared camera image as input with the inferno colormap. See choosing color maps in Matplotlib for available built-in colormaps depending on your desired use case.
Input image:
Output heatmap image:
Code
import matplotlib.pyplot as plt
import numpy as np
import cv2
image = cv2.imread('frame.png', 0)  # the 0 flag loads directly as grayscale
colormap = plt.get_cmap('inferno')
# colormap() returns RGBA floats in [0, 1]; scale by 65535 (2**16 would
# overflow uint16) and drop the alpha channel
heatmap = (colormap(image) * 65535).astype(np.uint16)[:,:,:3]
heatmap = cv2.cvtColor(heatmap, cv2.COLOR_RGB2BGR)
cv2.imshow('image', image)
cv2.imshow('heatmap', heatmap)
cv2.waitKey()
Method #2: cv2.applyColorMap()
We can use OpenCV's built in heatmap function. Here's the result using the cv2.COLORMAP_HOT heatmap
Code
import cv2
image = cv2.imread('frame.png', 0)
heatmap = cv2.applyColorMap(image, cv2.COLORMAP_HOT)
cv2.imshow('heatmap', heatmap)
cv2.waitKey()
Note: Although OpenCV's built-in implementation is short and quick, I recommend using Method #1 since there is a larger colormap selection. Matplotlib has hundreds of colormaps and allows you to create your own custom color maps, while OpenCV only has 12 to choose from. Here's the built-in OpenCV colormap selection:
You need to convert the image to a proper grayscale representation. This can be done a few ways, particularly with imread(filename, cv2.IMREAD_GRAYSCALE). This reduces the shape of the image to (540, 960) (hint: no third dimension).

Problems in displaying uint16 np.array as image

I have a uint16 3-dim numpy array representing an RGB image; the array is created from a TIF image.
The problem is that when I import the original image in QGIS, for example, it is displayed correctly, but if I try to display it within Python (with plt.imshow) the result is different (in this case more green):
QGIS image:
Plot image:
I think it is somehow related to the way matplotlib manages uint16, but even if I try to divide by 255 and convert to uint8 I can't get good results.
Going by your comment, the image isn't encoded using an RGB colour space, since the R, G and B channels have a value range of [0-255] assuming 8 bits per channel.
I'm not sure exactly which colour space the image is using, but TIFF files can use CMYK, which is optimised for printing.
Other common colour spaces to try include YCbCr (YUV) and HSL, however there are lots of variations of these that have been created over the years as display hardware and video streaming technologies have advanced.
To convert the entire image to an RGB colour space, I'd recommend the opencv-python pip package. The package is well documented, but as an example, here's how you would convert a numpy array img from YUV to RGB:
img_rgb = cv.cvtColor(img, cv.COLOR_YUV2RGB)
When using plt.imshow there's a colormap parameter you can play with; try adding cmap="gray", for example:
plt.imshow(image, cmap="gray")
source:
https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.imshow.html
If I try to normalize the image I get good results:
for every channel:
image[i,:,:] = image[i,:,:] / image[i,:,:].max()
However, some images appear darker than others:
different images
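One caveat with that per-channel normalisation, sketched below on a synthetic channels-first uint16 array: the division has to happen in floating point, since dividing a uint16 array in place truncates every pixel to 0 or 1.

```python
import numpy as np

# synthetic stand-in for the channels-first uint16 TIF data
rng = np.random.default_rng(0)
image = rng.integers(1, 4096, (3, 64, 64)).astype(np.uint16)

norm = image.astype(np.float32)          # convert to float BEFORE dividing
for i in range(norm.shape[0]):
    norm[i] = norm[i] / norm[i].max()

# matplotlib expects channels-last values in [0, 1]
norm = np.transpose(norm, (1, 2, 0))
```

Scaling each channel by its own maximum also means the overall brightness depends on each image's peak value, which may be one reason different normalised images come out with different apparent brightness.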

Why do I get a green image when I convert an RGB image to grayscale using OpenCV?

When I tried to convert an RGB image with OpenCV function cv2.cvtColor(), I got a green-like image.
I've converted the raw image read by OpenCV to RGB format, converted it again to grayscale using cv2.cvtColor(), and tried to display it using the pyplot.imshow() function.
image = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
plt.imshow(image)
plt.imshow() uses a color map for single-channel images. You have two possible solutions: convert your grayscale image to RGB (basically duplicate the grayscale channel 3 times), or choose a proper colormap as explained here: https://matplotlib.org/3.1.0/tutorials/colors/colormaps.html
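Both options can be sketched like this (using a synthetic gradient and the non-interactive Agg backend, so no window pops up):

```python
import matplotlib
matplotlib.use('Agg')                  # headless backend for the sketch
import matplotlib.pyplot as plt
import numpy as np

grey = np.tile(np.arange(256, dtype=np.uint8), (64, 1))

# option 1: pick an explicit colormap instead of the default
plt.imshow(grey, cmap='gray')

# option 2: duplicate the channel so imshow sees a true RGB image
rgb = np.dstack([grey, grey, grey])
plt.imshow(rgb)
```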