A Python package that I'm trying to use only works with 3-channel images. If I have a grayscale PNG, Pillow's Image.open() naturally reads it as a single-channel image. How can I use Pillow to transform the 1-channel image into a 3-channel RGB image?
The simplest way to convert a single-channel greyscale image into a 3-channel RGB image with PIL is probably:
from PIL import Image

rgb = Image.open('image.png').convert('RGB')
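To see the effect, you can compare the array shapes before and after the conversion. A minimal sketch, using a small in-memory image in place of image.png:

```python
from PIL import Image
import numpy as np

# A small grayscale ("L" mode) image standing in for image.png
grey = Image.new('L', (4, 4), color=128)
print(np.array(grey).shape)    # (4, 4) - single channel, no third axis

# convert('RGB') replicates the single channel into R, G and B
rgb = grey.convert('RGB')
print(np.array(rgb).shape)     # (4, 4, 3) - three channels
```

Because the conversion simply duplicates the grey value, all three channels of the result are identical.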
Related
I am getting a ValueError in numpy when performing operations on images. The problem seems to be that images edited in Paint.NET are missing the RGB dimension when opened with PIL and converted to a numpy array.
If PIL is giving you an 861x1091 image when you are expecting an 861x1091x3 image, that is almost certainly because it is a palette image: each pixel holds a single index into a colour table rather than three separate channel values, so the array has no channel axis.
The simplest thing to do, if you want a 3-channel RGB image rather than a single channel palette image is to convert it to RGB when you open it:
from PIL import Image

im = Image.open(path).convert('RGB')
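You can reproduce the symptom with an in-memory palette image; a short sketch:

```python
from PIL import Image
import numpy as np

# A palette ("P" mode) image converts to a 2-D array of palette indices
pal = Image.new('P', (6, 5))
print(np.array(pal).shape)                 # (5, 6) - no channel axis

# Converting to RGB restores the expected third dimension
rgb = pal.convert('RGB')
print(np.array(rgb).shape)                 # (5, 6, 3)
```

Note that numpy reports (height, width), while PIL's size is (width, height).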
When I tried to convert an RGB image with the OpenCV function cv2.cvtColor(), I got a green-looking image.
I converted the raw image read by OpenCV to RGB format, then converted that to grayscale using cv2.cvtColor(), and tried to display it with the pyplot.imshow() function.
image = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
plt.imshow(image)
plt.imshow() applies a colormap to single-channel images. You have two possible solutions: convert your grayscale image to RGB (basically duplicate the grayscale channel three times), or choose a suitable colormap, as explained here: https://matplotlib.org/3.1.0/tutorials/colors/colormaps.html
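The "duplicate three times" option can be done with a plain numpy stack; a minimal sketch (cv2.cvtColor with cv2.COLOR_GRAY2RGB would produce the same result):

```python
import numpy as np

grey = np.random.randint(0, 256, (4, 5), dtype=np.uint8)

# Duplicate the single channel three times along a new last axis;
# cv2.cvtColor(grey, cv2.COLOR_GRAY2RGB) achieves the same thing
rgb = np.stack([grey, grey, grey], axis=-1)
print(rgb.shape)   # (4, 5, 3)
```

For the colormap route, `plt.imshow(grey, cmap='gray')` keeps the array single-channel and just changes how it is rendered.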
I am aware that OpenCV reads colour images as BGR by default while MATLAB reads them as RGB. I converted the same image to the L*a*b* colorspace in both, using the code below.
OpenCV Python
img=cv2.imread("sample.jpg")
lab=cv2.cvtColor(img,cv2.COLOR_BGR2LAB)
cv2.imshow("Sample",lab)
MATLAB
img=imread("sample.jpg")
D = makecform('srgb2lab');
lab=applycform(img,D);
imshow(lab)
However, the displayed images do not match. When I swap the L* and b* channels in OpenCV, it displays the same image as MATLAB does.
img=cv2.imread("sample.jpg")
lab=cv2.cvtColor(img,cv2.COLOR_BGR2LAB)
l,a,b=cv2.split(lab)
lab=cv2.merge((b,a,l))
cv2.imshow("Sample",lab)
Which leads me to the question: do I always have to swap the first and third channels of a 3-channel image to display it correctly in OpenCV? Not just for L*a*b*, but for HSV and other colorspaces too? Since BGR is displayed as RGB, does that mean L*a*b* is displayed as b*a*L*, so I have to swap it again to get the correct output?
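As an aside, the split/merge swap above amounts to reversing the channel axis of the array. A minimal numpy sketch, with cv2.split/cv2.merge written as their numpy equivalents (an assumption about the array layout, since cv2 images are just numpy arrays with channels on the last axis):

```python
import numpy as np

# Stand-in for a small L*a*b* image: shape (rows, cols, channels)
lab = np.arange(24, dtype=np.uint8).reshape(2, 4, 3)
l, a, b = lab[..., 0], lab[..., 1], lab[..., 2]   # like cv2.split(lab)
swapped = np.dstack((b, a, l))                    # like cv2.merge((b, a, l))
print(np.array_equal(swapped, lab[..., ::-1]))    # True - same as reversing the last axis
```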
I am using skimage.color.rgb2gray to convert an image to grayscale, but the method isn't working. What am I missing?
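Without seeing the failing code it is hard to say, but a frequent pitfall is that rgb2gray returns a float image scaled to [0, 1], not uint8 values, so displaying or saving it without rescaling can look wrong. A numpy sketch of roughly what it computes (the luma coefficients below are the CRT luminance weights scikit-image documents; treat the exact values as an assumption):

```python
import numpy as np

# rgb2gray: approximately Y = 0.2125 R + 0.7154 G + 0.0721 B on [0, 1] floats
rgb = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
grey = (rgb / 255.0) @ np.array([0.2125, 0.7154, 0.0721])
print(grey.shape)                                  # (4, 4) - channel axis is gone
print(0.0 <= grey.min() and grey.max() <= 1.0)     # True - float range, not 0..255
```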
How can a PIL image be converted to a Pyvision image?
Based on the documentation, http://sourceforge.net/apps/mediawiki/pyvision/index.php?title=Quick_Start_1, a Pyvision image itself is a PIL image.
The Image constructor accepts filenames as an argument and will then load that file from the disk as a PIL image. The Image constructor will also accept other python image objects. For example, if you pass a numpy matrix, PIL image, or an OpenCV image to the constructor, it will create a pyvision image based on that data.