How to determine the number of channels in an image? - python

I want to see the number of channels for thermal images, RGB images, grayscale images and binary images.
So I wrote this program:
import cv2
import numpy
img = cv2.imread("B2DBy.jpg")
print('No of Channel is: ' + str(img.ndim))
cv2.imshow("Channel", img)
cv2.waitKey()
But it gives the same three-channel result for all types of images. I've read this question, but it gives an error:
img = cv2.imread("B2DBy.jpg", CV_LOAD_IMAGE_UNCHANGED)
NameError: name 'CV_LOAD_IMAGE_UNCHANGED' is not defined
So my question is: is this the right way to see the number of channels? Or did I somehow input three-channel images every time, so that it gives three-channel output?
My inputs:

The correct parameter in your cv2.imread should be:
img = cv2.imread('path/to/your/image', cv2.IMREAD_UNCHANGED)
Let's have a look at your images now. I use ImageJ's Show Info... command as well as the following Python code with OpenCV and Pillow:
import cv2
from PIL import Image
img_pil = Image.open('path/to/your/image')
print('Pillow: ', img_pil.mode, img_pil.size)
img = cv2.imread('path/to/your/image', cv2.IMREAD_UNCHANGED)
print('OpenCV: ', img.shape)
First image (depth map)
Pillow: RGB (640, 512)
OpenCV: (512, 640, 3)
ImageJ also says that's an RGB image. So, most likely, your depth map was just saved as an RGB PNG.
Second image (dog)
Pillow: RGB (332, 300)
OpenCV: (300, 332, 3)
Interestingly, ImageJ says that's a grayscale JPEG! I assume OpenCV and Pillow just don't support grayscale JPEGs, although there seems to be a grayscale JPEG format.
Third image (sign)
Pillow: 1 (200, 140)
OpenCV: (140, 200)
Both Pillow and OpenCV say that's a grayscale image, which is also supported by ImageJ. Furthermore, Pillow uses mode '1' (1-bit pixels) here, which is reflected by the dithered look of the image.
Fourth image (colours)
Pillow: RGB (500, 333)
OpenCV: (333, 500, 3)
That's just a plain RGB image; ImageJ says so, too.
Conclusion
Yes, most likely, most of your images may just be RGB images. Nevertheless, using cv2.IMREAD_UNCHANGED will at least properly identify grayscale PNG files. It's questionable whether grayscale JPEG files are properly supported.
Hope that helps!
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.8.1
OpenCV: 4.2.0
Pillow: 7.0.0
----------------------------------------

If the image is loaded as grayscale, the shape tuple returned contains only the number of rows and columns.
So checking the length of the shape tuple is a good way to tell whether a loaded image is grayscale or color.
image = cv2.imread('gray.jpg', cv2.IMREAD_GRAYSCALE)
image.shape  # (rows, cols) - no channel element for a grayscale read
If len(img.shape) gives you three, the third element is the number of channels.
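As a minimal sketch of that check (the helper name channel_count is just an illustration), you could wrap it like this:
import cv2

def channel_count(img):
    # Grayscale arrays are 2-D; color arrays keep the channel count in the third element
    return 1 if len(img.shape) == 2 else img.shape[2]

img = cv2.imread('gray.jpg', cv2.IMREAD_UNCHANGED)
print(channel_count(img))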

I'm not sure if it'll work, but the documentation says this:
cv.LoadImage(filename, iscolor=flag) with the flags listed there. There's a flag "<0" that stands for "Return the loaded image as is (with alpha channel)".
In the cv2 API the flag is the second positional argument of cv2.imread, so I would try this:
img = cv2.imread("B2DBy.jpg", -1)  # any negative flag: return the image as-is
or this:
img = cv2.imread("B2DBy.jpg", cv2.IMREAD_ANYDEPTH)

Related

PIL, CV2: Image quality breaks after converting from PIL to Numpy

I have this image and I read it as a PIL Image. Then I save it back using the save method in PIL and the imwrite method in cv2. Saving the image with imwrite downgrades the image quality (it becomes black and white, and the text can't be read).
import cv2
import numpy
from PIL import Image

image = Image.open("image.png")
cv2_image = numpy.asarray(image)  # only the palette indices, no palette
image.save("pil.png")
cv2.imwrite("opencv.png", cv2_image)
Here are the output files:
pil.png
opencv.png
The input image is a palette image - see here. So you need to convert it to RGB, otherwise you just pass OpenCV the palette indices, but without the palette.
So, you need:
image = Image.open(...).convert('RGB')
Now make it into a Numpy array:
cv2image = np.array(image)
But that will be in RGB order, so you need to reverse the channel order:
cv2image = cv2image[..., ::-1]
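Putting those two steps together, a minimal sketch (assuming the same image.png as above) could look like this:
import cv2
import numpy as np
from PIL import Image

# Expand the palette indices into real RGB triples
image = Image.open("image.png").convert('RGB')
# RGB -> BGR: reverse the channel axis, then make a contiguous copy for OpenCV
cv2_image = np.ascontiguousarray(np.array(image)[..., ::-1])
cv2.imwrite("opencv.png", cv2_image)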

Why does resizing an image lead to an increase in channels?

I have grayscale images of different dimensions, so I need to convert them to the same dimensions (say, 28x28) for my experiments. I tried to do it using different methods and I was able to, but I observed that resizing an image leads to an increase in the number of channels. I am new to Python and image processing, so please help.
from PIL import Image
image = Image.open('6.tif')
image = image.resize((28, 28), Image.ANTIALIAS)
image.save('6.png', 'PNG', quality=100)
And then following code shows different dimensions:
import imageio
image_data = imageio.imread("6.tif").astype(float)
print(image_data.shape)
image_data = imageio.imread("6.png").astype(float)
print(image_data.shape)
and result is:
(65, 74)
(28, 28, 4)
I don't need the last dimension. Where does it come from? I get similar results even with "from resizeimage import resizeimage".
There are a number of issues with your code...
If you are expecting a greyscale image, make sure that is what you get. So, change this:
image = Image.open('6.tif')
to:
image = Image.open('6.tif').convert('L')
When you resize an image, you need to use one of the correct resampling methods:
PIL.Image.NEAREST
PIL.Image.BOX
PIL.Image.BILINEAR
PIL.Image.HAMMING
PIL.Image.BICUBIC
PIL.Image.LANCZOS
So, on this line, replace Image.ANTIALIAS (a deprecated alias for Image.LANCZOS) with one of the explicit names from the list above:
image = image.resize((28, 28), Image.ANTIALIAS)
When you save as PNG, it is always lossless. The quality factor does not work the same as for JPEG images, so you should omit it unless you have a good understanding of how it affects the PNG encoder.
If you make these changes, specifically the first, I think your problem will go away. Bear in mind, though, that the PNG encoder may take an RGB image and save it as a palettised image, or it may take a greyscale image and encode it as RGB or RGB alpha.
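A corrected version of the original snippet under those suggestions might look like this:
from PIL import Image

# Force a single-channel, 8-bit greyscale image ('L') before resizing
image = Image.open('6.tif').convert('L')
image = image.resize((28, 28), Image.LANCZOS)
image.save('6.png', 'PNG')  # PNG is lossless; no quality parameter needed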

Error when overlaying two images in OpenCV and or PIL

I've tried overlaying two images, both in OpenCV and in PIL, but to no avail. I'm using a 1000x1000x3 array of np.zeros for the background (i.e. a black background) and this random image of my monitor, but I really can't get it to work for some reason unbeknownst to me.
Trying with OpenCV only (result: if you pay attention, you can see a couple of weird lines and dots in the middle):
base_temp = np.zeros((1000,1000,3))
foreground_temp = cv2.imread('exampleImageThatILinkedAbove.png')
base_temp[offset_y:offset_y+foreground_temp.shape[0], offset_x:offset_x+foreground_temp.shape[1]] = foreground_temp
Trying with PIL (the result is literally the same as the OpenCV version):
base_temp = cv2.convertScaleAbs(self.base) #Convert to uint8 for cvtColor
base_temp = cv2.cvtColor(base_temp, cv2.COLOR_BGR2RGB) #PIL uses RGB and OpenCV uses BGR
base_temp = Image.fromarray(base_temp) #Convert to PIL Image
foreground_temp = cv2.cvtColor(cv2.convertScaleAbs(self.last_img), cv2.COLOR_BGR2RGB)
foreground_temp = Image.fromarray(foreground_temp)
base_temp.paste(foreground_temp, offset)
I'm using Python 3.5 and OpenCV 3.4 on Windows 10, if that's any help.
I'd like to avoid any solutions that require saving the cv2 images and then reloading them in another module to convert them but if it's unavoidable that's okay too. Any help would be appreciated!
If you check the type of base_temp, you will see it is float64, and that is going to cause you problems when you try to save it as a JPEG, which expects unsigned 8-bit values.
So the solution is to create your base_temp image with the correct type:
base_temp = np.zeros((1000,1000,3), dtype=np.uint8)
The complete code and result look like this:
import cv2
import numpy as np
from PIL import Image
# Make black background - not square, so it shows up problems with swapped dimensions
base_temp=np.zeros((768,1024,3),dtype=np.uint8)
foreground_temp=cv2.imread('monitor.png')
# Paste with different x and y offsets so it is clear when indices are swapped
offset_y=80
offset_x=40
base_temp[offset_y:offset_y+foreground_temp.shape[0], offset_x:offset_x+foreground_temp.shape[1]] = foreground_temp
Image.fromarray(base_temp).save('result.png')

Why is reading a colored image as grayscale in OpenCV different from converting the same image from BGR to GRAY? [duplicate]

I am working in OpenCV (2.4.11) with Python (2.7) and was playing around with gray images. I found an unusual behavior when loading an image in grayscale mode versus converting an image from BGR to GRAY. Following is my experimental code:
import cv2
path = 'some/path/to/color/image.jpg'
# Load color image (BGR) and convert to gray
img = cv2.imread(path)
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Load in grayscale mode
img_gray_mode = cv2.imread(path, 0)
# diff = img_gray_mode - img_gray
diff = cv2.bitwise_xor(img_gray,img_gray_mode)
cv2.imshow('diff', diff)
cv2.waitKey()
When I viewed the difference image, I could see leftover pixels instead of a jet-black image. Can you suggest a reason? What is the correct way of working with gray images?
P.S. When I use both images in SIFT, the keypoints are different, which may lead to different outcomes, especially when working with bad-quality images.
Note: This is not a duplicate, because the OP is aware that the image from cv2.imread is in BGR format (unlike the suggested duplicate question, which assumed it was RGB, hence the provided answers only address that issue).
To illustrate, I've opened up this same color JPEG image:
once using the conversion
img = cv2.imread(path)
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
and another by loading it in gray scale mode
img_gray_mode = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
Like you've documented, the diff between the two images is not perfectly zero; I can see diff pixels towards the left and the bottom.
I've also summed up the diff to see the overall magnitude:
import numpy as np
np.sum(diff)
# I got 6143, on a 494 x 750 image
I tried all cv2.imread() modes
Among all the IMREAD_ modes for cv2.imread(), only IMREAD_COLOR and IMREAD_ANYCOLOR can be converted using COLOR_BGR2GRAY, and both of them gave me the same diff against the image opened in IMREAD_GRAYSCALE
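A quick sketch to reproduce that comparison (with path as in the question):
import cv2
import numpy as np

path = 'some/path/to/color/image.jpg'
img_gray_mode = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

for mode in (cv2.IMREAD_COLOR, cv2.IMREAD_ANYCOLOR):
    img = cv2.imread(path, mode)
    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # A non-zero sum means the two grayscale versions differ
    print(mode, np.sum(cv2.bitwise_xor(img_gray, img_gray_mode)))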
The difference doesn't seem that big. My guess is that it comes from differences in the numeric calculations of the two methods (loading as grayscale vs. converting to grayscale).
Naturally what you want to avoid is fine tuning your code on a particular version of the image just to find out it was suboptimal for images coming from a different source.
In brief, let's not mix the versions and types in the processing pipeline.
So I'd keep the image sources homogeneous: e.g. if you are capturing images from a video camera in BGR, then I'd use BGR as the source and do the BGR-to-grayscale conversion with cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).
Vice versa, if my ultimate source is grayscale, then I'd open both the files and the video capture in grayscale, via cv2.imread(path, cv2.IMREAD_GRAYSCALE).
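As a minimal sketch of that advice (the camera index and filename are placeholders), a single conversion point keeps the pipeline homogeneous:
import cv2

def to_gray(bgr_img):
    # One conversion point, so every image takes the same code path
    return cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)

cap = cv2.VideoCapture(0)        # placeholder camera index; frames arrive in BGR
ok, frame = cap.read()
frame_gray = to_gray(frame)

img = cv2.imread('image.jpg')    # load files in BGR as well
img_gray = to_gray(img)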

OpenCV Shows Gray Window

I'm trying to display an image using OpenCV. I have the following very basic code:
import cv2
img = cv2.imread('myimage.png', 0) # Reads a Gray-scale image
img2 = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
cv2.imshow("window", img2)
The window is opened properly, with the correct size, but it's gray - there's no image. The image is read properly (looking at both img and img2 in the debugger I see the expected values, not just one shade).
Note: Obviously I intend to do some image processing prior to showing the image, but first I need to be able to see the image...
OK, got it.
Turns out I needed to let OpenCV start handling events; it wasn't handling the WM_PAINT event. Adding cv2.waitKey() fixed this.
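For completeness, the fixed snippet just appends that call:
import cv2

img = cv2.imread('myimage.png', 0)            # read as grayscale
img2 = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)  # back to three channels
cv2.imshow("window", img2)
cv2.waitKey()  # lets OpenCV process GUI events such as WM_PAINT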
Sometimes the image is too large for imshow() to display properly.
Try to resize the image:
dimensions = (400, 800)  # note: cv2.resize takes (width, height)
image = cv2.imread('myimage.png', 0)
resized = cv2.resize(image, dimensions, interpolation=cv2.INTER_AREA)
cv2.imshow("window", resized)
