Convert image numpy array into grayscale array directly without saving image - python

I have a NumPy array img_array of shape (h, w, 3) for one image, which is the result of some function. I want to convert this NumPy array directly to grayscale.
Possible Solution:
Save img_array to disk with cv2.imwrite(path, img_array), then read it back with cv2.imread(path, cv2.IMREAD_GRAYSCALE).
However, I am looking for something like this :
def convert_array_to_grayscale_array(img_array):
    # do something...
    return grayscale_version
I have already tried cv2.imread(img_array, cv2.IMREAD_GRAYSCALE), but it throws an error saying that img_array must be a file pathname.
I think saving a separate image would consume more disk space. Is there any better way to do this, with or without the OpenCV library?

scikit-image has color conversion functions: https://scikit-image.org/docs/dev/auto_examples/color_exposure/plot_rgb_to_gray.html
from skimage.color import rgb2gray
grayscale = rgb2gray(img_array)
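If you would rather stay within OpenCV (or avoid another dependency), cv2.cvtColor converts the in-memory array directly, with no file round trip. A minimal sketch, assuming img_array holds RGB data; arrays that came from cv2.imread are BGR and would need cv2.COLOR_BGR2GRAY instead:
import cv2
import numpy as np

def convert_array_to_grayscale_array(img_array: np.ndarray) -> np.ndarray:
    # (h, w, 3) RGB -> (h, w) grayscale, entirely in memory
    return cv2.cvtColor(img_array, cv2.COLOR_RGB2GRAY)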

Related

Why doesn't Matplotlib read image as grayscale?

I use matplotlib.pyplot.imsave with the argument cmap='gray' to save a 1024x1024 NumPy array as a grayscale image, but when I then read the saved image using matplotlib.pyplot.imread, I get a 1024x1024x4 NumPy array. Why is this?
Here is the code:
import numpy as np
import matplotlib.pyplot as plt
im = np.random.rand(1024, 1024)
print(im.shape)
plt.imsave('test.png', im, cmap='gray')
im = plt.imread('test.png')
print(im.shape)
The documentation for imread states that "The returned array has shape
(M, N) for grayscale images." I suppose this raises the question of what exactly is meant by a grayscale image? How are they stored on disk, and how is Matplotlib supposed to know whether to read an image as grayscale, RGB, RGBA, etc. (and why is it being read as an RGBA image in this case)?
I believe the cmap parameter doesn't change the file structure whatsoever in imsave.
The code in the matplotlib library for this function doesn't seem to take cmap into account for the number of channels it writes to the file: https://github.com/matplotlib/matplotlib/blob/v3.5.3/lib/matplotlib/image.py#L1566-L1675
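As a sketch of a workaround, assuming the goal is just to recover a 2-D array from the saved PNG: the gray colormap writes equal R, G and B values, so any one channel of the RGBA array returned by imread carries the (normalized) grayscale data.
import matplotlib.pyplot as plt

im = plt.imread('test.png')   # (1024, 1024, 4) RGBA floats in [0, 1]
gray = im[:, :, 0]            # R == G == B under cmap='gray', so one channel is enough
print(gray.shape)             # (1024, 1024)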
I also think that Plain Onion's answer is correct.
Secondly, rather than this, if you want to save a grayscale image, use OpenCV and try this code:
import cv2
img = cv2.imread("Image path here")
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # imread returns BGR, so use COLOR_BGR2GRAY
cv2.imwrite("path where you want to save image", img)

How can I use the frames generated from my webcam as the input to my Keras model?

I want to change the img_path to the frames generated from opencv:
img = image.load_img(img_path, target_size=(224, 224))
How can I rewrite it?
I am assuming that the image.load_img() function you are using is the one from the Keras image utilities (keras.preprocessing.image).
As noted in the documentation, load_img() accepts a path to the image as the first parameter and returns:
Returns:
A PIL Image instance.
It is not mentioned in the question, but if you read the frames from the camera using OpenCV they should already be NumPy arrays, which you can pass to your model. Of course you should resize them to (224, 224) first (see how to resize an image using OpenCV).
However, if you want to have the PIL images (to have the same type as the one returned by load_img()), you need to convert your opencv frames (numpy array) to PIL image. Follow this question and answer by #ZdaR to do this conversion:
import cv2
import numpy as np
from PIL import Image
img = cv2.imread("path/to/img.png")
# You may need to convert the color.
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
im_pil = Image.fromarray(img)
# For reversing the operation:
im_np = np.asarray(im_pil)
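As a rough sketch of the whole pipeline, assuming a hypothetical Keras model named model that expects a batch of (224, 224, 3) RGB images (any model-specific preprocessing, e.g. rescaling, is left out):
import cv2
import numpy as np

cap = cv2.VideoCapture(0)              # default webcam
ret, frame = cap.read()                # frame is a BGR numpy array
if ret:
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # match the RGB order Keras models usually expect
    frame = cv2.resize(frame, (224, 224))
    batch = np.expand_dims(frame, axis=0)            # shape (1, 224, 224, 3)
    preds = model.predict(batch)                     # model is assumed to exist already
cap.release()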

Python PIL read/open TIFF is black only

I am trying to read a TIFF file with Pillow/PIL (7.2.0) in Python (3.8.3), e.g. this image.
The result seems to be corrupted:
from PIL import Image
import numpy as np
myimage = Image.open('moon.tif')
myimage.mode
# 'L'
myimage.format
# 'TIFF'
myimage.size
# (358, 537)
# so far all good, but:
np.array(myimage)
# shows only zeros in the array, likewise
np.array(myimage).sum()
# 0
It doesn't seem to be a problem with the conversion to a NumPy array only, since if I save it to a JPG (myimage.save('moon.jpg')) the resulting JPG image has the appropriate dimensions but is all black, too.
Where did I go wrong, or is it a bug?
I am not an expert in coding, but I had the same problem and found that the TIFF file has 4 layers: R, G, B and alpha. When you convert it using PIL it appears black.
Try viewing the image with plt.imshow(myimage[:, :, 0]).
You could also remove the alpha layer by reading the image (I used plt.imread('image')) and then keeping only the first three channels with image = image[:, :, :3]. Now it is an RGB image.
I don't know if I answered your question, but I felt this info might be of help.
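A minimal sketch of that kind of inspection, assuming only Pillow and NumPy; the channel handling at the end only applies if the array really does come back with an alpha plane:
from PIL import Image
import numpy as np

img = Image.open('moon.tif')
arr = np.array(img)
print(img.mode, arr.shape, arr.dtype, arr.min(), arr.max())   # check layout and value range

# If the array turns out to be (h, w, 4), keep only the RGB planes:
if arr.ndim == 3 and arr.shape[2] == 4:
    arr = arr[:, :, :3]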

convert greyscale image back to vector

I have a list called w (size: 784), which I outputted to a png greyscale image:
import numpy as np
import matplotlib.pyplot as plt
tmp = 1/(1 + np.exp(-10*w/w.max()))
plt.imshow(tmp.reshape(28, 28), cmap="gray")
plt.draw()
plt.savefig("final_weight_vector")
Now I want to read the png image back to be a vector.
The solutions I found so far:
First:
import matplotlib.image as mpimg
img=mpimg.imread('final_weight_vector.png')
but img appears not to be greyscale, because its dimensions turned out to be (600, 800, 4).
Second:
reading the file as RGB and converting to greyscale:
from PIL import Image
im = Image.open('final_weight_vector.png').convert('LA')
However, I couldn't figure out how to iterate over im, so I have no idea what's inside. Further, I am not sure the output of im will have exactly the same values as the original w.
Help please?
The problem is that what you saved is probably a plot of the 28x28 image, not the image itself.
To be sure, please preview the image. I bet it is 600x800, not 28x28. I also suppose it contains many additional elements, like axes and padding.
If you want to store your array in a loadable format, you may use numpy.save() (and numpy.load() to load it).
You may also use PIL to save your array as an image (e.g. using something similar to: http://code.activestate.com/recipes/577591-conversion-of-pil-image-and-numpy-array/)
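A minimal sketch of that numpy.save()/numpy.load() round trip, assuming tmp is the 784-element array computed in the question:
import numpy as np

np.save("final_weight_vector.npy", tmp)       # stores the raw values, not a rendered plot
w_back = np.load("final_weight_vector.npy")   # identical array, shape (784,)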

Using Matplotlib imshow to show GIF image

I need to show a background in a matplotlib plot using ax.imshow(). The background images that I will be using are GIF images. Despite having PIL installed, the following code results in an error complaining that the Python Imaging Library (PIL) is not installed (which it is):
from pylab import imread
im_file = open("test.gif")
im_obj = imread(im_file)
Reading the image using PIL directly works better:
from PIL import Image
import numpy
img = Image.open("test.gif")
img_arr = numpy.asarray(img.getdata(), dtype=numpy.uint8)
However, when reshaping the array, the following code does not work:
img_arr = img_arr.reshape(img.size[0], img.size[1], 3) #Note the number 3
The reason is that the actual color information is contained in a color table accessed through img.getcolors() or img.getpalette().
Converting all the images to PNG or another suitable format that results in RGB images when opening them with imread() or Image.open() is not an option. I could convert the images when needed using PIL but I consider that solution ugly. So the question is as follows: Is there a simple and fast (the images are 5000 x 5000 pixels) way to convert the GIF images to RGB (in RAM) so that I can display them using imshow()?
You need to convert the GIF to RGB first:
img = Image.open("test.gif").convert('RGB')
See this question: Get pixel's RGB using PIL
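Putting the pieces together, a minimal sketch (assuming only Pillow, NumPy and matplotlib) of converting the palette-based GIF in memory and handing it to imshow():
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt

img = Image.open("test.gif").convert("RGB")   # expand the palette to RGB in RAM
img_arr = np.asarray(img)                     # (height, width, 3) uint8

fig, ax = plt.subplots()
ax.imshow(img_arr)
plt.show()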
