I have a greyscale image that, as a numpy array, has a maximum value of 91, but if it is first converted from greyscale to RGB, its maximum value (across all channels) is 255. What formula is being used here? When viewing the images using im.show() they look identical. I checked the PIL source code for 'convert' (link), but it doesn't explicitly state how a greyscale image is converted to RGB.
I run the following:
import numpy as np
import PIL.Image

im = PIL.Image.open(path_to_greyscale_image)
im_max_grey = np.asarray(im).max()
im = im.convert('RGB')
im_max_rgb = np.asarray(im).max()
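For reference, PIL's 'L' to 'RGB' conversion simply replicates the single grey channel into R, G and B, so for a genuine mode-'L' image the maximum should not change; if im.mode prints something else (e.g. 'P'), convert('RGB') goes through a palette lookup instead. A minimal check:

import numpy as np
from PIL import Image

# For a genuine 'L' image, convert('RGB') copies the grey level into
# R, G and B, so the maximum value stays the same:
grey = Image.new('L', (2, 2), 91)
rgb = np.asarray(grey.convert('RGB'))
print(rgb.max())  # 91

# Worth checking what mode the file actually opens in:
# print(Image.open(path_to_greyscale_image).mode)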
Related
I've converted some images from RGB to grayscale for ML purposes.
However, the shape of the converted grayscale image still has 3 dimensions, the same as the color image.
The code for the conversion:
from PIL import Image
img = Image.open('path/to/color/image')
imgGray = img.convert('L')
imgGray.save('path/to/grayscale/image')
The code to check the shape of the images:
import cv2
im_color = cv2.imread('path/to/color/image')
print(im_color.shape)
im_gray2 = cv2.imread('path/to/grayscale/image')
print(im_gray2.shape)
You did
im_gray2 = cv2.imread('path/to/grayscale/image')
OpenCV does not inspect the color content of the image: by default it assumes the image is color and that the desired output is 8-bit BGR. You need to tell OpenCV that you want the output to be grayscale (a 2D intensity array), as follows:
im_gray2 = cv2.imread('path/to/grayscale/image', cv2.IMREAD_GRAYSCALE)
If you want to know more about reading images, read OpenCV: Getting Started with Images.
cv2.imread, without any flags, will always convert any image content to BGR, 8 bits per channel.
If you want any image file, grayscale or color, to be read as grayscale, you can pass the cv2.IMREAD_GRAYSCALE flag.
If you want to read the file as it really is, then you need to use cv2.IMREAD_UNCHANGED.
im_color = cv2.imread('path/to/color/image', cv2.IMREAD_UNCHANGED)
print(im_color.shape)
im_gray2 = cv2.imread('path/to/grayscale/image', cv2.IMREAD_UNCHANGED)
print(im_gray2.shape)
I want to change the pixel values of a grayscale image using OpenCV.
Assume that I have a grayscale image and I want to set all of its pixels to 0, one at a time, so that the resulting image is completely black. I tried this, but there is no change in the image:
import cv2

image = cv2.imread('test_image.png', 0)
for i in range(image.shape[0]):
    for j in range(image.shape[1]):
        image[i, j] = 0
Result: the displayed image looks unchanged.
In most cases, you want to avoid using double for loops to modify pixel values, since it is very slow. A better approach is to use Numpy for pixel modification, since OpenCV represents images as Numpy arrays. To achieve your desired result, you can use np.zeros to create a completely black image with the same shape as the original image.
import cv2
import numpy as np

# Load the image as grayscale
image = cv2.imread("test_image.png", 0)
# Create an all-black image with the same shape
black = np.zeros(image.shape, np.uint8)

cv2.imshow('image', image)
cv2.imshow('black', black)
cv2.waitKey(0)
Example with a test image: original (left), result (right).
I would suggest always manipulating a copy of the image, so that the original doesn't get affected in the wrong way. Coming to your question, you can do the following:
import cv2

image = cv2.imread('test_image.png', 0)
# Create a copy of the image so the operation doesn't touch the original.
image_copy = image.copy()
image_copy[:, :] = 0  # Set all pixel values to 0.
I am working on hair removal from skin lesion images. Is there any way to convert a binary image back to RGB?
Original Image:
Mask Image:
I just want to restore the black area with the original image.
As far as I know, binary images are stored as grayscale in OpenCV, with values 0 and 255.
To create "dummy" RGB images you can do:
rgb_img = cv2.cvtColor(binary_img, cv2.COLOR_GRAY2RGB)
I call them "dummy" since in these images the red, green and blue values are all the same.
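A quick sketch to confirm the shape change, using a made-up 2x2 binary mask (the array below is an assumption, not from the question):

import cv2
import numpy as np

# Hypothetical 2x2 binary mask with values 0 and 255
binary_img = np.array([[0, 255], [255, 0]], dtype=np.uint8)
rgb_img = cv2.cvtColor(binary_img, cv2.COLOR_GRAY2RGB)
print(rgb_img.shape)  # (2, 2, 3)
print((rgb_img[..., 0] == rgb_img[..., 2]).all())  # True: channels are identical copies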
Something like this, but your mask is the wrong size (200x200 px), so it doesn't match your image (600x450 px):
#!/usr/local/bin/python3
from PIL import Image
import numpy as np

# Open the input image as a numpy array
npImage = np.array(Image.open("image.jpg"))
# Open the mask image as a numpy array
npMask = np.array(Image.open("mask2.jpg").convert("RGB"))

# Make a boolean array identifying where the mask is black
cond = npMask < 128

# Select image or mask according to the condition array
pixels = np.where(cond, npImage, npMask)

# Save the resulting image
result = Image.fromarray(pixels)
result.save('result.png')
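If the mask really is a different size, one option is to resize it to the image size before combining; a sketch assuming the 600x450 px figure quoted above (PIL sizes are (width, height), and NEAREST keeps the mask binary):

# Resize the mask to match the image before building the condition array
npMask = np.array(
    Image.open("mask2.jpg").convert("RGB").resize((600, 450), Image.NEAREST)
)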
I updated Daniel Tremer's answer:
import cv2
opencv_rgb_img = cv2.cvtColor(opencv_image, cv2.COLOR_GRAY2RGB)
opencv_image would be a two-dimensional matrix, [height, width], because the image is binary.
opencv_rgb_img would be a three-dimensional matrix, [height, width, color channel], because it is RGB.
I am trying to resize a .jpg image with the skimage.transform.resize function. The function returns a weird result (see image below). I am not sure whether it is a bug or just wrong use of the function.
import numpy as np
from PIL import Image
from skimage import io, color
from skimage.transform import resize
rgb = io.imread("../../small_dataset/" + file)
# show original image
img = Image.fromarray(rgb, 'RGB')
img.show()
rgb = resize(rgb, (256, 256))
# show resized image
img = Image.fromarray(rgb, 'RGB')
img.show()
Original image:
Resized image:
I already checked skimage resize giving weird output, but I think that my bug has different properties.
Update: the rgb2lab function has a similar bug.
The problem is that skimage is converting the pixel data type of your array after resizing the image. The original image has 8 bits per pixel, of type numpy.uint8, while the resized pixels are numpy.float64 variables.
The resize operation is correct, but the result is not being correctly displayed. For solving this issue, I propose two different approaches:
To change the data type of the resulting image. Before casting to uint8 values, the pixels have to be converted to a 0-255 scale, as they are on a 0-1 normalized scale:
# ...
# Do the OP operations ...
resized_image = resize(rgb, (256, 256))
# Convert the image to a 0-255 scale.
rescaled_image = 255 * resized_image
# Convert to integer data type pixels.
final_image = rescaled_image.astype(np.uint8)
# show resized image
img = Image.fromarray(final_image, 'RGB')
img.show()
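As a side note, skimage ships a helper that does the rescale-and-cast in one step; a sketch of the same idea (not part of the original answer):

from skimage import img_as_ubyte

# Maps float pixels in [0, 1] to uint8 pixels in [0, 255]
final_image = img_as_ubyte(resized_image)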
Update: This method is deprecated, as per scipy.misc.imshow
To use another library for displaying the image. Taking a look at the PIL Image documentation, there isn't any mode supporting 3×float64 pixel images. However, the scipy.misc library has the appropriate tools for converting the array format in order to display it correctly:
from scipy import misc
# ...
# Do OP operations
misc.imshow(resized_image)
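Since scipy.misc.imshow is gone from recent SciPy releases, the same idea can be sketched with matplotlib instead (an alternative, not part of the original answer; matplotlib displays float RGB images on a 0-1 scale directly):

import matplotlib.pyplot as plt

# matplotlib accepts float images in the 0-1 range without conversion
plt.imshow(resized_image)
plt.axis('off')
plt.show()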
I'm trying to open an RGB picture, convert it to grayscale, then represent it as a list of floats scaled from 0 to 1. Finally, I want to convert it back again to an Image. However, in the code below something in my conversion procedure fails, as img.show() (the original image) displays correctly while img2.show() displays an all-black picture. What am I missing?
import numpy as np
from PIL import Image
ocr_img_path = "./ocr-test.jpg"
# Open image, convert to grayscale
img = Image.open(ocr_img_path).convert("L")
# Convert to list
img_data = img.getdata()
img_as_list = np.asarray(img_data, dtype=float) / 255
img_as_list = img_as_list.reshape(img.size)
# Convert back to image
img_mul = img_as_list * 255
img_ints = np.rint(img_mul)
img2 = Image.new("L", img_as_list.shape)
img2.putdata(img_ints.astype(int))
img.show()
img2.show()
The image used
The solution is to flatten the array before putting it into the image. I think PIL interprets multidimensional arrays as different color bands.
img2.putdata(img_ints.astype(int).flatten())
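Alternatively, getdata/putdata can be skipped entirely by building the image straight from the array; a sketch reusing the question's variables. Note that np.asarray(img) already yields the row-major (height, width) shape that Image.fromarray expects, whereas img.size is (width, height), which is why the reshape in the question transposes non-square images:

import numpy as np
from PIL import Image

img = Image.open(ocr_img_path).convert("L")
# np.asarray gives shape (height, width), which fromarray expects
arr = np.asarray(img, dtype=float) / 255
img2 = Image.fromarray((arr * 255).round().astype(np.uint8), mode="L")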
For a more efficient way of loading images, check out
https://blog.eduardovalle.com/2015/08/25/input-images-theano/
but use image.tobytes() (Pillow) instead of image.tostring() (PIL).