PIL Image convert from I mode to P mode - python

I have this depth image:
that I load with PIL like:
depth_image = Image.open('stereo.png')
If I print the mode of the image it shows mode I, that is (32-bit signed integer pixels) according to the documentation.
This is correct since the image values range from 0 to 255. I'd like to colorize this depth image for better visualization so I tried to convert it to P mode with a palette like:
depth_image = depth_image.convert('P', palette=custom_palette)
depth_image.save("colorized.png")
But the result is a black and white image like this:
I'm sure the palette is ok, since there are 256 colors in int format all in a single array.
I've tried to convert it to RGB before saving like:
depth_image = depth_image.convert('RGB')
Also I tried adding the palette afterwards like:
depth_image = depth_image.putpalette(custom_palette)
And if I try to save it without converting it to RGB I get a:
depth_image.save("here.png")
AttributeError: 'NoneType' object has no attribute 'save'
So far my plan is to convert the image to a numpy array and map the colors from there, but I was wondering what I was missing regarding PIL. I looked around the documentation but didn't find much regarding I to P conversion.

I think the issue is that your values are scaled to the range 0..65535 rather than 0..255.
If you do this, you will see the values are larger than you expected:
import numpy as np
from PIL import Image

i = Image.open('depth.png')
n = np.array(i)
print(n.max(), n.mean())
# prints 32257, 6437.173

n = (n / 256).astype(np.uint8)  # scale the 16-bit range down to 8 bits
r = Image.fromarray(n)
r = r.convert('P')
r.putpalette(custom_palette)  # I grabbed this from your pastebin
# note: putpalette modifies r in place and returns None -- that's the
# cause of your AttributeError when assigning its result back
r.save('colorized.png')
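If you don't have a ready-made palette, a minimal sketch of the whole pipeline, using a hypothetical blue-to-red ramp as a stand-in for the pastebin palette and synthetic data in place of stereo.png:

```python
import numpy as np
from PIL import Image

# Hypothetical stand-in for the pastebin palette: a blue-to-red ramp,
# flattened to [r0, g0, b0, r1, g1, b1, ...] as putpalette expects
custom_palette = []
for i in range(256):
    custom_palette.extend((i, 0, 255 - i))

# Synthetic depth data standing in for 'stereo.png' (mode I, values up to ~32k)
depth = np.linspace(0, 32257, 64 * 64).reshape(64, 64).astype(np.int32)

# Scale the 16-bit range down to 8 bits, then attach the palette
n = (depth / 256).astype(np.uint8)
r = Image.fromarray(n).convert('P')
r.putpalette(custom_palette)   # modifies r in place and returns None
r.save('colorized.png')
```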

Related

Unclear difference in displaying the same image by opencv and matplotlib [with example code & exported .npy file]

Following on from my previous question (which is still unanswered here):
Here is a simple script with some unsuccessful attempts, plus an exported image.npy file, that illustrates the problem:
'''
conda install -c conda-forge opencv
pip install numpy
pip install matplotlib
'''
import cv2
import numpy as np
import matplotlib.pyplot as plt

if __name__ == "__main__":
    image = np.load('image.npy')  # Link to the file under this code block
    print(image.shape)  # output: (256, 256, 1)
    print(image.dtype)  # output: float64
    # Unsuccessful attempts:
    # image[np.where(image.max() != 255)] = 0
    # max_img = image.max(axis=0)
    # int_image = image.astype(int)
Link to download the image.npy file
And when I display it with opencv using the following code:
cv2.imshow('image', image)
cv2.waitKey()
I get an image like the following result:
In contrast, when I display it with matplotlib using the following code:
plt.imshow(image, cmap="gray")
(The 'cmap' parameter is not the issue here; it only plots the image in black and white)
I get an image the following result:
The second result is the desired one as far as I'm concerned. My question is how to produce that image by code alone (without saving to a file and reloading) so that I get the same result in OpenCV as well.
I researched the issue but did not find a solution.
This reference helped me understand the reason in general, but I still don't know how to make OpenCV show the image the way the matplotlib view does in this case.
Thank you!
Finally, I found the solution:
int_image = image.astype(np.uint8)
cv2.imshow('image', int_image)
cv2.waitKey()
plt.imshow(image, cmap="gray")
plt.title("image")
plt.show()
Now the two plots are the same.
Hope this helps more people in the future
You can use the following steps:
load the data from the file
truncate the data into 0-255 values, keeping the original type (float64)
filter using the == operator, which gives True/False values, then multiply by 255 to get 0/255 integer values
use cv2.imshow combined with astype to get the required type
import numpy as np
import cv2
if __name__ == '__main__':
    data = np.load(r'image.npy')
    data2 = np.trunc(data)
    data3 = (data2 == 255) * 255
    cv2.imshow('title', data3.astype('float32'))
    cv2.waitKey(0)
You have very few unique values in image, and their distribution is severely skewed. The next smallest value below 255 is 3.something, leaving a huge range of values unused.
image.max() == 255
sorted(set(image.flat)) == [0.0, 0.249, 0.499, 0.748, 0.997, 1.246, 1.496, 1.745, 1.994, 2.244, 2.493, 2.742, 2.992, 3.241, 3.490, 3.739, 3.989, 255.0]
cv.imshow, given floats, will map 0.0 to black and 1.0 to white. That is why your picture looks almost entirely black, with only a few white pixels.
Your options:
convert to uint8 using image.astype(np.uint8) because cv.imshow, given uint8 data, maps 0 to black and 255 to white
divide by 255, so your values, being float, range from 0 to 1
normalize by whatever way you want, but remember how imshow behaves given a certain element type and range of values. It also accepts uint16, but not int (which would be int32/int64).
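A sketch of those three options on a toy float array with the same skewed distribution (no GUI calls, just the conversions imshow would receive):

```python
import numpy as np

# Toy float64 image: mostly zeros, a few tiny values, and one 255
image = np.zeros((4, 4), dtype=np.float64)
image[0, 0] = 255.0
image[1, 1] = 3.989

# Option 1: uint8 -- imshow maps 0..255 to black..white
as_uint8 = image.astype(np.uint8)

# Option 2: floats scaled into 0..1 -- imshow maps 0.0..1.0 to black..white
as_unit_float = image / 255.0

# Option 3: explicit min-max normalization into 0..1
normalized = (image - image.min()) / (image.max() - image.min())
```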
The other answer, recommending np.trunc and so on, is just messing around. The trunc doesn't change any of the 255.0 values (in the comparison), so it's redundant. There's no need to threshold the data, unless you need such a result. It is also wrong in that it tells you to take a binary array, blow its value range up to 0/255, and then convert to float, which is pointless, because imshow, given float, expects a value range of 0 to 1. Either the *255 step is pointless, or the astype step should have used uint8.

Replacing greyscale pixel values with colors from sample PNG image

I am trying to apply colors from a gradient image to a grayscale one (in RGB format, i.e. R=G=B). For now the code looks at the R channel and uses that value to copy a color from a certain band of a 255 px tall gradient, with the R channel value acting as the Y coordinate. As an example, a pixel in image 1 at (0,0) has a value of (0,0,0); the code should replace it with the color (53,18,106) from (10,0) in the second image (x is arbitrary here, my sample gradient is 100x255). Here's my code:
import os, numpy, PIL
from PIL import Image

# Access all PNG files in directory
allfiles = os.listdir(os.getcwd())
imlistmaster = [filename for filename in allfiles if filename[-4:] in [".png", ".PNG"]]
imlistGradient = [filename for filename in imlistmaster if "grad" in filename]
imlistSample = [filename for filename in imlistmaster if "Sample" in filename]

# Get dimensions of images
w1, h1 = Image.open(imlistSample[0]).size
N1 = len(imlistSample)
w2, h2 = Image.open(imlistGradient[0]).size
N2 = len(imlistGradient)

# Create array based on gradient
for im in imlistGradient:
    imarr2 = numpy.array(Image.open(im), dtype=numpy.uint8)
    pix2 = Image.open(im).load()

# Convert grayscale to RGB values based on gradient
for im in imlistSample:
    filename1 = os.path.basename(imlistSample[0])
    pix1 = Image.open(im).load()
    for x in range(w1):
        for y in range(h1):
            color = pix1[x, y]
            color = list(color)
            colorvalue = color[0]
            newcolor = pix2[10, colorvalue]
            pix1 = newcolor
image:
gradient:
(imgur because I can't embed yet)
When I run the code, color=pix1[x, y] throws "TypeError: tuple indices must be integers or slices, not tuple". Which is odd, as both x and y show up as integers in the variable explorer, and shouldn't Image.load explicitly take two coordinates in the form (x, y)? Also, while looking around the variable explorer it looks like at least one iteration worked, as newcolor has the expected value of (53,18,106) from the gradient. Frankly I'm stumped.
The culprit ended up being pix1=newcolor; changing it to pix1[x,y]=newcolor solved the tuple problem. Odd that the error points at the wrong line, but oh well. This also explains the partial success: the value was being found and copied correctly, and only failed when being written back.
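A minimal sketch of the fix, with hypothetical 2x2 stand-ins for the sample image and the looked-up gradient colour:

```python
from PIL import Image

gray = Image.new('RGB', (2, 2), (10, 10, 10))  # stand-in for the sample image
newcolor = (53, 18, 106)                       # colour looked up from the gradient

pix1 = gray.load()
for x in range(gray.width):
    for y in range(gray.height):
        pix1[x, y] = newcolor   # in-place pixel write; pix1 = newcolor would
                                # just rebind the name and change nothing
```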

convert image saved in hexadecimal in a np.array to import it in opencv

I get an image stored as an object from a camera that look like this (here reduced to make it understandable):
image = np.array([['#49312E', '#4A3327', '#493228', '#472F2A'],
['#452C29', '#49312E', '#4B3427', '#49312A'],
['#473026', '#472F2C', '#48302B', '#4C342B']])
is it possible to 'import' it as an 'image' in opencv?
I tried to look at the documentation of cv2.imdecode but couldn't get it to work.
I could preprocess this array to get it to another format but I am not sure what could 'fit' to opencv.
Thank you for your help
This is a very succinct and pythonic (using NumPy) way to implement a conversion from your hexadecimal values matrix to an RGB matrix that could be read by OpenCV.
import numpy as np
import cv2

image = np.array([['#49312E', '#4A3327', '#493228', '#472F2A'],
                  ['#452C29', '#49312E', '#4B3427', '#49312A'],
                  ['#473026', '#472F2C', '#48302B', '#4C342B']])

def to_rgb(v):
    # '#RRGGBB' -> [R, G, B], parsing each pair of hex digits as base 16
    # (np.int has been removed from NumPy; plain int works)
    return np.array([int(v[1:3], 16), int(v[3:5], 16), int(v[5:7], 16)])

image_cv = np.array([to_rgb(h) for h in image.flatten()], dtype=np.uint8).reshape(3, 4, 3)
cv2.imwrite('result.png', image_cv)
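One caveat: OpenCV's imwrite and imshow interpret the last axis as B, G, R, so handing them an RGB array swaps red and blue. A sketch of the channel flip with plain NumPy indexing (assuming the same 3x4 grid):

```python
import numpy as np

image = np.array([['#49312E', '#4A3327', '#493228', '#472F2A'],
                  ['#452C29', '#49312E', '#4B3427', '#49312A'],
                  ['#473026', '#472F2C', '#48302B', '#4C342B']])

def to_rgb(v):
    return [int(v[1:3], 16), int(v[3:5], 16), int(v[5:7], 16)]

image_rgb = np.array([to_rgb(h) for h in image.flatten()],
                     dtype=np.uint8).reshape(3, 4, 3)

# Reverse the channel axis: RGB -> BGR, ready for cv2.imwrite / cv2.imshow
image_bgr = image_rgb[:, :, ::-1]
```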
OpenCV works with 8-bit colour triplets, conventionally in BGR order, which is to say you need to give the values of Blue, Green and Red on a scale from 0-255. I have shared with you the code to convert your array to an image.
Initially, I count the number of rows to find the height in terms of pixels. Then I count the number of items in a row to find the width.
Then I create an empty array of the given dimensions using np.zeros.
I then go to each cell and convert the hex code to its RGB equivalent, using the pattern #RRGGBB: R = int(RR, 16), G = int(GG, 16), B = int(BB, 16). This parses each hexadecimal pair as an int.
#!/usr/bin/env python3
import numpy as np
import cv2

# Your image
image = np.array([['#49312E', '#4A3327', '#493228', '#472F2A'],
                  ['#452C29', '#49312E', '#4B3427', '#49312A'],
                  ['#473026', '#472F2C', '#48302B', '#4C342B']])

# Image height and width in pixels
height = len(image)     # number of rows
width = len(image[0])   # number of entries per row

# Create numpy array of BGR triplets
im = np.zeros((height, width, 3), dtype=np.uint8)

for row in range(height):
    for col in range(width):
        hexcode = image[row, col][1:]
        R = int(hexcode[0:2], 16)
        G = int(hexcode[2:4], 16)
        B = int(hexcode[4:6], 16)
        im[row, col] = (B, G, R)

# Save to disk
cv2.imwrite('result.png', im)

Creating image through input pixel values with the Python Imaging Library (PIL)

I want to work on an idea using images, but I can't get it to write pixel values properly; it always ends up grey with some pattern-like artefacts, and no matter what I try, the artefacts change but the image remains grey.
Here's the basic code I have:
from PIL import Image
data = ""
for i in range(128**2):
    data += "(255,0,0),"
im = Image.fromstring("RGB", (128,128), data)
im.save("test.png", "PNG")
There is no information in http://effbot.org/imagingbook/pil-index.htm on how to format data, so I've tried using 0-1, 0-255, 00000000-11111111 (binary), brackets, square brackets, no brackets, an extra value for alpha, and changing RGB to RGBA (which turns it light grey but that's it), a comma after, and no comma after, but absolutely nothing has worked.
For the record, I'm not wanting to just store a single colour, I'm just doing this to initially get it working.
The format string should be arranged like:
"RGBRGBRGBRGB..."
Where R is a single character indicating the red value of a particular pixel, and the same for G and B.
"But 255 is three characters long, how can I turn that into a single character?" you ask. Use chr to convert your numbers into byte values.
from PIL import Image
data = ""
for i in range(128**2):
    data += chr(255) + chr(0) + chr(0)
im = Image.fromstring("RGB", (128,128), data)
im.save("test.png", "PNG")
Result:
Alternative solution:
Using fromstring is a rather roundabout way to generate an image if your data isn't already in that format. Instead, consider using Image.load to directly manipulate pixel values. Then you won't have to do any string conversion stuff.
from PIL import Image
im = Image.new("RGB", (128, 128))
pix = im.load()
for x in range(128):
    for y in range(128):
        pix[x, y] = (255, 0, 0)
im.save("test.png", "PNG")
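A note for current Pillow versions: fromstring was deprecated and eventually removed in favour of Image.frombytes, which takes a bytes object rather than a str. The same red square with the modern API might look like:

```python
from PIL import Image

# 3 bytes (R, G, B) per pixel, 128*128 pixels of solid red
data = bytes([255, 0, 0]) * (128 * 128)
im = Image.frombytes("RGB", (128, 128), data)
im.save("test.png", "PNG")
```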

PIL: Convert RGB image to a specific 8-bit palette?

Using the Python Imaging Library, I can call
img.convert("P", palette=Image.ADAPTIVE)
or
img.convert("P", palette=Image.WEB)
but is there a way to convert to an arbitrary palette?
p = []
for i in range(256):
    p.extend((i, 0, 0))
img.convert("P", palette=p)
where it'll map each pixel to the closest colour found in the palette? Or is this supported for Image.WEB and nothing else?
While looking through the source code of convert() I saw that it references im.quantize.
quantize can take a palette argument. If you provide an Image that has a palette, this function will take that palette and apply it to the image.
Example:
src = Image.open("sourcefilewithpalette.bmp")
new = Image.open("unconvertednew24bit.bmp")
converted = new.quantize(palette=src)
converted.save("converted.bmp")
The other provided answer didn't work for me (it did some really bad double palette conversion or something), but this solution did.
The first example in the ImagePalette module docs shows how to attach a palette to an image, but that image must already be of mode "P" or "L". One can, however, adapt the example to convert a full RGB image to a palette of your choice:
from __future__ import division
import Image

palette = []
levels = 8
stepsize = 256 // levels
for i in range(256):
    v = i // stepsize * stepsize
    palette.extend((v, v, v))
assert len(palette) == 768

original_path = 'original.jpg'
original = Image.open(original_path)
converted = Image.new('P', original.size)
converted.putpalette(palette)
converted.paste(original, (0, 0))
converted.show()
