Get a pixel's RGB using PIL - Python

Is it possible to get the RGB color of a pixel using PIL?
I'm using this code:
from PIL import Image

im = Image.open("image.gif")
pix = im.load()
print(pix[1, 1])
However, it only outputs a number (e.g. 0 or 1) and not three numbers (e.g. 60,60,60 for R,G,B). I guess I'm not understanding something about the function. I'd love some explanation.
Thanks a lot.

Yes, this way:
im = Image.open('image.gif')
rgb_im = im.convert('RGB')
r, g, b = rgb_im.getpixel((1, 1))
print(r, g, b)  # e.g. 65 100 137
The reason you were getting a single value before with pix[1, 1] is because GIF pixels refer to one of the 256 values in the GIF color palette.
See also this SO post: Python and PIL pixel values different for GIF and JPEG. The PIL Reference page has more information on the convert() function.
By the way, your code would work just fine for .jpg images.
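If you want to see what that single number means without converting, here is a minimal added sketch (assuming the same palette-mode image.gif) that looks the index up in the GIF's palette by hand:
from PIL import Image

im = Image.open('image.gif')              # mode 'P': each pixel stores a palette index
idx = im.load()[1, 1]                     # the single number the question was seeing
pal = im.getpalette()                     # flat list [r0, g0, b0, r1, g1, b1, ...]
print(tuple(pal[3 * idx:3 * idx + 3]))    # the RGB color that index refers to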

With numpy:
import numpy as np
from PIL import Image

im = Image.open('image.gif').convert('RGB')  # convert first: a raw 'P'-mode GIF array holds palette indices
im_matrix = np.array(im)
print(im_matrix[0][0])
Gives the RGB vector of the pixel at position (0, 0).

GIFs store each pixel as an index into a limited palette of up to 256 colors. Read about the GIF limited color palette. So PIL is giving you the palette index, rather than the color of that palette entry.
Edit: Removed link to a blog post solution that had a typo. Other answers do the same thing without the typo.

An alternative to converting the image is to create an RGB index from the palette.
from PIL import Image

def chunk(seq, size, groupByList=True):
    """Returns list of lists/tuples broken up by size input"""
    func = tuple
    if groupByList:
        func = list
    return [func(seq[i:i + size]) for i in range(0, len(seq), size)]

def getPaletteInRgb(img):
    """
    Returns list of RGB tuples found in the image palette
    :type img: Image.Image
    :rtype: list[tuple]
    """
    assert img.mode == 'P', "image should be palette mode"
    pal = img.getpalette()
    colors = chunk(pal, 3, False)
    return colors

# Usage
im = Image.open("image.gif")
pal = getPaletteInRgb(im)
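To then read a single pixel's color through that palette, a small illustrative addition (not part of the original answer; it reuses im and pal from the usage above):
idx = im.load()[1, 1]   # palette index, as in the question
r, g, b = pal[idx]      # look the index up in the list of RGB tuples
print(r, g, b)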

Not PIL, but imageio.imread might still be interesting:
import imageio

im = imageio.imread('Figure_1.png', pilmode='RGB')
print(im.shape)
gives
(480, 640, 3)
so it is (height, width, channels). The pixel at position (x, y) is therefore
color = tuple(im[y][x])
r, g, b = color
Outdated
The same used to be possible with im = scipy.misc.imread('um_000000.png', flatten=False, mode='RGB'), but scipy.misc.imread is deprecated since SciPy 1.0.0 (thanks for the reminder, fbahr!).


Tensorflow Deeplab image colormap removal confusion

In the following code, I only see an image being read and written again. But how do the image pixel values change so drastically? Apparently, converting the PIL image object to a numpy array causes this, but I don't know why. I have read the docs for PIL images but didn't see any reasonable explanation for this.
import numpy as np
from PIL import Image

def _remove_colormap(filename):
    return np.array(Image.open(filename))

def _save_annotation(annotation, filename):
    pil_image = Image.fromarray(annotation.astype(dtype=np.uint8))
    pil_image.save(filename)

def main():
    raw_annotation = _remove_colormap('2007_000032.png')
    _save_annotation(raw_annotation, '2007_000032_output.png')

if __name__ == '__main__':
    main()
Input image is,
Here is the output,
Note: The value at the red area in the input image is [128,0,0] and in the output image it's [1,1,1].
The actual source of the code is here.
Edit:
As @taras made clear in his comment:
Basically, the palette is a flat list of 3 * 256 values in the form r0, g0, b0, r1, g1, b1, ... Your pil_image is an array of greyscale pixels, each taking a single value in the 0..255 range. When using 'P' mode the pixel value k is mapped to the color (palette[3*k], palette[3*k + 1], palette[3*k + 2]). When using 'L' mode the color is simply k, i.e. (k, k, k) in RGB.
The segmentation annotations use a unique color for each object type, so the class labels themselves don't need the actual colors; that is why the original code gets rid of the color palette.
A quick check of the opened image mode with
Image.open(filename).mode
shows the input file is opened with 'P' mode
which stands for
8-bit pixels, mapped to any other mode using a color palette
So, when you generate the image with Image.fromarray, the palette is simply lost and you are left with a greyscale image in 'L' mode.
You simply need to provide the palette info when creating the output array.
The palette can be extracted with Image.getpalette():
def _remove_colormap(filename):
    img = Image.open(filename)
    palette = img.getpalette()
    return np.array(img), palette
Once you have created your pil_image, you can set the palette back with Image.putpalette(palette):
def _save_annotation(annotation, palette, filename):
    pil_image = Image.fromarray(annotation.astype(dtype=np.uint8))
    pil_image.putpalette(palette)
    pil_image.save(filename)
And your main changed accordingly:
def main():
    raw_annotation, palette = _remove_colormap('SqSbn.png')
    _save_annotation(raw_annotation, palette, '2007_000032_output.png')
Edit:
The palette is a flat list of 3 * 256 values in the form r0, g0, b0, r1, g1, b1, ...
pil_image is an array of greyscale pixels, each taking a single value in the 0..255 range. When using 'P' mode the pixel value k is mapped to the RGB color (palette[3*k], palette[3*k + 1], palette[3*k + 2]). When using 'L' mode the color is simply k, i.e. (k, k, k) in RGB.
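As a quick sanity check, an added sketch (reusing the output file name from the answer above) can confirm that the saved annotation comes back in 'P' mode with its palette intact:
from PIL import Image

out = Image.open('2007_000032_output.png')
print(out.mode)                       # expected: 'P' rather than 'L'
print(out.getpalette() is not None)   # expected: True, the color map survived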
Mode conversion is missing in _remove_colormap(filename). As it is defined in the question (and in the answer from @taras), _remove_colormap converts a PIL image into a numpy array, and _save_annotation() further converts the numpy array back into a PIL image; an RGB image is saved as such. convert('L') should be used for converting to grayscale. The modified function definition is below:
def _remove_colormap(filename):
    img = Image.open(filename).convert('L')
    palette = img.getpalette()
    print("palette: ", type(palette))
    return np.array(img), palette


PIL merge of two images with alpha channels - not working as expected

There are a bunch of questions here on SO which provide answers to the present question; however, the output is not the expected one.
The goal is to merge two RGBA images. The information on the alpha channel of each image is not the same.
The current (simplified) code is:
from PIL import Image
image = '1.png'
watermark = '2.png'
wmark = Image.open(watermark)
img = Image.open(image)
img.paste(wmark, (0, 0), wmark)
img.save("result.png", "PNG")
The two images are:
Background
Foreground
Expected output
Actual result
In case you don't see the difference, here are the alpha channels (inverted for better visualization) of the final versions.
Expected result - alpha channel
Actual result - alpha channel
So with that said, is there any way of doing this or am I doing something wrong?
EDIT - clarification following @zenpoy's comment:
If the foreground image has a certain amount of opacity, I want that to be taken into account when superimposing both images, but I don't want the alpha channel of the second image to be added to the first. Much like putting a piece of glass (the foreground image) in front of a paper image (background).
In other words, if the background image was RGB instead of RGBA, the final image should have no alpha information.
From your initial description, the following idea seems to be equivalent. Let X, Y be two RGBA images. Merge X and Y considering the bands RGB from X and the bands RGBA from Y, producing an image Z. Set the band A in Z to that of the band A in X. This contradicts your final statement, but it seems to give the expected output in this situation.
So, this is the code:
image = '1.png'
watermark = '2.png'

wmark = Image.open(watermark)
img = Image.open(image)

ia, wa = None, None

if len(img.getbands()) == 4:
    ir, ig, ib, ia = img.split()
    img = Image.merge('RGB', (ir, ig, ib))

if len(wmark.getbands()) == 4:
    wa = wmark.split()[-1]

img.paste(wmark, (0, 0), wmark)

if ia:
    if wa:
        # XXX This seems to solve the contradiction, discard if unwanted.
        ia = max_alpha(wa, ia)
    img.putalpha(ia)

img.save('result.png')
where the function max_alpha is:
def max_alpha(a, b):
    # Assumption: 'a' and 'b' are of the same size
    im_a = a.load()
    im_b = b.load()
    width, height = a.size
    alpha = Image.new('L', (width, height))
    im = alpha.load()
    for x in range(width):
        for y in range(height):
            im[x, y] = max(im_a[x, y], im_b[x, y])
    return alpha
This new function seems to take into consideration the contradiction mentioned.
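A shorter alternative worth trying is Pillow's Image.alpha_composite, re-applying the background's alpha afterwards so the result keeps only the background's transparency. This is an added sketch, not part of the original answer, and it assumes both files are the same size (the output file name is just a placeholder):
from PIL import Image

background = Image.open('1.png').convert('RGBA')
foreground = Image.open('2.png').convert('RGBA')

# Porter-Duff "over" compositing of the foreground onto the background
merged = Image.alpha_composite(background, foreground)

# keep only the background's original alpha channel, as in the answer above
merged.putalpha(background.split()[-1])
merged.save('result_composite.png')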

PIL: Convert RGB image to a specific 8-bit palette?

Using the Python Imaging Library, I can call
img.convert("P", palette=Image.ADAPTIVE)
or
img.convert("P", palette=Image.WEB)
but is there a way to convert to an arbitrary palette?
p = []
for i in range(0, 256):
    p.extend((i, 0, 0))
img.convert("P", palette=p)
where it'll map each pixel to the closest colour found in the palette? Or is this supported for Image.WEB and nothing else?
While looking through the source code of convert() I saw that it references im.quantize.
quantize can take a palette argument. If you provide an Image that has a palette, this function will take that palette and apply it to the image.
Example:
src = Image.open("sourcefilewithpalette.bmp")
new = Image.open("unconvertednew24bit.bmp")
converted = new.quantize(palette=src)
converted.save("converted.bmp")
The other provided answer didn't work for me (it did some really bad double palette conversion or something), but this solution did.
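If you don't have a source file that already carries the palette you want, a rough sketch of the same idea (the red-ramp palette and file names here are just placeholders, not from the original answer) is to build a tiny 'P'-mode image yourself, attach the palette with putpalette, and quantize against that:
from PIL import Image

# hypothetical arbitrary palette: a 256-entry red ramp, flattened to [r, g, b, r, g, b, ...]
palette = []
for i in range(256):
    palette.extend((i, 0, 0))

# dummy 1x1 'P'-mode image that only exists to carry the palette
palette_image = Image.new('P', (1, 1))
palette_image.putpalette(palette)

original = Image.open('unconvertednew24bit.bmp').convert('RGB')
converted = original.quantize(palette=palette_image)
converted.save('converted_custom.bmp')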
The first example in the ImagePalette module docs shows how to attach a palette to an image, but that image must already be of mode "P" or "L". One can, however, adapt the example to convert a full RGB image to a palette of your choice:
from PIL import Image

palette = []
levels = 8
stepsize = 256 // levels
for i in range(256):
    v = i // stepsize * stepsize
    palette.extend((v, v, v))
assert len(palette) == 768

original_path = 'original.jpg'
original = Image.open(original_path)
converted = Image.new('P', original.size)
converted.putpalette(palette)
converted.paste(original, (0, 0))
converted.show()

How to invert colors of image with PIL (Python-Imaging)?

I need to convert series of images drawn as white on black background letters to images where white and black are inverted (as negative). How can I achieve this using PIL?
Try the following from the docs: https://pillow.readthedocs.io/en/stable/reference/ImageOps.html
from PIL import Image
import PIL.ImageOps
image = Image.open('your_image.png')
inverted_image = PIL.ImageOps.invert(image)
inverted_image.save('new_name.png')
Note: "The ImageOps module contains a number of 'ready-made' image processing operations. This module is somewhat experimental, and most operators only work on L and RGB images."
If the image is RGBA transparent this will fail... This should work though:
from PIL import Image
import PIL.ImageOps

image = Image.open('your_image.png')
if image.mode == 'RGBA':
    r, g, b, a = image.split()
    rgb_image = Image.merge('RGB', (r, g, b))
    inverted_image = PIL.ImageOps.invert(rgb_image)
    r2, g2, b2 = inverted_image.split()
    final_transparent_image = Image.merge('RGBA', (r2, g2, b2, a))
    final_transparent_image.save('new_file.png')
else:
    inverted_image = PIL.ImageOps.invert(image)
    inverted_image.save('new_name.png')
For anyone working with an image in "1" mode (i.e., 1-bit pixels, black and white, stored with one pixel per byte -- see docs), you need to convert it into "L" mode before calling PIL.ImageOps.invert.
Thus:
im = im.convert('L')
im = ImageOps.invert(im)
im = im.convert('1')
ImageChops.invert can also be used:
PIL.ImageChops.invert(PIL.Image.open(imagepath))
Note that this works for me in Python 3.8.5.
In case someone is inverting a CMYK image, the current implementations of PIL and Pillow don't seem to support this and throw an error. You can, however, easily circumvent this problem by inverting your image's individual bands using this handy function (essentially an extension of Greg Sadetsky's post above):
def CMYKInvert(img):
    return Image.merge(img.mode, [ImageOps.invert(b.convert('L')) for b in img.split()])
Of course ImageOps does its job well, but unfortunately it can't work with some modes, like 'RGBA'. This code solves that problem:
from PIL import Image, ImageDraw

def invert(image: Image.Image) -> Image.Image:
    drawer = ImageDraw.Draw(image)
    pixels = image.load()
    for x in range(image.size[0]):
        for y in range(image.size[1]):
            data = pixels[x, y]
            if data != (0, 0, 0, 0) and isinstance(data, tuple):
                drawer.point((x, y), (255 - data[0], 255 - data[1], 255 - data[2], data[3]))
    return image
from PIL import Image

img = Image.open("archive.extension")
pixels = img.load()
for i in range(img.size[0]):
    for j in range(img.size[1]):
        x, y, z = pixels[i, j][0], pixels[i, j][1], pixels[i, j][2]
        x, y, z = abs(x - 255), abs(y - 255), abs(z - 255)
        pixels[i, j] = (x, y, z)
img.show()
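For larger images, a vectorized variant with numpy (an added sketch, assuming an RGB source image) does the same inversion without the per-pixel loop:
import numpy as np
from PIL import Image

img = Image.open("archive.extension").convert("RGB")
inverted = Image.fromarray(255 - np.array(img))   # invert every channel at once
inverted.show()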