Access R, G, B values using Python PIL

Apologies if this question has been answered elsewhere; I looked at a few suggestions but didn't find a match.
I am trying to get the RGB values for each pixel in a PNG image. Based on the Pillow pixel access documentation (https://pillow.readthedocs.io/en/stable/reference/PixelAccess.html#pixelaccess) I tried the following:
from PIL import Image

def image_vec(fname):
    im = Image.open(fname)
    pix = im.load()
    x, y = im.size
    for i in range(x):
        for j in range(y):
            print(pix[i, j])
The documentation example seems to suggest that I should expect a triplet when I call pix[i, j]. However, the snippet above prints out a quadruplet like so:
(182, 183, 172, 255)
(206, 206, 197, 255)
(156, 155, 148, 248)
So what are these values I am looking at? Are they R, G, B, alpha? If so, isn't the alpha value supposed to be between 0 and 1?

Yes, those values are the R, G, B, and alpha components of the pixel.
Here the alpha ranges from 0 to 255, where 255 is completely opaque and 0 is completely transparent.
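If you want the 0-1 convention instead, just rescale the alpha yourself; a minimal sketch (the file name is a placeholder):
from PIL import Image

im = Image.open("example.png").convert("RGBA")  # placeholder file name
pix = im.load()
r, g, b, a = pix[0, 0]
print(r, g, b, a / 255.0)  # alpha rescaled from 0-255 to 0-1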

Related

Is there a depth option for np.vectorize()?

I'll explain what I mean with this example.
I converted an image to a 3D np.array of the form [row * col * [r, g, b, alpha, x, y]].
Notes:
The 3rd dimension is a numpy array containing the RGBA values of the pixel as well as its coordinates.
To be precise, the RGB values are integers between 0 and 255.
I want to modify each pixel (img[row][col]) using a function _func, and I am not sure if I can use np.vectorize for this.
An example of _func would be to change the pixel's RGB values so that they are all equal (by calculating their mean), so that the pixel becomes gray (_func doesn't modify the alpha or the x, y of the pixel):
_func(np.array([100, 50, 0, 100, 255, 0, 0])) -> np.array([50, 50, 50, 100, 255, 0, 0])
Can someone show me something equivalent to this code that actually works?
# imports
img = f(img.bmp)  # convert img to a numpy array, f defined earlier

def _func(arr):
    "_func implementation here"

func = np.vectorize(_func, depth=2)  # depth=2 because there are rows and columns
new_img = func(img)
I know that this depth parameter doesn't exist, but can I implement this somehow?
Thank you for any help.
(By the way, I started learning numpy this evening for a little fun project, so I am very much a beginner at numpy.)
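For reference, the grayscale transform described above can be written with np.apply_along_axis over the last axis; this is only a minimal sketch under the [row, col, channel] layout from the question, since np.vectorize has no depth parameter:
import numpy as np

def _func(arr):
    # arr is a single pixel: [r, g, b, alpha, x, y, ...]
    out = arr.copy()
    out[:3] = arr[:3].mean()  # gray value: the mean of r, g, b
    return out

def apply_per_pixel(img):
    # apply _func to every pixel of a (rows, cols, channels) array
    return np.apply_along_axis(_func, -1, img)

# tiny check against the example above
pixel = np.array([100, 50, 0, 100, 255, 0, 0])
print(_func(pixel))  # the example from above: [50 50 50 100 255 0 0]
print(apply_per_pixel(pixel[None, None]).shape)  # (1, 1, 7)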

Is there any way to manipulate an image using a color curve in python?

I have a series of tif images and was wondering if there is a possibility to write something in python (using maybe scikit-image or OpenCV) to apply a color curve. The data I have looks like the following, where we have a list of x, y, x, y... data like so: (0, 0, 32, 22, 64, 56, 128, 128, 192, 196, 255, 255).
With the new information you provided, I think the following code should do the trick. lut_in is the vector of input gray levels and lut_out the desired output levels. Here it is applied to all 3 channels (as in Photoshop). The only thing you need is the interpolation to get a LUT (look-up table) of 256 elements that matches the 256 gray levels of the input. You can apply the same method for other color resolutions.
import cv2
import numpy as np
image = cv2.imread('apple.jpg')
lut_in = [0, 127, 255]
lut_out = [0, 80, 255]
lut_8u = np.interp(np.arange(0, 256), lut_in, lut_out).astype(np.uint8)
image_contrasted = cv2.LUT(image, lut_8u)
cv2.imwrite('apple_dark.jpg', image_contrasted)
(Input and output example images omitted.)
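If your curve comes in the interleaved (x, y, x, y, ...) form from the question, split it into the input and output levels first; a minimal sketch (the file names are placeholders):
import cv2
import numpy as np

curve = (0, 0, 32, 22, 64, 56, 128, 128, 192, 196, 255, 255)
lut_in, lut_out = curve[0::2], curve[1::2]  # even positions are inputs, odd positions outputs

lut_8u = np.interp(np.arange(0, 256), lut_in, lut_out).astype(np.uint8)
image = cv2.imread('input.tif')  # placeholder file name
cv2.imwrite('output.tif', cv2.LUT(image, lut_8u))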
In the comments, some already gave answers on how to apply a color LUT to an image. However, when I read your question, I had the impression that you actually want a 2D colormap, where the color depends on two parameters. If this is the case, I would recommend visiting this post or this one.
Hope this helps!
Here's a way to do this with python PIL instead of OpenCV just in case anyone stumbles upon this answer as I did and doesn't want to use OpenCV.
import numpy as np
from PIL import Image
im = Image.open("apple.png").convert("RGBA")
lut_x = [0, 127, 180, 255]
lut_y = [5, 80, 90, 255]
lut_u8 = np.interp(np.arange(0, 256), lut_x, lut_y).astype(np.uint8)
R, G, B, A = [0, 1, 2, 3]
source = im.split()
out = []
for band in [R, G, B]:
    out.append(source[band].point(lut_u8))
out.append(source[A])  # Don't apply the LUT to the alpha band
merged_img = Image.merge('RGBA', out)
merged_img.show()

function takes exactly 1 argument (3 given)?

I am trying to change the value of each pixel in an image to the closest value I have in my list, and I can't figure out why I can't change the pixel value.
I've tried converting the image to RGB or RGBA, and for some reason it sometimes takes 3 arguments and sometimes 4.
im = Image.open('rick.png')  # Can be many different formats.
rgb_im = im.convert('RGBA')
pix = im.load()
height, width = im.size
image = ImageGrab.grab()
COLORS = (
    (0, 0, 0),
    (127, 127, 127),
    (136, 0, 21),
    (237, 28, 36),
    (255, 127, 39),
)

def closest_color(r, g, b, COLORS):
    min_diff = 9999
    answer = None
    for color in COLORS:
        cr, cg, cb = color
        color_diff = abs(r - cr) + abs(g - cg) + abs(b - cb)
        if color_diff < min_diff:
            answer = color
            min_diff = color_diff
    return answer

def read_color(height, width, COLORS, pix):
    for x in range(height):
        for y in range(width):
            r, g, b, a = rgb_im.getpixel((x, y))
            color = closest_color(r, g, b, COLORS)  # color is returned as a tuple
            pix[x, y] = color  # Changing the color value? -Here I get the error-

read_color(height, width, COLORS, pix)
im.save('try.png')
I keep getting this error even though closest_color returns a single value, and I don't know why. Thank you for your help!
COLORS is a list of colors; I've tested the closest_color() function and it works well.
Error message:
Exception has occurred: TypeError
function takes exactly 1 argument (3 given)
  File "C:\Users\user\Desktop\תוכנות שעשיתי\program.py", line 133, in read_color
    pix[x,y] = color
  File "C:\Users\user\Desktop\תוכנות שעשיתי\program.py", line 137, in <module>
    read_color(height,width, COLORS, pix)
EDIT!
Apparently the code works for most images but not for all of them; for example, this image doesn't work and I get this error.
You are being inconsistent by reading the pixels from the RGBA converted image but setting the pixels in the original maybe-not-RGBA image. Fixing that makes your code work with the sample image.
pix = rgb_im.load()
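Put differently, read and write through the same RGBA image object; a minimal sketch of the relevant part (closest_color and COLORS as defined in the question, keeping each pixel's original alpha):
from PIL import Image

im = Image.open('rick.png')
rgb_im = im.convert('RGBA')
pix = rgb_im.load()  # pixel access object for the same image we read from

width, height = rgb_im.size
for x in range(width):
    for y in range(height):
        r, g, b, a = rgb_im.getpixel((x, y))
        pix[x, y] = closest_color(r, g, b, COLORS) + (a,)  # keep the original alpha

rgb_im.save('try.png')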

Python, Numpy, replace certain values in an image

I'm looking for an efficient way to replace certain values within a numpy image.
So far, this is what I have:
def observation(self, img):
    # 45 50 184
    background = np.array([45, 50, 184])
    # 80 0 132
    border = np.array([80, 0, 132])
    img = self.crop(img)
    for line_index, line in enumerate(img):
        for pixel_index, pixel in enumerate(line):
            if not np.array_equal(pixel, background) and not np.array_equal(pixel, border):
                img[line_index][pixel_index] = [254, 254, 254]
The idea is to replace all the colors that are neither background nor border with white.
I'm quite new to this, so I'm fairly sure that there is a more efficient way to do this.
Thanks all.
numpy.where should do the job. You have to call it either twice (once for the background and once for the border) or combine the two conditions, comparing whole pixels rather than individual channels:
mask = np.any(img != background, axis=-1) & np.any(img != border, axis=-1)
img = np.where(mask[..., None], [254, 254, 254], img)
See this post for a small example (possible duplicate?)
Hope it helps
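A self-contained sketch of the same idea on a tiny dummy image (the pixel values are made up for illustration; boolean-mask assignment is used here instead of np.where, which modifies img in place as in the question):
import numpy as np

background = np.array([45, 50, 184])
border = np.array([80, 0, 132])

# 2 x 2 dummy image: one background pixel, one border pixel, two arbitrary colors
img = np.array([[[45, 50, 184], [80, 0, 132]],
                [[10, 20, 30], [200, 100, 50]]])

# True where a pixel is neither background nor border
mask = np.any(img != background, axis=-1) & np.any(img != border, axis=-1)
img[mask] = [254, 254, 254]
print(img)  # the background and border pixels are untouched, the other two become white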

How to get pixel coordinates if I know the color (RGB)?

I use Python, opencv and PIL.
image = cv2.imread('image.jpg')
color = (235, 187, 7)
How can I get the pixel coordinates (x, y) if I know the pixel's color?
Here is a numpythonic solution. The numpy library speeds up operations wherever possible.
Assuming the color to be: color = (235, 187, 7)
indices = np.where(img == color)
I used the numpy.where() method to retrieve a tuple of index arrays: the first array contains the row coordinates of the pixels of color (235, 187, 7), the second contains the column coordinates, and, because this is a 3-channel image, the third contains the channel indices.
Now indices returns something like the following:
(array([ 81, 81, 81, ..., 304, 304, 304], dtype=int64),
array([317, 317, 317, ..., 520, 520, 520], dtype=int64),
array([0, 1, 2, ..., 0, 1, 2], dtype=int64))
I then used the zip() method to get a list of tuples containing those points.
coordinates = zip(indices[0], indices[1])
But notice that, since this is a color image with three channels, each coordinate is repeated three times. We have to keep only the unique coordinates, which can be accomplished using set().
unique_coordinates = list(set(list(coordinates)))
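Put together as a self-contained snippet; matching whole pixels with .all(axis=-1) avoids the per-channel repetition in the first place, so the set() step is no longer needed (note that cv2.imread returns channels in BGR order):
import cv2
import numpy as np

img = cv2.imread('image.jpg')
color = (235, 187, 7)  # in the same channel order as img (BGR for cv2.imread)

match = (img == np.array(color)).all(axis=-1)  # True where all three channels match
rows, cols = np.where(match)
coordinates = list(zip(cols, rows))  # (x, y) pairs, one per matching pixel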
Try something like:
from PIL import Image

color = (235, 187, 7)
im = Image.open('image.gif')
rgb_im = im.convert('RGB')
for x in range(rgb_im.size[0]):
    for y in range(rgb_im.size[1]):
        r, g, b = rgb_im.getpixel((x, y))
        if (r, g, b) == color:
            print(f"Found {color} at {x},{y}!")
But getpixel can be slow, so look at using pixel access objects.
Also note that the value returned can depend on the image type. For example, a single value is returned with pix[1, 1] because GIF pixels refer to one of the 256 values in the GIF color palette.
See also this SO post: Python and PIL pixel values different for GIF and JPEG; this PIL Reference page contains more information on the convert() function.
By the way, your code would work just fine for .jpg images.
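The pixel-access-object version mentioned above might look like this (same placeholder file name):
from PIL import Image

color = (235, 187, 7)
rgb_im = Image.open('image.gif').convert('RGB')
pix = rgb_im.load()  # pixel access object, faster than repeated getpixel calls
width, height = rgb_im.size
for x in range(width):
    for y in range(height):
        if pix[x, y] == color:
            print(f"Found {color} at {x},{y}!")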
You can use the following:
import numpy as np
import cv2

# for a color image
img = cv2.imread('image.jpg')  # as in the question
color = (75, 75, 75)
pixels = np.argwhere(img == color)
Output (it repeats the same coordinates three times, once per channel):
[[ 0 28 0]
[ 0 28 1]
[ 0 28 2]
[ 0 54 0]
[ 0 54 1]
[ 0 54 2]
................]
To avoid that, do the following (sorry for the code readability):
pixels = pixels[::3][:, [0, 1]]
output:
[[ 0 28]
[ 0 54]
...........]
For a grayscale image it looks better:
color = (75)
pixels = np.argwhere(img == color)
output:
[[ 0 28]
[ 0 54]
...........]
from PIL import Image  # The reason I use PIL (Pillow) and not opencv is that I find it a very useful library for image editing.

image = Image.open('Name_image')  # the image is opened and named image
f = image.load()  # load() returns a pixel access object that lets you read and write pixels

color = (235, 187, 7)  # the Red, Green, Blue values that you want to find the coordinates of
PixelCoordinates = []  # list to which all pixel coordinates that match the requirements will be added

# The lines of code below check, for each pixel in the image, whether the RGB values
# are equal to (235, 187, 7)
for x in range(image.size[0]):
    for y in range(image.size[1]):
        if f[x, y] == color:
            PixelCoordinates.append([x, y])
Here is a solution using the cv2 library only:
import cv2

blue = int(input("Enter blue value: "))
green = int(input("Enter green value: "))
red = int(input("Enter red value: "))
path = str(input("Enter image path with image extension: "))
img = cv2.imread(path)
img = cv2.resize(img, (150, 150))
x, y, z = img.shape
for i in range(x):
    for j in range(y):
        # OpenCV stores pixels in BGR order
        if img[i, j, 0] == blue and img[i, j, 1] == green and img[i, j, 2] == red:
            print("Found color at ", i, j)
