Masking an Image by Manipulating Pixels through Conditions - python

I have read an image in Python in the RGBA color space. The image is 640 by 960 and is stored in an array called img_array. Each element of the array is a pixel [R,G,B,A], for example [21,34,53,255]. I want to filter the image by turning every pixel that does not satisfy the condition below into black [0,0,0,255].
R > 95 and G > 40 and B > 20 and R > G and R > B and | R - G | > 15 and A > 15
How can I do this in Python? All I know how to do is set pixels to black that are not within lower and upper boundaries, using cv2.inRange(). Below is my sample code:
#import the necessary packages
import imutils
import numpy as np
import argparse
import cv2
# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image",help = "path to the image file")
args = vars(ap.parse_args())
#read image with alpha channel
img_array = cv2.imread(args["image"], -1)
rgba_lower_bound = np.array([0, 59, 59,2], dtype = "uint8")
rgba_upper_bound = np.array([20, 255, 255,255], dtype = "uint8")
skinMask = cv2.inRange(img_array, rgba_lower_bound, rgba_upper_bound)
skin = cv2.bitwise_and(img_array, img_array, mask = skinMask)
cv2.imshow("images", skin)
Please help me with this.

Assuming R, G, B, A are all numpy arrays with the same shape, created by something like:
R, G, B, A = cv2.split(img_array)
simply create a mask using the same conditionals; since they're numpy arrays, use & instead of and:
mask = (R > 95) & (G > 40) & (B > 20) & (R > G) & (R > B) & (abs(R - G) > 15) & (A > 15)
Then to set everything not satisfying the condition to black:
img_array[~mask] = [0, 0, 0, 255]
Note that the mask here is two-dimensional (one value per pixel), so indexing img_array with it selects whole pixels, and the assigned [0, 0, 0, 255] is broadcast across all four channels. Also note that ~ inverts a NumPy boolean array, so this indexes wherever mask is False, which is what you want.
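For instance, here is a minimal sketch on a tiny synthetic RGBA array (hypothetical values) showing the boolean indexing and broadcasting at work:
import numpy as np

# 1x2 RGBA image: the first pixel passes the condition, the second fails
img_array = np.array([[[120, 60, 30, 255],
                       [10, 200, 200, 255]]], dtype=np.uint8)
mask = np.array([[True, False]])      # same shape as the image's first two axes
img_array[~mask] = [0, 0, 0, 255]     # the 4-element row is broadcast per pixel
print(img_array[0, 1])                # -> [  0   0   0 255]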
Some more info on transparency: if the alpha channel is 0, the pixel is fully transparent, and if it's 255 (for an unsigned 8-bit image), it is fully opaque. If you want the image to be transparent at those locations instead of black, you can turn the mask into a uint8 array scaled to 255 and merge it back in as the new alpha channel, like so:
R, G, B, A = cv2.split(img_array)
mask = (R > 95) & (G > 40) & (B > 20) & (R > G) & (R > B) & (abs(R - G) > 15) & (A > 15)
new_A = 255*mask.astype(np.uint8)
new_img_array = cv2.merge([R, G, B, new_A])
This way you're not losing any of the color information in R, G, B should you want to keep it.
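If you want to keep that result, note that cv2.imshow does not render transparency; writing to a format that stores alpha, such as PNG, does preserve it (the output filename here is just an example):
# Hypothetical output path; PNG stores the 4th (alpha) channel
cv2.imwrite('skin_only.png', new_img_array)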

You could do something like this:
def set_to_black(image):
    # grab the image dimensions
    h = image.shape[0]
    w = image.shape[1]
    # loop over the image, pixel by pixel
    for y in range(0, h):
        for x in range(0, w):
            r, g, b, a = image[y, x]
            # set the pixel to black if it fails the condition from the question
            if not (r > 95 and g > 40 and b > 20 and r > g and r > b
                    and abs(int(r) - int(g)) > 15 and a > 15):
                image[y, x] = [0, 0, 0, 255]
    # return the thresholded image
    return image
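A hypothetical call, assuming img_array is the RGBA array from the question:
filtered = set_to_black(img_array)
cv2.imshow("filtered", filtered)
cv2.waitKey(0)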

Related

How to avoid nesting np.where inside np.where

I wrote a program to change the color of skin in a photo.
First I get a skin mask, then I convert the BGR image to HSV and add a value to the V channel inside the mask, like this:
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
H, S, V = cv2.split(hsv)
NV = np.where(skin_mask > 0, V + skin_mask / 255 * 50, V).astype(np.uint8)
NHSV = cv2.cvtColor(cv2.merge([H, S, NV]), cv2.COLOR_HSV2BGR)
But some originally white pixels turn black. I think V + skin_mask / 255 * 50 pushes some pixels over 255.
So I tried:
NV = np.where(skin_mask > 0, np.where(V + skin_mask / 255 * 50 > 255, 255, V + skin_mask / 255 * 50), V).astype(np.uint8)
It works, but it's ugly.
I want to know how to write this more cleanly, without nesting np.where inside np.where. Thank you very much!
It may be more elegant to use skin_mask as a mask, instead of applying arithmetic like skin_mask / 255 * 50.
You may solve it using cv2.add:
NV = V.copy()
cv2.add(NV, np.full_like(V, 50), NV, mask=skin_mask)
Advantages of using cv2.add over NumPy arithmetic:
cv2.add supports mask argument (mask element values are usually 0 and 255).
cv2.add clips the result to the valid range of uint8 [0, 255], without overflow.
Code I used for testing the solution:
import numpy as np
import cv2
# Build sample skin_mask
skin_mask = np.zeros((100, 80), np.uint8)
skin_mask[30:70, 30:60] = 255
# Build sample V matrix
V = np.full_like(skin_mask, 60)
V[40:80, 40:80] = 220
# Sum using cv2.add
NV = V.copy()
cv2.add(NV, np.full_like(V, 50), NV, mask=skin_mask)
# Sum using NumPy (used as reference for testing).
refNV = np.where(skin_mask > 0, np.minimum(V + skin_mask / 255 * 50, 255), V).astype(np.uint8) # Reference
if np.any(NV != refNV):
    print('There is a bug: NV != refNV')  # Should not enter here
# Show the images
cv2.imshow('skin_mask', skin_mask)
cv2.imshow('V', V)
cv2.imshow('NV', NV)
cv2.imshow('refNV', refNV)
cv2.waitKey()
cv2.destroyAllWindows()
You still need to detect the overflow. A slightly cleaner way is to use np.minimum or np.clip:
brighter_V = np.minimum(V + skin_mask / 255 * 50, 255)
NV = np.where(skin_mask > 0, brighter_V, V).astype(np.uint8)
The np.clip approach is even more general, in case values can also go below 0:
brighter_V = np.clip(V + skin_mask / 255 * 50, 0, 255)
NV = np.where(skin_mask > 0, brighter_V, V).astype(np.uint8)
(the break in two lines is just my personal coding preference)

How to generate a mask using Pillow's Image.load() function

I want to create a mask based on certain pixel values. For example: every pixel where B > 200
The Image.load() method seems to be exactly what I need for identifying the pixels with these values, but I can't seem to figure out how to take all these pixels and create a mask image out of them.
R, G, B = 0, 1, 2
pixels = self.input_image.get_value().load()
width, height = self.input_image.get_value().size
for y in range(0, height):
    for x in range(0, width):
        if pixels[x, y][B] > 200:
            print("%s - %s's blue is more than 200" % (x, y))
I meant for you to avoid for loops and just use Numpy. So, starting with this image:
from PIL import Image
import numpy as np
# Open image
im = Image.open('colorwheel.png')
# Make Numpy array
ni = np.array(im)
# Mask pixels where Blue > 200
blues = ni[:,:,2]>200
# Save logical mask as PNG
Image.fromarray((blues*255).astype(np.uint8)).save('result.png')
If you want to make the masked pixels black, use:
ni[blues] = 0
Image.fromarray(ni).save('result.png')
You can make more complex, compound tests against ranges like this:
#!/usr/bin/env python3
from PIL import Image
import numpy as np
# Open image
im = Image.open('colorwheel.png')
# Make Numpy array
ni = np.array(im)
# Mask pixels where 100 < Blue < 200
blues = ( ni[:,:,2]>100 ) & (ni[:,:,2]<200)
# Save logical mask as PNG
Image.fromarray((blues*255).astype(np.uint8)).save('result.png')
You can also make a condition on Reds, Greens and Blues and then use Numpy's np.logical_and() and np.logical_or() to make compound conditions, e.g.:
bluesHi = ni[:,:,2] > 200
redsLo = ni[:,:,0] < 50
mask = np.logical_and(bluesHi,redsLo)
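If you then want to use such a boolean mask with PIL itself, one option (a sketch, assuming the same colorwheel.png input) is to convert it into a greyscale mask image and pass it to Image.composite():
from PIL import Image
import numpy as np

im = Image.open('colorwheel.png').convert('RGB')
ni = np.array(im)
mask = np.logical_and(ni[:,:,2] > 200, ni[:,:,0] < 50)

# 'L'-mode mask: 255 where the condition holds, 0 elsewhere
mask_im = Image.fromarray((mask * 255).astype(np.uint8))
black = Image.new('RGB', im.size, 0)
# keep the original where the mask is 255, black everywhere else
result = Image.composite(im, black, mask_im)
result.save('composited.png')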
Thanks to the reply from Mark Setchell, I solved it by making a numpy array the same size as my image, filled with zeroes. Then, for every pixel where B > 200, I set the corresponding value in the array to 255. Finally I converted the numpy array to a PIL image in the same mode as my input image.
R, G, B = 0, 1, 2
pixels = self.input_image.get_value().load()
width, height = self.input_image.get_value().size
mode = self.input_image.get_value().mode
mask = np.zeros((height, width), dtype=np.uint8)
for y in range(0, height):
    for x in range(0, width):
        if pixels[x, y][B] > 200:
            mask[y][x] = 255
mask_image = Image.fromarray(mask).convert(mode)
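For reference, the same mask can be built without the loops (a sketch using the NumPy approach from the answer above, keeping the hypothetical self.input_image accessor):
ni = np.array(self.input_image.get_value())
mask = np.where(ni[:, :, B] > 200, 255, 0).astype(np.uint8)
mask_image = Image.fromarray(mask).convert(mode)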

Error displaying an image from an array of pixels in Python

I have a (numpy) array of pixels acquired as:
''' import numpy and matplotlib '''
image = Image.open('trollface.png', 'r')
width, height = image.size
pixel_values = list(image.getdata())
pixel_values = np.array(pixel_values).reshape((width, height, 3)) # 3 channels RGB
#height, width = len(pixel_values), len(pixel_values[0])
I need to compute the digital negative of this image -
for y in range(0, height):
    for x in range(0, width):
        R, G, B = pixel_values[x, y]
        pixel_values[x, y] = (255 - R, 255 - G, 255 - B)
I tried displaying the image from the above pixels with the help of this thread:
plt.imshow(np.array(pixel_values).reshape(width,height,3))
plt.show()
But it just displays a blank (white) window, with this error in CLI:
The aim here is to achieve a negative transformation of an image.
Pixel translations can be applied directly to the R, G, B bands using the Image.point method.
image = Image.open('trollface.png')
source = image.split()
r, g, b, a = 0, 1, 2, 3
negate = lambda i: 255 - i
transform = [source[band].point(negate) for band in (r, g, b)]
if len(source) == 4:  # should have 4 bands for images with alpha channel
    transform.append(source[a])  # add alpha channel
out = Image.merge(image.mode, transform)
out.save('negativetrollface.png')
EDIT: using the OP's procedure, you have:
im = Image.open('trollface.png')
w, h = im.size
arr = np.array(im)
original_shape = arr.shape
arr_to_dim = arr.reshape((w, h, 4))

# Note that this is expensive.
# Always take advantage of array manipulation implemented in the C bindings
for x in range(0, w):
    for y in range(0, h):
        r, g, b, a = arr_to_dim[x, y]
        arr_to_dim[x, y] = np.array([255 - r, 255 - g, 255 - b, a])

dim_to_arr = arr_to_dim.reshape(original_shape)
im = Image.fromarray(dim_to_arr)
im.save('negativetrollface.png')
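As that comment suggests, the per-pixel loop can be replaced by a single vectorized expression (a sketch, assuming the PNG really has four RGBA channels):
from PIL import Image
import numpy as np

arr = np.array(Image.open('trollface.png'))
arr[..., :3] = 255 - arr[..., :3]    # negate R, G, B; leave alpha untouched
Image.fromarray(arr).save('negativetrollface.png')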

Channel mix with Pillow

I would like to do some color transformations, for example given RGB channels
R = G + B / 2
or some other transformation where a channel value is calculated based on the values of other channels of the same pixel.
It seems that the .point() function can only operate on one channel. Is there a way to do what I want?
An alternative to using PIL.ImageChops is to convert the image data to a Numpy array. Numpy uses native machine data types and its compiled routines can process array data very quickly compared to doing Python loops on Python numeric objects. So the speed of Numpy code is comparable to the speed of using ImageChops. And you can do all sorts of mathematical operations in Numpy, or using related libraries, like SciPy.
Numpy provides a function np.asarray which can create a Numpy array from PIL data. And PIL.Image has a .fromarray method to load image data from a Numpy array.
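For example, a minimal round trip (the filename is borrowed from the test code below):
from PIL import Image
import numpy as np

img = Image.open('Glasses0.png')   # PIL image
arr = np.asarray(img)              # PIL -> NumPy array
img2 = Image.fromarray(arr)        # NumPy array -> PIL image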
Here's a script that shows two different Numpy approaches, as well as an approach based on kennytm's ImageChops code.
#!/usr/bin/env python3
''' PIL Image channel manipulation demo

Replace each RGB channel by the mean of the other 2 channels, i.e.,

R_new = (G_old + B_old) / 2
G_new = (R_old + B_old) / 2
B_new = (R_old + G_old) / 2

This can be done using PIL's own ImageChops functions
or by converting the pixel data to a Numpy array and
using standard Numpy array arithmetic

Written by kennytm & PM 2Ring 2017.03.18
'''
from PIL import Image, ImageChops
import numpy as np

def comp_mean_pil(iname, oname):
    print('Loading', iname)
    img = Image.open(iname)
    #img.show()

    rgb = img.split()
    half = ImageChops.constant(rgb[0], 128)
    rh, gh, bh = [ImageChops.multiply(x, half) for x in rgb]
    rgb = [
        ImageChops.add(gh, bh),
        ImageChops.add(rh, bh),
        ImageChops.add(rh, gh),
    ]
    out_img = Image.merge(img.mode, rgb)
    out_img.show()
    out_img.save(oname)
    print('Saved to', oname)

# Do the arithmetic using 'uint8' arrays, so we must be
# careful that the data doesn't overflow
def comp_mean_npA(iname, oname):
    print('Loading', iname)
    img = Image.open(iname)
    in_data = np.asarray(img)
    # Halve all RGB values
    in_data = in_data // 2
    # Split image data into R, G, B channels
    r, g, b = np.split(in_data, 3, axis=2)
    # Create new channel data
    rgb = (g + b), (r + b), (r + g)
    # Merge channels
    out_data = np.concatenate(rgb, axis=2)
    out_img = Image.fromarray(out_data)
    out_img.show()
    out_img.save(oname)
    print('Saved to', oname)

# Do the arithmetic using 'uint16' arrays, so we don't need
# to worry about data overflow. We can use dtype='float'
# if we want to do more sophisticated operations
def comp_mean_npB(iname, oname):
    print('Loading', iname)
    img = Image.open(iname)
    in_data = np.asarray(img, dtype='uint16')
    # Split image data into R, G, B channels
    r, g, b = in_data.T
    # Transform channel data
    r, g, b = (g + b) // 2, (r + b) // 2, (r + g) // 2
    # Merge channels
    out_data = np.stack((r.T, g.T, b.T), axis=2).astype('uint8')
    out_img = Image.fromarray(out_data)
    out_img.show()
    out_img.save(oname)
    print('Saved to', oname)

# Test
iname = 'Glasses0.png'
oname = 'Glasses0_out.png'

comp_mean = comp_mean_npB
comp_mean(iname, oname)
input image
output image
FWIW, that output image was created using comp_mean_npB.
The calculated channel values produced by the 3 functions can differ from one another by 1, due to the differences in the way they perform the calculations, but of course such differences aren't readily visible. :)
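For example, a quick check of where that off-by-one comes from (halving each channel before adding, as in comp_mean_npA, versus halving the sum, as in comp_mean_npB):
g, b = 101, 101
print(g // 2 + b // 2)   # 100 - each halving truncates separately
print((g + b) // 2)      # 101 - only the final halving truncates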
For this particular operation, the color transformation can be written as a matrix multiplication, so you could use the convert() method with a custom matrix (assuming no alpha channel):
# img must be in RGB mode (not RGBA):
transformed_img = img.convert('RGB', (
    0, 1, .5, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
))
Otherwise, you can split() the image into 3 or 4 images of each color band, apply whatever operation you like, and finally merge() those bands back to a single image. Again, the original image should be in RGB or RGBA mode.
import PIL.Image
import PIL.ImageChops

(red, green, blue, *rest) = img.split()
half_blue = PIL.ImageChops.multiply(blue, PIL.ImageChops.constant(blue, 128))
new_red = PIL.ImageChops.add(green, half_blue)
transformed_img = PIL.Image.merge(img.mode, (new_red, green, blue, *rest))

Is there any function in openCV or another library that can tile squares within an arbitrary contour?

I have images where I've found some contours around dogs, e.g.:
I want to tile squares/rectangles inside of the contour. Is there an openCV (or other library) function for this? I'm using Python. I'd like it to look something like this:
I was able to solve this by first drawing rectangles over the entire image, then checking which ones were in the area with the dog:
# the image here is stored as the variable fg
# with b, g, r, and alpha channels
# the alpha channel is masking the dog part of the image
import cv2
b, g, r, a = cv2.split(fg)
fgcp = fg.copy()
h, w = fg.shape[:2]
h -= 1
w -= 1 # avoid indexing error
rectDims = [10, 10] # w, h of rectangles
hRects = h // rectDims[0]
wRects = w // rectDims[1]
for i in range(wRects):
    for j in range(hRects):
        pt1 = (i * rectDims[0], j * rectDims[1])
        pt2 = ((i + 1) * rectDims[0], (j + 1) * rectDims[1])
        # alpha is 255 over the part of the dog
        if a[pt1[1], pt1[0]] == 255 and a[pt2[1], pt2[0]] == 255:
            cv2.rectangle(fgcp, pt1, pt2, [0, 0, 255], 2)
cv2.imshow('', fgcp), cv2.waitKey(0)
It's not necessarily the ideal implementation, but it works well enough.
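If the corner checks let too many partially-outside squares through, a variation (a sketch, assuming fg is the same 4-channel BGRA array as above) is to require the whole cell to sit inside the alpha mask:
import numpy as np
import cv2

def tile_mask(fg, cell=10):
    # Draw only grid cells whose entire area lies inside the alpha mask
    out = fg.copy()
    a = fg[:, :, 3]                     # alpha channel
    h, w = a.shape
    for y in range(0, h - cell, cell):
        for x in range(0, w - cell, cell):
            if np.all(a[y:y + cell, x:x + cell] == 255):
                cv2.rectangle(out, (x, y), (x + cell, y + cell), (0, 0, 255), 2)
    return out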
