Python OpenCV - Cannot change pixel value of a picture

I need to change the white pixels to black and the black pixels to white in the picture given below.
import cv2
img=cv2.imread("cvlogo.png")
It is a basic OpenCV logo with a white background. I resized the picture to a fixed, known size:
img=cv2.resize(img, (300,300))#(width,height)
row,col=0,0
i=0
Now I check each pixel by its row and column position with a for loop. If a pixel is white, change it to black; if it is black, change it to white.
for row in range(0,300,1):
    print(row)
    for col in range(0,300,1):
        print(col)
        if img[row,col] is [255,255,255]: # I have used == instead of 'is', but there is no change
            img[row,col]=[0,0,0]
        elif img[row,col] is [0,0,0]:
            img[row,col]=[255,255,255]
There is no error during execution, but it does not change the pixel values to black or white. Moreover, the if statement never executes. I am thoroughly confused.
cv2.imshow('img',img)
cv2.waitKey(0)
cv2.destroyAllWindows()

I am not very experienced, but I would do it using numpy.where(), which is faster than the loops.
import cv2
import numpy as np
import matplotlib.pyplot as plt
# Read the image
original_image=cv2.imread("cvlogo.png")
# Not necessary. Make a copy to plot later
img=np.copy(original_image)
#Isolate the areas where the color is black(every channel=0) and white (every channel=255)
black=np.where((img[:,:,0]==0) & (img[:,:,1]==0) & (img[:,:,2]==0))
white=np.where((img[:,:,0]==255) & (img[:,:,1]==255) & (img[:,:,2]==255))
#Turn black pixels to white and vice versa
img[black]=(255,255,255)
img[white]=(0,0,0)
# Plot the images
fig=plt.figure()
ax1 = fig.add_subplot(1,2,1)
ax1.imshow(original_image)
ax1.set_title('Original Image')
ax2 = fig.add_subplot(1,2,2)
ax2.imshow(img)
ax2.set_title('Modified Image')
plt.show()

I think this should work. :)
(I used numpy just to get the width and height values - you don't need this.)
import cv2
img=cv2.imread("cvlogo.png")
img=cv2.resize(img, (300,300))
height, width, channels = img.shape
white = [255,255,255]
black = [0,0,0]
for x in range(0,width):
    for y in range(0,height):
        channels_xy = img[y,x]
        if all(channels_xy == white):
            img[y,x] = black
        elif all(channels_xy == black):
            img[y,x] = white
cv2.imshow('img',img)
cv2.waitKey(0)
cv2.destroyAllWindows()

This is another way of solving the problem.
Credits: ajlaj25
import cv2
img=cv2.imread("cvlogo.png")
img=cv2.resize(img, (300,300))
height, width, channels = img.shape
print(height,width,channels)
for x in range(0,width):
    for y in range(0,height):
        if img[x,y,0] == 255 and img[x,y,1] == 255 and img[x,y,2] == 255:
            img[x,y,0] = 0
            img[x,y,1] = 0
            img[x,y,2] = 0
        elif img[x,y,0] == 0 and img[x,y,1] == 0 and img[x,y,2] == 0:
            img[x,y,0] = 255
            img[x,y,1] = 255
            img[x,y,2] = 255
img[x,y] denotes all three channel values - [ch1,ch2,ch3] - at the x,y coordinates, and img[x,y,0] is the ch1 channel's value at those coordinates. Note that x and y denote the pixel's location, not its RGB values. (NumPy actually indexes images as img[row, col], so iterating x over the width as the first index only works here because the resized image is square, 300x300.)
cv2.imshow('Converted Image',img)
cv2.waitKey(0)
cv2.destroyAllWindows()

A bit late, but I'd like to contribute another approach to this problem. My approach is based on image indexing (boolean masks), which is faster than looping through the image as the accepted answer does.
I timed both versions to illustrate what I just said. Take a look at the code below:
import cv2
from matplotlib import pyplot as plt
# Reading image to be used in the montage, this step is not important
original = cv2.imread('imgs/opencv.png')
# Starting time measurement
e1 = cv2.getTickCount()
# Reading the image
img = cv2.imread('imgs/opencv.png')
# Converting the image to grayscale
imgGray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# Converting the grayscale image into a binary image to get the whole image
ret,imgBinAll = cv2.threshold(imgGray,175,255,cv2.THRESH_BINARY)
# Converting the grayscale image into a binary image to get the text
ret,imgBinText = cv2.threshold(imgGray,5,255,cv2.THRESH_BINARY)
# Changing white pixels from original image to black
img[imgBinAll == 255] = [0,0,0]
# Changing black pixels from original image to white
img[imgBinText == 0] = [255,255,255]
# Finishing time measurement
e2 = cv2.getTickCount()
t = (e2 - e1)/cv2.getTickFrequency()
print(f'Time spent in seconds: {t}')
At this point I stopped timing because the next step is just to plot the montage, the code follows:
# Plotting the image
plt.subplot(1,5,1),plt.imshow(original)
plt.title('original')
plt.xticks([]),plt.yticks([])
plt.subplot(1,5,2),plt.imshow(imgGray,'gray')
plt.title('grayscale')
plt.xticks([]),plt.yticks([])
plt.subplot(1,5,3),plt.imshow(imgBinAll,'gray')
plt.title('binary - all')
plt.xticks([]),plt.yticks([])
plt.subplot(1,5,4),plt.imshow(imgBinText,'gray')
plt.title('binary - text')
plt.xticks([]),plt.yticks([])
plt.subplot(1,5,5),plt.imshow(img,'gray')
plt.title('final result')
plt.xticks([]),plt.yticks([])
plt.show()
That is the final result:
Montage showing all steps of the proposed approach
And this is the time consumed (printed in the console):
Time spent in seconds: 0.008526025
In order to compare both approaches fairly, I commented out the line where the image is resized in the looping code, and I also stopped timing before the imshow command. These were the results:
Time spent in seconds: 1.837972522
Final result of the looping approach
If you examine both images you'll see some contour differences. When you are working with image processing, efficiency is often key, so it is a good idea to save time where possible. This approach can be adapted to different situations; take a look at the threshold documentation.
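As one example of such an adaptation (my own addition, not from the original answer), cv2.adaptiveThreshold can replace the global threshold when the lighting across the image is uneven:
import cv2
img = cv2.imread('imgs/opencv.png')
imgGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# adaptiveThreshold picks a local threshold per 11x11 neighbourhood instead of one global value
imgBinAdaptive = cv2.adaptiveThreshold(imgGray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 11, 2)
# same masking idea as above: turn the white background black
img[imgBinAdaptive == 255] = [0, 0, 0]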

Related

How to count the number of white and black pixels in a color picture in Python? How to count total pixels using numpy

I want to calculate the percentage of black pixels and white pixels in the picture; it is a colorful one.
import cv2
import numpy as np
import matplotlib.pyplot as plt
image = cv2.imread("image.png")
cropped_image = image[183:779, 0:1907, :]
You don't want to run for loops over images - it is dog slow - no disrespect to dogs. Use Numpy.
#!/usr/bin/env python3
import numpy as np
import random
# Generate a random image 640x150 with many colours but no black or white
im = np.random.randint(1,255,(150,640,3), dtype=np.uint8)
# Draw a white rectangle 100x100
im[10:110,10:110] = [255,255,255]
# Draw a black rectangle 10x10
im[120:130,200:210] = [0,0,0]
# Count white pixels
sought = [255,255,255]
white = np.count_nonzero(np.all(im==sought,axis=2))
print(f"white: {white}")
# Count black pixels
sought = [0,0,0]
black = np.count_nonzero(np.all(im==sought,axis=2))
print(f"black: {black}")
Output
white: 10000
black: 100
If you mean you want the tally of pixels that are either black or white, you can either add the two numbers above together, or test for both in one go like this:
blackorwhite = np.count_nonzero(np.all(im==[255,255,255],axis=2) | np.all(im==[0,0,0],axis=2))
If you want the percentage, bear in mind that the total number of pixels is easily calculated with:
total = im.shape[0] * im.shape[1]
As regards testing, it is the same as any software development - get used to generating test data and using it :-)
white_pixels = np.logical_and(255==cropped_image[:,:,0],np.logical_and(255==cropped_image[:,:,1],255==cropped_image[:,:,2]))
num_white = np.sum(white_pixels)
and the same with 0 for the black ones
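For completeness, a sketch of that black-pixel counterpart (the same pattern with 0 instead of 255):
black_pixels = np.logical_and(0==cropped_image[:,:,0],np.logical_and(0==cropped_image[:,:,1],0==cropped_image[:,:,2]))
num_black = np.sum(black_pixels)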
Keep variables white_count and black_count and just iterate through the image matrix. Whenever you encounter 255, increase white_count, and whenever you encounter 0, increase black_count. Try it yourself; if you have no success I'll post the code here :)
P.S. Keep the dimensionality of the image in mind.
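A minimal sketch of that counting loop (my interpretation of the above, with a hypothetical filename), checking all three channels so the image's dimensionality is respected:
import cv2
import numpy as np

img = cv2.imread("image.png")  # hypothetical filename; shape is (height, width, 3)
white_count = 0
black_count = 0
for row in range(img.shape[0]):
    for col in range(img.shape[1]):
        pixel = img[row, col]
        if np.all(pixel == 255):   # all three channels are 255 -> white
            white_count += 1
        elif np.all(pixel == 0):   # all three channels are 0 -> black
            black_count += 1
total = img.shape[0] * img.shape[1]
print(f"white: {100*white_count/total:.2f}%  black: {100*black_count/total:.2f}%")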
You can use the getcolors() method from a PIL Image; it returns a list of (count, color) tuples for the colors found in the image. I'm using the following function to return a dictionary with the color as key and the count as value.
from PIL import Image
def getcolordict(im):
    w,h = im.size
    colors = im.getcolors(w*h)
    colordict = { x[1]:x[0] for x in colors }
    return colordict
im = Image.open('image.jpg')
colordict = getcolordict(im)
# get the amount of black pixels in image
# in RGB black is 0,0,0
blackpx = colordict.get((0,0,0))
# get the amount of white pixels in image
# in RGB white is 255,255,255
whitepx = colordict.get((255,255,255))
# percentage
w,h = im.size
totalpx = w*h
whitepercent=(whitepx/totalpx)*100
blackpercent=(blackpx/totalpx)*100

How can I cut a green background with the foreground from the rest of the picture in Python?

I'm trying to crop multiple images with a green background. The center of the pictures is green and I want to cut the rest out of the picture. The problem is that I got the pictures from a video, so sometimes the green center is bigger and sometimes smaller. My actual task is to use K-Means on the knots, so I have, for example, a green background and two ropes, one blue and one red.
I use Python with OpenCV, NumPy and Matplotlib.
I already crop the center, but sometimes I cut too much and sometimes too little. My image size is 1920 x 1080 in this example.
Here the knot is on the left and there is more to cut
Here the knot is in the center
Here is another example
Here is my desired output from picture 1
Example 1, which doesn't work with all algorithms
Example 2, which doesn't work with all algorithms
Example 3, which doesn't work with all algorithms
Here is my code so far:
import numpy as np
import cv2
import matplotlib.pyplot as plt
from PIL import Image, ImageEnhance
img = cv2.imread('path')
print(img.shape)
imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
crop_img = imgRGB[500:500+700, 300:300+500]
plt.imshow(crop_img)
plt.show()
You can convert the color space to HSV.
src = cv2.imread('path')
imgRGB = cv2.cvtColor(src, cv2.COLOR_BGR2RGB)
imgHSV = cv2.cvtColor(imgRGB, cv2.COLOR_BGR2HSV)
Then use inRange to find only green values.
lower = np.array([20, 0, 0]) #Lower values of the HSV range; green has a hue value of 120, but in OpenCV the hue range is smaller, [0-180]
upper = np.array([100, 255, 255]) #Upper values of the HSV range
imgRange = cv2.inRange(imgHSV, lower, upper)
Then use morphology operations to fill the holes left by the non-green lines (the ropes).
#kernels for morphology operations
kernel_noise = np.ones((3,3),np.uint8) #to delete small noise
kernel_dilate = np.ones((30,30),np.uint8) #bigger kernel to fill the holes left by the ropes
kernel_erode = np.ones((38,38),np.uint8) #bigger kernel to delete the edge pixels added by the dilate function
imgErode = cv2.erode(imgRange, kernel_noise, iterations=1)
imgDilate = cv2.dilate(imgErode, kernel_dilate, iterations=1)
imgErode = cv2.erode(imgDilate, kernel_erode, iterations=1)
Put the mask on the result image. You can now easily find the corners of the green screen (with the findContours function) or use the masked result image in the next steps.
res = cv2.bitwise_and(imgRGB, imgRGB, mask = imgErode) #put mask with green screen on src image
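A rough sketch of the findContours idea mentioned above (my addition, assuming the OpenCV 4.x return signature): take the largest contour of the mask as the green screen and crop to its bounding box.
contours, _ = cv2.findContours(imgErode, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    largest = max(contours, key=cv2.contourArea)  # assume the green area is the biggest blob
    x, y, w, h = cv2.boundingRect(largest)
    cropped = imgRGB[y:y+h, x:x+w]                # crop the source to the green region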
The code below does what you want. First it converts the image to the HSV colorspace, which makes selecting colors easier. Next a mask is made where only the green parts are selected. Some noise is removed and the rows and columns are summed up. Finally a new image is created based on the first/last rows/cols that fall in the green selection.
Since in all the provided examples a little extra needs to be cropped off the top, I've added code to do that. First I invert the mask. Then the sums of the rows/cols can be used to find the first row that is fully within the green selection; this is done for the top only. In the image below, the window 'Roi2' shows the final image.
Edit: updated code after comment by ts.
Updated result:
Code:
import numpy as np
import cv2
# load image
img = cv2.imread("gr.png")
# convert to HSV
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# set lower and upper color limits
lower_val = (30, 0, 0)
upper_val = (65,255,255)
# Threshold the HSV image to get only green colors
# the mask has white where the original image has green
mask = cv2.inRange(hsv, lower_val, upper_val)
# remove noise
kernel = np.ones((8,8),np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
# sum each row and each column of the image
sumOfCols = np.sum(mask, axis=0)
sumOfRows = np.sum(mask, axis=1)
# Find the first and last row / column that has a sum value greater than zero,
# which means its not all black. Store the found values in variables
for i in range(len(sumOfCols)):
    if sumOfCols[i] > 0:
        x1 = i
        print('First col: ' + str(i))
        break

for i in range(len(sumOfCols)-1,-1,-1):
    if sumOfCols[i] > 0:
        x2 = i
        print('Last col: ' + str(i))
        break

for i in range(len(sumOfRows)):
    if sumOfRows[i] > 0:
        y1 = i
        print('First row: ' + str(i))
        break

for i in range(len(sumOfRows)-1,-1,-1):
    if sumOfRows[i] > 0:
        y2 = i
        print('Last row: ' + str(i))
        break
# create a new image based on the found values
#roi = img[y1:y2,x1:x2]
#show images
#cv2.imshow("Roi", roi)
# optional: to cut off the extra part at the top:
#invert mask, all area's not green become white
mask_inv = cv2.bitwise_not(mask)
# search the first and last column top down for a green pixel and cut off at lowest common point
for i in range(mask_inv.shape[0]):
    if mask_inv[i,0] == 0 and mask_inv[i,x2] == 0:
        y1 = i
        print('First row: ' + str(i))
        break
# create a new image based on the found values
roi2 = img[y1:y2,x1:x2]
cv2.imshow("Roi2", roi2)
cv2.imwrite("img_cropped.jpg", roi2)
cv2.waitKey(0)
cv2.destroyAllWindows()
The first step is to extract the green channel from your image; this is easy with OpenCV/NumPy and produces a grayscale image (a 2D numpy array).
import numpy as np
import cv2
img = cv2.imread('knots.png')
imgg = img[:,:,1] #extracting green channel
The second step is thresholding, which means turning the grayscale image into a binary (black and white ONLY) image, for which OpenCV has a ready-made function: https://docs.opencv.org/3.4.0/d7/d4d/tutorial_py_thresholding.html
imgt = cv2.threshold(imgg,127,255,cv2.THRESH_BINARY)[1]
Now imgt is a 2D numpy array consisting solely of 0s and 255s. You now have to decide how you will look for the places to cut; I suggest the following:
the topmost row of pixels containing at least 50% 255s
the bottommost row of pixels containing at least 50% 255s
the leftmost column of pixels containing at least 50% 255s
the rightmost column of pixels containing at least 50% 255s
Now we have to count the number of occurrences in each row and each column:
height = img.shape[0]
width = img.shape[1]
columns = np.apply_along_axis(np.count_nonzero,0,imgt)
rows = np.apply_along_axis(np.count_nonzero,1,imgt)
Now columns and rows are 1D numpy arrays containing the number of 255s in each column/row. Knowing the height and width, we can get 1D numpy arrays of bool values in the following way:
columns = columns>=(height*0.5)
rows = rows>=(width*0.5)
Here 0.5 means the 50% mentioned earlier; feel free to adjust that value to your needs. Now it is time to find the index of the first True and the last True in columns and rows.
icolumns = np.argwhere(columns)
irows = np.argwhere(rows)
leftcut = int(min(icolumns))
rightcut = int(max(icolumns))
topcut = int(min(irows))
bottomcut = int(max(irows))
Using argwhere I got 1D numpy arrays of the indexes of the Trues, then took the lowest and greatest. Finally you can clip your image and save it:
imgout = img[topcut:bottomcut,leftcut:rightcut]
cv2.imwrite('out.png',imgout)
There are two places that might require adjusting: the percentage of 255s (50% in my example) and the threshold value (127 in cv2.threshold).
EDIT: Fixed line with cv2.threshold
Based on the new images you added, I assume that you do not only want to cut out the non-green parts as you asked, but that you want a smaller frame around the ropes/knot. Is that correct? If not, you should upload the video and describe the purpose/goal of the cropping a bit more, so that we can better help you.
Assuming you want a cropped image with only the ropes, the solution is quite similar to the previous answer. However, this time the red and blue of the ropes are selected using HSV. The image is cropped based on the resulting mask. If you want the image somewhat bigger than just the ropes, you can add extra margins - but be sure to account/check for the edge of the image (see the sketch after the code below).
Note: the code below works for the images that have a full green background, so I suggest you combine it with one of the solutions that only selects the green area. I tested this for all your images as follows: I took the code from my other answer, put it in a function and added return roi2 at the end. That output is fed into a second function that holds the code below. All images were processed successfully.
Result:
Code:
import numpy as np
import cv2
# load image
img = cv2.imread("image.JPG")
# blue
lower_val_blue = (110, 0, 0)
upper_val_blue = (179,255,155)
# red
lower_val_red = (0, 0, 150)
upper_val_red = (10,255,255)
# Threshold the HSV image
mask_blue = cv2.inRange(img, lower_val_blue, upper_val_blue)
mask_red = cv2.inRange(img, lower_val_red, upper_val_red)
# combine masks
mask_total = cv2.bitwise_or(mask_blue,mask_red)
# remove noise
kernel = np.ones((8,8),np.uint8)
mask_total = cv2.morphologyEx(mask_total, cv2.MORPH_CLOSE, kernel)
# sum each row and each column of the mask
sumOfCols = np.sum(mask_total, axis=0)
sumOfRows = np.sum(mask_total, axis=1)
# Find the first and last row / column that has a sum value greater than zero,
# which means its not all black. Store the found values in variables
for i in range(len(sumOfCols)):
    if sumOfCols[i] > 0:
        x1 = i
        print('First col: ' + str(i))
        break

for i in range(len(sumOfCols)-1,-1,-1):
    if sumOfCols[i] > 0:
        x2 = i
        print('Last col: ' + str(i))
        break

for i in range(len(sumOfRows)):
    if sumOfRows[i] > 0:
        y1 = i
        print('First row: ' + str(i))
        break

for i in range(len(sumOfRows)-1,-1,-1):
    if sumOfRows[i] > 0:
        y2 = i
        print('Last row: ' + str(i))
        break
# create a new image based on the found values
roi = img[y1:y2,x1:x2]
#show image
cv2.imshow("Result", roi)
cv2.imshow("Image", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
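As mentioned above, extra margins can be added around the found box. Here is a small sketch of that (my addition, not part of the original answer), which clamps to the image borders and could be placed just before the imshow calls:
margin = 20  # hypothetical margin size in pixels
h, w = img.shape[:2]
y1m, y2m = max(0, y1 - margin), min(h, y2 + margin)
x1m, x2m = max(0, x1 - margin), min(w, x2 + margin)
roi_margin = img[y1m:y2m, x1m:x2m]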

How to customise the output generated from image difference?

I have no background in image-processing. I am interested in getting the difference between these two images.
After writing the following code:
from PIL import Image
from PIL import ImageChops
im1 = Image.open("1.png")
im2 = Image.open("2.png")
diff = ImageChops.difference(im2, im1)
diff.save("diff.png")
I get this output:
I am looking for some customisations here:
1) I want to label the differences in the output in different colours. Things from 1.png and 2.png should have different colours.
2) The background should be white.
3) I want my output to have axes and axis labels. Would that be possible somehow?
You probably can't do this with the high-level difference method, but it's quite easy if you compare the images pixel by pixel yourself. Quick attempt:
Code:
from PIL import Image
from PIL import ImageDraw
from PIL import ImageFont
im1 = Image.open("im1.jpeg").convert('1') # binary image for pixel evaluation
rgb1 = Image.open("im1.jpeg").convert('RGB') # RGB image for border copy
p1 = im1.load()
prgb1 = rgb1.load()
im2 = Image.open("im2.jpeg").convert('1') # binary image for pixel evaluation
p2 = im2.load()
width = im1.size[0]
height = im1.size[1]
imd = Image.new("RGB", im1.size)
draw = ImageDraw.Draw(imd)
dest = imd.load()
fnt = ImageFont.truetype('/System/Library/Fonts/OpenSans-Regular.ttf', 20)
for i in range(0, width):
    for j in range(0, height):
        # border region: just copy pixels from RGB image 1
        if j < 30 or j > 538 or i < 170 or i > 650:
            dest[i,j] = prgb1[i,j]
        # pixel is only set in im1, make red
        elif p1[i,j] == 255 and p2[i,j] == 0:
            dest[i,j] = (255,0,0)
        # pixel is only set in im2, make blue
        elif p1[i,j] == 0 and p2[i,j] == 255:
            dest[i,j] = (0,0,255)
        # unchanged pixel/background: make white
        else:
            dest[i,j] = (255,255,255)
draw.text((700, 50),"blue", "blue", font=fnt)
draw.text((700, 20),"red", "red", font=fnt)
imd.show()
imd.save("diff.png")
This assumes that the images are the same size and have identical axes.
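The question's third point (axes and axis labels) isn't covered by the PIL code above. One way to get them, sketched here as a suggestion using matplotlib, is to display the composed diff image with plt.imshow, which draws pixel-coordinate axes by default:
import matplotlib.pyplot as plt
plt.imshow(imd)                # matplotlib accepts PIL images directly
plt.xlabel("x (pixels)")
plt.ylabel("y (pixels)")
plt.title("red = only in im1, blue = only in im2")
plt.show()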

convert image (np.array) to binary image

Thank you for reading my question.
I am new to Python and became interested in SciPy. I am trying to figure out how I can turn the image of the raccoon (in scipy.misc) into a binary one (black and white). This is not covered in the scipy-lectures tutorial.
This is my code so far:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy import misc #here is how you get the racoon image
face = misc.face()
image = misc.face(gray=True)
plt.imshow(image, cmap=plt.cm.gray)
print image.shape
def binary_racoon(image, lowerthreshold, upperthreshold):
    img = image.copy()
    shape = np.shape(img)
    for i in range(shape[1]):
        for j in range(shape[0]):
            if img[i,j] < lowerthreshold and img[i,j] > upperthreshold:
                #then assign black to the pixel
            else:
                #then assign white to the pixel
    return img
convertedpicture = binary_racoon(image, 80, 100)
plt.imshow(convertedpicture, cmap=plt.cm.gist_gray)
I have seen other people using OpenCV to make a picture binary, but I am wondering how I can do it this way, by looping over the pixels. I have no idea what values to give the upper and lower thresholds, so I made a guess of 80 and 100. Is there also a way to determine this?
In case anyone else is looking for a quick minimal example to experiment with, here's what I used to binarize an image:
from scipy.misc import imread, imsave
# read in image as 8 bit grayscale
img = imread('cat.jpg', mode='L')
# specify a threshold 0-255
threshold = 150
# make all pixels < threshold black
binarized = 1.0 * (img > threshold)
# save the binarized image
imsave('binarized.jpg', binarized)
Input:
Output:
You're overthinking this:
def to_binary(img, lower, upper):
    return (lower < img) & (img < upper)
In numpy, the comparison operators apply over the whole array, elementwise. Note that you have to use & instead of and to combine the booleans, since Python does not allow numpy to overload and.
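A quick usage sketch (my assumption of how it would be called with the raccoon image from the question):
import numpy as np
from scipy import misc

image = misc.face(gray=True)        # grayscale raccoon image from the question
binary = to_binary(image, 80, 100)  # True where 80 < pixel < 100, False elsewhere
print(binary.dtype, binary.mean())  # boolean array; mean() is the fraction of True pixels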
You don't need to iterate over the x and y positions of the image array. Use the numpy array to check whether the array is above or below the threshold of interest. Here is some code that produces a boolean (true/false) array as the black and white image.
# use 4 different thresholds
thresholds = [50,100,150,200]
# create a 2x2 image array
fig, ax_arr = plt.subplots(2,2)
# iterate over the thresholds and image axes
for ax, th in zip(ax_arr.ravel(), thresholds):
    # bw is the black and white array with the same size and shape
    # as the original array. the color map will interpret the 0.0 to 1.0
    # float array as being either black or white.
    bw = 1.0*(image > th)
    ax.imshow(bw, cmap=plt.cm.gray)
    ax.axis('off')
# remove some of the extra white space
fig.tight_layout(h_pad=-1.5, w_pad=-6.5)
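On the question of how to determine the threshold value: one common option, offered here only as a suggestion (none of the answers above cover it), is Otsu's method, which picks the threshold automatically from the image histogram. A minimal sketch with OpenCV:
import cv2
import numpy as np
from scipy import misc

image = misc.face(gray=True).astype(np.uint8)  # 8-bit grayscale raccoon image
# with THRESH_OTSU, the passed threshold (0) is ignored and one is computed from the histogram
otsu_value, binary = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print("Otsu threshold:", otsu_value)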

loop binary image pixel

I have this image with two people in it. It is a binary image that only contains black and white pixels.
First I want to loop over all the pixels and find the white pixels in the image.
Then I want to find the [x,y] coordinates of one particular white pixel.
After that I want to use that particular [x,y] location of the white pixel in the image.
Using that [x,y] coordinate, I want to convert the neighbouring black pixels into white pixels - not the whole image, though.
I wanted to post the image here but unfortunately I can't. I hope my question is understandable now. In the image below you can see the edges.
Say, for example, the edge of the nose: I find it with the loop using [x,y] and then turn all neighbouring black pixels into white pixels.
This is the binary image
The operation described is called dilation, from Mathematical Morphology. You can either use, for example, scipy.ndimage.binary_dilation or implement your own.
Here are the two ways to do it (one is a trivial implementation); you can check that the resulting images are identical:
import sys
import numpy
from PIL import Image
from scipy import ndimage
img = Image.open(sys.argv[1]).convert('L') # Input is supposed to be binary.
width, height = img.size
img = img.point(lambda x: 255 if x > 40 else 0) # "Ignore" the JPEG artifacts.
# Dilation
im = numpy.array(img)
im = ndimage.binary_dilation(im, structure=((0, 1, 0), (1, 1, 1), (0, 1, 0)))
im = im.view(numpy.uint8) * 255
Image.fromarray(im).save(sys.argv[2])
# "Other operation"
im = numpy.array(img)
white_pixels = numpy.dstack(numpy.nonzero(im != 0))[0]
for y, x in white_pixels:
    for dy, dx in ((-1,0),(0,-1),(0,1),(1,0)):
        py, px = dy + y, dx + x
        if py >= 0 and px >= 0 and py < height and px < width:
            im[py, px] = 255
Image.fromarray(im).save(sys.argv[3])
