Python OpenCV image editing: Faster way to edit pixels

Using Python (OpenCV, tkinter, etc.) I've created an app (a very amateur one) to change blue pixels to white. The images are high-quality JPGs or PNGs.
The process: Search every pixel of an image and, if the 'b' value of BGR is higher than x, set the pixel to white (255, 255, 255).
The problem: There are about 150 pictures to process at a time, so the above process takes quite a long time. It's around 9-15 seconds per image depending on its size (resizing the image speeds up the process, but that's not ideal).
Here is the code (with GUI and exception handling elements removed for simplicity):
from os import listdir
from cv2 import imread, imwrite

for filename in listdir(sourcefolder):
    # Read image and set variables
    frame = imread(sourcefolder + "/" + filename)
    rows = frame.shape[0]
    cols = frame.shape[1]
    # Search pixels. If blue, set to white.
    for i in range(0, rows):
        for j in range(0, cols):
            if frame.item(i, j, 0) > 155:
                frame.itemset((i, j, 0), 255)
                frame.itemset((i, j, 1), 255)
                frame.itemset((i, j, 2), 255)
    imwrite(sourcecopy + "/" + filename, frame)
    # Release image from memory
    del frame
Any help on increasing efficiency / speed would be greatly appreciated!

Start with this image:
Then use this:
import cv2
im = cv2.imread('a.png')
# Make all pixels where Blue > 150 into white
im[im[...,0]>150] = [255,255,255]
# Save result
cv2.imwrite('result.png', im)

Use cv2.threshold to create a mask from your x threshold value.
Then set the colour like this: img_bgr[mask == 255] = [255, 0, 0]
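A minimal sketch of that approach (the masked pixels are painted white here to match the question's goal, rather than the blue [255, 0, 0] shown above):

import cv2

img_bgr = cv2.imread('a.png')
x = 155  # threshold value from the question

# Mask is 255 wherever the blue channel exceeds x
_, mask = cv2.threshold(img_bgr[..., 0], x, 255, cv2.THRESH_BINARY)

# Set the colour of the masked pixels
img_bgr[mask == 255] = [255, 255, 255]
cv2.imwrite('result.png', img_bgr)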

Related

Using Python Pillow to 'Punch' Out transparency Using Second Image

I am currently trying to use an RGBA image to 'punch out' a hole in another RGBA image, but all my attempts so far have failed to maintain the original transparency. Once I apply an alpha channel using putalpha, it replaces the original alpha channel completely and turns previously transparent pixels back to their original colors.
I am trying to perform a "putalpha" on only the pixels with 100% transparency.
In the photos below I attempt to overlap an 'inverted transparency' alpha channel on top of my circle to perform the 'punch out'. Instead of only applying the transparent pixels, it replaces the entire image's alpha, which turns the rest of the circle image's transparency white.
Is there a way for me to do this transparency "Merge" to achieve an alpha layer that is a composite of both images?
# image2 is a square, image1 is a circle
# swapTransparency is a function I made that works in swapping the transparency:
# it goes pixel by pixel and switches the alpha channel value to max where empty
# and to 0 everywhere else.
# There is probably a better and more effective way to invert the transparency,
# but this works right now and might not even be needed.
def swapTransparency(img):
    datas = img.getdata()
    newData = []
    for item in datas:
        if item[3] == 0:
            newData.append((0, 0, 0, 255))
        else:
            newData.append((255, 255, 255, 0))
    img.putdata(newData)
    return img

# This is putting the alpha channel on top, but it's replacing the entire alpha
# instead of merging them, losing the original circle transparency.
image2 = swapTransparency(image2)
alphaChannel = image2.getchannel('A')
image1.putalpha(alphaChannel)
Image1
Image2
Desired Results
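One possible way to get that composite alpha (a sketch, not an answer from the thread; the filenames are made up) is to take the per-pixel minimum of the two alpha channels, so a pixel ends up transparent if either mask makes it transparent:

from PIL import Image, ImageChops

image1 = Image.open('circle.png').convert('RGBA')  # hypothetical filenames
image2 = Image.open('square.png').convert('RGBA')

a1 = image1.getchannel('A')
a2 = swapTransparency(image2).getchannel('A')  # inverted alpha, as in the question

# darker() keeps the per-pixel minimum, merging the holes from both masks
image1.putalpha(ImageChops.darker(a1, a2))
image1.save('punched.png')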

How To Get The Pixel Count Of A Segmented Area in an Image (I used VGG16 for Segmentation)

I am new to deep learning but have succeeded in semantic segmentation of the image. I am trying to get the pixel count of each class in the label. As an example, in the image I want to get the pixel count of the carpet, the chandelier, or the light stand. How do I go about it? Thanks, any suggestions will help.
Edit: In what format are the regions returned? Do you have only the final image, or are the regions given as contours? If you have them as contours (lists of coordinates), you can apply cv2.contourArea directly on that structure.
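For example (a minimal sketch with a made-up rectangular contour):

import cv2
import numpy as np

# A hypothetical 100x50 axis-aligned rectangle as an OpenCV contour
contour = np.array([[[10, 10]], [[110, 10]], [[110, 60]], [[10, 60]]], dtype=np.int32)
print(cv2.contourArea(contour))  # 5000.0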
If you can receive/sample the regions one by one in an image (but do not have the contour), you can sequentially paint each of the colors/classes into a clear image, either converting it to grayscale or directly painting it in grayscale or binary, or binarizing it with a threshold; then numberPixels = len(cv2.findNonZero(bwImage)). cv2.findContours and cv2.contourArea should do the same.
Instead of rendering each class in a separate image, if your program receives only the final segmentation and not per-class contours, you can filter/mask the regions by color ranges on that image. I built that and it seemed to do the job: 14861 pixels for the pink carpet:
import cv2
import numpy as np
# rgb 229, 0, 178 # the purple carpet in RGB (sampled with IrfanView)
# b,g,r = 178, 0, 229 # cv2 uses BGR
class_color = [178, 0, 229]
multiclassImage = cv2.imread("segmented.png")
cv2.imshow("MULTI", multiclassImage)
filteredImage = multiclassImage.copy()
low = np.array(class_color)
mask = cv2.inRange(filteredImage, low, low)
filteredImage[mask == 0] = [0, 0, 0]
filteredImage[mask != 0] = [255,255,255]
cv2.imshow("FILTER", filteredImage)
# numberPixelsFancier = len(cv2.findNonZero(filteredImage[...,0]))
# That also works and returns 14861 - without conversion, taking one color channel
bwImage = cv2.cvtColor(filteredImage, cv2.COLOR_BGR2GRAY)
cv2.imshow("BW", bwImage)
numberPixels = len(cv2.findNonZero(bwImage))
print(numberPixels)
cv2.waitKey(0)
If you aren't given the color values and/or can't control them, you can use numpy.unique() (https://numpy.org/doc/stable/reference/generated/numpy.unique.html); it will return the unique colors, which can then be fed into the algorithm above.
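A sketch of that idea (reusing segmented.png from above): collapse the image to one row per pixel and take the unique rows, which as a bonus also returns the pixel count per class directly:

import cv2
import numpy as np

img = cv2.imread('segmented.png')

# One BGR triple per pixel, then unique rows = unique class colours
colors, counts = np.unique(img.reshape(-1, 3), axis=0, return_counts=True)
for color, count in zip(colors, counts):
    print(color, count)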
Edit 2: BTW, another way to compute or verify such counts is by calculating histograms. This is with IrfanView on the black-and-white image:
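In code, the same check could look like this (a sketch using cv2.calcHist; 'bw.png' stands in for the black-and-white image above):

import cv2

bw = cv2.imread('bw.png', cv2.IMREAD_GRAYSCALE)

# 256-bin histogram; bin 255 holds the count of pure-white pixels
hist = cv2.calcHist([bw], [0], None, [256], [0, 256])
print(int(hist[255][0]))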

How to efficiently change colors on a lot of images?

I have a huge dataset of images like this:
I would like to change the colors on these. All white should stay white, all purple should turn white and everything else should turn black. The desired output would look like this:
I've made the code below and it does what I want, but it takes way too long to go through the number of pictures I have. Is there another, faster way of doing this?
import os
from PIL import Image

path = r"C:path"
for f in os.listdir(path):
    f_name = os.path.join(path, f)
    if f_name.endswith(".png"):
        im = Image.open(f_name)
        fn, fext = os.path.splitext(f_name)
        print(fn)
        im = im.convert("RGBA")
        for x in range(im.size[0]):
            for y in range(im.size[1]):
                if im.getpixel((x, y)) == (255, 255, 255, 255):
                    im.putpixel((x, y), (255, 255, 255, 255))
                elif im.getpixel((x, y)) == (128, 64, 128, 255):
                    im.putpixel((x, y), (255, 255, 255, 255))
                else:
                    im.putpixel((x, y), (0, 0, 0, 255))
        im.show()
Your images seem to be palettised, as they represent segmentations or labelled classes and there are typically fewer than 256 classes. As such, each pixel is just a label (or class number) and the actual colours are looked up in a 256-element table, i.e. the palette.
Have a look here if you are unfamiliar with palettised images.
So you don't need to iterate over all 12 million pixels; you can instead just iterate over the palette, which is only 256 elements long...
#!/usr/bin/env python3
import sys
import numpy as np
from PIL import Image
# Load image
im = Image.open('image.png')
# Check it is palettised as expected
if im.mode != 'P':
    sys.exit("ERROR: Was expecting a palettised image")
# Get palette and make into Numpy array of 256 entries of 3 RGB colours
palette = np.array(im.getpalette(),dtype=np.uint8).reshape((256,3))
# Name our colours for readability
purple = [128,64,128]
white = [255,255,255]
black = [0,0,0]
# Go through palette, setting purple to white
palette[np.all(palette==purple, axis=-1)] = white
# Go through palette, setting anything not white to black
palette[~np.all(palette==white, axis=-1)] = black
# Apply our modified palette and save
im.putpalette(palette.ravel().tolist())
im.save('result.png')
That takes 290ms including loading and saving the image.
If you have many thousands of images to do, and you are on a decent OS, you can use GNU Parallel. Change the above code to accept a command-line parameter which is the name of the image, save it as recolour.py, then use:
parallel ./recolour.py {} ::: *.png
It will keep all CPU cores on your CPU busy till they are all processed.
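The command-line version might look like this (a sketch of recolour.py; overwriting the image in place is an assumption):

#!/usr/bin/env python3
# Sketch: same palette trick as above, filename taken from the command line
import sys
import numpy as np
from PIL import Image

filename = sys.argv[1]
im = Image.open(filename)
if im.mode != 'P':
    sys.exit(f"ERROR: {filename} is not palettised")

palette = np.array(im.getpalette(), dtype=np.uint8).reshape((256, 3))
palette[np.all(palette == [128, 64, 128], axis=-1)] = [255, 255, 255]  # purple -> white
palette[~np.all(palette == [255, 255, 255], axis=-1)] = [0, 0, 0]      # everything else -> black
im.putpalette(palette.ravel().tolist())
im.save(filename)  # overwrite in place (an assumption)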
Keywords: Image processing, Python, Numpy, PIL, Pillow, palette, getpalette, putpalette, classes, classification, label, labels, labelled image.
If you're open to using NumPy, you can heavily speed up pixel manipulations:
from PIL import Image
import numpy as np
# Open PIL image
im = Image.open('path/to/your/image.png').convert('RGBA')
# Convert to NumPy array
pixels = np.array(im)
# Get logical indices of all white and purple pixels
idx_white = (pixels == (255, 255, 255, 255)).all(axis=2)
idx_purple = (pixels == (128, 64, 128, 255)).all(axis=2)
# Generate black image; set alpha channel to 255
out = np.zeros(pixels.shape, np.uint8)
out[:, :, 3] = 255
# Set white and purple pixels to white
out[idx_white | idx_purple] = (255, 255, 255, 255)
# Convert back to PIL image
im = Image.fromarray(out)
That code generates the desired output, and takes around 1 second on my machine, whereas your loop code needs 33 seconds.
Hope that helps!

How to analyze only a part of an image?

I want to analyse a specific part of an image. As an example, I'd like to focus on the bottom-right 200x200 section and count all the black pixels. So far I have:
im1 = Image.open(path)
rgb_im1 = im1.convert('RGB')
for pixel in rgb_im1.getdata():
Whilst you could do this with cropping and a pair of for loops, that is really slow and not ideal.
I would suggest you use Numpy as it is very commonly available, very powerful and very fast.
Here's a 400x300 black rectangle with a 1-pixel red border:
#!/usr/bin/env python3
import numpy as np
from PIL import Image
# Open the image and make into Numpy array
im = Image.open('image.png')
ni = np.array(im)
# Declare an ROI - Region of Interest as the bottom-right 200x200 pixels
# This is called "Numpy slicing" and is near-instantaneous https://www.tutorialspoint.com/numpy/numpy_indexing_and_slicing.htm
ROI = ni[-200:,-200:]
# Calculate total area of ROI and subtract non-zero pixels to get number of zero pixels
# Numpy.count_nonzero() is highly optimised and extremely fast
black = 200*200 - np.count_nonzero(ROI)
print(f'Black pixel total: {black}')
Sample Output
Black pixel total: 39601
Yes, you can make it shorter, for example:
h, w = 200, 200
ni = np.array(Image.open('image.png'))
black = h*w - np.count_nonzero(ni[-h:, -w:])
If you want to debug it, you can take the ROI and make it into a PIL Image which you can then display. So just use this line anywhere after you make the ROI:
# Display image to check
Image.fromarray(ROI).show()
You can try cropping the Image to the specific part that you want:-
img = Image.open(r"Image_location")
x,y = img.size
img = img.crop((x-200, y-200, x, y))
The above code takes an input image and crops it to its bottom-right 200x200 pixels. (Make sure the image dimensions are more than 200x200, otherwise an error will occur.)
Original Image:-
Image after Cropping:-
You can then use this cropped image to count the number of black pixels, where it depends on your use case what you consider a BLACK pixel (a discrete value like (0, 0, 0), or a range/threshold like (0-15, 0-15, 0-15)).
P.S.: The final image will always have dimensions of 200x200 pixels.
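For example, counting with a range/threshold (a sketch; the cut-off of 15 is just the range mentioned above):

import numpy as np

# img is the 200x200 crop from above; treat a pixel as black when all
# three channels fall below the threshold
arr = np.array(img.convert('RGB'))
black = np.count_nonzero((arr < 15).all(axis=2))
print(f'Black pixels: {black}')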
from PIL import Image

img = Image.open("ImageName.jpg")
crop_area = (a, b, c, d)  # (left, upper, right, lower) box
cropped_img = img.crop(crop_area)

Detect white background on images using python

Is there a way to tell whether an image has a white background using Python, and what could be a good strategy to get a "percentage of confidence" about this question? The literature on the internet doesn't seem to cover exactly this case and I can't find anything strictly related.
The images I want to analyze are typical e-commerce website product pictures, so they should have a single focused object in the middle and a white background only at the borders.
Another piece of information that could be available is the maximum percentage of the image space the object should occupy.
I would go with something like this:
1. Reduce the contrast of the image by making the brightest, whitest pixel something like 240 instead of 255, so that the whites generally found within the image and within parts of the product are no longer pure white.
2. Put a 1-pixel wide white border around your image - that will allow the floodfill in the next step to "flow" all the way around the edge (even if the "product" touches the edges of the frame) and "seep" into the image from all borders/edges.
3. Floodfill your image starting at the top-left corner (which is necessarily pure white after step 2) and allow a tolerance of 10-20% when matching the white, in case the background is off-white or slightly shadowed; the white will flow into your image all around the edges until it reaches the product in the centre.
4. See how many pure white pixels you have now - these are the background ones. The percentage of pure white pixels will give you an indicator of confidence in the image being a product on a whitish background.
I would use ImageMagick from the command line like this:
convert product.jpg +level 5% -bordercolor white -border 1 \
-fill white -fuzz 25% -draw "color 0,0 floodfill" result.jpg
I have put a red border around the following 2 pictures just so you can see the edges on StackOverflow's white background, and to show you the before and after images - look at the amount of white in the resulting images (there is none in the second one because it didn't have a white background) and also at the shadow under the router to see the effect of the -fuzz.
Before
After
If you want that as a percentage, you can make all non-white pixels black and then calculate the percentage of white pixels like this:
convert product.jpg -level 5% \
-bordercolor white -border 1 \
-fill white -fuzz 25% -draw "color 0,0 floodfill" -shave 1 \
-fuzz 0 -fill black +opaque white -format "%[fx:int(mean*100)]" info:
62
Before
After
ImageMagick has Python bindings so you could do the above in Python - or you could use OpenCV and Python to implement the same algorithm.
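A rough OpenCV/Python translation of that pipeline might look like this (a sketch; the 25% fuzz is approximated by a fixed-range tolerance of 64):

import cv2
import numpy as np

img = cv2.imread('product.jpg')

# Step 1: compress the range so nothing in the product stays pure white
img = (img * (240 / 255)).astype(np.uint8)

# Step 2: 1-pixel pure-white border so the fill can flow around the edges
img = cv2.copyMakeBorder(img, 1, 1, 1, 1, cv2.BORDER_CONSTANT,
                         value=(255, 255, 255))

# Step 3: flood-fill white from the top-left corner with a tolerance
cv2.floodFill(img, None, (0, 0), (255, 255, 255),
              loDiff=(64, 64, 64), upDiff=(64, 64, 64),
              flags=4 | cv2.FLOODFILL_FIXED_RANGE)

# Step 4: fraction of pure-white pixels as the confidence score
white = np.count_nonzero((img == 255).all(axis=2))
print(f'{100 * white / (img.shape[0] * img.shape[1]):.0f}% white background')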
This question is years old, but I just had a similar task recently. Sharing my answer here might help others who encounter the same task, and I might also improve my answer by having the community look at it.
import cv2 as cv
import numpy as np

THRESHOLD_INTENSITY = 230

def has_white_background(img):
    # Read image into org_img variable
    org_img = cv.imread(img, cv.IMREAD_GRAYSCALE)
    # cv.imshow('Original Image', org_img)
    # Create a black blank image for the mask
    mask = np.zeros_like(org_img)
    # Create a thresholded image; I set my threshold to 200 as this is the
    # value I found most effective in identifying light-colored objects
    _, thres_img = cv.threshold(org_img, 200, 255, cv.THRESH_BINARY_INV)
    # Find the most significant contours
    contours, hierarchy = cv.findContours(thres_img, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_NONE)
    # Get the outermost contour
    outer_contours_img = max(contours, key=cv.contourArea)
    # Get the bounding rectangle of the contour
    x, y, w, h = cv.boundingRect(outer_contours_img)
    # Draw a rectangle based on the bounding rectangle of the contour onto our mask
    cv.rectangle(mask, (x, y), (x + w, y + h), (255, 255, 255), -1)
    # Invert the mask so that we create a hole for the detected object in our mask
    mask = cv.bitwise_not(mask)
    # Apply the mask to the original image to subtract the object and retain only the bg
    img_bg = cv.bitwise_and(org_img, org_img, mask=mask)
    # If the size of the mask is similar to the size of the image then the bg is not white
    if h == org_img.shape[0] and w == org_img.shape[1]:
        return False
    # Create a np array of the remaining background image
    np_array = np.array(img_bg)
    # Remove the zeroes from the "remaining bg image" so that we don't consider
    # the black part, and find the average intensity of the remaining pixels
    ave_intensity = np_array[np.nonzero(np_array)].mean()
    if ave_intensity > THRESHOLD_INTENSITY:
        return True
    else:
        return False
These are the images of the steps from the code above:
Here is the original image. No copyright infringement intended.
(Can't find the URL of the actual image from Unsplash.)
The first step is to convert the image to grayscale.
Apply thresholding to the image.
Get the contours of the thresholded image. Drawing the contours is optional.
From the contours, get the values of the outer contour and find its bounding rectangle. Optionally draw the rectangle on the image so that you'll see whether your assumed thresholding value fits the object inside the rectangle.
Create a mask out of the bounding rectangle.
Lastly, apply the mask to the greyscale image. What will remain is the background image minus the mask.
To finally identify whether the background is white, find the average intensity values of the background image, excluding the 0 values of the image array. And based on a certain threshold value, categorize it as white or not.
Hope this helps. If you think it can still be improved, or if there are flaws in my solution, please comment below.
The most popular image format is .png. A PNG image can have a transparent color (alpha), which often matches the white background of a page. With Pillow it is easy to find out which pixels are transparent.
A good starting point:
from PIL import Image

img = Image.open('image.png')
img = img.convert("RGBA")
pixdata = img.load()

for y in range(img.size[1]):
    for x in range(img.size[0]):
        pixel = pixdata[x, y]
        if pixel[3] == 0:
            # transparent...
            pass
Or maybe it's enough to check whether the top-left pixel is white:
pixel = pixdata[0, 0]
if pixel[0] == 255 and pixel[1] == 255 and pixel[2] == 255:
    # it's white
    pass
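Extending that idea slightly (my sketch, not part of the answer above): sample every border pixel and report the fraction that is near-white, as a rough confidence value:

from PIL import Image
import numpy as np

arr = np.array(Image.open('image.png').convert('RGB'))

# Stack all four borders into one list of RGB pixels
border = np.concatenate([arr[0], arr[-1], arr[:, 0], arr[:, -1]])

near_white = (border > 240).all(axis=1)  # 240 is an assumed tolerance
print(f'{100 * near_white.mean():.0f}% of border pixels are near-white')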
