Detect white background on images using Python

Is there a way to tell whether an image has a white background using Python, and what could be a good strategy to get a "percentage of confidence" about this question? The literature on the internet doesn't seem to cover exactly this case and I can't find anything strictly related.
The images I want to analyze are typical e-commerce website product pictures, so they should have a single focused object in the middle and white background only at the borders.
Another piece of information that could be available is the maximum percentage of the image area the object should occupy.

I would go with something like this:
1. Reduce the contrast of the image by making the brightest, whitest pixel something like 240 instead of 255, so that the whites generally found within the image and within parts of the product are no longer pure white.
2. Put a 1-pixel wide white border around your image - that will allow the flood fill in the next step to "flow" all the way around the edge (even if the "product" touches the edges of the frame) and "seep" into the image from all borders/edges.
3. Flood-fill your image starting at the top-left corner (which is necessarily pure white after step 2) and allow a tolerance of 10-20% when matching the white in case the background is off-white or slightly shadowed; the white will flow into your image all around the edges until it reaches the product in the centre.
4. See how many pure white pixels you have now - these are the background ones. The percentage of pure white pixels will give you an indicator of confidence in the image being a product on a whitish background.
I would use ImageMagick from the command line like this:
convert product.jpg +level 5% -bordercolor white -border 1 \
-fill white -fuzz 25% -draw "color 0,0 floodfill" result.jpg
I will put a red border around the following two pictures just so you can see the edges on StackOverflow's white background, and show you the before and after images. Look at the amount of white in the resulting images (there is none in the second one because it didn't have a white background), and also at the shadow under the router to see the effect of the -fuzz.
Before
After
If you want that as a percentage, you can make all non-white pixels black and then calculate the percentage of white pixels like this:
convert product.jpg +level 5% \
-bordercolor white -border 1 \
-fill white -fuzz 25% -draw "color 0,0 floodfill" -shave 1 \
-fuzz 0 -fill black +opaque white -format "%[fx:int(mean*100)]" info:
62
Before
After
ImageMagick has Python bindings so you could do the above in Python - or you could use OpenCV and Python to implement the same algorithm.
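For example, here is a rough sketch of the same four steps with OpenCV in Python. The level and fuzz values are only approximations of the ImageMagick ones, and product.jpg is a placeholder filename:
import cv2 as cv
import numpy as np

img = cv.imread("product.jpg")

# Step 1: compress the dynamic range so in-product whites are no longer
# pure white (roughly what "+level 5%" does)
img = cv.convertScaleAbs(img, alpha=0.9, beta=13)  # 0..255 -> ~13..242

# Step 2: add a 1-pixel white border so the fill can flow all the way round
img = cv.copyMakeBorder(img, 1, 1, 1, 1, cv.BORDER_CONSTANT,
                        value=(255, 255, 255))

# Step 3: flood-fill pure white from the top-left corner with a tolerance
# (FLOODFILL_FIXED_RANGE compares to the seed colour, like "-fuzz")
h, w = img.shape[:2]
mask = np.zeros((h + 2, w + 2), np.uint8)
cv.floodFill(img, mask, (0, 0), (255, 255, 255),
             loDiff=(60, 60, 60), upDiff=(60, 60, 60),
             flags=4 | cv.FLOODFILL_FIXED_RANGE)

# Step 4: the percentage of pure-white pixels is the confidence indicator
white = np.all(img == 255, axis=2).sum()
print(f"white-background confidence: {100 * white / (h * w):.0f}%")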

This question was asked years ago, but I just had a similar task recently. Sharing my answer here might help others who encounter the same task, and having the community look at it might also improve my answer.
import cv2 as cv
import numpy as np

THRESHOLD_INTENSITY = 230

def has_white_background(img):
    # Read image into org_img variable
    org_img = cv.imread(img, cv.IMREAD_GRAYSCALE)
    # cv.imshow('Original Image', org_img)
    # Create a black blank image for the mask
    mask = np.zeros_like(org_img)
    # Create a thresholded image; I set my threshold to 200 as this is the value
    # I found most effective in identifying light-colored objects
    _, thres_img = cv.threshold(org_img, 200, 255, cv.THRESH_BINARY_INV)
    # Find the most significant contours
    contours, hierarchy = cv.findContours(thres_img, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_NONE)
    # Get the outermost contour
    outer_contour = max(contours, key=cv.contourArea)
    # Get the bounding rectangle of the contour
    x, y, w, h = cv.boundingRect(outer_contour)
    # Draw a filled rectangle based on the bounding rectangle onto our mask
    cv.rectangle(mask, (x, y), (x + w, y + h), (255, 255, 255), -1)
    # Invert the mask so that we create a hole for the detected object in our mask
    mask = cv.bitwise_not(mask)
    # Apply the mask to the original image to subtract the object and retain only the bg
    img_bg = cv.bitwise_and(org_img, org_img, mask=mask)
    # If the size of the mask is similar to the size of the image then the bg is not white
    if h == org_img.shape[0] and w == org_img.shape[1]:
        return False
    # Create a np array of the remaining background image
    np_array = np.array(img_bg)
    # Remove the zeroes from the "remaining bg image" so that we don't consider the black part,
    # and find the average intensity of the remaining pixels
    ave_intensity = np_array[np.nonzero(np_array)].mean()
    return ave_intensity > THRESHOLD_INTENSITY
These are the images of the steps from the code above:
Here is the original image. No copyright infringement intended.
(Can't find the URL of the actual image from Unsplash.)
First step is to convert the image to grayscale.
Apply thresholding to the image.
Get the contours of the thresholded image. Drawing the contours is optional.
From the contours, get the values of the outer contour and find its bounding rectangle. Optionally, draw the rectangle on the image so that you'll see whether your assumed threshold value fits the object in the rectangle.
Create a mask out of the bounding rectangle.
Lastly, subtract the mask from the greyscale image. What will remain is the background image minus the mask.
To finally identify whether the background is white, find the average intensity of the background image, excluding the 0 values of the image array. Based on a certain threshold value, categorize it as white or not.
Hope this helps. If you think it can still be improved, or if there are flaws in my solution, please comment below.

One of the most popular image formats is PNG. A PNG image can have a transparent channel (alpha), which often blends in with a white background page. With Pillow it is easy to find out which pixels are transparent.
A good starting point:
from PIL import Image

img = Image.open('image.png')
img = img.convert("RGBA")
pixdata = img.load()

for y in range(img.size[1]):
    for x in range(img.size[0]):
        pixel = pixdata[x, y]
        if pixel[3] == 0:
            # fully transparent...
Or maybe it's enough to check whether the top-left pixel is white:
pixel = pixdata[0, 0]
if pixel[0] == 255 and pixel[1] == 255 and pixel[2] == 255:
    # it's white
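If you also want the "percentage of confidence" asked for in the question, one simple extension of this idea is to sample only the border pixels and report the fraction that are near-white. A minimal sketch, where the cut-off of 250 is an arbitrary assumption:
from PIL import Image

img = Image.open('image.png').convert("RGB")
pixdata = img.load()
w, h = img.size

# Collect every pixel on the outer 1-pixel frame of the image
border = [pixdata[x, y] for x in range(w) for y in (0, h - 1)]
border += [pixdata[0, y] for y in range(h)]
border += [pixdata[w - 1, y] for y in range(h)]

# Count border pixels whose channels are all close to pure white
white = sum(1 for r, g, b in border if min(r, g, b) >= 250)
print(f"{100 * white / len(border):.1f}% of the border is white")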

Related

How to determine if image is mostly black or is mostly white

I'm working on a university project: my task is to recognize an ARTag using Python.
I'm doing great, but there is one task which I do not know how to do.
For starters, I segmented the ARTag into smaller cells, let's say "pixels", but those cells are neither perfectly black nor white. There are mixes of, say, 70% white to 30% black. So I want to determine whether each segmented cell of the tag is mostly black or white. Can you help?
If you want to determine a given segment's color:
Pixels take values from 0 (black) to 255 (white).
Let's call the pixels with values <= 76 "black".
Assuming that you have a segment of an image with many pixels (imgSgt):
import numpy as np

blackPxNum = np.count_nonzero(imgSgt <= 76)  # number of black pixels
whitePxNum = imgSgt.size - blackPxNum
if blackPxNum > whitePxNum:
    """segment is black"""
    pass
To threshold the image:
Take a look at thresholding functions, which allow you to convert a grayscale image (pixel values from 0 to 255) to a binary image (pixel value 0 or 1).
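For instance, a minimal thresholding sketch with OpenCV's cv2.threshold (one common choice; the fixed cut-off of 127 and the file name are assumptions):
import cv2

# Load the segment as a single-channel grayscale image
img_gray = cv2.imread("segment.png", cv2.IMREAD_GRAYSCALE)

# Fixed threshold: pixels above 127 become 255, the rest 0
_, binary = cv2.threshold(img_gray, 127, 255, cv2.THRESH_BINARY)

# Or let Otsu's method pick the threshold automatically
_, binary_otsu = cv2.threshold(img_gray, 0, 255,
                               cv2.THRESH_BINARY + cv2.THRESH_OTSU)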

How to make my code identify the difference between 2 circles (one filled with white and one with black) using Python and Pillow?

I have 2 images,
1- White circle with black stroke
2- Black circle with black stroke
I want to compare both images and identify that both have the same circle but with different filling
I should only use python & pillow
I have already tried several methods like edge detection, but whenever I try to reform the picture for edge detection, the new image appears empty.
from PIL import Image, ImageDraw
import numpy as np
from math import sqrt

# Load image:
input_image = Image.open("input.png")
input_pixels = input_image.load()
width, height = input_image.width, input_image.height

# Create output image
output_image = Image.new("RGB", input_image.size)
draw = ImageDraw.Draw(output_image)

# Convert to grayscale
intensity = np.zeros((width, height))
for x in range(width):
    for y in range(height):
        intensity[x, y] = sum(input_pixels[x, y]) / 3

# Compute convolution between intensity and kernels
for x in range(1, input_image.width - 1):
    for y in range(1, input_image.height - 1):
        magx = intensity[x + 1, y] - intensity[x - 1, y]
        magy = intensity[x, y + 1] - intensity[x, y - 1]

        # Draw in black and white the magnitude
        color = int(sqrt(magx**2 + magy**2))
        draw.point((x, y), (color, color, color))

output_image.save("edge.png")
Expected result: both pictures greyscaled, with only the circle edges marked in white.
Actual result: an empty black image (as if it couldn't see the edges).
Well, if all you want is edge detection in an image, you can try using the Sobel operator or its equivalents.
from PIL import Image, ImageFilter
image = Image.open(r"Circle.png").convert("RGB")
image = image.filter(ImageFilter.FIND_EDGES)
image.save(r"ED_Circle.png")
The above code takes in an input image and converts it into RGB mode (certain images have P mode, which doesn't allow edge detection, so we convert to RGB). Then it finds the edges in it via image.filter(ImageFilter.FIND_EDGES).
Sample Input Image (Black border with black circle):-
Output after processing through python program:-
Sample Image 2 (white circle with black border):-
Output after processing through python program:-
In the above sample, both input images were of the same size and the circles in them were also of the same dimensions; the only difference between the two was that one had a white circle inside a black border, and the other had a black circle inside a black border.
Since the circles were of the same dimensions, passing them through the edge detection process gave us the same results.
NOTE:
In the question, you wanted circle edges in white and the rest of the image in greyscale, which isn't the best choice for edge detection. White and black are inverses of each other, so edges are easily identified if the sample space of the image consists of these two colors. Even then, if you want greyscale instead of black, you can simply change each black pixel of the image to a grey one, or whatever meets your needs.
The results of the above edge detections are the same because the size of the border is negligible. If the border is wider (a stroke), then when the process is done on a white circle with a black border, the edge detection will create more than one white border. You can get around that problem by making the program ignore the inner edges and only take the outermost ones into account.

Is there another way to fill the area outside a rotated image with white color? 'fillcolor' does not work with older versions of Python

I want to rotate a black and white image. I am trying to use the rotate function as follows:
image.rotate(angle, fillcolor=255)
I am required to use older versions of Python and Pillow, and they do not support the 'fillcolor' argument. I cannot upgrade to the newer versions due to certain restrictions, and I cannot use any external libraries.
Is there another way to fill the area outside the rotated image with white color using Pillow?
The rotated image has black in the area outside the rotated part. I want to fill it with white.
Original: Original image
Rotated: Rotated image
You can try compositing the rotated image with a white image via Image.composite() to get rid of the black bars/borders.
from PIL import Image
img = Image.open(r"Image_Path").convert("RGBA")
angle = 30
img = img.rotate(angle)
new_img = Image.new('RGBA', img.size, 'white')
Alpha_Image = Image.composite(img, new_img, img)
Alpha_Image = Alpha_Image.convert(img.mode)
Alpha_Image.show()
The above code takes in an image, converts it into mode RGBA (alpha is required for this process), and then rotates the image by 30 degrees. After that, it creates an empty image object of mode RGBA with the same dimensions as the original image, with each pixel having a default value of 255 in each channel (i.e. pure white for RGB, and full opacity in the context of alpha/transparency). Then it interpolates the original image with this empty one, using the transparency mask of the rotated image. This results in the desired image, where the black bars/edges are replaced by white. In the end we convert the image back to the original mode.
ORIGINAL IMAGE:-
IMAGE AFTER ROTATING 30 DEGREES:-
An awkward option that has always worked for me, seeing as with my tools I always get a light gray "border" around the rotated image that interferes with filling (see the sketch after this list):
1. Add a border on the non-rotated image and use the fill color with that border. The bordering operation is lossless and filling will be exact (and easy).
2. Rotate the bordered image. The seam will now also be correct (but not exact unless you rotate by 45° or 90°).
3. Calculate the size of the rotated border using trigonometry. The result will not be exact (e.g. "131.12 pixels"). Usually you can do this in reverse, starting with an exact border on the rotated image and calculating the border you need to add, adjusting the border width so that the non-rotated border is exact. Example: with a rotated border of 170 pixels you get a non-rotated border of 140.3394 pixels. So you use a 510-pixel rotated border, resulting in the need to add a 421.018-pixel non-rotated border, which is close enough to 421 pixels to be acceptable.
4. Remove the rotated border.
This also helps avoid some artefacts near the cut parts of the image that fall off the rotated image. It has the drawback that you end up with a more massive rotation, with higher memory expenditure and computation time, especially if you use larger borders to increase precision.
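A minimal sketch of those steps with Pillow, using the example numbers above (the input file name and rotation angle are placeholders, and the geometry is only approximate):
from PIL import Image, ImageOps

img = Image.open("input.png")

pad = 421    # non-rotated border to add (approximated from 421.018)
cut = 510    # rotated border to remove afterwards

# 1. add an exactly filled white border around the unrotated image
bordered = ImageOps.expand(img, border=pad, fill="white")

# 2. rotate the bordered image; the black fill only reaches the far
#    corners of the enlarged canvas, which are cropped off below
rotated = bordered.rotate(30, expand=True)

# 3./4. remove the rotated border again
w, h = rotated.size
result = rotated.crop((cut, cut, w - cut, h - cut))
result.save("output.png")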
Edit: As no external libraries are allowed, I would suggest cropping the rectangle you want and pasting it onto the original image. This can be done with magic numbers (the rectangle's coordinates). It works for me, though you might need to tweak it a little:
from PIL import Image

im = Image.open("mFul4.png")
rotated = im.rotate(105)
box = (55, 65, 200, 210)
d = rotated.crop(box=box)
im.paste(d, box=box)
im.save("ex.bmp")
and the output
Edit 2: This is the ugliest way, but it works. You might need to tweak the magic numbers a bit to make it more precise; I was working on your given image, so I couldn't tell when I was overdoing it. It produces the same output.
from PIL import Image

im = Image.open("mFul4.png")
angle = 105
cos = 0.240959049  # -cos(angle)
d = im.rotate(angle)
pix = d.load()
tri_x = 120
for i in range(4):  # 4 triangles
    for j in range(tri_x, -1, -1):
        for k in range(int((tri_x - j) * cos) + 1, -1, -1):
            x, y = (j, k) if i < 1 else (d.size[0] - j - 1, d.size[1] - k - 1)
            if i in [2, 3]:
                y, x = (d.size[0] - j - 2, k) if i < 3 else (j, d.size[1] - k)
            pix[x, y] = (255, 255, 255, 255)
d.show()

White Spots Appearing in Image Containing Outline

I want my image to look like this.
No Spots Appearing in Purple Region
However, my image looks like this, with white spots sometimes showing up in the area that is supposed to be "outlined."
Spots Appearing
Basically, I coded an eroded version of an image (Eroded) as well as a dilated version (Dilated). If you would like to see the code for those two versions, please let me know and I will add it.
My goal is to make the white regions in the eroded image purple and place these purple eroded letters/numbers inside of the dilated letters/numbers. The onechannel function only displays a specified R/G/B channel of a given image.
def outline():
    red, green, blue = range(3)
    imgD = dilation(chars, 7, 20, 480)
    imgE = erosion(chars, 7, 20, 480)
    imgDOr = imgD.copy()
    imgDcop = onechannel(imgD, 0)
    imgDcop[:, :, 0] = 128
    imgEcop = onechannel(imgE, 2)
    imgEcop[:, :, 2] = 128
    for i in range(0, len(imgD)):
        for j in range(0, len(imgD[0])):
            if imgE[i, j, 0] == 255:
                imgDOr[i, j, 0] = imgDcop[i, j, 0]
                imgDOr[i, j, 1] = imgDcop[i, j, 1]
                imgDOr[i, j, 2] = imgEcop[i, j, 2]
    imageshow(imgDOr)

print(outline())
It's a bug in your erosion function: it does not set the white pixels to 255,255,255. If you inspect the RGB of the eroded image you posted, you will see that the first channel of the white areas has values ranging from 250 to 255, and the grayish edges start from 239,239,239. You need to either fix the erosion function to strictly set all white areas to an absolute 255,255,255, or relax the condition in your outline function from if imgE[i,j,0] == 255: to something like if 255 - imgE[i,j,0] <= 16:.
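A tiny sketch of that relaxed comparison in isolation, with the tolerance of 16 suggested above:
WHITE_TOLERANCE = 16  # tolerance suggested above

def is_white(value):
    # Treat near-white channel values (within the tolerance) as white
    return 255 - value <= WHITE_TOLERANCE

print(is_white(255), is_white(250), is_white(230))  # True True False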

Change the colors within certain range to another color using OpenCV

I want to change the brown areas to RED (or another color).
I just don't know how to get the ranges for brown and put them in Python code.
I know how to change a single color, but not a range of colors.
Any Ideas?
Thanks
This should give you an idea - it is pretty well commented:
#!/usr/local/bin/python3
import cv2 as cv
import numpy as np
# Load the aerial image and convert to HSV colourspace
image = cv.imread("aerial.png")
hsv=cv.cvtColor(image,cv.COLOR_BGR2HSV)
# Define lower and upper limits of what we call "brown"
brown_lo=np.array([10,0,0])
brown_hi=np.array([20,255,255])
# Mask image to only select browns
mask=cv.inRange(hsv,brown_lo,brown_hi)
# Change image to red where we found brown
image[mask>0]=(0,0,255)
cv.imwrite("result.png",image)
How did I determine the limits for "brown"? I located a brown area in the image and cropped it out to remove everything else. Then I resized it to 1x1 to average all the shades of brown in that area, and converted it to HSV colourspace. I printed that and took the value for Hue, which was 15, and went +/-5 to give a range of 10-20. Increase the range to 8-22 to select a wider range of hues.
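If you prefer to derive the range programmatically, the same "average a patch down to 1x1" trick can be done with OpenCV. A sketch, where the crop coordinates are placeholders for a brown area you pick yourself:
import cv2 as cv
import numpy as np

image = cv.imread("aerial.png")

# Crop a patch you know is brown (coordinates are placeholders)
patch = image[200:260, 300:360]

# Resize to 1x1 to average all the shades, then convert that pixel to HSV
mean_bgr = cv.resize(patch, (1, 1), interpolation=cv.INTER_AREA)
hue = int(cv.cvtColor(mean_bgr, cv.COLOR_BGR2HSV)[0, 0, 0])

# Build the inRange limits as hue +/- 5, as described above
brown_lo = np.array([hue - 5, 0, 0])
brown_hi = np.array([hue + 5, 255, 255])
print("hue:", hue, "range:", brown_lo[0], "-", brown_hi[0])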
HSV/HSL colourspace is described on Wikipedia here.
Keywords: Image processing, Python, OpenCV, inRange, range of colours, prime.
I would like to propose a different approach. However, this will work only for a range of certain dominant colors (red, green, blue and yellow). I am focusing on the red colored regions present in the image in question.
Background:
Here I am using LAB color space where:
L-channel: expresses the brightness in the image
A-channel: expresses variation of color in the image between red and green
B-channel: expresses variation of color in the image between yellow and blue
Since I am interested in the red region, I will choose the A-channel for further processing.
Code:
img = cv2.imread('image_path')
# convert to LAB color space
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
# A-channel
cv2.imshow('A-channel', lab[:,:,1])
If you look at the image closely, the bright regions correspond to the red color in the original image. Now when we threshold it, we can isolate it completely:
th = cv2.threshold(lab[:,:,1],127,255,cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
Using the th image as a mask, we give a different color to the regions that are white in the mask:
# create copy of original image
img1=img.copy()
# highlight white region with different color
img1[th==255]=(255,255,0)
Here are both the images stacked beside each other:
You can normalize the A-channel image to better visualize it:
dst = cv2.normalize(lab[:,:,1], dst=None, alpha=0, beta=255,norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)
In this way, there is no need to look for range in HSV space when working with dominant colors. Exploring the B-channel can help isolate blue and yellow colored regions.
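For example, a minimal sketch of the same idea on the B-channel, here isolating yellowish regions (bluish regions would sit at the dark end of the channel instead, so you would threshold with THRESH_BINARY_INV):
import cv2

img = cv2.imread('image_path')
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)

# B-channel: yellow regions appear bright, blue regions dark
b_channel = lab[:, :, 2]

# Otsu thresholding isolates the yellowish regions
th = cv2.threshold(b_channel, 127, 255,
                   cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]

# Recolor them, as done for the red regions above
img1 = img.copy()
img1[th == 255] = (255, 0, 255)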
