How to invert an image when it is only dark? - python

I would like to invert an image when it only has a dark background or theme. For example, when I have a GUI window with a dark theme, I would like to invert its colors to make it light. However, when it has a light theme, just keep it as is.
By dark theme I mean not only black but any dark color (dark blue, for example). I found some solutions that suggest counting zero-valued pixels, but I think that only works for black backgrounds/themes.

Inspired by these posts (post1, post2), I found the solution I needed, and testing it on my data gave 100% accuracy.
The following is the code I used:
import cv2

def is_img_dark(img):
    # blur = cv2.blur(img, (5, 5))  # kernel size depends on image size; you can skip the blur and get exactly the same result
    if cv2.mean(img)[0] > 127:  # grayscale pixel values range over 0-255; 127 lies midway
        return False  # (127-255) denotes a light image
    else:
        return True   # (0-127) denotes a dark image

img_file = "THE PATH OF THE IMAGE"
im = cv2.imread(img_file, 0)  # 0 loads the image as grayscale
if is_img_dark(im):
    im = ~im  # invert colors


The mask I am creating is clipping the image I am trying to paste over it

I am trying to paste an image (noise) on top of a background image (back_eq).
The problem is that when applying the mask (mask = np.uint8(alpha/255)), the mask gets clipped.
This is the original shape I am trying to paste; the white shape should end up on the background (but in black), yet the result comes out clipped.
The problem goes away when, instead of normalizing by 255, we use a smaller value such as 245 or 240 (mask = np.uint8(alpha/240)).
But 255 is the correct normalization. Any suggestion on how to fix the mask while keeping the correct normalization?
import numpy as np
import cv2
import matplotlib.pyplot as plt

noise = cv2.imread("3_noisy.jpg")
noise = cv2.resize(noise, (300, 300), interpolation=cv2.INTER_LINEAR)
alpha = cv2.imread("3_alpha.jpg")
alpha = cv2.resize(alpha, (300, 300), interpolation=cv2.INTER_LINEAR)
back_eq = cv2.imread('Results/back_eq.jpg')
back_eq_crop = cv2.imread('Results/back_eq_crop.jpg')
im_3_tone = cv2.imread('Results/im_3_tone.jpg')

final = back_eq.copy()
back_eq_h, back_eq_w, _ = back_eq.shape
noisy_h, noisy_w, _ = noise.shape
l1 = back_eq_h//2 - noisy_h//2
l2 = back_eq_h//2 + noisy_h//2
l3 = back_eq_w//2 - noisy_w//2
l4 = back_eq_w//2 + noisy_w//2
print(alpha.shape)

# normalizing the values
mask = np.uint8(alpha/255)

# masking back_eq_crop
masked_back_eq_crop = cv2.multiply(back_eq_crop, (1 - mask))
cv2.imshow('as', masked_back_eq_crop)
cv2.waitKey(0)
cv2.destroyAllWindows()

# creating the masked region
mask_to_add = cv2.multiply(im_3_tone, mask)
cv2.imshow('as', mask_to_add)
cv2.waitKey(0)
cv2.destroyAllWindows()

# Combining
masked_image = cv2.add(masked_back_eq_crop, mask_to_add)
final[l1:l2, l3:l4] = masked_image
cv2.imshow('aa', masked_image)
cv2.waitKey(0)
cv2.destroyAllWindows()

plt.figure()
plt.imshow(final[:, :, ::-1]); plt.axis("off"); plt.title("Final Image")
plt.show()
retval = cv2.imwrite("Results/Final Image.jpg", final)
To use a binary mask threshold of 255, you need a properly prepared image - preferably one that is already binary - because 255 means only pure white (#FFFFFF) stays white. Even the lightest gray becomes black.
And in your case, well... the image has antialiasing (the edges are softened), and you're doing scaling in the code. Moreover, your white is not pure white, so there's a hole in the result.
To show it, instead of just talking:
I loaded your mask in GIMP, picked the 'select by colour' tool, disabled antialiasing and turned the threshold down to 0 - everything so that only pure white (#FFFFFF) gets selected, the same as your code does.
Aaaand we see the holes. The tail is already pixelated, same with the hair... and the hole in the face is there. The hole's colour is #FEFEFE - that's 254, which becomes black with a threshold of 255.
The best threshold for such (pseudo) "black-and-white" images is actually near the middle (128). Antialiasing makes the edges blackish-gray or whitish-gray - there are no middle grays - so a middle gray separates the two groups nicely. And your "visually white but not pure white" pixels (plus the similar blacks) fall into those groups as well. Even if you believe your image contains only pure black and pure white, loading it as colour or grayscale gives you 0 and 255 values anyway, so 128 will work. (I don't have access to my old code right now, but I believe I kept my thresholds around 200 when I played with images.)
tl;dr:
Threshold 255 only makes #FFFFFF white, it's never good
your picture has a lot of "visually white but not #FFFFFF white" pixels
there's nothing bad in using lower threshold, even around middle of the range for pseudo black-and-white
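The truncation behind the clipping, and the mid-range fix, can be sketched with a toy alpha strip; the values below are hypothetical, chosen to mimic pure black, antialiased edges, and a 254 "visually white" pixel:

```python
import numpy as np

# Toy alpha strip: black, antialiased edge values, a 254 "white", pure white
alpha = np.array([0, 3, 120, 200, 254, 255], dtype=np.uint8)

# Original normalization: integer truncation keeps only exact 255
mask_255 = np.uint8(alpha / 255)
print(mask_255.tolist())  # [0, 0, 0, 0, 0, 1] -> the 254 pixel is lost

# Mid-range threshold keeps everything visually white
mask_128 = np.uint8(alpha >= 128)
print(mask_128.tolist())  # [0, 0, 0, 1, 1, 1]
```

alpha/255 produces floats like 0.996 for the 254 pixel, and casting to uint8 truncates them to 0 - which is exactly the clipping the question describes.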

White Spots Appearing in Image Containing Outline

I want my image to look like this.
No Spots Appearing in Purple Region
However, my image looks like this, with white spots sometimes showing up in the area that is supposed to be "outlined."
Spots Appearing
Basically, I coded an eroded version of an image Eroded as well as a dilated version Dilated. If you would like to see the code for those two versions, please let me know and I will add it.
My goal is to make the white regions in the eroded image purple and place these purple eroded letters/numbers inside of the dilated letters/numbers. The onechannel function only displays a specified R/G/B channel of a given image.
def outline():
    red, green, blue = range(3)
    imgD = dilation(chars, 7, 20, 480)
    imgE = erosion(chars, 7, 20, 480)
    imgDOr = imgD.copy()
    imgDcop = onechannel(imgD, 0)
    imgDcop[:,:,0] = 128
    imgEcop = onechannel(imgE, 2)
    imgEcop[:,:,2] = 128
    for i in range(0, len(imgD)):
        for j in range(0, len(imgD[0])):
            if imgE[i,j,0] == 255:
                imgDOr[i,j,0] = imgDcop[i,j,0]
                imgDOr[i,j,1] = imgDcop[i,j,1]
                imgDOr[i,j,2] = imgEcop[i,j,2]
    imageshow(imgDOr)

print(outline())
It's a bug in your erosion function: it does not set the white pixels to (255, 255, 255). If you inspect the RGB values of the eroded image you posted, you will see that the first channel of the white areas ranges from 250 to 255, and the grayish edges start at (239, 239, 239). You need to either fix the erosion function so that all white areas are set to exactly (255, 255, 255), or relax the condition in your outline function from if imgE[i,j,0] == 255: to something like if 255 - imgE[i,j,0] <= 16:.
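The relaxed condition can also be applied to the whole image at once with NumPy; the tiny arrays below are hypothetical stand-ins for the poster's images, not the actual data:

```python
import numpy as np

# Hypothetical stand-ins for the eroded image and the output image
imgE = np.array([[[250, 250, 250], [100, 100, 100]],
                 [[255, 255, 255], [239, 239, 239]]], dtype=np.uint8)
imgDOr = np.zeros_like(imgE)

# Relaxed condition applied to the whole first channel at once
near_white = (255 - imgE[:, :, 0].astype(int)) <= 16
imgDOr[near_white] = (128, 0, 128)  # purple fill (illustrative color)

print(near_white.tolist())  # [[True, False], [True, True]]
```

Note that both the 250 and the 239 edge pixels now count as white, which is what closes the spots.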

Change the colors within certain range to another color using OpenCV

I want to change the brown areas to RED (or another color).
I just don't know how to get the ranges for brown and put them into Python code.
I know how to change a single color, but not a range of colors.
Any Ideas?
Thanks
This should give you an idea - it is pretty well commented:
#!/usr/local/bin/python3
import cv2 as cv
import numpy as np
# Load the aerial image and convert to HSV colourspace
image = cv.imread("aerial.png")
hsv=cv.cvtColor(image,cv.COLOR_BGR2HSV)
# Define lower and upper limits of what we call "brown"
brown_lo=np.array([10,0,0])
brown_hi=np.array([20,255,255])
# Mask image to only select browns
mask=cv.inRange(hsv,brown_lo,brown_hi)
# Change image to red where we found brown
image[mask>0]=(0,0,255)
cv.imwrite("result.png",image)
How did I determine the limits for "brown"? I located a brown area in the image and cropped it out to remove everything else. Then I resized it to 1x1 to average all the shades of brown in that area, converted it to HSV colourspace, and printed the Hue value, which was 15; going +/-5 gives a range of 10-20. Increase the range, to say 8-22, to select a wider range of hues.
HSV/HSL colourspace is described on Wikipedia here.
Keywords: Image processing, Python, OpenCV, inRange, range of colours.
I would like to propose a different approach. However, it will only work for a range of certain dominant colors (red, green, blue and yellow). I am focusing on the red regions present in the image in question.
Background:
Here I am using LAB color space where:
L-channel: expresses the brightness in the image
A-channel: expresses variation of color in the image between red and green
B-channel: expresses variation of color in the image between yellow and blue
Since I am interested in the red region, I will choose the A-channel for further processing.
Code:
import cv2

img = cv2.imread('image_path')
# convert to LAB color space
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
# A-channel
cv2.imshow('A-channel', lab[:,:,1])
If you look at the image closely, the bright regions correspond to the red color in the original image. Now when we threshold it, we can isolate it completely:
th = cv2.threshold(lab[:,:,1],127,255,cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
Using the th image as mask, we give a different color to the corresponding regions in white:
# create copy of original image
img1=img.copy()
# highlight white region with different color
img1[th==255]=(255,255,0)
Here are both the images stacked beside each other:
You can normalize the A-channel image to better visualize it:
dst = cv2.normalize(lab[:,:,1], dst=None, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)
In this way, there is no need to look for range in HSV space when working with dominant colors. Exploring the B-channel can help isolate blue and yellow colored regions.

Detect white background on images using python

Is there a way to tell whether an image has a white background using Python, and what could be a good strategy to get a "percentage of confidence" for this question? The literature on the internet doesn't seem to cover exactly this case, and I can't find anything strictly related.
The images I want to analyze are typical e-commerce website product pictures, so they should have a single focused object in the middle and white background only at the borders.
Another information that could be available is the max percentage of image space the object should occupy.
I would go with something like this.
Reduce the contrast of the image by making the brightest, whitest pixel something like 240 instead of 255 so that the whites generally found within the image and within parts of the product are no longer pure white.
Put a 1 pixel wide white border around your image - that will allow the floodfill in the next step to "flow" all the way around the edge (even if the "product" touches the edges of the frame) and "seep" into the image from all borders/edges.
Floodfill your image starting at the top-left corner (which is necessarily pure white after step 2) and allow a tolerance of 10-20% when matching the white, in case the background is off-white or slightly shadowed; the white will flow into your image all around the edges until it reaches the product in the centre.
See how many pure white pixels you have now - these are the background ones. The percentage of pure white pixels will give you an indicator of confidence in the image being a product on a whitish background.
I would use ImageMagick from the command line like this:
convert product.jpg +level 5% -bordercolor white -border 1 \
-fill white -fuzz 25% -draw "color 0,0 floodfill" result.jpg
I will put a red border around the following 2 pictures just so you can see the edges on StackOverflow's white background, and show you the before and after images - look at the amount of white in the resulting images (there is none in the second one because it didn't have a white background) and also at the shadow under the router to see the effect of the -fuzz.
Before
After
If you want that as a percentage, you can make all non-white pixels black and then calculate the percentage of white pixels like this:
convert product.jpg -level 5% \
-bordercolor white -border 1 \
-fill white -fuzz 25% -draw "color 0,0 floodfill" -shave 1 \
-fuzz 0 -fill black +opaque white -format "%[fx:int(mean*100)]" info:
62
Before
After
ImageMagick has Python bindings so you could do the above in Python - or you could use OpenCV and Python to implement the same algorithm.
This question may be years ago but I just had a similar task recently. Sharing my answer here might help others that will encounter the same task too and I might also improve my answer by having the community look at it.
import cv2 as cv
import numpy as np

THRESHOLD_INTENSITY = 230

def has_white_background(img):
    # Read image into org_img variable
    org_img = cv.imread(img, cv.IMREAD_GRAYSCALE)
    # cv.imshow('Original Image', org_img)
    # Create a black blank image for the mask
    mask = np.zeros_like(org_img)
    # Create a thresholded image; I set the threshold to 200 as this is the
    # value I found most effective in identifying light colored objects
    _, thres_img = cv.threshold(org_img, 200, 255, cv.THRESH_BINARY_INV)
    # Find the most significant contours
    contours, hierarchy = cv.findContours(thres_img, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_NONE)
    # Get the outermost contour
    outer_contours_img = max(contours, key=cv.contourArea)
    # Get the bounding rectangle of the contour
    x, y, w, h = cv.boundingRect(outer_contours_img)
    # Draw a filled rectangle based on the bounding rectangle onto our mask
    cv.rectangle(mask, (x, y), (x+w, y+h), (255, 255, 255), -1)
    # Invert the mask so that we create a hole for the detected object
    mask = cv.bitwise_not(mask)
    # Apply the mask to the original image to retain only the background
    img_bg = cv.bitwise_and(org_img, org_img, mask=mask)
    # If the mask is as large as the image, the bg is not white
    if h == org_img.shape[0] and w == org_img.shape[1]:
        return False
    # Create a np array of the remaining background image
    np_array = np.array(img_bg)
    # Remove the zeroes from the remaining bg image so that we don't count
    # the black (masked-out) part, and find the average intensity of the rest
    ave_intensity = np_array[np.nonzero(np_array)].mean()
    if ave_intensity > THRESHOLD_INTENSITY:
        return True
    else:
        return False
These are the images of the steps from the code above:
Here is the Original Image. No copyright infringement intended.
(I can't find the URL of the actual image on Unsplash.)
First step is to convert the image to grayscale.
Apply thresholding to the image.
Get the contours of the thresholded image. Drawing the contours is optional.
From the contours, get the values of the outer contour and find its bounding rectangle. Optionally draw the rectangle to the image so that you'll see if your assumed thresholding value fits the object in the rectangle.
Create a mask out of the bounding rectangle.
Lastly, apply the mask to the grayscale image. What remains is the background image minus the masked region.
To finally identify whether the background is white, find the average intensity of the background image, excluding the 0 values of the image array, and categorize it as white or not based on a threshold value.
Hope this helps. If you think it can still be improved, or if there are flaws in my solution, please comment below.
The most popular image format is .png. A PNG image can have transparency (an alpha channel), which often matches the white background of the page. With Pillow it is easy to find out which pixels are transparent.
A good starting point:
from PIL import Image

img = Image.open('image.png')
img = img.convert("RGBA")
pixdata = img.load()

for y in range(img.size[1]):
    for x in range(img.size[0]):
        pixel = pixdata[x, y]
        if pixel[3] == 0:
            ...  # transparent (an alpha of 0 means fully transparent)
Or maybe it's enough to check whether the top-left pixel is white:

pixel = pixdata[0, 0]
if pixel[0] == 255 and pixel[1] == 255 and pixel[2] == 255:
    ...  # it's white
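For the "percentage of confidence" part of the question, one rough score is the fraction of near-white pixels along the image border; this is a hypothetical scoring sketch, not part of the original answer:

```python
from PIL import Image
import numpy as np

def border_white_fraction(img, tol=10):
    # Fraction of 1-pixel-border pixels within `tol` of pure white
    a = np.asarray(img.convert("RGB"), dtype=np.int32)
    border = np.concatenate([a[0], a[-1], a[:, 0], a[:, -1]])
    return float(np.mean(np.all(255 - border <= tol, axis=1)))

# Hypothetical product image: dark object centered on a white canvas
im = Image.new("RGB", (60, 60), (255, 255, 255))
dark = Image.new("RGB", (20, 20), (30, 30, 30))
im.paste(dark, (20, 20))

print(border_white_fraction(im))  # 1.0 -> every border pixel is white
```

This only samples the borders, so a product that touches the frame edge will lower the score, which matches the e-commerce assumption that the object sits in the middle.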

Python: Combining images using paste with overlapping pixels and areas with alpha channel=0

I am trying to combine three images together. The image I want on the bottom is a 700x900 image with all black pixels. On top of that I want to paste an image that is 400x400 with an offset of 100,200. On top of that I want to paste an image border that is 700x900. The image border has alpha=0 in the inside of it and alpha=0 around it because it doesn't have straight edges. When I run the code I have pasted below I encounter 2 problems:
1) Everywhere on the border image where the alpha channel = 0, the alpha channel has been set to 255 and the color white shows instead of the black background and the image I am putting the border around.
2) The border image's quality has been significantly reduced and looks a lot different than it should.
Also: part of the border image will cover part of the image I am putting the border around, so I can't just switch the order in which I am pasting.
Thanks in advance for any help.
#!/usr/bin/python -tt
from PIL import ImageTk, Image
old_im2 = Image.open('backgroundImage1.jpg') # size = 400x400
old_im = Image.open('topImage.png') # size = 700x900
new_size = (700,900)
new_im = Image.new("RGBA", new_size) # makes the black image
new_im.paste(old_im2, (100, 200))
new_im.paste(old_im,(0,0))
new_im.show()
new_im.save('final.jpg')
I think you have a misconception about images - the border image does have pixels everywhere. It's not possible for it to be "missing" pixels. It is possible to have an image with an alpha channel, which is a channel like the R, G, and B channels, but indicates transparency.
Try this:
1. Make sure that topImage.png has a transparency channel, and that the pixels that you want to be "missing" are transparent (i.e. have an alpha value of 0). You can double-check this way:
print(old_im.mode)  # This should print "RGBA" if it has an alpha channel.
2. Create new_im in "RGBA" mode:
new_im = Image.new("RGBA", new_size) # makes the black image
# Note the "A" --------^
3. Try this paste statement instead:
new_im.paste(old_im, (0,0), mask=old_im)
# Using old_im as the mask argument tells paste() to use old_im's
# alpha channel when combining the two images.
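A minimal self-contained demo of the masked paste; the canvas, photo, and border images below are synthetic stand-ins for the poster's files:

```python
from PIL import Image

# Black RGBA canvas, a solid "photo", and a border layer that is
# opaque red at the edges and fully transparent inside
canvas = Image.new("RGBA", (90, 70))                   # transparent black
photo = Image.new("RGBA", (40, 40), (0, 128, 0, 255))
border = Image.new("RGBA", (90, 70), (0, 0, 0, 0))
for x in range(90):
    for y in range(70):
        if x < 5 or x >= 85 or y < 5 or y >= 65:
            border.putpixel((x, y), (255, 0, 0, 255))

canvas.paste(photo, (10, 20))
canvas.paste(border, (0, 0), mask=border)  # alpha-aware paste

# The photo survives under the transparent interior of the border
print(canvas.getpixel((20, 30)))  # (0, 128, 0, 255)
print(canvas.getpixel((2, 2)))    # (255, 0, 0, 255)
```

Without the mask argument, the second paste would overwrite the whole canvas with the border layer's black transparent interior, which is exactly the symptom described in the question.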
