Coded in Python. I have the following image, which I classified so that only the regions that were detected keep their original colour. Is there a way I can intensify the pixel colours (make the green... greener)?
Here is the code used to get to this point:
img = cv2.imread("/Volumes/EXTERNAL/ClassifierImageSets/Origional_2.png",1)
mask = cv2.imread("/Users/chrisradford/Documents/School/Masters/RA/Classifier/Python/mask.png",0)
result = cv2.bitwise_and(img,img,mask=mask)
I would convert it to the HSV colorspace and raise the S (saturation) channel to the maximum for the pixels that fall in the "green" range, with this code:
import cv2
img = cv2.imread("D:\\testing\\test.png", 1)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Mask of the pixels considered "green" in HSV
greenMask = cv2.inRange(hsv, (26, 10, 30), (97, 100, 255))
# Replace the saturation channel with the mask: green pixels get S=255, the rest S=0
hsv[:,:,1] = greenMask
back = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
cv2.imshow('test', back)
cv2.waitKey(0)
cv2.destroyAllWindows()
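Note that replacing the whole S channel also desaturates everything outside the mask. If you only want to saturate the green pixels and leave the rest untouched, a small variant like this should work (a sketch, assuming the same mask and file path as above):
import cv2
import numpy as np
img = cv2.imread("D:\\testing\\test.png", 1)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
greenMask = cv2.inRange(hsv, (26, 10, 30), (97, 100, 255))
# Push saturation to 255 only where the mask is set; other pixels keep their original S
hsv[:, :, 1] = np.maximum(hsv[:, :, 1], greenMask)
back = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
cv2.imshow('boosted', back)
cv2.waitKey(0)
cv2.destroyAllWindows()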
If you want, you can instead paint those pixels pure green, like this:
import cv2
img = cv2.imread("D:\\testing\\test.png", 1)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
greenMask = cv2.inRange(hsv, (26, 10, 30), (97, 100, 255))
# Paint every masked pixel pure green (BGR)
img[greenMask == 255] = (0, 255, 0)
cv2.imshow('test', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
It seems that part of the small object near the bottom of the image is also green (or green enough).
I hope this helps you.
I'm trying to detect colorful dots on a white/gray background. The dots are 3 different colors (yellow, purple, blue) of different sizes. Here is the original image:
I converted the image to HSV, found lower and upper bounds for each colour, and then applied contour detection to find the dots. The following code detects most of them:
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('image1_1.png')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lower_yellow = np.array([22,25,219])
upper_yellow = np.array([25,75,225])
lower_purple = np.array([141,31,223])
upper_purple = np.array([143,83,225])
lower_blue = np.array([92,32,202])
upper_blue = np.array([96,36,208])
mask_blue = cv2.inRange(hsv, lower_blue, upper_blue)
mask_purple = cv2.inRange(hsv, lower_purple, upper_purple)
mask_yellow = cv2.inRange(hsv, lower_yellow, upper_yellow)
res_blue = cv2.bitwise_and(img,img, mask=mask_blue)
res_purple = cv2.bitwise_and(img,img, mask=mask_purple)
res_yellow = cv2.bitwise_and(img,img, mask=mask_yellow)
gray_blue = cv2.cvtColor(res_blue, cv2.COLOR_BGR2GRAY)
gray_purple = cv2.cvtColor(res_purple, cv2.COLOR_BGR2GRAY)
gray_yellow = cv2.cvtColor(res_yellow, cv2.COLOR_BGR2GRAY)
_,thresh_blue = cv2.threshold(gray_blue,10,255,cv2.THRESH_BINARY)
_,thresh_purple = cv2.threshold(gray_purple,10,255,cv2.THRESH_BINARY)
_,thresh_yellow = cv2.threshold(gray_yellow,10,255,cv2.THRESH_BINARY)
contours_blue, hierarchy1 = cv2.findContours(thresh_blue, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contours_purple, hierarchy2 = cv2.findContours(thresh_purple, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contours_yellow, hierarchy3 = cv2.findContours(thresh_yellow, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
result = img.copy()
cv2.drawContours(result, contours_blue, -1, (0, 0, 255), 2)
cv2.drawContours(result, contours_purple, -1, (0, 0, 255), 2)
cv2.drawContours(result, contours_yellow, -1, (0, 0, 255), 2)
cv2.imwrite("_allContours.jpg", result)
Here are the detected contours:
The problem is that some of the colored dots are not detected. I understand that by fine-tuning the color ranges (lower and upper bounds) it is possible to detect more dots, but that is very time-consuming and does not generalize to similar images. For example, the following image looks similar to the first image above and has the same colorful dots, but the background is slightly different; when I ran it through the above code, it was not able to detect even one of the dots. Am I on the right track? Is there a more scalable and reliable solution, with less need to tune color parameters, to solve this problem? Here is the other image I tried:
I would suggest simply using adaptiveThreshold in Python/OpenCV:
import cv2
import numpy as np
# read image
img = cv2.imread("dots.png")
# convert img to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# do adaptive threshold on gray image
thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 25, 6)
# write results to disk
cv2.imwrite("dots_thresh.jpg", thresh)
# display it
cv2.imshow("thresh", thresh)
cv2.waitKey(0)
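If you also need the dot locations rather than just the binary image, one possible follow-up (a sketch, not part of the original answer) is to invert the threshold result so the dots are white, then find and area-filter contours:
# Continuing from the code above: invert so the dots become white,
# then find contours and keep those above a small area cutoff
thresh_inv = 255 - thresh
contours, _ = cv2.findContours(thresh_inv, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
dots = [c for c in contours if cv2.contourArea(c) > 5]   # the area cutoff is a guess; tune it for your image
result = img.copy()
cv2.drawContours(result, dots, -1, (0, 0, 255), 2)
cv2.imwrite("dots_contours.jpg", result)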
I need to remove the gray drawing from the image background and only need symbols drawn over it.
Here is my code to do that using morphologyEx, but it did not remove the entire gray drawing in the background.
import cv2
import numpy as np
img_path = "images/new_drawing.png"
img = cv2.imread(img_path)
kernel = np.ones((2,2), dtype=np.uint8)
result = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel, iterations=1)
cv2.imshow('Without background', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
I also tried the following and got the expected result in grayscale, but I was unable to convert it back to BGR.
Here is my code:
import cv2
img = cv2.imread('images/new_drawing.png')
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
med_blur = cv2.medianBlur(gray_img, ksize=3)
_, thresh = cv2.threshold(med_blur, 190, 255, cv2.THRESH_BINARY)
blending = cv2.addWeighted(gray_img, 0.5, thresh, 0.9, gamma=0)
cv2.imshow("blending", blending)
cv2.waitKey(0)
I also used contours to identify the symbols and draw them onto a white image, but the problem is that this also picks up the background drawing, which I don't want.
Input image
Expected output image
Also, the drawing will always be in gray, as in the image.
Please help me out to get a better result.
You are almost there...
Instead of using cv2.inRange to "catch" the non-gray pixels, I suggest using cv2.inRange to catch all the pixels you want to change to white:
mask = cv2.inRange(hsv, (0, 0, 100), (255, 5, 255))
The hue range is irrelevant.
The saturation is close to zero (shades of gray).
The brightness range excludes the black pixels (which you want to keep).
In order to get a nicer solution, I also used the following additional stages:
Build a mask of non-black pixels:
nzmask = cv2.inRange(hsv, (0, 0, 5), (255, 255, 255))
Erode the above mask:
nzmask = cv2.erode(nzmask, np.ones((3,3)))
Apply an AND operation between mask and nzmask:
mask = mask & nzmask
The above stages keep the gray pixels around the black text.
Without the above stages, the black text gets thinner.
The last stage is replacing mask pixels with white:
new_img = img.copy()
new_img[np.where(mask)] = 255
Here is the code:
import numpy as np
import cv2
img_path = "new_drawing.png"
img = cv2.imread(img_path)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (0, 0, 100), (255, 5, 255))
cv2.imshow('mask before and with nzmask', mask)
# Build mask of non black pixels.
nzmask = cv2.inRange(hsv, (0, 0, 5), (255, 255, 255))
# Erode the mask - all pixels around a black pixels should not be masked.
nzmask = cv2.erode(nzmask, np.ones((3,3)))
cv2.imshow('nzmask', nzmask)
mask = mask & nzmask
new_img = img.copy()
new_img[np.where(mask)] = 255
cv2.imshow('mask', mask)
cv2.imshow('new_img', new_img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
Here is one way to do that in Python/OpenCV.
Read the input
Convert to HSV and separate channels
Threshold the saturation channel
Threshold the value channel and invert
Combine the two threshold images as a mask
Apply the mask to the input to write white where the mask is black
Save the result
Input:
import cv2
import numpy as np
# read image
img = cv2.imread('symbols.png')
# convert image to hsv colorspace
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)
# threshold saturation image
thresh1 = cv2.threshold(s, 92, 255, cv2.THRESH_BINARY)[1]
# threshold value image and invert
thresh2 = cv2.threshold(v, 128, 255, cv2.THRESH_BINARY)[1]
thresh2 = 255 - thresh2
# combine the two threshold images as a mask
mask = cv2.add(thresh1,thresh2)
# use mask to remove lines in background of input
result = img.copy()
result[mask==0] = (255,255,255)
# display IN and OUT images
cv2.imshow('IMAGE', img)
cv2.imshow('SAT', s)
cv2.imshow('VAL', v)
cv2.imshow('THRESH1', thresh1)
cv2.imshow('THRESH2', thresh2)
cv2.imshow('MASK', mask)
cv2.imshow('RESULT', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
# save output image
cv2.imwrite('symbols_thresh1.png', thresh1)
cv2.imwrite('symbols_thresh2.png', thresh2)
cv2.imwrite('symbols_mask.png', mask)
cv2.imwrite('symbols_cleaned.png', result)
Saturation channel thresholded:
Value channel thresholded and inverted:
Mask:
Result:
I want to change this background to true black. The background is not pure black; its values are 1, 2, or 3. After using the following code, the background values are very close to black but still not exactly zero, even though the background looks black.
img = cv2.imread("images.bmp")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 0, 255, cv2. THRESH_BINARY)
img[thresh == 5] = 0
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
erosion = cv2.erode(img, kernel, iterations = 1)
cv2.namedWindow('image', cv2.WINDOW_NORMAL)
cv2.imshow("image", erosion)
cv2.waitKey(0)
cv2.destroyAllWindows()
This should fix your problem, to the best of my understanding.
import cv2
gray = cv2.imread(r"brain.png", cv2.IMREAD_GRAYSCALE)
thresh_val = 5
gray[gray < thresh_val] = 0
Besides that, watch out that
ret, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY)
is basically going to set the whole image to 255, since the second argument is the threshold and every pixel above the threshold is set to the third value, which is 255.
Try replacing this:
ret, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY)
img[thresh == 5] = 0
with this:
# threshold to 10% of the maximum
threshold = 0.10 * np.max(img)
img[gray <= threshold] = 0
The issue is that cv2.threshold() does not compute a threshold value for you; it applies the one you pass in. In your code, thresh is therefore already the thresholded image, not a threshold value.
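Put together, a complete version of the suggested fix might look like this (a sketch, keeping the file name and window setup from the question):
import cv2
import numpy as np
img = cv2.imread("images.bmp")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Zero out every pixel whose grayscale value is at or below 10% of the maximum
threshold = 0.10 * np.max(img)
img[gray <= threshold] = 0
cv2.namedWindow('image', cv2.WINDOW_NORMAL)
cv2.imshow("image", img)
cv2.waitKey(0)
cv2.destroyAllWindows()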
I'm trying to set the minimum and maximum HSV values of an image in OpenCV Python, but after running the code all I can see is a blank rectangular box.
import cv2
import sys
import numpy as np
# Load in image
image = cv2.imread('power.jpg')
# Set minimum and max HSV values to display
lower = np.array([0, 209, 0])
upper = np.array([179, 255, 236])
# Create HSV Image and threshold into a range.
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, lower, upper)
output = cv2.bitwise_and(image,image, mask= mask)
# Display output image
cv2.imshow('image',output)
I was able to solve it.
import numpy as np
import cv2
img = cv2.imread( "power.jpg" )
## convert to hsv
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
## mask using the HSV bounds from the question
mask = cv2.inRange(hsv, (0, 209, 0), (179, 255, 236))
bak = img.copy()
# Optionally highlight the masked pixels in red:
#bak[mask > 0] = (0, 0, 255)
# Keep only the masked pixels, on a black background
imask = mask > 0
green = np.zeros_like(img, np.uint8)
green[imask] = img[imask]
## save
cv2.imwrite("image.png", green)
I have an image with a green background, for example:
My purpose is to show everything that is not green.
Here's the code that highlights green:
import cv2
import numpy as np
low_green = np.array([25, 52, 72])
high_green = np.array([102, 255, 255])
while True:
    img = cv2.imread('someimage.jpg')
    img = cv2.resize(img, (900, 650), interpolation=cv2.INTER_CUBIC)
    # convert BGR to HSV
    imgHSV = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # create the Mask
    mask = cv2.inRange(imgHSV, low_green, high_green)
    cv2.imshow("mask", mask)
    cv2.imshow("cam", img)
    cv2.waitKey(10)
And the mask image:
How do I show everything that is black in the mask image?
Here's the code:
import cv2
import numpy as np
low_green = np.array([25, 52, 72])
high_green = np.array([102, 255, 255])
while True:
    img = cv2.imread('someimage.JPG')
    img = cv2.resize(img, (900, 650), interpolation=cv2.INTER_CUBIC)
    # convert BGR to HSV
    imgHSV = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # create the Mask
    mask = cv2.inRange(imgHSV, low_green, high_green)
    # invert the mask
    mask = 255 - mask
    res = cv2.bitwise_and(img, img, mask=mask)
    cv2.imshow("mask", mask)
    cv2.imshow("cam", img)
    cv2.imshow('res', res)
    cv2.waitKey(10)
And the result:
You have the green mask, where white is what is green and black is what isn't.
So you take the inverse of that mask (black becomes white and white becomes black) and apply that mask to your image.
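As a small aside, the inversion can equivalently be written with cv2.bitwise_not, which does the same thing for a uint8 mask:
mask_inv = cv2.bitwise_not(mask)          # same as 255 - mask for a uint8 mask
res = cv2.bitwise_and(img, img, mask=mask_inv)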