I would like to know the radius of the red circle on the dart board (where you can score 50).
I converted to HSV and then defined the red color range.
But when I tried to detect circles on my mask image, I didn't get the results I expected. I tried to play with the parameters but couldn't find a working combination.
import cv2
import numpy as np

img = cv2.imread('testImages/testImage3_6.jpg')
if img is None:
    print("There is no image with this name! (maybe a typo)")

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lower_red = np.array([0, 50, 50])
upper_red = np.array([10, 255, 255])
mask = cv2.inRange(hsv, lower_red, upper_red)
result = cv2.bitwise_and(img, img, mask=mask)
edges = cv2.Canny(mask, 1000, 1500)
maskSmoothed = cv2.medianBlur(mask, 5)
circles = cv2.HoughCircles(maskSmoothed, cv2.HOUGH_GRADIENT, 1, 50,
                           param1=20, param2=20, minRadius=0, maxRadius=0)
cmask = cv2.cvtColor(maskSmoothed, cv2.COLOR_GRAY2BGR)
circles = np.uint16(np.around(circles))
for i in circles[0, :]:
    cv2.circle(cmask, (i[0], i[1]), i[2], (0, 255, 0), 1)  # draw the outer circle
    cv2.circle(cmask, (i[0], i[1]), 2, (0, 0, 255), 3)     # draw the center of the circle
cv2.imshow('win', cmask)
cv2.waitKey(0)
Original image
Detected mask and edges
Detected circles on the mask
How could I detect it precisely? I need the radius.
Related
I saw this answer: Python - Finding contours of different colors on an image, but in my case it's difficult to pick color thresholds because of the varying brightness across images.
I have a few plant images in which I am trying to segment the plant. Some plants have good shape and color, but some have a brownish tint to the leaves. How should I segment these images well using OpenCV? I first tried edge detection, and though I did get edges, I couldn't fill the gaps, so I used contour detection instead, but it doesn't work well for the many not-so-well-shaped plants with variation in color.
My current method attempts to find the green color in HSV colorspace. It's easy to define a range for green, but not for the little brownish tints.
import cv2
import matplotlib.pyplot as plt
import numpy as np
import os, glob
def method_1(img):
    # Blur the image
    blur = cv2.GaussianBlur(img, (9, 9), cv2.BORDER_DEFAULT)
    ## convert to hsv
    hsv = cv2.cvtColor(blur, cv2.COLOR_BGR2HSV)
    ## mask of green (36,25,25) ~ (85,255,255)
    mask = cv2.inRange(hsv, (36, 25, 25), (85, 255, 255))
    ## slice the green
    imask = mask > 0
    green = np.zeros_like(img, np.uint8)
    green[imask] = img[imask]
    # apply dilation on the masked image
    kernel = np.ones((7, 7), np.uint8)
    dilated_img = cv2.dilate(green, kernel, iterations=2)
    # Find contours (to fill holes later)
    canvas = dilated_img.copy()  # canvas for plotting contours on
    canvas = cv2.cvtColor(canvas, cv2.COLOR_BGR2GRAY)
    contours, hierarchy = cv2.findContours(canvas, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
    return canvas, contours

def get_mask(maskfolder, file_path, filename, savefile):
    contour_list = []
    img = cv2.imread(file_path)
    rgb_img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    plt.imshow(rgb_img)
    plt.show()
It works fine for images that look like this:
but not for images like this:
Sample results below
I then used a simple contouring method:
def method_2(img):
    blur = cv2.GaussianBlur(img, (5, 5), 0)
    gray = cv2.cvtColor(blur, cv2.COLOR_BGR2GRAY)
    contours, _ = cv2.findContours(gray, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
    return gray, contours

def get_mask(maskfolder, file_path, filename, savefile):
    contour_list = []
    img = cv2.imread(file_path)
    rgb_img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    # METHOD 2
    canvas_2, contours_2 = method_2(rgb_img)
    for cnt in contours_2:
        cv2.drawContours(canvas_2, [cnt], -1, 255, -1)
    plt.imshow(canvas_2, cmap="gray")
    plt.show()
but it returns black images:
You can get rid of varying illumination by normalizing the colors (divide each of the RGB components by their sum). The saturated areas, however, are forever lost.
Then you can classify background vs. foreground by the nearest-neighbor rule, based on a few colors that you pick. If you need an automated solution, the k-nearest-neighbors algorithm can be a start.
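For illustration, here is a minimal sketch of that normalization step, assuming a BGR image loaded with cv2.imread ('plant.jpg' is a placeholder filename):
import cv2
import numpy as np

# Divide each channel by the per-pixel channel sum so that global
# brightness changes cancel out (chromaticity normalization)
img = cv2.imread('plant.jpg').astype(np.float32)  # placeholder filename
channel_sum = img.sum(axis=2, keepdims=True)
channel_sum[channel_sum == 0] = 1                 # avoid division by zero
normalized = img / channel_sum                    # channels now sum to 1 per pixel
cv2.imshow('normalized', (normalized * 255).astype(np.uint8))
cv2.waitKey(0)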
I have tried some variations of the parameters and read a detailed description of their meaning, but I can't seem to detect a simple circle in an image. This is a simplified function I have tried:
def get_a_circles(image):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    circles = cv2.HoughCircles(gray,  # input grayscale image
                               cv2.HOUGH_GRADIENT,
                               2,
                               minDist=5,
                               param1=10, param2=200,
                               minRadius=0,
                               maxRadius=0)
    return circles
which, when run on this image:
img = cv2.imread("step2.jpg")
get_a_circles(img.copy())
returns None.
It does, however, detect the circles in this image:
image
circles found & highlighted
I tried to add some blur to the image that fails, with either
gray = cv2.medianBlur(gray, 5) or gray = cv2.GaussianBlur(gray, (5,5), 0), but it did not improve the results.
Any suggestions on what to try with that image? (It would seem it's an obvious circle)
You need to lower param2 in HoughCircles in Python/OpenCV.
Input:
import cv2
import numpy as np
# Read image
img = cv2.imread('dot.jpg')
hh, ww = img.shape[:2]
# Convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# median filter
gray = cv2.medianBlur(gray, 5)
# get Hough circles
min_dist = int(ww/20)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1, minDist=min_dist, param1=150, param2=10, minRadius=0, maxRadius=0)
print(circles)
# draw circles
result = img.copy()
for circle in circles[0]:
    # draw the circle outline in the output image
    (x, y, r) = circle
    x = int(x)
    y = int(y)
    r = int(r)  # cv2.circle needs an integer radius
    cv2.circle(result, (x, y), r, (0, 0, 255), 1)
# save results
cv2.imwrite('dot_circle.jpg', result)
# show images
cv2.imshow('result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
I'm trying to follow the movement of a part using red dots. I tried white dots and thresholding before, but there was too much reflection from the smartphone I'm using. The plan is to recognize each dot as a contour, find its center, and fill an array with the coordinates of all contour centers for further calculation.
The code is posted below. It recognizes the correct number of dots, but I get a division-by-zero error. Does anyone know what I'm doing wrong?
Image: https://imgur.com/a/GLXGCPP
import cv2
import numpy as np
from matplotlib import pyplot as plt
import imutils
#load image
img = cv2.imread('dot4_red.jpg')
#apply median blur; 15 means a 15x15 pixel smoothing neighborhood
blur = cv2.medianBlur(img,15)
#convert to hsv
hsv = cv2.cvtColor(blur, cv2.COLOR_BGR2HSV)
#color definition
red_lower = np.array([0,0,240])
red_upper = np.array([10,10,255])
#red color mask (sort of thresholding, actually segmentation)
mask = cv2.inRange(hsv, red_lower, red_upper)
#copy the image, since findContours may distort the source image
mask_copy = mask.copy()
#find contours
cnts = cv2.findContours(mask_copy, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
#extract the contour list from the returned tuple
cnts = imutils.grab_contours(cnts)
#count number of contours of specific size
s1 = 500
s2 = 10000
xcnts = []
for cnt in cnts:
    if s1 < cv2.contourArea(cnt) < s2:
        xcnts.append(cnt)
n = len(xcnts)
#pre-allocate array for extraction of centers of contours
s = (n, 2)
array = np.zeros(s)
#fill array of center coordinates
for i in range(0, n):
    cnt = cnts[i]
    moment = cv2.moments(cnt)
    c_x = int(moment["m10"]/moment["m00"])
    c_y = int(moment["m01"]/moment["m00"])
    array[i,:] = [c_x, c_y]
#display image
cv2.namedWindow('image', cv2.WINDOW_NORMAL)
cv2.imshow('image', mask)
cv2.waitKey(0) & 0xFF
cv2.destroyAllWindows()
#print results
print('number of dots, should be 4:', n)
print('array of dot center coordinates:', array)
The problem was the wrong color range. Because of it, there were holes in the mask of the circles, which produced degenerate contours with a zero m00 moment and hence the division by zero. You can choose the correct color range, pre-fill the holes in the mask, or use this code:
import cv2
import numpy as np
#load image
img = cv2.imread('ZayrMep.jpg')
#apply median blur; 15 means a 15x15 pixel smoothing neighborhood
blur = cv2.medianBlur(img,15)
#convert to hsv
hsv = cv2.cvtColor(blur, cv2.COLOR_BGR2HSV)
#color definition
red_lower = np.array([160,210,230])
red_upper = np.array([180,255,255])
#red color mask (sort of thresholding, actually segmentation)
mask = cv2.inRange(hsv, red_lower, red_upper)
connectivity = 4
# Find connected components; label 0 is the background
output = cv2.connectedComponentsWithStats(mask, connectivity, cv2.CV_32S)
# Get the results, dropping the background label
num_labels = output[0] - 1
centroids = output[3][1:]
#print results
print('number of dots, should be 4:', num_labels)
print('array of dot center coordinates:', centroids)
The m00 moment (the area) can be 0 for some shapes, according to the OpenCV documentation. This is probably what is happening here:
Note Since the contour moments are computed using Green formula, you
may get seemingly odd results for contours with self-intersections,
e.g. a zero area (m00) for butterfly-shaped contours.
From: https://docs.opencv.org/3.4/d8/d23/classcv_1_1Moments.html#a8b1b4917d1123abc3a3c16b007a7319b
You need to ensure the area (m00) is not 0 before using it for division.
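For example, a minimal sketch of that guard, wrapped in a small helper (contour_centers is a made-up name, not from the question's code):
import cv2
import numpy as np

def contour_centers(contours):
    # Return the center of each contour, skipping degenerate contours
    # whose area (m00) is zero, since dividing by it would fail
    centers = []
    for cnt in contours:
        moment = cv2.moments(cnt)
        if moment["m00"] == 0:
            continue
        centers.append((int(moment["m10"] / moment["m00"]),
                        int(moment["m01"] / moment["m00"])))
    return np.array(centers)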
I'm a newbie with OpenCV and computer vision, so I humbly ask a question. With a Pi camera I record a video, and in real time I can distinguish blue from other colors (I see blue as white and other colors as black).
I want to measure the length of the base of the area (because I have a white rectangle and a black rectangle). These two rectangles together make up the square frame.
Excerpt of code:
# Take each frame
_, frame = cap.read(0)
# Convert BGR to HSV
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
# define range of blue color in HSV
lower_blue = np.array([80,30,30])
upper_blue = np.array([130,150,210])
# Threshold the HSV image to get only blue colors
mask = cv2.inRange(hsv, lower_blue, upper_blue)
# apply morphological filters
mask = cv2.erode(mask, spotFilter)
mask = cv2.dilate(mask, maskMorph)
# ^Now the mask is a black and white image.
# ^Get height of the black region in this image
# Bitwise-AND mask and original image
res = cv2.bitwise_and(frame,frame, mask= mask)
cv2.imshow('frame',frame)
cv2.imshow('mask',mask)
cv2.imshow('res',res)
Assuming input frames will have "close to rectangle" shapes (where the following code works best), you have to use the findContours function to get the black region's boundary and the boundingRect function to get its dimensions.
import cv2
import numpy as np
from matplotlib import pyplot as plt

mask = cv2.imread('mask.png')  # The mask variable in your code
# plt.imshow(mask)
thresh_min, thresh_max = 127, 255
ret, thresh = cv2.threshold(mask, thresh_min, thresh_max, 0)
# findContours requires a monochrome image.
thresh_bw = cv2.cvtColor(thresh, cv2.COLOR_BGR2GRAY)
# findContours finds contours on bright (white) areas, so invert the image first
thresh_bw_inv = cv2.bitwise_not(thresh_bw)
contours, hierarchy = cv2.findContours(thresh_bw_inv, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# ^Gets all white contours
# ^Gets all white contours
# Find the index of the largest contour
areas = [cv2.contourArea(c) for c in contours]
max_index = np.argmax(areas)
cnt=contours[max_index]
x,y,w,h = cv2.boundingRect(cnt)
#Draw the rectangle on original image here.
cv2.rectangle(mask,(x,y),(x+w,y+h),(0,255,0),2)
plt.imshow(mask)
print("Distances: vertical: %d, horizontal: %d" % (h,w))
I am using Python and OpenCV, and I am now using grabCut() to crop out the object I want. Here is my code:
img = cv2.imread('test.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
mask = np.zeros(img.shape[:2], np.uint8)
bgdModel = np.zeros((1, 65), np.float64)
fgdModel = np.zeros((1, 65), np.float64)
rect = (2,2,630,930)
cv2.grabCut(img,mask,rect,bgdModel,fgdModel,5,cv2.GC_INIT_WITH_RECT)
mask2 = np.where((mask==2)|(mask==0), 0,1).astype('uint8')
img = img*mask2[:,:, np.newaxis]
Afterwards, I try to find the contour with the code below:
imgray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(imgray,127,255,0)
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
And it returns a contours array with length 48. When I draw this out:
First question: how can I get the contour (array) of this grabCut result?
Second question: as you can see, the background color is black. How can I change the background color to white?
Thank you.
First, you need to get the background by subtracting the masked foreground image from the original image. Then change the black background to white (or any color), and finally add the masked foreground back in.
import numpy as np
import cv2
cv2.namedWindow('image', cv2.WINDOW_NORMAL)
#Load the image
imgo = cv2.imread('input.jpg')
height, width = imgo.shape[:2]
#Create a mask holder
mask = np.zeros(imgo.shape[:2], np.uint8)
#Grab Cut the object
bgdModel = np.zeros((1,65), np.float64)
fgdModel = np.zeros((1,65), np.float64)
#Hard coding the rect... The object must lie within this rect.
rect = (10, 10, width-30, height-30)
cv2.grabCut(imgo, mask, rect, bgdModel, fgdModel, 5, cv2.GC_INIT_WITH_RECT)
mask = np.where((mask==2)|(mask==0), 0, 1).astype('uint8')
img1 = imgo*mask[:,:,np.newaxis]
#Get the background
background = imgo - img1
#Change all pixels in the background that are not black to white
background[np.where((background > [0,0,0]).all(axis=2))] = [255,255,255]
#Add the background and the image
final = background + img1
#To be done - smoothing the edges...
cv2.imshow('image', final)
k = cv2.waitKey(0)
if k == 27:
    cv2.destroyAllWindows()
Information taken from the site
https://nxtify.wordpress.com/2015/02/24/image-background-removal-using-opencv-in-python/
If you want a single contour-like boundary, you can run edge detection on the output of grabCut and morphological dilation on the edge image to get a properly connected contour, from which you can get the array of boundary pixels.
For making the background white: all pixels outside your bounding box can be made white by default. For the black pixels inside your bounding box, compare against the corresponding gray level in the original image: if it is black there too, keep it; otherwise make it white. If the original pixel is not black but was made black by grabCut, it is considered background. If a black pixel belongs to the foreground, grabCut never makes it black (in the ideal case). A sketch of this is below.
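Here is a minimal sketch of that background-whitening idea (whiten_background is a hypothetical helper; original, segmented and rect stand for the source image, the grabCut output and the grabCut bounding box):
import cv2
import numpy as np

def whiten_background(original, segmented, rect):
    # segmented is the grabCut output: foreground kept, background black;
    # rect is the (x, y, w, h) bounding box passed to grabCut
    result = segmented.copy()
    x, y, w, h = rect
    # Everything outside the bounding box is background: make it white
    outside = np.ones(original.shape[:2], bool)
    outside[y:y + h, x:x + w] = False
    result[outside] = (255, 255, 255)
    # Inside the box, pixels that grabCut turned black but that are not
    # black in the original image were background, so whiten them too
    gray = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)
    seg_black = (segmented == 0).all(axis=2)
    result[seg_black & (gray > 0)] = (255, 255, 255)
    return result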