I'm using OpenCV to detect some coins. First I used a few functions to fill the coin area so I get a solid white circle where each coin is; then I'm trying to use HoughCircles to detect that white circle so I can crop it and send it to a neural network. But HoughCircles is not detecting anything. Any tips on this?
Here is the code:
import numpy as np
import cv2

# Load the image once as grayscale and once in color
gray = cv2.imread('coin25a2.jpg', 0)
color = cv2.imread('coin25a2.jpg', 1)

# Blur, adaptive-threshold, and close to turn each coin into a solid white blob
gray_blur = cv2.GaussianBlur(gray, (15, 15), 0)
thresh = cv2.adaptiveThreshold(gray_blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 11, 1)
kernel = np.ones((3, 3), np.uint8)
closing = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel, iterations=7)

# Try to detect the white circles
circles = cv2.HoughCircles(closing, cv2.HOUGH_GRADIENT, 1, 200, 20, 30, 30, 0)
circles = np.uint16(np.around(circles))
print(circles)

cv2.imshow("a", closing)
cv2.waitKey(0)
The circles variable is not returning any valid (x,y,r).
circles = cv2.HoughCircles(closing,cv2.HOUGH_GRADIENT,1, 200, 20,30,30, 0)
The trailing arguments are being passed positionally, and in the Python bindings the fifth positional argument is the optional circles output array, not param1, so the numbers 20, 30, 30, 0 probably do not land on the parameters you intended (param1, param2, minRadius, maxRadius). In particular, if you mean to set a maximum radius, you need a sensible large positive value there instead of 0.
A better plan is to go with only the default parameters and adjust later.
cv2.HoughCircles(image, method, dp, minDist)
which, for your image, is
cv2.HoughCircles(closing, cv2.HOUGH_GRADIENT, 1, 200)
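Once that returns candidates, tighten the search using keyword arguments so each value unambiguously maps to the parameter you mean. A minimal sketch (the radius bounds here are guesses you would tune to your coin size):

import numpy as np
import cv2

closing = cv2.imread('coin25a2.jpg', 0)  # or reuse the mask computed in the question

# Keyword arguments sidestep the positional-argument trap entirely
circles = cv2.HoughCircles(closing, cv2.HOUGH_GRADIENT, dp=1, minDist=200,
                           param1=50, param2=30, minRadius=20, maxRadius=200)

if circles is not None:  # HoughCircles returns None when nothing is found
    circles = np.uint16(np.around(circles))
    print(circles)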
I want to develop an algorithm that recognizes, from an image, whether the object in it is a spoon, a fork, or a knife. To accomplish this I am thinking of first comparing the ratio of white to black pixels along a line: if the ratio is roughly constant along the ROI (with a tolerance of, say, 30%) I can say it's a knife; if not, I move on to determining whether it's a spoon or a fork.
But my problem starts here: I can't control whether the spoon/fork/knife arrives correctly oriented, and if it comes in rotated, my algorithm needs to be able to check along rotated lines as well. I can already get the angle of rotation, but I don't really know what to do with it. The code I have so far is:
import cv2
import numpy as np

img = cv2.imread('GenericImages/PBL3/fork.jpg')
NC = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, threshold = cv2.threshold(NC, 50, 255, cv2.THRESH_BINARY)
contours, hierarchy = cv2.findContours(threshold, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

# Draw the rotated bounding box of every sufficiently large contour
for c in contours:
    if cv2.contourArea(c) < 1000:
        continue
    rect = cv2.minAreaRect(c)
    box = cv2.boxPoints(rect)
    box = np.intp(box)  # np.int0 is removed in newer NumPy
    cv2.drawContours(img, [box], 0, (0, 191, 255), 2)

cv2.imshow('ROI', img)
cv2.waitKey(0)
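As a rough, untested sketch of what I mean by using the angle: rotating the object upright with the angle that cv2.minAreaRect returns would let the line checks stay axis-aligned, though I'm not sure this is the right approach:

# Untested sketch: deskew with the minAreaRect angle so line checks stay horizontal
(cx, cy), (w, h), angle = rect  # rect comes from cv2.minAreaRect above
M = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
upright = cv2.warpAffine(NC, M, (NC.shape[1], NC.shape[0]))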
I finished a tutorial on OpenCV for finding lanes, and I am trying to apply it to finding a piece of tape on the floor. I got the code running and set the region of interest, but it only finds a few edges of the tape. I think it has to do with the thickness, but I am not 100% sure. Any help would be appreciated.
import cv2
import numpy as np
import matplotlib.pyplot as plt

def canny(image):
    # Grayscale, blur, then edge-detect
    gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    canny = cv2.Canny(blur, 50, 150)
    return canny

def display_lines(image, lines):
    # Draw each detected line segment on a blank image
    line_image = np.zeros_like(image)
    if lines is not None:
        for line in lines:
            x1, y1, x2, y2 = line.reshape(4)
            cv2.line(line_image, (x1, y1), (x2, y2), (255, 0, 0), 10)
    return line_image

def region_of_interest(image):
    # Keep only a triangular region where the tape is expected
    height = image.shape[0]
    polygons = np.array([
        [(200, height), (400, height), (355, 0)]
    ])
    mask = np.zeros_like(image)
    cv2.fillPoly(mask, polygons, 255)
    masked_image = cv2.bitwise_and(image, mask)
    return masked_image

image = cv2.imread('tape3.jpg')
lane_image = np.copy(image)
canny_image = canny(image)
cropped_image = region_of_interest(canny_image)
lines = cv2.HoughLinesP(cropped_image, 2, np.pi/180, 100, np.array([]), minLineLength=40, maxLineGap=5)
line_image = display_lines(lane_image, lines)
combo_image = cv2.addWeighted(lane_image, 0.8, line_image, 1, 1)
# cv2 print image
print(region_of_interest(image))
cv2.imshow("result", combo_image)
cv2.waitKey(0)
This may not answer your original question, but this could be an alternate way to achieve what you're looking for.
I started by thresholding the grayscale of the image to try to isolate the tape.
Then I used OpenCV's findContours to get the segmentation points of each white blob.
The thresholding method I used is sensitive to light and shadow, so you may have to find some other thresholding method if this isn't a workable constraint. If different colored tape is a concern, you can threshold off other values (convert to HSV or LAB and threshold off the H or B channels respectively to look for red).
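A minimal sketch of that threshold-and-contours step (the cutoff of 200 is a guess; tune it to your lighting):

import cv2

img = cv2.imread('tape3.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Isolate bright regions (the tape); 200 is a guessed cutoff
_, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)

# Segmentation points of each white blob
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(img, contours, -1, (0, 255, 0), 2)
cv2.imshow('blobs', img)
cv2.waitKey(0)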
Edit:
If you still want to use HoughLinesP, here's a working example with your picture.
First I applied canny:
Then I used the HoughLinesP function:
I've never used HoughLinesP before, so I'm not sure of the potential pitfalls, but it seems to work. Note that with these parameters it actually creates a bunch of overlapping lines, so you'll have to play around with it a bit.
Relevant Code:
import cv2
import numpy as np

# assuming the question's image
gray = cv2.cvtColor(cv2.imread('tape3.jpg'), cv2.COLOR_BGR2GRAY)

# canny
canned = cv2.Canny(gray, 591, 269)

# dilate so the thin canny edges survive the Hough search
kernel = np.ones((3, 3), np.uint8)
canned = cv2.dilate(canned, kernel, iterations=1)

# hough
lines = cv2.HoughLinesP(canned, rho=1, theta=np.pi/180, threshold=30, minLineLength=10, maxLineGap=20)
Edit 2:
I looked at the documentation for the function, and the third parameter (theta) refers to the angle resolution. I think it might not have worked in your code because you didn't dilate the image after Canny. With a one-degree search resolution it's not hard to imagine missing the very thin line that Canny returns. It might even be worth dilating the lines more than I did in the example, by using a larger kernel or by dilating multiple times.
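For example, a heavier dilation might look like this (the kernel size and iteration count are guesses to tune):

# Thicken the canny edges further before the Hough search
kernel = np.ones((5, 5), np.uint8)
canned = cv2.dilate(canned, kernel, iterations=2)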
I am trying to remove noise in an image and am currently running this code:
import numpy as np
import argparse
import cv2
from skimage import morphology
# Construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
                help="Path to the image")
args = vars(ap.parse_args())
# Load the image, convert it to grayscale, and blur it slightly
image = cv2.imread(args["image"])
cv2.imshow("Image", image)
cv2.imwrite("image.jpg", image)
greenLower = np.array([50, 100, 0], dtype = "uint8")
greenUpper = np.array([120, 255, 120], dtype = "uint8")
green = cv2.inRange(image, greenLower, greenUpper)
#green = cv2.GaussianBlur(green, (3, 3), 0)
cv2.imshow("green", green)
cv2.imwrite("green.jpg", green)
cleaned = morphology.remove_small_objects(green, min_size=64, connectivity=2)
cv2.imshow("cleaned", cleaned)
cv2.imwrite("cleaned.jpg", cleaned)
cv2.waitKey(0)
However, the image does not seem to change from "green" to "cleaned" despite the use of the remove_small_objects function. Why is this, and how do I clean the image up? Ideally I would like to isolate only the image of the cabbage.
My thought process is: after thresholding, remove blobs smaller than 100 pixels, then smooth the image with a blur and fill in the black holes surrounded by white; that is what I did in MATLAB. If anybody could direct me to getting the same results as my MATLAB implementation, that would be greatly appreciated. Thanks for your help.
Edit: I made a few mistakes when changing the code; I've updated it to what it currently is and displayed the three images.
image:
green:
clean:
My goal is to get something like the picture below, from my MATLAB implementation:
Preprocessing
A good idea when you're filtering an image is to lowpass or blur it a bit; that way neighboring pixels become a little more uniform in color, which evens out brighter and darker spots on the image and keeps holes out of your mask.
img = cv2.imread('image.jpg')
blur = cv2.GaussianBlur(img, (15, 15), 2)
lower_green = np.array([50, 100, 0])
upper_green = np.array([120, 255, 120])
mask = cv2.inRange(blur, lower_green, upper_green)
masked_img = cv2.bitwise_and(img, img, mask=mask)
cv2.imshow('', masked_img)
cv2.waitKey()
Colorspace
Currently, you're trying to capture a range of colors at different brightnesses---you want green pixels, regardless of whether they are dark or light. This is much more easily accomplished in the HSV colorspace. Check out my answer here for an in-depth look at the HSV colorspace.
img = cv2.imread('image.jpg')
blur = cv2.GaussianBlur(img, (15, 15), 2)
hsv = cv2.cvtColor(blur, cv2.COLOR_BGR2HSV)
lower_green = np.array([37, 0, 0])
upper_green = np.array([179, 255, 255])
mask = cv2.inRange(hsv, lower_green, upper_green)
masked_img = cv2.bitwise_and(img, img, mask=mask)
cv2.imshow('', masked_img)
cv2.waitKey()
Removing noise in a binary image/mask
The answer provided by ngalstyan shows how to do this nicely with morphology. What you want to do is called opening, which is the combined process of eroding (which more or less just removes everything within a certain radius) and then dilating (which adds back to any remaining objects however much was removed). In OpenCV, this is accomplished with cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel). The tutorials on that page show how it works nicely.
img = cv2.imread('image.jpg')
blur = cv2.GaussianBlur(img, (15, 15), 2)
hsv = cv2.cvtColor(blur, cv2.COLOR_BGR2HSV)
lower_green = np.array([37, 0, 0])
upper_green = np.array([179, 255, 255])
mask = cv2.inRange(hsv, lower_green, upper_green)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
opened_mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
masked_img = cv2.bitwise_and(img, img, mask=opened_mask)
cv2.imshow('', masked_img)
cv2.waitKey()
Filling in gaps
In the above, opening was shown as the method to remove small bits of white from your binary mask. Closing is the opposite operation---removing chunks of black from your image that are surrounded by white. You can do this with the same idea as above, but using cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel). This isn't even necessary after the above in your case, as the mask doesn't have any holes. But if it did, you could close them up with closing. You'll notice my opening step actually removed a small bit of the plant at the bottom. You could actually fill those gaps with closing first, and then opening to remove the spurious bits elsewhere, but it's probably not necessary for this image.
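If your mask did have holes, that close-then-open order might look like this (reusing the same elliptical kernel from above):

# Fill black holes first, then drop small white specks
closed_mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
opened_mask = cv2.morphologyEx(closed_mask, cv2.MORPH_OPEN, kernel)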
Trying out new values for thresholding
You might want to get more comfortable playing around with different colorspaces and threshold levels to get a feel for what will work best for a particular image. It's not complete yet and the interface is a bit wonky, but I have a tool you can use online to try out different thresholding values in different colorspaces; check it out here if you'd like. That's how I quickly found values for your image.
The above problem is solved using cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel), but if somebody wants to use morphology.remove_small_objects to remove areas smaller than a specified size, this answer may be helpful.
The code I used to remove noise from the above image is:
import numpy as np
import cv2
from skimage import morphology
# Load the image, convert it to grayscale, and blur it slightly
image = cv2.imread('im.jpg')
cv2.imshow("Image", image)
#cv2.imwrite("image.jpg", image)
greenLower = np.array([50, 100, 0], dtype = "uint8")
greenUpper = np.array([120, 255, 120], dtype = "uint8")
green = cv2.inRange(image, greenLower, greenUpper)
#green = cv2.GaussianBlur(green, (3, 3), 0)
cv2.imshow("green", green)
cv2.imwrite("green.jpg", green)
imglab = morphology.label(green) # create labels in segmented image
cleaned = morphology.remove_small_objects(imglab, min_size=64, connectivity=2)
img3 = np.zeros((cleaned.shape)) # create array of size cleaned
img3[cleaned > 0] = 255
img3= np.uint8(img3)
cv2.imshow("cleaned", img3)
cv2.imwrite("cleaned.jpg", img3)
cv2.waitKey(0)
The cleaned image is shown below:
To use morphology.remove_small_objects, the blobs must first be labeled. For that I use imglab = morphology.label(green). Labeling works like this: all pixels of the 1st blob are numbered 1, all pixels of the 7th blob are numbered 7, and so on. After the small areas are removed, the remaining blobs' pixel values need to be set to 255 so that cv2.imshow() can display them. For that I create an array img3 of the same size as the cleaned image and use the line img3[cleaned > 0] = 255 to set every pixel whose value is greater than 0 to 255.
It seems what you want to remove is a disconnected group of small blobs.
I think erode() will do a good job of removing them with the right kernel.
Given an nxn kernel, erode moves the kernel through the image and replaces the center pixel by the minimum pixel in the kernel.
Then you can dilate() the resulting image to restore eroded edges of the green part.
Another option would be to use fastNlMeansDenoising.
##### option 1
kernel_size = (5, 5)  # should roughly match the size of the elements you want to remove
kernel_el = cv2.getStructuringElement(cv2.MORPH_RECT, kernel_size)
eroded = cv2.erode(green, kernel_el)   # the third positional argument is dst, not anchor
cleaned = cv2.dilate(eroded, kernel_el)

##### option 2
# green is a single-channel mask, so use the grayscale variant of the denoiser
cleaned = cv2.fastNlMeansDenoising(green, h=10)
I have an image that I'm eroding and dilating like so:
kernel = np.ones((5, 5), np.float32) / 1
eroded_img = cv2.erode(self.inpainted_adjusted_image, kernel, iterations=10)
dilated_img = cv2.dilate(eroded_img, kernel, iterations=10)
Here's the result of the erosion and dilation:
and then I'm taking a threshold of it like so:
self.thresh = cv2.threshold(dilated_img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
But the threshold gives me an unwanted extension that I've marked in the image below (the region above the red line is the unwanted region):
How do I get rid of this unwanted region? Is there a better way to do what I'm doing?
Working with a different type of threshold (an adaptive threshold, which takes local brightness into account) will already get rid of your problem: the adaptive threshold result is what you are looking for.
[EDIT: I have taken the liberty of adding some code on Hough circles. I admit that I have played with the parameters for this single image to get a nice-looking result, though I do not know what kind of accuracy you need for this type of problem.]
import cv2
import numpy as np
import matplotlib.pyplot as plt

img = cv2.imread('image.png', 0)
# Binary threshold at 210. The original flag here was cv2.ADAPTIVE_THRESH_MEAN_C,
# which happens to have the same value as cv2.THRESH_BINARY; cv2.threshold does
# not do adaptive thresholding -- for that, use cv2.adaptiveThreshold.
thresh = cv2.threshold(img, 210, 255, cv2.THRESH_BINARY)[1]
canny = cv2.Canny(thresh, 50, 150)
cimg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)

circles = cv2.HoughCircles(canny, cv2.HOUGH_GRADIENT, 1, 20, param1=50, param2=23, minRadius=0, maxRadius=0)
circles = np.uint16(np.around(circles))
for i in circles[0, :]:
    # draw the outer circle
    cv2.circle(cimg, (i[0], i[1]), i[2], (255, 0, 0), 3)
    # draw the center of the circle
    cv2.circle(cimg, (i[0], i[1]), 2, (0, 0, 255), 3)

titles = ['Original Image', 'Thresholding', 'Canny', 'Hough Circle']
images = [img, thresh, canny, cimg]
for i in range(4):
    plt.subplot(2, 2, i + 1), plt.imshow(images[i], 'gray')
    plt.title(titles[i])
    plt.xticks([]), plt.yticks([])
plt.show()
Let us know if this is not yet sufficient.
From the binary image it would be fairly easy to fit a circle using a Hough transform. Once you have the outer boundary of the circle, I would suggest bleeding the boundary and cropping out the portion that lies outside it.
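A rough sketch of that crop, assuming a circle found with HoughCircles as elsewhere in this thread (the 5 px bleed margin is arbitrary):

import numpy as np
import cv2

img = cv2.imread('image.png', 0)
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 1, 20, param1=50, param2=30)

if circles is not None:
    x, y, r = np.uint16(np.around(circles))[0, 0]
    mask = np.zeros_like(img)
    cv2.circle(mask, (int(x), int(y)), int(r) + 5, 255, -1)  # filled disk, bled outward by 5 px
    cropped = cv2.bitwise_and(img, mask)  # keep only what falls inside the boundary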
Another approach is to adjust your threshold value. It looks like you could get away with that. You might need some morphological operations to get a clean edge; using a disk kernel will help retain the shape to a good extent.
Since your question has been rolled back to its original version, I have attached a solution using flood fill which works on your images.
import numpy as np
import cv2
import matplotlib.pyplot as plt

img = cv2.imread('image.png', 0)
h, w = img.shape[:2]
mask = np.zeros((h + 2, w + 2), np.uint8)  # floodFill needs a mask 2 px larger than the image

gray = cv2.blur(img, (5, 5))
(minVal, maxVal, minLoc, maxLoc) = cv2.minMaxLoc(gray)
print(maxLoc)

# Flood-fill from the brightest pixel with a fixed tolerance of 60
fixed_range = True
connectivity = 4  # or 8
flooded = img.copy()
mask[:] = 0
flags = connectivity
flags |= cv2.FLOODFILL_FIXED_RANGE
cv2.floodFill(flooded, mask, maxLoc, (255, 255, 255), (60,)*3, (60,)*3, flags)

thresh = cv2.threshold(flooded, 250, 255, cv2.THRESH_BINARY)[1]

titles = ['Original Image', 'Blurred', 'Floodfill', 'Threshold']
images = [img, gray, flooded, thresh]
for i in range(4):
    plt.subplot(2, 2, i + 1), plt.imshow(images[i], 'gray')
    plt.title(titles[i])
    plt.xticks([]), plt.yticks([])
plt.show()
I am trying to identify cards from a photo. I managed to do what I wanted on ideal photos, but I am now having a hard time applying the same procedure with slightly different lighting, etc. So the question is about making the following contour detection more robust.
I need to share a big part of my code so readers can reproduce the images of interest, but my question relates only to the last block and image.
import numpy as np
import cv2
from matplotlib import pyplot as plt
from mpl_toolkits.axes_grid1 import ImageGrid
import math

img = cv2.imread('image.png')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # was cvtColor(frame, ...), but 'frame' is undefined here
plt.imshow(img)
Then the cards are detected:
# Prepocess
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray,(1,1),1000)
flag, thresh = cv2.threshold(blur, 120, 255, cv2.THRESH_BINARY)
# Find contours
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
contours = sorted(contours, key=cv2.contourArea, reverse=True)
# Select long perimeters only
perimeters = [cv2.arcLength(contours[i], True) for i in range(len(contours))]
listindex = [i for i in range(15) if perimeters[i] > perimeters[0] / 2]
numcards = len(listindex)
# Show image
imgcont = img.copy()
[cv2.drawContours(imgcont, [contours[i]], 0, (0,255,0), 5) for i in listindex]
plt.imshow(imgcont)
The perspective is corrected:
#plt.rcParams['figure.figsize'] = (3.0, 3.0)
warp = [None] * numcards  # was range(numcards), which is not assignable in Python 3
for i in range(numcards):
    card = contours[i]
    peri = cv2.arcLength(card, True)
    approx = cv2.approxPolyDP(card, 0.02 * peri, True)
    rect = cv2.minAreaRect(contours[i])
    r = cv2.boxPoints(rect)  # was cv2.cv.BoxPoints, the old OpenCV 2 API
    h = np.array([[0, 0], [399, 0], [399, 399], [0, 399]], np.float32)
    approx = np.array([item for sublist in approx for item in sublist], np.float32)
    transform = cv2.getPerspectiveTransform(approx, h)
    warp[i] = cv2.warpPerspective(img, transform, (400, 400))

# Show perspective correction
fig = plt.figure(1, (10, 10))
grid = ImageGrid(fig, 111,            # similar to subplot(111)
                 nrows_ncols=(4, 4),  # creates a 4x4 grid of axes
                 axes_pad=0.1,        # pad between axes in inches
                 aspect=True)         # force equal aspect
for i in range(numcards):
    grid[i].imshow(warp[i])  # the ImageGrid object works as a list of axes
That is where I am having my problem. I want to detect the contour of the shapes. The best way I found is using a combination of bilateralFilter and adaptiveThreshold on a gray image:
fig = plt.figure(1, (10, 10))
grid = ImageGrid(fig, 111,            # similar to subplot(111)
                 nrows_ncols=(4, 4),  # creates a 4x4 grid of axes
                 axes_pad=0.1,        # pad between axes in inches
                 aspect=True)         # force equal aspect
for i in range(numcards):
    image2 = cv2.bilateralFilter(warp[i].copy(), 10, 100, 100)
    grey = cv2.cvtColor(image2, cv2.COLOR_BGR2GRAY)
    # was the old cv2.cv.AdaptiveThreshold API; this is the modern equivalent
    grey2 = cv2.adaptiveThreshold(grey, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                  cv2.THRESH_BINARY, 31, 6)
    grid[i].imshow(grey2, cmap=plt.cm.binary)
This is very close to what I would like, but how can I improve it to get closed contours in white, and everything else in black?
Why not just use Canny and apply perspective correction after finding the contours (since the correction seems to blur the edges)? For example, using the small image you provided in your question (the result could be better on a bigger one):
Based on some parts of your code:
import numpy as np
import cv2
import math

img = cv2.imread('image.bmp')

# Preprocess
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
flag, thresh = cv2.threshold(gray, 120, 255, cv2.THRESH_BINARY)

# Find contours (OpenCV 4 returns two values; OpenCV 3 returned an extra image first)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contours = sorted(contours, key=cv2.contourArea, reverse=True)

# Select long perimeters only
perimeters = [cv2.arcLength(contours[i], True) for i in range(len(contours))]
listindex = [i for i in range(15) if perimeters[i] > perimeters[0] / 2]
numcards = len(listindex)

card_number = -1  # just so happened that this is the worst case
stencil = np.zeros(img.shape).astype(img.dtype)
cv2.drawContours(stencil, [contours[listindex[card_number]]], 0, (255, 255, 255), cv2.FILLED)
res = cv2.bitwise_and(img, stencil)
cv2.imwrite("out.bmp", res)

canny = cv2.Canny(res, 100, 200)
cv2.imwrite("canny.bmp", canny)
First, remove everything except a single card for simplicity, then apply the Canny edge detector:
Then you can dilate/erode, correct the perspective, remove the largest contour, etc.
Except for the image in the bottom right corner, the following steps should generally work:
1. Dilate and erode the binary masks to bridge any one- or two-pixel gaps between contour fragments.
2. Use non-maximum suppression to turn the thick binary masks along the boundaries of your shapes into thin edges.
3. As earlier in the pipeline, use cv2.findContours to identify closed contours; each contour the method identifies can then be tested for closedness.
As a general solution to such problems, I would advise you to try my algorithm for finding closed contours around a given point. Check out active segmentation with fixation.