I have the code below, which I use to detect any discontinuity in railways. The idea is to draw bounding boxes: if there are exactly 2 boxes, everything is okay; if there are more than 2, there is a discontinuity. So, at the end of my code, I print len(contours).
The problem is that in my final image (with blue rectangles), I get the boxes I need (those that cover the railways), but also some very small rectangles I don't want. As a result, print len(contours) tells me there are, for example, 19 rectangles.
My idea is to write a small function that counts only the rectangles with a minimal length or width (so it would count only the rectangles that cover the railways). I am a beginner at Python and have no idea how to code this; can anyone help me, please?
Thank you in advance for your help.
Here is my code:
import numpy as np
import argparse
import cv2
image = cv2.imread('../img.jpg')
frame = cv2.resize(image,(500,500))
"""boundaries = [
([17, 15, 100], [50, 56, 200]),
([86, 31, 4], [220, 88, 50]),
([25, 146, 190], [62, 174, 250]),
([103, 86, 65], [145, 133, 128])]"""
boundaries = [([100, 100, 100], [255, 255, 255])]
for (lower, upper) in boundaries:
    # create NumPy arrays from the boundaries
    lower = np.array(lower, dtype="uint8")
    upper = np.array(upper, dtype="uint8")
    # find the colors within the specified boundaries and apply
    # the mask
    mask = cv2.inRange(frame, lower, upper)
    output = cv2.bitwise_and(frame, frame, mask=mask)
    # show the images
    cv2.namedWindow("images", cv2.WINDOW_NORMAL)
    cv2.resizeWindow("images", 1000, 500)
    cv2.imshow("images", np.hstack([frame, output]))
    cv2.waitKey(0)

gray = cv2.cvtColor(output, cv2.COLOR_BGR2GRAY)
ret, th1 = cv2.threshold(gray, 25, 255, cv2.THRESH_BINARY)
contours, hierarchy = cv2.findContours(th1, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    cv2.rectangle(output, (x, y), (x + w, y + h), (255, 0, 0), 2)
cv2.imshow('image2', output)
cv2.waitKey(0)
print(len(contours))
Here are my results (the left image is the original, the middle image shows only the railways, and the right image shows the detected railways with bounding boxes):
Results when there is no discontinuity (the code says there are 4 rectangles)
Results when there is a discontinuity (the code says there are 19 rectangles)
In your result, we can see many noisy regions. So to get the correct count of rectangles, you should add pre/post-processing steps, such as a morphological operation on the binary image, or removing the noisy regions by area or width-to-height ratio.
In these two cases, a morphological operation is enough, but you can also do more processing on them.
Cropped images:
Situation 1:
Situation 2:
Source code:
import cv2

## read and convert to gray
img = cv2.imread("lines.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

## threshold
th, threshed = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY|cv2.THRESH_OTSU)

## morphological operations: erode to drop small specks, then dilate to restore the railways
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3,3))
morphed = threshed.copy()
morphed = cv2.morphologyEx(morphed, cv2.MORPH_ERODE, kernel, iterations=1)
morphed = cv2.morphologyEx(morphed, cv2.MORPH_DILATE, kernel, iterations=2)

## find contours ([-2] keeps this working across OpenCV 3.x/4.x return signatures)
cnts = cv2.findContours(morphed, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]
print(len(cnts))

## remove noisy regions by area
cnts = list(filter(lambda cnt: cv2.contourArea(cnt) > 100, cnts))
print(len(cnts))
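The question also asked for counting only rectangles above a minimal width or length; a minimal sketch of that filter, continuing from cnts above (the 50-pixel threshold is an assumption to tune for your images):

## keep only contours whose bounding box is at least 50 px wide or tall
big = [cnt for cnt in cnts if max(cv2.boundingRect(cnt)[2:]) > 50]
print(len(big))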
I have an image that I'm trying to extract and transform into a rectangle (regardless of where it sits in the base photo). I have already found the largest contour in the image using a mask (produced by a network trained in PyTorch). The variable pr_mask stores the mask from the network, and from that mask I can extract the contours of my image. So from this image:
I can extract contours like this:
gray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray,(5,5),5)
canny = cv2.Canny(blur, 30, 80, 3)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9,9))
flag, thresh = cv2.threshold(canny, 80, 255, cv2.THRESH_BINARY)
kernel = np.ones((10,10),np.uint8)
dilation = cv2.dilate(canny,kernel,iterations = 1)
ret,thresh = cv2.threshold(dilation, 200, 255, 0)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
After that I fetch largest contour and then trying to extract contours:
approx = np.ones((10,10),np.uint8)
d = 0
while len(approx) > 4:
    d = d + 1
    approx = cv2.approxPolyDP(largestcontour, d, True)
cv2.drawContours(imagebase, [largestcontour], -1, (0,255,0), 3)
hull = []
hull.append(cv2.convexHull(largestcontour, False))
cv2.drawContours(imagebase, hull, -1, (0,0,255), 3)
After that I get this result:
Is it possible to extract the square (rectangle) that will transform my image into an exact square, without rounded corners, and fit the whole image?
I'm trying to find a way to solve this perspective issue and align the element to a specific position. My images are taken by cameras, so the object can be in different positions, where boundingRectangle will not work... any suggestions?
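One common approach for the warping part (a minimal sketch, assuming the approxPolyDP loop above ends with exactly four points in approx; the 400x400 output size is an arbitrary choice):

import numpy as np
import cv2

def order_corners(pts):
    # order 4 points as top-left, top-right, bottom-right, bottom-left
    pts = pts.reshape(4, 2).astype(np.float32)
    s = pts.sum(axis=1)               # top-left has min sum, bottom-right max
    d = np.diff(pts, axis=1).ravel()  # top-right has min (y - x), bottom-left max
    return np.float32([pts[np.argmin(s)], pts[np.argmin(d)],
                       pts[np.argmax(s)], pts[np.argmax(d)]])

src = order_corners(approx)  # approx: the 4-point polygon from the loop above
w, h = 400, 400              # assumed output size
dst = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
M = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(imagebase, M, (w, h))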
I have a foreground mask, like this:
If I use the findContours method, I receive this result:
I want to find only the main contour of the hand, without the noise. How can I do that?
thresh = 100
# get threshold image
ret, thresh_img = cv.threshold(fgMask, thresh, 255, cv.THRESH_BINARY)
# find contours
contours, hierarchy = cv.findContours(thresh_img, cv.RETR_TREE, cv.CHAIN_APPROX_SIMPLE)
# create an empty image for contours
img_contours = np.zeros(img.shape)
# draw the contours on the empty image
cv.drawContours(img_contours, contours, -1, (0, 255, 0), 3)
cv.imwrite("img.png", fgMask)
cv.imwrite("img1.png", img_contours)
Maybe you can try blurring the image before thresholding to reduce the noise.
However, your foreground mask is actually quite poor for this purpose. Since I cannot see what you have done before it, note that it is generally better to use the HSV color space if you want to detect human skin, because:
The color of human skin is created by a combination of blood (red) and
melanin (yellow, brown). Skin colors lie between these two extreme
hues and are somewhat saturated.
HSV: (Hue-Saturation-Value) defined in a way that is similar to how
humans perceive color.
import cv2 as cv
import numpy as np

img_path = "hand.jpg"
img = cv.imread(img_path)
# define the upper and lower boundaries of the HSV pixel intensities
# to be considered 'skin'
hsvim = cv.cvtColor(img, cv.COLOR_BGR2HSV)
lower = np.array([0, 48, 80], dtype="uint8")
upper = np.array([20, 255, 255], dtype="uint8")
skinMask = cv.inRange(hsvim, lower, upper)
# blur the mask to help remove noise
skinMask = cv.blur(skinMask, (2, 2))
# get threshold image
ret, thresh = cv.threshold(skinMask, 100, 255, cv.THRESH_BINARY)
cv.imshow("thresh", thresh)
# find contours and keep only the largest one (the hand)
contours, hierarchy = cv.findContours(thresh, cv.RETR_TREE, cv.CHAIN_APPROX_SIMPLE)
contours = max(contours, key=lambda x: cv.contourArea(x))
cv.drawContours(img, [contours], -1, (255, 255, 0), 2)
cv.imshow("contours", img)
cv.waitKey()
This should give a better result, like:
Code Reference
I have to detect yellow circles in an image using OpenCV and python, as shown in the first image:
Once I detect the yellow circle, I have to highlight it, like so:
I am new to OpenCV, so I was looking for some guidance or help. All help is appreciated
Here's a potential approach:
Convert image to HSV
Find upper/lower color boundaries and create a mask
Find contours and filter using the number of vertices
We convert the image to HSV and then determine lower and upper boundaries to create a mask using cv2.inRange(). This step isolates the yellow objects.
image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
lower = np.array([0, 208, 94], dtype="uint8")
upper = np.array([179, 255, 232], dtype="uint8")
mask = cv2.inRange(image, lower, upper)
Next, to determine the shape, we find contours and filter using the number of vertices. We use cv2.arcLength() and cv2.approxPolyDP() to approximate each contour and obtain its list of vertices. The number of entries in this list tells us the shape of an object: for instance, if the contour has three vertices, it must be a triangle, and if it has four vertices, it must be a square. So for this image, we can assume the shape is a circle if it has more than a certain number of vertices. Here's the result:
import numpy as np
import cv2
image = cv2.imread('1.png')
original = image.copy()
image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
lower = np.array([0, 208, 94], dtype="uint8")
upper = np.array([179, 255, 232], dtype="uint8")
mask = cv2.inRange(image, lower, upper)
# Find contours
cnts = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Extract contours depending on OpenCV version
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
# Iterate through contours and filter by the number of vertices
for c in cnts:
    perimeter = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.04 * perimeter, True)
    if len(approx) > 5:
        cv2.drawContours(original, [c], -1, (36, 255, 12), -1)
cv2.imshow('mask', mask)
cv2.imshow('original', original)
cv2.imwrite('mask.png', mask)
cv2.imwrite('original.png', original)
cv2.waitKey()
I would like to find the contour of the rectangular photograph inside of the object. I've tried using the corner detection feature of OpenCV, but to no avail. I also tried to find all the contours using findContours, and filter out the contours with more (or less) than 4 edges, but this also didn't lead anywhere.
I have a sample scan here.
I have a solution for you, but it involves a lot of steps, and it may not generalize that well. It does work pretty well for your image though.
First a grayscale and threshold is made, and findContours is used to create a mask of the paper area. That mask is inverted and combined with the original image, which makes the black edges white. A new grayscale and threshold is made on the resulting image, which is then inverted so findContours can find the dark pixels of the photo. A rotated box around the largest contour is selected, which is the area you seek.
I added a little extra, which you may not need but could find convenient: a perspective warp is applied to the box, so the area you want is made into a straight rectangle.
There is quite a lot happening, so I advise you to take some time to look at the intermediate steps, to understand what happens.
Result:
Code:
import numpy as np
import cv2
# load image
image = cv2.imread('photo.jpg')
# resize to easily view on screen, remove for final processing
image = cv2.resize(image,None,fx=0.2, fy=0.2, interpolation = cv2.INTER_CUBIC)
### remove outer black edge
# create grayscale
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# perform threshold
retr , mask = cv2.threshold(gray_image, 190, 255, cv2.THRESH_BINARY)
# remove noise
kernel = np.ones((5,5),np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
# create empty mask
mask_2 = np.zeros(image.shape[:3], dtype=image.dtype)
# find contours (note: three return values is the OpenCV 3.x signature)
ret, contours, hier = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# draw the found shapes (white, filled in ) on the empty mask
for cnt in contours:
    cv2.drawContours(mask_2, [cnt], 0, (255,255,255), -1)
# invert mask and combine with original image - this makes the black outer edge white
mask_inv_2 = cv2.bitwise_not(mask_2)
tmp = cv2.bitwise_or(image, mask_inv_2)
### Select photo - not inner edge
# create grayscale
gray_image2 = cv2.cvtColor(tmp, cv2.COLOR_BGR2GRAY)
# perform threshold
retr, mask3 = cv2.threshold(gray_image2, 190, 255, cv2.THRESH_BINARY)
# remove noise
maskX = cv2.morphologyEx(mask3, cv2.MORPH_CLOSE, kernel)
# invert mask, so photo area can be found with findcontours
maskX = cv2.bitwise_not(maskX)
# find contours again (OpenCV 3.x signature)
ret, contours2, hier = cv2.findContours(maskX, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# select the largest contour
largest_area = 0
for cnt in contours2:
    if cv2.contourArea(cnt) > largest_area:
        cont = cnt
        largest_area = cv2.contourArea(cnt)
# find the rectangle (and the cornerpoints of that rectangle) that surrounds the contours / photo
rect = cv2.minAreaRect(cont)
box = cv2.boxPoints(rect)
box = np.int0(box)
print(rect)
#### Warp image to square
# assign cornerpoints of the region of interest
pts1 = np.float32([box[1],box[0],box[2],box[3]])
# provide new coordinates of cornerpoints
pts2 = np.float32([[0,0],[0,450],[630,0],[630,450]])
# determine and apply transformationmatrix
M = cv2.getPerspectiveTransform(pts1,pts2)
result = cv2.warpPerspective(image,M,(630,450))
#draw rectangle on original image
cv2.drawContours(image, [box], 0, (255,0,0), 2)
#show image
cv2.imshow("Result", result)
cv2.imshow("Image", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
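To study the intermediate steps the answer recommends, a few extra display calls can be dropped in just before the final cv2.waitKey(0) (purely a debugging aid; the window names are arbitrary):

# inspect the intermediate masks built above
cv2.imshow("mask of paper area", mask_2)
cv2.imshow("outer edge made white", tmp)
cv2.imshow("inverted photo mask", maskX)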
I am trying to identify cards from a photo. I managed to do what I wanted on ideal photos, but I am now having a hard time applying the same procedure with slightly different lighting, etc. So the question is about making the following contour detection more robust.
I need to share a big part of my code so that readers can reproduce the images of interest, but my question relates only to the last block and image.
import numpy as np
import cv2
from matplotlib import pyplot as plt
from mpl_toolkits.axes_grid1 import ImageGrid
import math
img = cv2.imread('image.png')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.imshow(img)
Then the cards are detected:
# Prepocess
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray,(1,1),1000)
flag, thresh = cv2.threshold(blur, 120, 255, cv2.THRESH_BINARY)
# Find contours
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
contours = sorted(contours, key=cv2.contourArea,reverse=True)
# Select long perimeters only
perimeters = [cv2.arcLength(contours[i],True) for i in range(len(contours))]
listindex=[i for i in range(15) if perimeters[i]>perimeters[0]/2]
numcards=len(listindex)
# Show image
imgcont = img.copy()
[cv2.drawContours(imgcont, [contours[i]], 0, (0,255,0), 5) for i in listindex]
plt.imshow(imgcont)
The perspective is corrected:
#plt.rcParams['figure.figsize'] = (3.0, 3.0)
warp = [None] * numcards
for i in range(numcards):
    card = contours[i]
    peri = cv2.arcLength(card, True)
    approx = cv2.approxPolyDP(card, 0.02*peri, True)
    rect = cv2.minAreaRect(contours[i])
    r = cv2.boxPoints(rect)
    h = np.array([[0,0], [399,0], [399,399], [0,399]], np.float32)
    approx = np.array([item for sublist in approx for item in sublist], np.float32)
    transform = cv2.getPerspectiveTransform(approx, h)
    warp[i] = cv2.warpPerspective(img, transform, (400,400))
# Show perspective correction
fig = plt.figure(1, (10,10))
grid = ImageGrid(fig, 111,            # similar to subplot(111)
                 nrows_ncols=(4, 4),  # creates a 4x4 grid of axes
                 axes_pad=0.1,        # pad between axes in inches
                 aspect=True,         # keep the aspect ratio of the images
                 )
for i in range(numcards):
    grid[i].imshow(warp[i])  # the ImageGrid object works as a list of axes
That is where I am having my problem. I want to detect the contours of the shapes. The best way I found is using a combination of bilateralFilter and adaptiveThreshold on a gray image:
fig = plt.figure(1, (10,10))
grid = ImageGrid(fig, 111,            # similar to subplot(111)
                 nrows_ncols=(4, 4),  # creates a 4x4 grid of axes
                 axes_pad=0.1,        # pad between axes in inches
                 aspect=True,         # keep the aspect ratio of the images
                 )
for i in range(numcards):
    image2 = cv2.bilateralFilter(warp[i].copy(), 10, 100, 100)
    grey = cv2.cvtColor(image2, cv2.COLOR_BGR2GRAY)
    grey2 = cv2.adaptiveThreshold(grey, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                  cv2.THRESH_BINARY, 31, 6)
    grid[i].imshow(grey2, cmap=plt.cm.binary)
This is very close to what I would like, but how can I improve it to get closed contours in white, and everything else in black?
Why not just use Canny and apply perspective correction after finding the contours (because it seems to blur the edges)? For example, using the small image you provided in your question (the result could be better on a bigger one):
Based on some parts of your code:
import numpy as np
import cv2
import math
img = cv2.imread('image.bmp')
# Prepocess
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
flag, thresh = cv2.threshold(gray, 120, 255, cv2.THRESH_BINARY)
# Find contours (three return values: OpenCV 3.x signature)
img2, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contours = sorted(contours, key=cv2.contourArea, reverse=True)
# Select long perimeters only
perimeters = [cv2.arcLength(contours[i],True) for i in range(len(contours))]
listindex=[i for i in range(15) if perimeters[i]>perimeters[0]/2]
numcards=len(listindex)
card_number = -1 #just so happened that this is the worst case
stencil = np.zeros(img.shape).astype(img.dtype)
cv2.drawContours(stencil, [contours[listindex[card_number]]], 0, (255, 255, 255), cv2.FILLED)
res = cv2.bitwise_and(img, stencil)
cv2.imwrite("out.bmp", res)
canny = cv2.Canny(res, 100, 200)
cv2.imwrite("canny.bmp", canny)
First, remove everything except a single card for simplicity, then apply Canny edge detector:
Then you can dilate/erode, correct perspective, remove the largest contour etc.
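For the dilate/erode step, a minimal sketch continuing from the canny image above (kernel size and iteration count are assumptions to tune):

# close small gaps in the detected edges (dilate, then erode)
kernel = np.ones((3, 3), np.uint8)
closed = cv2.morphologyEx(canny, cv2.MORPH_CLOSE, kernel, iterations=2)
cv2.imwrite("canny_closed.bmp", closed)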
Except for the image in the bottom right corner, the following steps should generally work:
Dilate and erode the binary masks to bridge any one- or two-pixel gaps between contour fragments.
Use non-maximum suppression to turn your thick binary masks along the boundary of your shapes into thin edges.
As used earlier in the pipeline, use findContours to identify closed contours. Each contour identified by the method can be tested for being closed. A sketch of steps 1 and 3 follows below.
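For steps 1 and 3, a hedged sketch (the area-to-perimeter ratio as a test for "closed" is my own heuristic, not part of the steps above; the input filename is hypothetical):

import cv2

# hypothetical input: a binary edge image with small gaps
edges = cv2.imread("edges.png", cv2.IMREAD_GRAYSCALE)

# step 1: dilate then erode to bridge one- or two-pixel gaps
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
bridged = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

# step 3: find contours; an open fragment encloses almost no area
# relative to its perimeter, while a closed boundary encloses a lot
cnts = cv2.findContours(bridged, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]
closed = [c for c in cnts
          if cv2.arcLength(c, True) > 0
          and cv2.contourArea(c) / cv2.arcLength(c, True) > 2.0]
print(len(closed))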
As a general solution to such problems, I would advise you to try my algorithm to find closed contours around a given point. Check active segmentation with fixation