I have been getting images like this after edge detection:
I'd like it to connect the edges together into straight-line polygons.
I thought this could be done using findContours with chain approximation, but that doesn't seem to be working well for me.
How can I convert an image like the one above into simple straight-line polygons (that look like skewed triangles, trapezoids, and squares)?
You need to first detect the lines and then construct the contours. You can do that using HoughLines(). There is a short tutorial here.
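For illustration, here is a minimal sketch of that idea using the probabilistic variant cv2.HoughLinesP; the file name and every parameter value are placeholder assumptions that would need tuning:
import cv2
import numpy as np
# Load the edge image as grayscale (file name is a placeholder)
edges = cv2.imread('edges.png', cv2.IMREAD_GRAYSCALE)
# Detect line segments; the threshold, length and gap values need tuning
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 50, minLineLength=20, maxLineGap=10)
# Draw the detected segments onto a blank canvas to close the gaps
canvas = np.zeros_like(edges)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(canvas, (x1, y1), (x2, y2), 255, 1)
# The connected drawing can then be passed to findContours (OpenCV 4.x signature)
contours, hierarchy = cv2.findContours(canvas, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)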
Blur the image, then find the contours.
If the edges are that close together, a simple blurring with something like
import cv2
import numpy as np

def blur_image(image, amount=3):
    '''Blurs the image with a normalized box filter.
    Does not affect the original image.'''
    kernel = np.ones((amount, amount), np.float32) / (amount ** 2)
    return cv2.filter2D(image, -1, kernel)
should connect all the little gaps, and you can do contour detection with that.
If you then want to convert those contours into polygons, you can approximate each contour with a polygon. A great tutorial with code for that is here.
The basic idea behind detecting polygons is running
cv2.findContours(image, cv2.RETR_EXTERNAL,
                 cv2.CHAIN_APPROX_SIMPLE)
This tells OpenCV to compress each contour down to the endpoints of its straight segments (cv2.CHAIN_APPROX_SIMPLE), which effectively gives you polygon vertices; cv2.RETR_EXTERNAL retrieves only the outermost contours.
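As a hedged illustration of the polygon-approximation step, cv2.approxPolyDP can reduce each contour to a few straight-line vertices; the file name, the 0.02 tolerance factor and the OpenCV 4.x return signature are assumptions:
import cv2
# Load the binary edge image (file name is a placeholder)
image = cv2.imread('edges.png', cv2.IMREAD_GRAYSCALE)
contours, hierarchy = cv2.findContours(image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
polygons = []
for cnt in contours:
    # Approximation tolerance proportional to the contour perimeter; tune the 0.02 factor
    epsilon = 0.02 * cv2.arcLength(cnt, True)
    approx = cv2.approxPolyDP(cnt, epsilon, True)
    polygons.append(approx)  # 3 points ~ triangle, 4 ~ trapezoid / square, etc.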
Related
I am trying to calculate the roughness of lines in an image, and believe that cv2.convexHull and cv2.convexityDefects are the way to go. The issue I am running into is that the lines cannot be collected into contours because they go outside the image boundaries. Below are the original image and the edges I was able to form from it. Is there a way to close off the shapes, or to make contours even though they are open? Or is there a better way to find the roughness?
To clarify, these are the functions that I am trying to use currently. I am new to image processing so I have no idea what modules could help here.
cnts = cv2.findContours(edges.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
hull = cv2.convexHull(cnt, returnPoints=True)
hullDefects = cv2.convexHull(cnt, returnPoints=False)
spiralDefects = cv2.convexityDefects(cnt, hullDefects)
I am new to OpenCV, so please bear with me.
Currently, I get contours of both white and black things in my binary image. I only want black contours though (so where the value is 0). Is there some kind of setting I can adjust to get this result? Or can I filter them?
Also: cv.findContours() does return both the contours and a hierarchy. What is hierarchy used for?
And lastly: contours seem to consist of arrays with multiple coordinates. What do those coordinates mean?
cv2.findContours finds all the contours in your image. Some are internal, some are external, some are nested inside other contours.
That is why the method returns multiple sets of coordinates: each contour is an array of the points that trace its outline.
Hierarchy is a vector that contains information about these different levels of contours (external, nested, internal, etc.).
You can, however, set a retrieval mode to filter contours based on hierarchy (for example, cv2.RETR_EXTERNAL keeps only the outermost ones).
Contours never contain information about colour, so you need to filter them in some other way.
I might add that a sensible thing to do is to filter the image before getting contours, so that you find contours only in a mask you create based on the colour or range of colours of your choice (see cv2.inRange).
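For instance, here is a minimal sketch of that masking idea for keeping only the black regions of a binary image; the file name and the OpenCV 4.x return signature are assumptions:
import cv2
import numpy as np
# Load the binary image as grayscale (file name is a placeholder)
img = cv2.imread('binary.png', cv2.IMREAD_GRAYSCALE)
# Keep only the black pixels (value 0); they become white in the mask,
# which matters because findContours treats white as foreground
mask = cv2.inRange(img, np.array([0]), np.array([0]))
contours, hierarchy = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Draw the black-region contours on a colour copy for inspection
vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
cv2.drawContours(vis, contours, -1, (0, 0, 255), 2)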
EDIT: This is a deeper explanation of a question I asked earlier, which is still not solved for me.
I'm currently trying to write some code that can extract data from some uncommon graphs in a book. I scanned the pages of the book, and using OpenCV I would like to detect some features of the graphs in order to convert them into usable data. In the left graph I'm looking for the height of the "triangles", and in the right graph for the distance from the center to the points where the dotted lines intersect the gray area. In both cases I would like to convert these values into numeric data for further use.
For the left graph, I thought of detecting all the individual colors and computing the area of each sector by counting the number of pixels in that color. Once I have the areas of these sectors, I can easily calculate their heights using basic math. The following code snippet shows how far I've gotten with identifying different colors. However, I can't manage to make this work accurately: it always seems to detect some colors of other sectors as well, or to miss some pixels of a sector. I think it has something to do with the boundaries I'm using, but I can't quite figure out how to make them work. Does someone know how I can determine these values?
import numpy as np
import cv2

img = cv2.imread('images/test2.jpg')

# Note: cv2.imread loads images in BGR order, so these bounds are (B, G, R)
lower = np.array([0, 0, 100])
upper = np.array([50, 56, 150])

mask = cv2.inRange(img, lower, upper)
output = cv2.bitwise_and(img, img, mask=mask)

cv2.imshow('img', img)
cv2.imshow('mask', mask)
cv2.imshow('output', output)
cv2.waitKey(0)
cv2.destroyAllWindows()
For the right graph, I still have no idea how to extract data from it. I thought of identifying the center by detecting all the dotted lines, and then by detecting the intersections of these dotted lines with the gray area, I could measure the distance between the center and these intersections. However I couldn't yet figure out how to do this properly, since it sounds quite complex. The following code snippet shows how far I've gotten with the line detection. Also in this case the detection is far from accurate. Does someone have an idea how to tackle this problem?
import numpy as np
import cv2
# Reading the image
img = cv2.imread('test2.jpg')
# Convert the image to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# Apply edge detection
edges = cv2.Canny(gray,50,150,apertureSize = 3)
# Line detection
lines = cv2.HoughLinesP(edges,1,np.pi/180,100,minLineLength=50,maxLineGap=20)
for line in lines:
    x1, y1, x2, y2 = line[0]
    cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)
cv2.imwrite('linesDetected.jpg',img)
For the left image, with your approach, try looking at the RGB histogram first: the sector colours should show up as clear peaks, which you can then use if you want to work with the relative areas of the segments.
Another alternative could be to use the Hough Circle Transform, which should work on the circle segments. See also here.
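If you try the Hough Circle Transform route, a rough, untuned sketch might look like this; the file name and every parameter value are placeholder guesses:
import cv2
import numpy as np
# Load the scanned graph and convert to grayscale (file name is a placeholder)
img = cv2.imread('test2.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)  # reduce scan noise before the transform
# All parameter values below are rough starting points, not tuned settings
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=30, minRadius=10, maxRadius=200)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(img, (x, y), r, (0, 255, 0), 2)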
For the right image ... let me think ...
You could create an "empty" reference diagram with no data inside, since you know the locations of the circle segments ("cake pieces"). Then you could identify the area where the data is (the dark parts), either with a grey threshold, an RGB threshold, findContours, or something like Watershed / Distance Transform.
In the end, the idea is to make a boolean overlay between the cleared reference image and the data segments you found. Then you can identify what share of each circle segment is covered, or, knowing the center, find the farthest data point from it, as in the sketch below.
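Here is a minimal sketch of that overlay idea under loud assumptions: the file name, the threshold value and the hard-coded center are placeholders, and in practice the center would come from your empty reference diagram:
import cv2
import numpy as np
# Load the right-hand graph and convert to grayscale (file name is a placeholder)
img = cv2.imread('test2.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Isolate the dark (data) pixels; the threshold value 100 is a guess to tune
_, data_mask = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY_INV)
# The center should come from the empty reference diagram; hard-coded here
center_x, center_y = 250, 250
# Distance from the center to every data pixel; the maximum is the farthest point
ys, xs = np.nonzero(data_mask)
distances = np.hypot(xs - center_x, ys - center_y)
if distances.size:
    print('farthest data point from the center:', distances.max())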
In the image below, I want to (using Python):
1.) Find the contours of the bones (only the sides will do)
2.) Recognize and draw all the contours.
It could look something like this:
An even better contour would be good. I am not entirely sure how to tackle this.
The gradient of the image is:
An initial way of approaching this would be to use Canny edge detection with the right threshold values and then find the contours.
import cv2
# Load the image
img = cv2.imread("/home/tribta/Desktop/feet.png")
# Find the contours
imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(imgray, 60, 200)
# [-2:] keeps (contours, hierarchy) in both OpenCV 3.x and 4.x
contours, hierarchy = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2:]
hierarchy = hierarchy[0]  # get the actual inner list of hierarchy descriptions
# Draw all the contours that were found
cv2.drawContours(img, contours, -1, (0, 255, 0), 3)
# Finally show the image
cv2.imshow('img',img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Then you could add some biomedical processing criteria to distinguish the different contours and verify whether each one really is a bone.
You should first apply a threshold with cv2.threshold to clean the image of objects you don't want; try different intensity values to catch the borders. Then apply morphological operations such as opening and closing with cv2.morphologyEx to clean the image a bit and fill the holes, and finally apply cv2.findContours and cv2.drawContours to get the final image of the bone contours.
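A minimal, hedged sketch of that pipeline; the file name, the threshold value and the kernel size are placeholders that would all need tuning for the actual scan:
import cv2
import numpy as np
# Load and grayscale the image (file name is a placeholder)
img = cv2.imread('feet.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Threshold to separate the bones from the background; tune the value 127
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
# Opening removes small specks, closing fills small holes; tune the kernel size
kernel = np.ones((5, 5), np.uint8)
cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, kernel)
# Find and draw the remaining contours (OpenCV 4.x return signature)
contours, hierarchy = cv2.findContours(cleaned, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(img, contours, -1, (0, 255, 0), 3)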
Check those commands in the OpenCV documentation.
You will find really good examples on Stack Overflow and around the internet; try adapting your code to them.
Hope this was of help to you.
In another user's post, they discussed how they were having trouble using OpenCV's Hough Circle Transform to detect and draw over each ring of a bull's-eye target here. A proposed solution was to use contours instead to locate each circle. When I attempted the same thing, my result was exactly what I was looking for; the only problem is that the program I'm attempting to create needs a series of circles, one on each ring, so that I can then calculate the distance from the center and therefore the particular ring in which a given point lies.
Following the code in the mentioned post, I have the following:
import cv2
import numpy as np
image = cv2.imread('bull.png')
image_copy = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
grayscaled_image = cv2.cvtColor(image_copy, cv2.COLOR_GRAY2BGR)
cv2.imshow("confirm", grayscaled_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
_,contours,_ = cv2.findContours(255 - image_copy, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
print(contours)
cv2.drawContours(image, contours, -1, color=(0,255,0), thickness=1)
cv2.imshow("detected circles", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
Which gives me the same results as shown in the linked post.
And this is great. I love the progress. However, I initially tried using the same method that the linked poster did because I liked the coordinate and radius information provided by the circles.
I found in one of OpenCV's tutorials that you can contain a contour with a circle, rectangle, or ellipse. The tutorial is rather easy to follow for an individual contour, but in my code, I have no idea what _,contours,_ means. Is it some sort of array? How do I access the individual contours? When I tried printing the variable, it gave me close to 4,000 lines. I won't post them here, of course, but they appeared to be sets of coordinates.
So my question is: How can I go about converting this grouping of contours into more manageable circles?
As I mentioned in your previous question, you can use cv2.minEnclosingCircle on each contour, and you'll get each circle's center and radius.
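A minimal sketch of that, rebuilding the contours roughly as in your code but with the OpenCV 4.x return signature (the file name is a placeholder):
import cv2
# Rebuild the contours as in the question (file name is a placeholder)
image = cv2.imread('bull.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
contours, hierarchy = cv2.findContours(255 - gray, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
for cnt in contours:
    # Smallest circle that fully contains the contour: gives a center and radius
    (x, y), radius = cv2.minEnclosingCircle(cnt)
    cv2.circle(image, (int(x), int(y)), int(radius), (255, 0, 0), 1)
cv2.imshow('enclosing circles', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
Each (x, y) and radius pair can then be compared against a point's distance from the center to work out which ring that point falls in.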