Contour Arrays in OpenCV - Python

In another user's post, they discussed having trouble using OpenCV's Hough Circle Transform to detect and draw over each ring on a bull target here. A proposed solution was to instead use contours to locate each circle. When I attempted the same thing, my result was exactly what I was looking for. The only problem is that the program I'm attempting to create needs a series of circles, one on each ring, so that I can then calculate the distance from the center and therefore the particular ring where a given point lies.
Following the code of the mentioned post, I have the following:
import cv2
import numpy as np
image = cv2.imread('bull.png')
image_copy = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # cv2.imread loads BGR
grayscaled_image = cv2.cvtColor(image_copy, cv2.COLOR_GRAY2BGR)
cv2.imshow("confirm", grayscaled_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
# In OpenCV 3.x, findContours returns (image, contours, hierarchy), hence the unpacking
_, contours, _ = cv2.findContours(255 - image_copy, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
print(contours)
cv2.drawContours(image, contours, -1, color=(0,255,0), thickness=1)
cv2.imshow("detected circles", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
Which gives me the same results as shown in the linked post.
And this is great. I love the progress. However, I initially tried the same method as the linked poster because I liked the coordinate and radius information provided by the circles.
I found in one of OpenCV's tutorials that you can enclose a contour with a circle, rectangle, or ellipse. The tutorial is easy enough to follow for an individual contour, but in my code I have no idea what _,contours,_ means. Is it some sort of array? How do I access the individual contours? When I tried printing the variable, it gave me close to 4,000 lines. I won't post them here, of course, but they appeared to be sets of coordinates.
So my question is: How can I go about converting this grouping of contours into more manageable circles?

As I mentioned in your previous question, you can use minEnclosingCircle on each contour, and you'll get each circle's center and radius.

Related

Hough circles on edges

I have the following image, in which I've detected the borders of 7 circles. In my opinion it is fairly easy to identify the circles, but I am having trouble detecting all of them with the OpenCV Hough transform. Here is what I've tried:
img = cv2.imread('sample.png',0)
edges = cv2.Canny(img,20,120)
circles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, 1, 100,
                           param2=40, minRadius=0, maxRadius=250)
I either get the central circle, the outer one or a lot of circles, depending on the parameters I input on the function. Do you guys have a set of parameters that would output all the circles?
Thanks in advance
Solved with this example from scikit-image, adjusting the Canny thresholds and the radii range to match the posted image.
Thanks to @barny
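For reference, the scikit-image route looks roughly like this. It is a sketch on a synthetic two-ring edge image; on the real picture you would first produce the edge map with skimage.feature.canny and tune the radius range, as the answer says.

```python
import numpy as np
from skimage.draw import circle_perimeter
from skimage.transform import hough_circle, hough_circle_peaks

# Synthetic edge image with two rings; a real photo would go through
# skimage.feature.canny first
edges = np.zeros((200, 200), dtype=bool)
for (r, c, radius) in [(60, 60, 30), (140, 140, 40)]:
    rr, cc = circle_perimeter(r, c, radius)
    edges[rr, cc] = True

# Vote over a radius range wide enough to cover every circle
radii = np.arange(20, 60, 2)
hspaces = hough_circle(edges, radii)
accums, cx, cy, found = hough_circle_peaks(hspaces, radii, total_num_peaks=2)
print(sorted(found.tolist()))
```

total_num_peaks caps how many circles come back, which is the knob that avoids the "one circle or far too many" problem from the question.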

Extracting data from graphs in a scanned document

EDIT: This is a deeper explanation of a question I asked earlier, which is still not solved for me.
I'm currently trying to write some code that can extract data from some uncommon graphs in a book. I scanned the pages of the book, and by using OpenCV I would like to detect some features of the graphs in order to convert them into usable data. In the left graph I'm looking for the height of the "triangles", and in the right graph for the distance from the center to the points where the dotted lines intersect the gray area. In both cases I would like to convert these values into numeric data for further usage.
For the left graph, I thought of detecting all the individual colors and computing the area of each sector by counting the number of pixels in that color. Once I have the area of these sectors, I can easily calculate their heights using basic math. The following code snippet shows how far I've gotten with identifying different colors. However, I can't manage to make this work accurately: it always seems to detect some colors of other sectors as well, or to miss some pixels of one sector. I think it has something to do with the boundaries I'm using; I can't quite figure out how to make them work. Does someone know how I can determine these values?
import numpy as np
import cv2
img = cv2.imread('images/test2.jpg')
# inRange bounds are in BGR order, matching how cv2.imread loads the image
lower = np.array([0, 0, 100])
upper = np.array([50, 56, 150])
mask = cv2.inRange(img, lower, upper)
output = cv2.bitwise_and(img, img, mask = mask)
cv2.imshow('img', img)
cv2.imshow('mask', mask)
cv2.imshow('output', output)
cv2.waitKey(0)
cv2.destroyAllWindows()
For the right graph, I still have no idea how to extract data from it. I thought of identifying the center by detecting all the dotted lines, and then, by detecting the intersections of these dotted lines with the gray area, measuring the distance between the center and those intersections. However, I couldn't yet figure out how to do this properly, since it sounds quite complex. The following code snippet shows how far I've gotten with the line detection. In this case too, the detection is far from accurate. Does someone have an idea how to tackle this problem?
import numpy as np
import cv2
# Reading the image
img = cv2.imread('test2.jpg')
# Convert the image to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# Apply edge detection
edges = cv2.Canny(gray,50,150,apertureSize = 3)
# Line detection
lines = cv2.HoughLinesP(edges, 1, np.pi/180, 100, minLineLength=50, maxLineGap=20)
# HoughLinesP returns None when no lines are found
if lines is not None:
    for line in lines:
        x1, y1, x2, y2 = line[0]
        cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)
cv2.imwrite('linesDetected.jpg',img)
For the left image, using your approach, try looking at the RGB histogram: the sector colors should appear as significant peaks, which is useful if you want the relative area of the segments.
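A hypothetical sketch of the histogram idea: each flat sector color shows up as a peak in the per-channel histograms. The image below is a synthetic BGR array with two "sectors" standing in for the real scanned graph.

```python
import numpy as np

# Synthetic BGR image with two flat "sectors"
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[:, :50] = (30, 40, 120)   # reddish sector (B, G, R)
img[:, 50:] = (140, 90, 20)   # bluish sector

# Histogram of the red channel; the two sector values appear as peaks
hist, _ = np.histogram(img[:, :, 2], bins=256, range=(0, 256))
peaks = np.argsort(hist)[-2:]
print(sorted(peaks.tolist()))  # → [20, 120]
```

Those peak values are exactly what you would feed into cv2.inRange as the centers of the lower/upper bounds, instead of guessing them.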
Another alternative could be the Hough Circle Transform, which should also work on circle segments. See also here.
For the right image ... let me think ...
You could create an "empty" diagram with no data inside. You know the locations of the circle segments ("cake pieces"). Then you could identify the area where the data is (the dark one), either by using a gray threshold, an RGB threshold, findContours, or a watershed / distance transform.
In the end, the idea is to make a boolean overlay between the cleared image and the segments (your data) that were found. Then you can identify what share of each circle segment is covered, or, knowing the center, find the farthest point from the center.
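A hypothetical sketch of that boolean overlay. Both masks are synthetic stand-ins: segment_mask marks one "cake piece" of the empty diagram, data_mask marks the dark data area you would get from thresholding.

```python
import numpy as np

# Synthetic masks: one quadrant as a "cake piece", one blob as the data
h, w = 100, 100
yy, xx = np.mgrid[0:h, 0:w]
segment_mask = (xx >= 50) & (yy >= 50)                  # one segment
data_mask = (xx - 50) ** 2 + (yy - 50) ** 2 < 30 ** 2   # data blob at center

# Boolean overlay: the data pixels that fall inside this segment
overlap = segment_mask & data_mask

# Share of this segment covered by data
coverage = overlap.sum() / segment_mask.sum()

# Farthest covered point from the center (50, 50) within this segment
ys, xs = np.nonzero(overlap)
dist = np.sqrt((xs - 50) ** 2 + (ys - 50) ** 2)
print(round(coverage, 2), round(float(dist.max()), 1))
```

Repeating this per segment gives one coverage share (or one max distance) per "cake piece", which is the numeric data the question is after.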

Opencv - Extracting data from in-game images

I need some help with an OpenCV project I'm working on. I'm taking images from a computer game (in this case, Fortnite), and I would like to extract different elements from them, e.g. the timer value, quantities of materials, health and shield, etc.
Currently I perform a series of image preprocessing functions until I get a binary image, followed by locating the contours in the image and then sending those contours to a machine learning algorithm (K-Nearest-Neighbours).
I succeed in a lot of cases, but there are some images where I don't manage to find some of the contours, so I don't find all the data.
An important thing to note is that I use the same preprocessing pipeline for all images, because I'm looking for as robust a solution as I can manage.
I would like to know what I can do to improve the performance of my program:
Is KNN a good model for this sort of task, or are there other models that might give me better results?
Is there any way to recognise characters without locating contours?
How can I make my preprocessing pipeline as robust as possible, given the fact that there is a lot of variance in the background across all images?
My goal is to process the images as fast as possible, with a minimum of 2 images per second to start.
Thanks in advance for any help or advice you can give me!
Here is an example image before preprocessing
Here is the image after preprocessing, in this example I cannot find the contour for the 4 on the right side.
Quite simply, enlarging the image might help, since it increases the dark border of the number.
I threw together some code that does that. The result could be improved, but my point here is to show that the 4 can now be detected as a contour. To increase efficiency I only selected contours within a certain size.
Also, since it is part of the HUD, the location on screen is usually the same. If so, you can get a great performance increase by selecting only the area with the values (described here) - as I have done manually.
Finally, since the numbers have a consistent shape, you could try matchShapes as an alternative to kNN to recognize the numbers. I don't know how they compare in performance though, so you'll have to try that out yourself.
Result:
Code:
import numpy as np
import cv2
# load image
img = cv2.imread("fn2.JPG")
# enlarge image
img = cv2.resize(img,None,fx=4, fy=4, interpolation = cv2.INTER_CUBIC)
# convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# create mask using threshold
ret,mask = cv2.threshold(gray,200,255,cv2.THRESH_BINARY)
# find contours in mask
im, contours, hierarchy = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# draw contour on image
for cnt in contours:
    if 200 < cv2.contourArea(cnt) < 3000:
        cv2.drawContours(img, [cnt], 0, (255, 0, 0), 2)
#show images
cv2.imshow("Mask", mask)
cv2.imshow("Image", img)
cv2.waitKey(0)
cv2.destroyAllWindows()

How to convert from edges to contours in OpenCV

I have been getting images like this after edge detection:
I'd like it to connect the edges together into straight-line polygons.
I thought this could be done using findContours with chain approximation, but that doesn't seem to be working well for me.
How can I convert an image like the one above into simple straight-line polygons (that look like skewed triangles, trapezoids, and squares)?
You need to first detect the lines and then construct the contours. You can do that using HoughLines(). There is a short tutorial here.
Blur the image, then find the contours.
If the edges are that close together, a simple blurring with something like
import cv2
import numpy as np

def blur_image(image, amount=3):
    '''Blurs the image with a box filter.
    Does not affect the original image.'''
    kernel = np.ones((amount, amount), np.float32) / (amount ** 2)
    return cv2.filter2D(image, -1, kernel)
should connect all the little gaps, and you can do contour detection with that.
If you then want to convert those contours into polygons, you can look to approximate those contours as polygons. A great tutorial with code for that is here.
The basic idea behind detecting polygons is running
contours = cv2.findContours(image, cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)[-2]
cv2.CHAIN_APPROX_SIMPLE compresses each contour's horizontal, vertical, and diagonal segments down to their end points, so the contours already come back as straight-line polygons; cv2.RETR_EXTERNAL retrieves only the outermost contours, ignoring holes and nested shapes.

Canny Operation on Image

Using Python, I want to do the following with the image below:
1.) Find the contours of the bones (only the sides will do)
2.) Recognize and draw all the contours.
It could look something like this:
An even better contour would also be fine. I am not entirely sure how to tackle this.
The gradient of the image is:
An initial way of approaching this would be to use Canny edge detection with the right threshold values and then find the contours.
import cv2
# Load the image
img = cv2.imread("/home/tribta/Desktop/feet.png")
# Find the contours
imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(img,60,200)
im2, contours, hierarchy = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
hierarchy = hierarchy[0] # get the actual inner list of hierarchy descriptions
# Draw all the contours on the original image
cv2.drawContours(img, contours, -1, (0,255,0), 3)
# Finally show the image
cv2.imshow('img',img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Then you could add some biomedical processing criteria to distinguish the different contours and verify that each one is really a bone.
You should first apply a threshold with cv2.threshold to clean your images of objects you don't want; try different intensity values to catch the borders. Then apply some morphological operations, such as OPENING and CLOSING with cv2.morphologyEx, to clean up the image a bit and fill the holes, and finally apply cv2.findContours and cv2.drawContours to get the final image of the bone contours.
Check those commands in the OpenCV library documentation.
You will find really good examples on Stack Overflow and around the internet; try adapting your code to those.
Hope this was a good help to you.