I have some contours, for example the three colored shapes in this image.
I would like to approximate the boundary of each with a polygon WITH the constraint that the interior angles of the polygon are some fixed values. So each angle should snap to, say, {45, 90} degrees, and no edge should be shorter than a certain number of pixels, e.g. 50, so there aren't many tiny edges. So the result for the green contour would be:
Is this possible in OpenCV?
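One hedged starting point (it does not by itself enforce the 45/90-degree snapping or the 50-pixel minimum edge length) is cv2.approxPolyDP, which reduces a contour to fewer vertices; the file name and epsilon factor below are assumptions for illustration:

import cv2

mask = cv2.imread('mask.png', 0)  # assumed binary mask of one shape
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x signature
for cnt in contours:
    epsilon = 0.01 * cv2.arcLength(cnt, True)      # tolerance proportional to the perimeter
    approx = cv2.approxPolyDP(cnt, epsilon, True)  # simplified (but unconstrained) polygon vertices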
I have to find the exact centroid of multiple rectangles. The coordinates of each rectangle are as follows:
coord = (0.294792, 0.474537, 0.0989583, 0.347222) ## (xcenter, ycenter, width, height)
I have around 200 rectangles; how can I compute the centroid of all of them?
I already tried to implement it, but the code did not work well.
My code:
for i in range(len(xCenter)):
    center = np.array((xCenter[i] + (Width[i] / 2), yCenter[i] + (Height[i] / 2)))
This is a somewhat vague question, but if you mean the centroid of all rectangles weighted by area, then each rectangle's center is weighted by the area of that rectangle. Think of it as all the mass of each rectangle being compressed into its center, and then taking the centroid of those weighted points. Assuming the rectangles are numbered 1 to n, the formula is the sum over i of Area(i) * center(i), divided by the total mass of the system (the sum of all the areas).
If you are referring to the centroid of the covered area in general, ignoring rectangle overlap, that is a little more tricky. One thing you could do is check each rectangle against all other rectangles, and if a pair of rectangles overlaps, split them up into a set of non-overlapping rectangles and put those back into the set. Once all rectangles are non-overlapping, find the centroid by mass as above.
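For the weighted-by-area case, a minimal sketch, assuming xCenter, yCenter, Width, Height are parallel lists holding the values from the question (the given coordinates are already centers, so no offset by half the width or height is needed):

import numpy as np

def weighted_centroid(xCenter, yCenter, Width, Height):
    # each rectangle's "mass" is its area; its center is already given directly
    areas = np.asarray(Width) * np.asarray(Height)
    cx = np.sum(areas * np.asarray(xCenter)) / np.sum(areas)
    cy = np.sum(areas * np.asarray(yCenter)) / np.sum(areas)
    return cx, cy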
My project is about image detection. I am using 1 camera and have 2 shapes (rectangle and triangle) placed on a flat surface. I have successfully been able to:
Detect the contours
Detect the area of the contours
Relate the two areas to the distances, so that both shapes' distances from the camera can be calculated
Calculate the lengths, corner coordinates and the angle of each corner in both shapes
Now I need to calculate the angle of rotation of the whole surface when rotated around the x-axis (up & down), y-axis (left & right) and both axes (oblique). This can be seen in the gif I provided. The default angle (0 degrees) is when the surface is parallel to the camera, and any rotation would mean that an angle around 1 or both axes is present.
P.S. When the surface is rotating around both axes (oblique), both angles have to be detected separately. For example, I would get 14° around the x-axis & 27° around the y-axis.
Any help would be appreciated.
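A hedged sketch of one possible approach (pose estimation with cv2.solvePnP, which is not something stated in the question itself); the object points (corner coordinates on the flat surface with z = 0), the image points, and the camera matrix are assumptions you would supply from your own setup and calibration:

import cv2
import numpy as np

def surface_tilt_angles(object_points, image_points, camera_matrix):
    # object_points: Nx3 float array of corner coordinates on the surface (z = 0)
    # image_points:  Nx2 float array of the corresponding detected corners
    dist_coeffs = np.zeros(5)                       # assume negligible lens distortion
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                  camera_matrix, dist_coeffs)
    R, _ = cv2.Rodrigues(rvec)                      # rotation vector -> 3x3 matrix
    # split into rotation about the x-axis (up/down) and the y-axis (left/right)
    angle_x = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    angle_y = np.degrees(np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2])))
    return angle_x, angle_y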
I need the approximate radii of the following ellipse.
The bottom/top and left/right radii should be the same, but this still needs to be checked, which means four radii should be the result of my code. I did the following in Paint: the green circle should give me the top radius and the red one the left (the right and bottom ones aren't drawn here).
The idea I'm working on is to crop the image (left/right/top/bottom side) and fit circles to the cropped images. With cv2.findContours, some white pixels get recognized, as highlighted here.
Is there a way to approximate my drawn red circle from above with these given coordinates? The problems I've seen on the internet all have a given center point or angle, which I don't have. Is there a cv2 function that fits a circle to only some given coordinates, or something similar?
Use this function: cv2.fitEllipse(points) and pass the contour points. - Ziri
Yes, this did the trick. I got the radii after your function with:
(x, y), radius = cv2.minEnclosingCircle(i)
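For completeness, a small sketch of the fitEllipse suggestion; the file name 'ellipse.png' and the threshold value are assumptions, and since fitEllipse returns full axis lengths, the radii are half of them:

import cv2

img = cv2.imread('ellipse.png', 0)                      # assumed grayscale input
_, thresh = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)  # OpenCV 4.x signature
cnt = max(contours, key=cv2.contourArea)                # largest contour, assumed to be the ellipse
(cx, cy), (axis1, axis2), angle = cv2.fitEllipse(cnt)   # needs at least 5 contour points
radius1, radius2 = axis1 / 2, axis2 / 2                 # full axis lengths -> radii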
I'm very new to image processing and object detection. I'd like to extract/identify the positions and dimensions of the teeth in the following image:
Here's what I've tried so far using OpenCV:
import cv2
import numpy as np
planets = cv2.imread('model.png', 0)  # load the image as grayscale
canny = cv2.Canny(planets, 70, 150)   # edge map for the Hough transform
circles = cv2.HoughCircles(canny, cv2.HOUGH_GRADIENT, 1, 40,
                           param1=10, param2=16, minRadius=10, maxRadius=80)
circles = np.uint16(np.around(circles))
for i in circles[0,:]:
# draw the outer circle
cv2.circle(planets,(i[0],i[1]),i[2],(255,0,0),2)
# draw the center of the circle
cv2.circle(planets,(i[0],i[1]),2,(255,0,0),3)
cv2.imshow("HoughCirlces", planets)
cv2.waitKey()
cv2.destroyAllWindows()
This is what I get after applying canny filter:
This is the final result:
I don't know where to go from here. I'd like to get all of the teeth identified. How can I do that?
I'd really appreciate any help.
Note that the teeth-structure is more-or-less a parabola (upside-down). If you could somehow guess the parabolic shape that defines the centroids of those blobs (teeth), then your problem could be simplified to a reasonable extent. I have shown a red line that passes through the centers of the teeth.
I would suggest you approach it as follows:
Binarize your image (background=0, else 1). You could use sklearn.preprocessing.binarize.
Calculate the centroid of all the non-zero pixels. This is the central blue circle in the image. Call this structure_centroid. See this: How to center the nonzero values within 2D numpy array?.
Make polar slices of the entire image, centered at the location of the structure_centroid. I have shown a cartoon image of such polar slices (triangular, semi-transparent). Cover the complete 360 degrees. See this: polarTransform library.
Determine the position of the centroid of the non-zero pixels for each of these polar slices. See these:
find the distance between a point and a curve python.
Find the minimum distance from a point to a curve.
The array containing these centroids gives you the locus (path) of the average location of the teeth. Call this centroid_path.
Run an elimination/selection algorithm on the circles you were able to detect, keeping those closest to the centroid_path. Use a threshold distance to drop the outliers.
This should give you a good approximation of the teeth with the circles.
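A rough sketch of the binarize / centroid / polar-slice steps above, assuming the background pixels in 'model.png' are (near) zero and using an arbitrary number of angular slices:

import cv2
import numpy as np

img = cv2.imread('model.png', 0)
binary = (img > 0).astype(np.uint8)              # background = 0, everything else = 1

ys, xs = np.nonzero(binary)                      # centroid of all non-zero pixels
structure_centroid = (xs.mean(), ys.mean())

num_slices = 72                                  # bin the pixels by angle around the centroid
angles = np.arctan2(ys - structure_centroid[1], xs - structure_centroid[0])
bins = ((angles + np.pi) / (2 * np.pi) * num_slices).astype(int) % num_slices

centroid_path = []                               # average tooth location per angular slice
for b in range(num_slices):
    in_slice = bins == b
    if in_slice.any():
        centroid_path.append((xs[in_slice].mean(), ys[in_slice].mean()))
centroid_path = np.array(centroid_path)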
I hope this helps.
I don't know exactly how to state this question, so consider the following picture.
The polygons were generated by detecting contours in a rasterized map of different region boundaries. Notice the "inlets" created by letters in the original image. I'd like to identify sets of points such that connecting their endpoints would reduce the length of the polygon's perimeter by at least some value. I tried generating the convex hull for each polygon and basing the perimeter savings on the difference between the distance along the polygon's perimeter between hull vertices and the straight-line distance between those vertices, but there is no guarantee that these vertices are near the edge of the "inlet".
I feel like there is a term in computational geometry for this problem, but I don't know what it is. Do I have to compute the distance saved for each possible combination of starting/ending points, or is there a simpler algorithm that does this recursively?
An example of when using the convex hull breaks down is the polygon in the center of the following example:
Here, the convex hull connects the corners of the polygon whereas I only want to close off the large inlet on the right side of the polygon while retaining the curvature of that side.
You could try an alpha shape. An alpha shape is defined by the edges of a Delaunay triangulation whose lengths do not exceed alpha.
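A rough sketch of that idea using scipy's Delaunay triangulation; the contour points (an Nx2 array) and the alpha value (maximum allowed edge length, in pixels) are assumptions you would tune for your map:

import numpy as np
from scipy.spatial import Delaunay

def alpha_shape_edges(points, alpha):
    # keep only triangles whose edges are all shorter than alpha, then return
    # the edges that belong to exactly one kept triangle (the boundary edges)
    points = np.asarray(points, dtype=float)
    tri = Delaunay(points)
    boundary = set()
    for simplex in tri.simplices:
        pts = points[simplex]
        lengths = [np.linalg.norm(pts[i] - pts[(i + 1) % 3]) for i in range(3)]
        if max(lengths) > alpha:
            continue
        for i in range(3):
            edge = tuple(sorted((simplex[i], simplex[(i + 1) % 3])))
            boundary ^= {edge}                   # edges shared by two kept triangles cancel out
    return boundary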