Good morning all!
I have to calculate the area of a polygon using Python.
The formula to do that is given by (sorry, can't post pictures yet..)
area = ((x0*y1 - y0*x1) + (x1*y2 - y1*x2) + ... + (x(n-1)*y0 - y(n-1)*x0)) / 2
This is the code I came up with. However, it results in a negative value (of the correct magnitude), and I have no idea why.
Would it be valid to simply multiply the area by -0.5, or is there something wrong with my code?
Any help is greatly appreciated!!
polygon = [[0,0],[-1,5],[2,3],[1,5],[3,6],[4,5],[5,3],[8,-2],[4,-4],[2,-5]]
area = 0.0
n = len(polygon)
for i in range(n):
    i1 = (i+1) % n
    area += polygon[i][0]*polygon[i1][1] - polygon[i1][0]*polygon[i][1]
area *= 0.5
print('area =', area)
The formula works by summing the cross products of the pairs of vectors from the origin to the two endpoints of each line segment composing the polygon. In essence, the area is computed as the difference between the areas of the green and red triangles in the picture below. (Note that the red triangles are partially underneath the green ones.)
The sign of the cross product depends on the orientation of the vectors, i.e. on whether the second vector must be turned left or right to align it with the first. Therefore you will get either a negative or a positive area depending on whether the points of the polygon are ordered clockwise or counter-clockwise. The solution is correct, but you need to apply the abs() function to the result, since you don't need the sign.
The sign of the final answer is based on the orientation of the polygon. You can check this by reversing the list of points in the given example.
polygon = [[0,0],[-1,5],[2,3],[1,5],[3,6],[4,5],[5,3],[8,-2],[4,-4],[2,-5]]
polygon.reverse()
...
In this case you'll find the area to be positive, though it is essentially the same polygon.
You can read more about why orientation makes area negative here.
You simply need to take the absolute value of the final result.
print('area =', abs(area))
I am working with 2D convex polygons and have two sets of points surrounding them; let's call the neighborhood closest to the polygon A and the one further out B. These sets of points are offsets of the original polygon; the offsetting itself was done using Shapely in Python (which admittedly does not do a great job; if anyone knows of any alternatives, please comment).
What I would like to do now is, for each point in A, find the point in B that is closest to it in the normal direction of the polygon's edges. I've attached some images below that explain what I'm doing better than I can in words.
So far I've attempted to do this myself and get poor results. My approach is: for each point in A, find its two closest polygon vertices using a nearest-neighbor search; these two points define a slope. I then construct the line perpendicular to this slope, anchored at the point of A I am currently iterating through. I construct a bunch of points along this line and find the point in B that is closest to it. To make sure I am finding points in the right direction, I actually find the two closest points in B and choose the one whose distance to the point I'm iterating through in A is minimized. This is done for all points in A.
A few shortcomings of this approach:
If a point lies exactly at a vertex, it fails and gives garbage.
If there is a long edge and the point is near the beginning or end of this edge, it might choose the wrong two vertices to calculate the slope.
This leads me to believe there must be a better way. Here is an image explaining what I would like to do:
The red dashed line is an example of the convex polygon I am working with. The black points represent the smaller offset, A, and the white points are the bigger offset, B. The green points are the points in B that my code currently identifies as the normal points, though they are clearly wrong. I have drawn in the blue arrows to show what I mean by the normal direction of each edge. As an example of the code's shortcomings, you can see how at the rightmost points, where two points reside exactly at a vertex, the code doesn't choose the points we would expect.
Here is a copy of the code itself:
import numpy as np
from scipy.spatial import KDTree

def Nearest_Points(inner_points, outer_points, vertices):
    step = 0.1
    near_points = []
    tree = KDTree(vertices)
    line_tree = KDTree(outer_points)
    for point in inner_points:
        # Finding closest vertices to point
        dist, index = tree.query(point, k=2)
        nearest_vertex1 = vertices[index[0]]
        nearest_vertex2 = vertices[index[1]]
        # Constructing line perpendicular to edge and anchored at point
        dy = nearest_vertex2[1] - nearest_vertex1[1]
        dx = nearest_vertex2[0] - nearest_vertex1[0]
        line_points = []
        if dy != 0:
            for i in range(-50, 51):
                x = point[0] + i*step
                y = -dx/dy*(x - point[0]) + point[1]
                line_points.append([x, y])
        else:
            for i in range(-50, 51):
                line_points.append([point[0], point[1] + step*i])
        # Finding the two points in the outer neighborhood closest to the line
        dist2, index_2 = line_tree.query(line_points, k=2)
        dist2_arr = np.asarray(dist2)
        min_1 = np.min(dist2_arr[:, 0])
        min_2 = np.min(dist2_arr[:, 1])
        for i in range(len(dist2)):
            if dist2[i][0] == min_1:
                index_2_1 = i
            if dist2[i][1] == min_2:
                index_2_2 = i
        near1 = outer_points[index_2[index_2_1][0]]
        near2 = outer_points[index_2[index_2_2][1]]
        # Of these two points, keep the one closest to the point currently being iterated through
        near1_dist = (near1[0] - point[0])**2 + (near1[1] - point[1])**2
        near2_dist = (near2[0] - point[0])**2 + (near2[1] - point[1])**2
        if near1_dist < near2_dist:
            near_points.append(near1)
        else:
            near_points.append(near2)
    return near_points
Thank you.
I am measuring the current that passes through a sample as I vary the voltage over it. The result is a current-voltage plot like this:
https://content.sciendo.com/view/journals/joeb/9/1/graphic/j_joeb-2018-0023_fig_004.jpg
I want to calculate the area within the curve for the first half period (the part of the curve in the first quadrant). I am not sure of the best way to do this in Python. I have tried to write some code that finds pairs with the same x-coordinate, subtracts the bottom y-value from the top y-value, and iterates over all the points in the first quadrant.
def LobeAreaByPeriod(all_periods_I):
    print('Starting Lobe area by period')
    LobeArea = []
    for period in all_periods_I:
        halfperiod = round(len(period)/2)
        duration = round(halfperiod/2)
        area = 0
        for i in range(duration):
            area += (period[i] - period[halfperiod - i])
        LobeArea.append(area)
    return LobeArea
I am not certain the pairs will be located directly above each other this way, and I find it difficult to check whether the answer is really correct. Any tips on how to do this?
To calculate the area within a curve you're going to have to do some sort of approximation. The method you choose will vary with the precision you need. I like the trapezoidal rule; NumPy has an implementation that is quite nice:
np.trapz
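For example, the enclosed area can be estimated as the trapezoidal integral of the difference between the two branches of the loop. The arrays below are stand-ins for your measured data, and note that np.trapz was renamed np.trapezoid in NumPy 2.0:

```python
import numpy as np

# Stand-in data: upper and lower branches of the loop in the first
# quadrant, sampled on a common voltage grid (replace with your own).
voltage = np.linspace(0.0, 1.0, 101)
upper = np.sqrt(voltage)   # upper branch of the loop
lower = voltage ** 2       # lower branch of the loop

# np.trapz was renamed np.trapezoid in NumPy 2.0; use whichever exists.
trapz = getattr(np, "trapz", None) or np.trapezoid

# Enclosed area = integral of (upper - lower) over the voltage range.
area = trapz(upper - lower, voltage)
print(area)  # ~1/3 for these stand-in curves
```

If the two branches are not sampled at the same voltages, interpolate one onto the other's grid first (e.g. with np.interp).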
I have a single contour that I'd like to find the area of (image at the end of the post; the contour is the blue line, described by the blue set of points). I have tried to use alphashape for this (see below), but it does not work well enough for my purposes (the alphashape approximation of the contour is in green on the image). Are there other methods I can use to get the area inside this contour?
Thanks!
Here's my code so far. points is a list of all the single contours that I want to do this to.
import numpy as np
import alphashape

contour_areas = []
for cnum in np.arange(0, len(points)):
    cpoints_nan = points[cnum]
    cpoints = []
    for i in np.arange(0, len(cpoints_nan)):  # Get rid of nan pairs
        if np.isnan(cpoints_nan[i][0]):  # If current pair of points is [nan, nan]
            continue
        else:  # If current pair of points is not nan, add to cpoints
            cpoints.append([cpoints_nan[i][0][np.logical_not(np.isnan(cpoints_nan[i][0]))][0], cpoints_nan[i][1][np.logical_not(np.isnan(cpoints_nan[i][1]))][0]])
    if len(cpoints) > 3:  # Need more than two points to use alphashape. Greater than 3 because for closed "loops" the last point is the first, and so this would still just form a line rather than a polygon
        alpha = alphashape.optimizealpha(cpoints)
        hull = alphashape.alphashape(cpoints, alpha)
        contour_areas.append(hull.area)  # Get area inside contour
EDIT: Many of the contours I'm looking at are self-intersecting, so this is an additional layer of complexity
Contour image: https://i.stack.imgur.com/qBnmv.png
Assuming the points are in the correct order, that the line doesn't self-intersect, and that the line forms a closed shape (after points[-1] comes points[0]), you can compute the area by summing the individual partial signed areas:
# p = cpoints
area = 0
for i in range(len(p) - 1):
    area += (p[i][0] - p[i+1][0]) * (p[i][1] + p[i+1][1]) / 2
# Closing segment from the last point back to the first
area += (p[-1][0] - p[0][0]) * (p[-1][1] + p[0][1]) / 2
For each line segment you compute the area between it and the x axis. If the line goes to the left the area is added, if the line goes to the right the area is subtracted. This gives you the correct area because you add for the portion of the contour that faces up and subtract for the portion that faces down.
This is sensitive to the order of the points: they have to be in counter-clockwise (trigonometric) order to give a positive area. If your points go clockwise, multiply by -1. If you don't care, take the absolute value at the end.
This should give you a good result, but be careful of numeric instability if you have extreme values or a lot of points.
I want to calculate the intersection over union (IoU) between two rectangles whose axes are not aligned, but differ by an angle smaller than 30 degrees. An approximate value would also be acceptable.
One possible solution is to check that the angle between the two rectangles is less than 30 degrees, then rotate them so their axes align. From there it is easy to calculate the IoU.
Another possibility is to use Monte Carlo methods for the intersection (generate a point, check whether the point is below some line of one rectangle and above some line of the other), but this seems expensive because I need to perform this calculation a large number of times.
I was hoping there is something better out there; maybe a geometry library, or an algorithm from the computer vision folks.
I am trying to learn grasping positions using deep neural networks. My algorithm should predict a bounding box (rectangle) for an object in an RGB image. For every image I also have the ground-truth bounding box (another rectangle). From these two rectangles I need the IoU.
Any idea?
Since you're working in Python, I think the Shapely package would serve your needs.
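A minimal sketch with Shapely (the corner coordinates here are made up for illustration; rotated rectangles work exactly the same way, since a rectangle is just a polygon to Shapely):

```python
from shapely.geometry import Polygon

# Two illustrative rectangles given as corner lists.
rect1 = Polygon([(0, 0), (4, 0), (4, 2), (0, 2)])
rect2 = Polygon([(1, 0), (5, 0), (5, 2), (1, 2)])

# IoU = intersection area / union area.
inter = rect1.intersection(rect2).area   # 6.0 for these rectangles
union = rect1.union(rect2).area          # 10.0 for these rectangles
iou = inter / union
print(iou)  # 0.6
```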
There is a quite effective algorithm for calculating the intersection of two convex polygons, described in O'Rourke's book "Computational Geometry in C".
C code is available at the book page (convconv).
The algorithm traverses the edges of the first polygon, checking the orientations of the second polygon's vertices in order to detect intersections. When two consecutive vertices lie on different sides of an edge, an intersection occurs (there are a lot of tricky cases). An outline of the algorithm is here.
You can consider a number of numerical approaches, practically "rendering" the rectangles into some "canvas"/canvases and traversing the pixels to build your statistics. The size of the canvas should be the size of the bounding box of the entire scene; practically, you can find that by picking the minimum and maximum coordinates occurring on each axis.
1) "most CG" approach: really get a rendering library, render one rectangle with red, other rectangle with transparent blue. Then visit each pixel and if it has a non-0 red component, it belongs to the first rectangle, if it has a non-0 blue component, it belongs to the second rectangle. And if it has both, it belongs to the intersection too. This approach is cheap for coding, but requires both writing and reading the canvas even in the rendering phase, which is slower than just writing. This might be even done on GPU too, though I am not sure if setup costs and getting back the result do not weight out the benefit for such a simple scene.
2) another CG approach would be rendering into 2 arrays, preferably some 1-byte-per-pixel variant, for the sake of speed (you may have to go back in time a bit to find such dedicated rendering libraries). This way the renderer only writes, into one array per rectangle, and you read from two when creating the statistics.
3) as writing and reading pixels takes time, you can take a shortcut, but it needs more coding: convex shapes can be rendered by collecting the minimum and maximum coordinates per scanline and just filling between the two. If you do it yourself, you can spare the filling part and also the read-and-check-every-pixel step at the end. Build such a min-max list for both rectangles, and then you "just" have to check their relation/order on each scanline to recognize overlaps.
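Option 3 could be sketched roughly like this, assuming convex polygons; the helper names and the sample squares are my own, and the result is an approximation whose accuracy improves as n_lines grows:

```python
def scanline_interval(poly, y):
    """Intersect the horizontal line at height y with a convex polygon.
    Returns (xmin, xmax), or None if the line misses the polygon."""
    xs = []
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        # Half-open test: the edge crosses the scanline exactly once,
        # and horizontal edges are skipped.
        if (y1 <= y < y2) or (y2 <= y < y1):
            t = (y - y1) / (y2 - y1)
            xs.append(x1 + t * (x2 - x1))
    if len(xs) < 2:
        return None
    return min(xs), max(xs)

def intersection_area_scanline(poly_a, poly_b, n_lines=1000):
    """Approximate the intersection area of two convex polygons by summing
    the overlap of their per-scanline [xmin, xmax] intervals."""
    ys = [p[1] for p in poly_a] + [p[1] for p in poly_b]
    y_lo, y_hi = min(ys), max(ys)
    dy = (y_hi - y_lo) / n_lines
    area = 0.0
    for k in range(n_lines):
        y = y_lo + (k + 0.5) * dy  # sample at mid-scanline
        ia = scanline_interval(poly_a, y)
        ib = scanline_interval(poly_b, y)
        if ia and ib:
            overlap = min(ia[1], ib[1]) - max(ia[0], ib[0])
            if overlap > 0:
                area += overlap * dy
    return area

# Two unit squares offset by 0.5 in x and y: true intersection area 0.25.
sq_a = [(0, 0), (1, 0), (1, 1), (0, 1)]
sq_b = [(0.5, 0.5), (1.5, 0.5), (1.5, 1.5), (0.5, 1.5)]
print(intersection_area_scanline(sq_a, sq_b))  # ~0.25
```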
And then there is the mathematical way (this is not really useful; see the EDIT below): while it is unlikely that you would find a sane algorithm for calculating the intersection area specifically for rectangles, if you find such an algorithm for triangles, which is more probable, that would be enough. Both rectangles can be split into two triangles, 1A+1B and 2A+2B respectively, and then you just have to run the algorithm 4 times: 1A-2A, 1A-2B, 1B-2A, 1B-2B, sum the results, and that is the area of your intersection.
EDIT: for the maths approach (though this also comes from graphics), I think https://en.wikipedia.org/wiki/Sutherland%E2%80%93Hodgman_algorithm can be applied here (as both rectangles are convex polygons, A-B and B-A should produce the same result) for finding the intersection polygon, and then the remaining task is to calculate the area of that polygon (here and now I think it is going to be convex, and then it is really easy).
I ended up using the Sutherland-Hodgman algorithm, implemented as these functions:
def clip(subjectPolygon, clipPolygon):
    def inside(p):
        return (cp2[0]-cp1[0])*(p[1]-cp1[1]) > (cp2[1]-cp1[1])*(p[0]-cp1[0])

    def computeIntersection():
        dc = [cp1[0] - cp2[0], cp1[1] - cp2[1]]
        dp = [s[0] - e[0], s[1] - e[1]]
        n1 = cp1[0] * cp2[1] - cp1[1] * cp2[0]
        n2 = s[0] * e[1] - s[1] * e[0]
        n3 = 1.0 / (dc[0] * dp[1] - dc[1] * dp[0])
        return [(n1*dp[0] - n2*dc[0]) * n3, (n1*dp[1] - n2*dc[1]) * n3]

    outputList = subjectPolygon
    cp1 = clipPolygon[-1]
    for clipVertex in clipPolygon:
        cp2 = clipVertex
        inputList = outputList
        outputList = []
        s = inputList[-1]
        for subjectVertex in inputList:
            e = subjectVertex
            if inside(e):
                if not inside(s):
                    outputList.append(computeIntersection())
                outputList.append(e)
            elif inside(s):
                outputList.append(computeIntersection())
            s = e
        cp1 = cp2
    return outputList
def PolygonArea(corners):
    n = len(corners)  # number of corners
    area = 0.0
    for i in range(n):
        j = (i + 1) % n
        area += corners[i][0] * corners[j][1]
        area -= corners[j][0] * corners[i][1]
    area = abs(area) / 2.0
    return area
intersection = clip(rec1, rec2)
intersection_area = PolygonArea(intersection)
iou = intersection_area/(PolygonArea(rec1)+PolygonArea(rec2)-intersection_area)
Another, slower method (I don't know which algorithm it uses) could be:
from shapely.geometry import Polygon

p1 = Polygon(rec1)
p2 = Polygon(rec2)
inter_sec_area = p1.intersection(p2).area
iou = inter_sec_area/(p1.area + p2.area - inter_sec_area)
It is worth mentioning that in just one case with bigger polygons (not my case) the Shapely module reported an area twice as large as the first method. I didn't test both methods intensively.
This might help
What about using the Pythagorean theorem? Since you have two rectangles, when they intersect you will have one or more triangles, each with one 90° angle.
Since you also know the angle between the two rectangles (20° in my example) and the coordinates of each rectangle, you can use the appropriate function (cos/sin/tan) to find the lengths of all the edges of the triangles.
I hope this can help
Is there even such a thing as a 3D centroid? Let me be perfectly clear: I've been reading and reading about centroids for the last 2 days, both on this site and across the web, so I'm perfectly aware of the existing posts on the topic, including Wikipedia.
That said, let me explain what I'm trying to do. Basically, I want to take a selection of edges and/or vertices, but NOT faces. Then, I want to place an object at the 3D centroid position.
I'll tell you what I don't want:
The vertices average, which would pull too far toward any region that has a more highly detailed mesh.
The bounding box center, because I already have something working for this scenario.
I'm open to suggestions about center of mass, but I don't see how this would work, because vertices or edges alone don't define any sort of mass, especially when I just have an edge loop selected.
For kicks, I'll show you some PyMEL that I worked up, using #Emile's code as reference, but I don't think it's working the way it should:
from pymel.core import ls, spaceLocator
from pymel.core.datatypes import Vector
from pymel.core.nodetypes import NurbsCurve

def get_centroid(node):
    if not isinstance(node, NurbsCurve):
        raise TypeError("Requires NurbsCurve.")
    centroid = Vector(0, 0, 0)
    signed_area = 0.0
    cvs = node.getCVs(space='world')
    v0 = cvs[len(cvs) - 1]
    for i, cv in enumerate(cvs[:-1]):
        v1 = cv
        a = v0.x * v1.y - v1.x * v0.y
        signed_area += a
        centroid += sum([v0, v1]) * a
        v0 = v1
    signed_area *= 0.5
    centroid /= 6 * signed_area
    return centroid

texas = ls(selection=True)[0]
centroid = get_centroid(texas)
print(centroid)
spaceLocator(position=centroid)
In theory, centroid = SUM(pos*volume)/SUM(volume) when you split the part into finite volumes, each with a location pos and a volume value volume.
This is precisely the calculation done for finding the center of gravity of a composite part.
There is not just a 3D centroid, there is an n-dimensional centroid, and the formula for it is given in the "By integral formula" section of the Wikipedia article you cite.
Perhaps you are having trouble setting up this integral? You have not defined your shape.
[Edit] I'll beef up this answer in response to your comment. Since you have described your shape in terms of edges and vertices, I'll assume it is a polyhedron. You can partition a polyhedron into pyramids, find the centroids of the pyramids, and then the centroid of your shape is the centroid of the centroids (this last calculation is done using ja72's formula).
I'll assume your shape is convex (no hollow parts; if this is not the case, break it into convex chunks). You can partition it into pyramids (triangulate it) by picking a point in the interior and drawing edges to the vertices. Then each face of your shape is the base of a pyramid. There are formulas for the centroid of a pyramid (you can look this up: it's 1/4 of the way from the centroid of the face to your interior point). Then, as was said, the centroid of your shape is the centroid of the centroids (ja72's finite calculation, not an integral) as given in the other answer.
This is the same algorithm as in Hugh Bothwell's answer, however I believe that 1/4 is correct instead of 1/3. Perhaps you can find some code for it lurking around somewhere using the search terms in this description.
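Under these assumptions (a convex polyhedron with triangulated faces), the partition into pyramids might be sketched as below. The function name and the cube data are my own for illustration; each tetrahedron's centroid is taken as the mean of its four vertices, which is equivalent to the 1/4 rule above:

```python
import numpy as np

def polyhedron_centroid(vertices, faces):
    """Volume-weighted centroid of a convex polyhedron.
    vertices: sequence of (x, y, z); faces: triangles as vertex-index triples."""
    verts = np.asarray(vertices, dtype=float)
    apex = verts.mean(axis=0)  # any interior point works for a convex shape
    total_vol = 0.0
    centroid = np.zeros(3)
    for ia, ib, ic in faces:
        a, b, c = verts[ia], verts[ib], verts[ic]
        # Unsigned volume of the tetrahedron (apex, a, b, c); with an
        # interior apex and a convex shape, every tetrahedron lies inside.
        vol = abs(np.dot(np.cross(b - a, c - a), a - apex)) / 6.0
        # Tetrahedron centroid = mean of its four vertices
        # (i.e. 1/4 of the way from the base centroid to the apex).
        centroid += vol * (apex + a + b + c) / 4.0
        total_vol += vol
    return centroid / total_vol

# Unit cube, each square face split into two triangles.
cube_verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
              (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
cube_faces = [(0, 1, 2), (0, 2, 3), (4, 5, 6), (4, 6, 7),
              (0, 1, 5), (0, 5, 4), (2, 3, 7), (2, 7, 6),
              (0, 3, 7), (0, 7, 4), (1, 2, 6), (1, 6, 5)]
print(polyhedron_centroid(cube_verts, cube_faces))  # the cube's center
```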
I like the question. Centre of mass sounds right, but the question then becomes, what mass for each vertex?
Why not use the average length of each edge that includes the vertex? This should compensate nicely for areas with a dense mesh.
You will have to recreate face information from the vertices (essentially a Delaunay triangulation).
If your vertices define a convex hull, you can pick any arbitrary point A inside the object. Treat your object as a collection of pyramidal prisms having apex A and each face as a base.
For each face, find the area Fa and the 2d centroid Fc; then the prism's mass is proportional to the volume (== 1/3 base * height (component of Fc-A perpendicular to the face)) and you can disregard the constant of proportionality so long as you do the same for all prisms; the center of mass is (2/3 A + 1/3 Fc), or a third of the way from the apex to the 2d centroid of the base.
You can then do a mass-weighted average of the center-of-mass points to find the 3d centroid of the object as a whole.
The same process should work for non-convex hulls - or even for A outside the hull - but the face-calculation may be a problem; you will need to be careful about the handedness of your faces.