I am trying to do this but in 3D, using a 2D circle instead of a box. I have a line running between the two points [(0, 0, 0), (3, 4, 5)] and I want to see if it intersects the circle.
Is it possible to test for a path intersection on a 2D object plotted on a 3D axis? From the linked example above, I want to do path.intersects_circle, where I define the circle as:
circle = Circle((2, 1), 0.5)
ax.add_patch(circle)
art3d.pathpatch_2d_to_3d(circle, z=1, zdir="x")
I have had a look through the Bbox documentation and it seems that I can't use that method for a circle.
This sounds more like an algebra problem than a matplotlib one.
This is how I understand your question:
you have a circle at (x=2,y=1) with a radius of r=0.5
this circle is located in a plane at a constant z=1
1.) You need to determine where your vector pierces the plane which is parallel to the x,y-plane and at z=1. Parametrizing the line as P(t) = t*(3, 4, 5), it reaches z=1 at t = 1/5, so for the vector you specify in your question this intersection is at:
x = 3./5.
y = 4./5.
z = 1.
2.) You need to determine if this intersection falls into the part of the plane covered by the circle. The distance from (0.6, 0.8) to the circle's center at (2, 1) is ((2 - 0.6)**2 + (1 - 0.8)**2)**0.5 = 2.**0.5 ≈ 1.41, which is larger than the radius of 0.5. Hence your straight line does not pierce the circle.
All this being said, I would recommend implementing an algebraic check based on the intersection with the plane and determining if this intersection is part of the circle. And only then using matplotlib.
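A minimal sketch of such an algebraic check, assuming the circle lies in the plane z=1 as above (the helper name line_intersects_circle is mine, not a matplotlib API):

import numpy as np

def line_intersects_circle(p0, p1, center, radius, z_plane):
    # Find where the segment p0->p1 crosses the plane z=z_plane,
    # then test that crossing point against the circle.
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    if d[2] == 0:                    # segment parallel to the plane
        return False
    t = (z_plane - p0[2]) / d[2]     # parameter of the plane crossing
    if not 0 <= t <= 1:              # crossing lies outside the segment
        return False
    hit = p0 + t * d
    return np.hypot(hit[0] - center[0], hit[1] - center[1]) <= radius

print(line_intersects_circle((0, 0, 0), (3, 4, 5), (2, 1), 0.5, 1))
# False: the segment meets z=1 at (0.6, 0.8), about 1.41 from the center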
I have a 3D scatter plot of some data generated using matplotlib Axes 3D. I need to decide if it lies on a plane or a curve. I am trying to understand the visual differences that would indicate plane or curve. My guess is that if there are points along a wide range of z values then it lies on a curve, because if it lay on a plane, the points would be spread only over a flat surface. Even if my guess is correct, I am only right by virtue of eliminating the only other possibility, so how would I tell specifically whether the data lies on a curve?
If the plane is tilted you will also find a wide range of z values.
Assuming you have your 3D points in an nx3 array, you can fit a plane to them like this (the singular values/vectors from the SVD play the role of eigenvalues/eigenvectors of the covariance matrix here):
import numpy as np

centroid = np.mean(points, axis=0)
# Rows of vt are the principal directions; the singular values
# measure the spread of the points along each of them
_, singular_values, vt = np.linalg.svd(points - centroid, full_matrices=False)
normal = vt[2]                     # direction of least spread = plane normal
dispersion = singular_values[2]    # spread of the points along the normal
The plane that best approximates the scattered points is defined by a point (centroid) and its normal vector.
Then, according to the dispersion value along the normal axis, you can decide whether it is low enough (the points lie on a plane) or too high (they don't).
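For example, with synthetic data (choosing the threshold between "plane" and "curve" is up to you):

import numpy as np

rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(500, 2))
# A tilted plane z = 0.3x + 0.5y versus a paraboloid z = x^2 + y^2
plane_pts = np.column_stack([xy[:, 0], xy[:, 1], 0.3 * xy[:, 0] + 0.5 * xy[:, 1]])
curve_pts = np.column_stack([xy[:, 0], xy[:, 1], xy[:, 0]**2 + xy[:, 1]**2])

for name, points in (("plane", plane_pts), ("curve", curve_pts)):
    centroid = np.mean(points, axis=0)
    _, s, vt = np.linalg.svd(points - centroid, full_matrices=False)
    print(name, "dispersion:", s[2])   # ~0 for the plane, clearly larger for the curve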
I'm interested in finding the normal vector of a plane given 3 2D projection/pixel coordinates of the corners of an equilateral triangle.
I have to modify my actual use case because I'm not allowed to share the details: In my use case I have a poster of an equilateral triangle pinned to the wall. I can detect the 3 corners in pixel coordinates and want to calculate the normal vector of the triangle coming straight out of the wall.
It's my intuition that there should be an analytical solution for this and I found similar questions but fail to apply them to my use case. Here are a few of my thoughts:
I thought about bringing the 2D pixel coordinates to 3D by adding a third dimension filled with zeros, putting them on the XY-plane. Doing the same with a reference triangle of side length 1 gives me the source and destination for finding their homography. But because all 6 z-values are 0, I don't think this would work.
I know the distance between all 3 points is the same in 3D which I want to use as a constraint to solve for the problem.
The normal can be calculated in 3D by taking the cross product of 2 sides of the triangle (see the snippet after this list).
Using openCV's findHomography() function requires at least 4 point correspondences to solve, and I don't believe I can just take the mean coordinates of my 3 points in pixel space to find the center (i.e. a 4th point), correct?
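Regarding the cross-product normal: it is a one-liner once 3D corner coordinates are known. A sketch with hypothetical corners (my detection only gives 2D points, so these values are made up):

import numpy as np

# hypothetical 3D corners of an equilateral triangle (made-up values)
p0 = np.array([0.0, 0.0, 0.0])
p1 = np.array([0.5, 0.866, 0.0])
p2 = np.array([1.0, 0.0, 0.0])
n = np.cross(p1 - p0, p2 - p0)
normal = n / np.linalg.norm(n)   # unit normal of the triangle's plane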
openCV also comes with a getAffineTransform() function that works with 3 pairs of 2D points. I tried to use it like this:
import cv2
import numpy as np

# equilateral reference triangle (side length 1)
tri_ref = np.array([
    [0.0, 0.0],
    [0.5, 0.866],
    [1.0, 0.0],
], dtype=np.float32)

# detected corners in pixel space
tri_pixel = np.array([
    [0.397, 0.317],
    [0.441, 0.848],
    [0.698, 0.324],
], dtype=np.float32)

A = cv2.getAffineTransform(tri_pixel, tri_ref)
img2 = cv2.warpAffine(img1, A, (400, 300))   # img1 is the input image
However, the transformation looks completely wrong, and I would still need to know how to calculate the normal from the transformation matrix.
You can't really get a 100% correct 3D normal vector from three 2D points.
For example, consider a pinhole camera system: draw a ray from one of the points through the camera center, then pick any other point along that ray; the image stays exactly the same.
If we add the condition that the distances between all three points are equal, there can still be up to 4 possible answers. To see why, fix two of the points: the positions of the third point that keep the same distance to both fixed points form a circle. A ray from the camera toward that circle either misses it, is tangent to it, or cuts it in two points. Only the tangent case gives a unique position, and most of the time that does not happen; considering the different choices of which two points to fix, there are at most 4 possible configurations of the three points.
You also can't add the 4th point by just taking the mean coordinates of the 3 points, because under perspective projection you do not know where the real centroid projects to on the image plane.
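If calibrated camera intrinsics are available, OpenCV's solveP3P enumerates exactly those up-to-4 candidate poses. A sketch, where the intrinsics matrix K and the pixel corners are placeholders you would replace with your calibration and detections:

import cv2
import numpy as np

# Equilateral triangle of side 1, defined in its own plane (z = 0)
obj_pts = np.array([[0.0, 0.0, 0.0],
                    [0.5, 0.866, 0.0],
                    [1.0, 0.0, 0.0]], dtype=np.float32)

# Placeholder detected corners in pixels (replace with real detections)
img_pts = np.array([[254.0, 203.0],
                    [282.0, 543.0],
                    [447.0, 207.0]], dtype=np.float32)

# Placeholder intrinsics (replace with your camera calibration)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)   # assuming no lens distortion

n_solutions, rvecs, tvecs = cv2.solveP3P(obj_pts, img_pts, K, dist,
                                         flags=cv2.SOLVEPNP_P3P)
for rvec in rvecs:
    R, _ = cv2.Rodrigues(rvec)
    # The triangle lies in its own z=0 plane, so its normal in camera
    # coordinates is the third column of the rotation matrix
    print(R[:, 2])

You still need extra information (e.g. a rough expected orientation) to pick among the returned solutions.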
I have a 3D point P(x, y, z) and a polygon S defined by an arbitrary number of 3D points. The polygon is not necessarily convex (so in my opinion the solutions provided here do not answer my question), but P is in the same plane as S. Now, I need to determine if P lies inside S.
For example:
polygon = np.array([[6173.953125 , 9689.90136719, 298.03326416],
[6173.95410156, 9689.90136719, 298.09350586],
[6173.89355469, 9689.86621094, 298.23690796],
[6173.89355469, 9689.86621094, 298.23690796],
[6173.83496094, 9689.83398438, 298.5083313 ],
[6173.89453125, 9689.86816406, 298.38320923],
[6173.89697266, 9689.87011719, 298.44348145],
[6173.89697266, 9689.87011719, 298.44348145],
[6173.953125 , 9689.90136719, 298.03326416],
[6173.89355469, 9689.86621094, 298.23690796],
...
])
point = np.array([6171.37079656, 9688.35796064, 309.00229108])
Example Image: Point outside Polygon:
In this example, it is obvious that the result should be "False" (point is outside).
Using the point:
point2 = np.array([6173.83496094, 9689.83398438, 297.72579346])
the result should be "True" (point is in polygon).
Example Image: Point inside Polygon:
I know I basically have to solve a simple point-in-polygon (PiP) problem. However, is there a package that includes such a function directly? Or how can I transform the points so that I can apply the PiP algorithms provided by shapely or similar packages?
When P is guaranteed to be in the same plane as S, make a (virtual) projection of the polygon and the point onto any coordinate plane that is not perpendicular to the polygon's plane (a perpendicular one would collapse the polygon to a line segment). For example, to project onto the XZ plane (suitable for the second example), just ignore the Y components.
Then use any algorithm intended for 2D (e.g. pnpoly).
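If you want to stay with shapely, here is a sketch of that projection approach. The axis to drop is chosen from the polygon's normal, estimated as the sum of cross products of consecutive vertices (twice the polygon's vector area, which also works for non-convex polygons):

import numpy as np
from shapely.geometry import Point, Polygon

def point_in_3d_polygon(point, polygon):
    # Sum of cross products of consecutive vertices is twice the
    # polygon's vector area; its direction is the plane normal
    normal = np.zeros(3)
    for p, q in zip(polygon, np.roll(polygon, -1, axis=0)):
        normal += np.cross(p, q)
    drop = np.argmax(np.abs(normal))           # axis to ignore
    keep = [i for i in range(3) if i != drop]
    return Polygon(polygon[:, keep]).contains(Point(*point[keep]))

print(point_in_3d_polygon(point2, polygon))    # expected True for the second example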
It might seem a bit odd that I am asking for Python code to calculate the area of a polygon from a list of (x, y) coordinates, given that solutions have been offered on Stack Overflow in the past. However, I have found that all the solutions provided are sensitive to the order of the list of (x, y) coordinates given. For example, with the code below to find the area of a polygon:
def segments(p):
    # Consecutive vertex pairs, wrapping back around to the first point
    return zip(p, p[1:] + [p[0]])

def area(p):
    # Shoelace formula: correct only if the vertices are in boundary order
    return 0.5 * abs(sum(x0 * y1 - x1 * y0
                         for ((x0, y0), (x1, y1)) in segments(p)))

coordinates1 = [(0.5, 0.5), (1.5, 0.5), (0.5, 1.5), (1.5, 1.5)]
coordinates2 = [(0.5, 0.5), (1.5, 0.5), (1.5, 1.5), (0.5, 1.5)]
print("coordinates1", area(coordinates1))
print("coordinates2", area(coordinates2))
This returns
coordinates1 0.0
coordinates2 1.0  # this is the correct area
for the same set of coordinates, just in a different order. How would I correct this in order to get the area of the full non-intersecting polygon from a list of (x, y) coordinates in arbitrary order?
EDIT: I realise now that there can be multiple non-intersecting polygons from a set of coordinates. Basically I am using scipy.spatial.Voronoi to create Voronoi cells and I wish to calculate the area of the cells once I've fed the coordinates to the scipy Voronoi function. Unfortunately, the function doesn't always output the coordinates in the order that allows me to calculate the correct area.
Several non-intersecting polygons can be created from a random list of coordinates (depending on its order), and each polygon will have a different area, so it is essential that you specify the order of the coordinates to build the polygon (see attached picture for an example).
The Voronoi cells are convex, so that the polygon is unambiguously defined.
You can compute the convex hull of the points, but as there are no reflex vertices to be removed, the procedure is simpler.
1) sort the points by increasing abscissa; in case of ties, sort on ordinates (this is a lexicographical ordering);
2) consider the straight line from the first point to the last and split the point sequence in a left and a right subsequence (with respect to the line);
3) the requested polygon is the concatenation of the left subsequence and the right one, reversed.
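A sketch of that procedure in Python, reusing the area() function from the question:

def order_convex(points):
    pts = sorted(points)                     # lexicographic sort: by x, then y
    first, last = pts[0], pts[-1]
    def side(p):                             # > 0 if p lies left of the line first -> last
        return ((last[0] - first[0]) * (p[1] - first[1])
                - (last[1] - first[1]) * (p[0] - first[0]))
    left = [p for p in pts if side(p) > 0]
    right = [p for p in pts if side(p) <= 0]
    return left + right[::-1]                # out along one chain, back along the other

print(area(order_convex([(0.5, 0.5), (1.5, 0.5), (0.5, 1.5), (1.5, 1.5)])))  # 1.0

Alternatively, since Voronoi cells are convex, scipy.spatial.ConvexHull(cell_points).volume gives the area directly (in 2D, volume is the enclosed area and area is the perimeter).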
I need a way to characterize the size of sets of 2-D points, so I can determine whether to render them as individual points in a space or as representative polygons, dependent on the scale of the viewport. I already have an algorithm to calculate the convex hull of the set to produce the representative polygon, but I need a way to characterize its size. One obvious measure is the maximum distance between points on the convex hull, which is the diameter of the set. But I'm really more interested in the size of its cross-section perpendicular to its diameter, to figure out how narrow the bounding polygon is. Is there a simple way to do this, given the sorted list of vertices and the indices of the furthest points (ideally in Python)?
Or alternatively, is there an easy way to calculate the radii of the minimal area bounding ellipse of a set of points? I have seen some approaches to this problem, but nothing that I can readily convert to Python, so I'm really looking for something that's turnkey.
You can compute:
the size of its cross-section perpendicular to its diameter
with the following steps:
Find the convex hull
Find the two points a and b which are furthest apart
Find the direction vector d = (a - b).normalized() between those two
Rotate your axes so that this direction vector lies horizontal, using the matrix:
[ d.x, d.y]
[-d.y, d.x]
Find the minimum and maximum y value of points in this new coordinate system. The difference is your "width"
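A sketch of these steps, using scipy.spatial.ConvexHull for step 1 (points is an nx2 numpy array; a brute-force furthest-pair search is fine for hull-sized inputs):

import numpy as np
from scipy.spatial import ConvexHull

def diameter_and_width(points):
    hull = points[ConvexHull(points).vertices]          # step 1: hull vertices
    diffs = hull[:, None, :] - hull[None, :, :]         # step 2: furthest pair
    dists = np.linalg.norm(diffs, axis=2)
    i, j = np.unravel_index(dists.argmax(), dists.shape)
    diameter = dists[i, j]
    d = (hull[i] - hull[j]) / diameter                  # step 3: unit direction
    rot = np.array([[d[0], d[1]],                       # step 4: rotation matrix
                    [-d[1], d[0]]])
    rotated = hull @ rot.T
    width = rotated[:, 1].max() - rotated[:, 1].min()   # step 5: y-extent
    return diameter, width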
Note that this is not a particularly good definition of "width" - a better one is:
The minimal perpendicular distance between two distinct parallel lines each having at least one point in common with the polygon's boundary but none with the polygon's interior
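For a convex polygon this minimal width is always attained with one of the two parallel lines flush against a hull edge, so it is enough to scan the edge directions. A sketch (hull is an nx2 array of hull vertices in boundary order, as scipy's ConvexHull returns them):

import numpy as np

def min_width(hull):
    best = np.inf
    for k in range(len(hull)):
        e = hull[(k + 1) % len(hull)] - hull[k]
        e = e / np.linalg.norm(e)
        perp = np.array([-e[1], e[0]])       # unit normal of this edge
        span = (hull - hull[k]) @ perp       # signed distances from the edge line
        best = min(best, span.max() - span.min())
    return best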
Another useful definition of size might be twice the average distance between points on the hull and the center:
center = convexhullpoints.mean(axis=0)
size = 2 * np.linalg.norm(convexhullpoints - center, axis=1).mean()
(with convexhullpoints an nx2 numpy array of the convex hull vertices).