locating cell containing point in irregular 3D grid - python

I have an irregular 3D grid which looks something like this:
Typical dimensions of the grid are 100/100/100 cells. Each cell is spatially defined by the coordinates of its 8 corner nodes. The 4 vertices of each face of a cell are not necessarily co-planar, so I represent each face as a pair of triangles and thus a cell as a polyhedron consisting of 12 triangles (2 per face). I am trying to locate the IJK index of the cell that contains an XYZ point using Python. I bisect the cell range sequentially in the I, J and K directions and test in which half of the grid the point lies, using the method described here: Testing whether a 3D point is inside a 3D polyhedron. Unfortunately, this does not work in some cases. In the above figure, point A is physically outside the grid but inside the current bisection range (defined by the brown dotted lines), while point B is inside the grid but outside the current range. I think the reason for this is that the triangles representing the faces of the cells within the current range (e.g. the large brown triangles in the figure) are not co-planar with the triangles that comprise the individual cell faces within that range (e.g. those shaded yellow, blue etc.). I have tried to show this in 2D below:
The current bisection range is shown by the brown dotted line and brown vertices. Initially, the red point is within the current range. We bisect in the X direction (bisection 1) and the red point is within the current range (dotted brown line), so we discard the right half. We now bisect in the Y direction (bisection 2) and the red point is outside this range, so we discard the top half. We eventually arrive at the final step, when we have a single index in each of the I and J directions. As shown here, this places the red point in the wrong cell.
I would appreciate any suggestions for an alternative algorithm to the one I am currently trying to implement. Stepping back, I am actually interested in calculating the faces within the grid intersected by a series of line segments, so I am using the "point in a polyhedron" method as an intermediate step. I looked at geomdl, which could represent each face as a NURBS object, but it does not seem to implement intersection between a ray and a NURBS surface. I also had a quick look at the Python bindings to CGAL, but that looked like a massive learning curve to climb, so I put that aside. Thanks in advance!
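For context, the per-face test I ultimately need is essentially a segment-triangle intersection; a minimal sketch of the standard Möller-Trumbore ray/triangle test restricted to a finite segment (my own rough sketch, not taken from geomdl or CGAL) would be:

import numpy as np

def segment_hits_triangle(p0, p1, a, b, c, eps=1e-12):
    """Return True if the segment p0->p1 crosses triangle (a, b, c)."""
    p0, p1, a, b, c = (np.asarray(v, dtype=float) for v in (p0, p1, a, b, c))
    d = p1 - p0                      # segment direction (not normalised)
    e1, e2 = b - a, c - a
    pvec = np.cross(d, e2)
    det = np.dot(e1, pvec)
    if abs(det) < eps:               # segment parallel to the triangle plane
        return False
    inv_det = 1.0 / det
    tvec = p0 - a
    u = np.dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:
        return False
    qvec = np.cross(tvec, e1)
    v = np.dot(d, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return False
    t = np.dot(e2, qvec) * inv_det
    return 0.0 <= t <= 1.0           # hit lies within the finite segment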

Related

Coordinates Python

I am facing a problem with sorting airfoil coordinates. In particular, given a set of unsorted coordinates, I have to sort them starting from the trailing edge on the upper surface. Below is the code I have developed, but as you can see the starting point does not match what I expect, and there are several oscillations, as shown in the reported figure (with a detail view; the starting point after the sort is shown in blue).
Can someone suggest what I am missing? How can I fix it?
Thanks in advance.
import numpy as np

def sort_airfoil(points):
    # Centroid of the (mirrored) x and the y coordinates
    x0 = np.mean(-points[:, 1])
    y0 = np.mean(points[:, 2])
    r = np.sqrt((-points[:, 1] - x0)**2 + (points[:, 2] - y0)**2)
    tempx = -points[:, 1]
    xmax = np.max(tempx)
    ind_max = np.where(tempx == xmax)
    ymax = np.max(points[ind_max, 2])
    ind_max_t = np.where((tempx > 0.95 * xmax) & (tempx < xmax))
    ymax_t = points[ind_max_t, 2]
    ymin = np.min(ymax_t)
    indx_temp = np.where(points[:, 2] == ymin)
    xmin = np.max(tempx[indx_temp])
    xmed = (xmin + xmax) / 2
    ymed = (ymin + ymax) / 2
    print(x0, y0)
    print(xmin, ymin)
    print((xmin + xmax) / 2, (ymin + ymax) / 2)
    # Angle of the trailing-edge midpoint, used as the starting direction
    angle0 = np.arctan2((ymed - y0), (xmed - x0))
    print("angle", angle0)
    # Polar angle of every point around the centroid
    angles = np.where((points[:, 2] - y0) > 0,
                      np.arccos((-points[:, 1] - x0) / r),
                      2 * np.pi - np.arccos((-points[:, 1] - x0) / r))
    angles = angles - angle0
    # Wrap angles into [0, 2*pi)
    for i in range(len(angles)):
        if angles[i] < 0:
            angles[i] = angles[i] + 2 * np.pi
        elif angles[i] > 2 * np.pi:
            angles[i] = angles[i] - 2 * np.pi
    mask = np.argsort(angles)
    x_sorted = points[mask, 1]
    y_sorted = points[mask, 2]
    points_new = np.zeros([len(points), 3])
    points_new[:, 0] = points[:, 0]
    points_new[:, 1] = x_sorted
    points_new[:, 2] = y_sorted
    return points_new
The issue comes from the algorithm itself: it only works when the points form a convex polygon. However, the shape is concave.
More specifically, the first sorted points (and the last ones) form zigzag-shaped lines because there are two sets of points (green arrows) interleaved with growing angles (red arrow) from the median point (red line).
Note that the points are horizontally flipped compared to the points gathered from the question. Thus the points are sorted clockwise.
One simple solution is to split the shape horizontally into several sets of points (e.g. 10 sets) so that each set forms a convex shape. Then, the parts can be merged to form the final shape. The merge consists of finding the points at the "edge" of each locally-sorted set of points (parts) and reordering the partially sorted array of points accordingly.
More specifically, the points of each part are split into 2 subsets: the upper ones and the lower ones. You can find them easily by selecting the 2 left-most points of a right part together with the right-most points of a left part. The 2 top-most points need to be connected to each other, and the same goes for the 2 bottom-most points. Thus, the sequence of the two upper sets of points needs to be reordered so they are contiguous, and the same for the lower part.
Here is an example:
Note that if you are unsure how to split the points into parts so that each one forms a convex-shaped set of points, then you can split the shape into n parts, check whether each set of points forms a convex shape by computing a convex hull (e.g. using a Graham scan), and evenly split the parts that are concave (recursively). This is quite expensive, but more robust.
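For the convexity check mentioned above, a minimal sketch using scipy.spatial.ConvexHull (Qhull under the hood, rather than a hand-written Graham scan) might look like this; the function name is my own and the points are assumed to be an (N, 2) NumPy array:

import numpy as np
from scipy.spatial import ConvexHull

def is_convex_set(points_2d):
    """Rough test: the point set outlines a convex shape if every point is a
    vertex of its own convex hull. Points lying exactly on a hull edge would
    make this return False, so treat the result as a heuristic."""
    hull = ConvexHull(points_2d)
    return len(set(hull.vertices)) == len(points_2d)

Parts for which this returns False would then be split further, as described above.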

how to intersect two planes in python and export the coordinates of the intersection

I have a bunch of points (x, y and z) in a 3d space and want to extract some points out of them. I copied a simplified example with two arrays which are linked together:
all_points = [[np.array([[6.8, 1., 0.1], [6.8, 3., 0.1], [6.8, 6., 0.1],
                         [4.8, 1., 2.], [4.8, 3., 2.], [4.8, 6., 2.],
                         [3.8, 1., 3.], [3.8, 3., 3.], [3.8, 6., 3.],
                         [2.8, 1., 4.1], [2.8, 3., 4.1], [2.8, 6., 4.1]]),
               np.array([[5., 1., 2.], [5., 3., 2.], [5., 6., 2.],
                         [4., 1., 3.], [4., 3., 3.], [4., 6., 3.],
                         [6., 1., 3.], [6., 3., 3.], [6., 6., 3.],
                         [7., 1., 4.], [7., 3., 4.], [7., 6., 4.],
                         [3., 1., 4.], [3., 3., 4.], [3., 6., 4.]])]]
Firstly, I want to check whether an array is normal or not. If I sort a normal array based on z values, the x values of the sorted array will be increasing or decreasing. The first array (blue dots in the uploaded figure) clearly shows a normal set. For normal arrays I just do a simple task and export four points showing their corners (shown by yellow and green arrows in my figure). These points are found based on the minimum and maximum of x, y and z. The following code gives me the four corners of the normal sets:
four_corners = []
for points in all_points:
    for sub_points in points:
        # Sort rows by z (field f2), then y (field f1)
        sorted_sub = np.sort(sub_points.view('i8,i8,i8'), order=['f2', 'f1'], axis=0).view('float')
        le_st = sorted_sub[np.where(sorted_sub[:, 2] == sorted_sub[0, 2])]
        le_st = len(le_st)
        le_en = sorted_sub[np.where(sorted_sub[:, 2] == sorted_sub[-1, 2])]
        le_en = len(le_en)
        cor = np.array([sorted_sub[0, :], sorted_sub[int(le_st - 1), :],
                        sorted_sub[-1, :], sorted_sub[-le_en, :]])
        four_corners.append(cor)
In abnormal sets (black squares in my figure) usually some points are very close to a normal set (a limit can be defined) and then they diverge. I want to extract four points, but by creating two planes. The first plane is created using three of the four corner points found for the normal points. The second plane is created using three of the abnormal points that are not close to the normal points (highlighted by a red line in my figure). Then, I want to find the intersection line of the two planes and find x and z at the minimum and maximum of y (1 and 6) along that intersection. The y value of all my corner points (normal or abnormal) is either the minimum or the maximum value. The other two points are created by substituting the y and z values of the two corner points coming from the normal plane that have higher z values (highlighted by yellow arrows) into the equation of the plane of the abnormal set. I only know how to create surfaces based on this solution. In reality I may have several normal and abnormal sets that are all linked to the normal ones. I appreciate in advance any help with doing this in Python.
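For the plane-intersection part, a minimal NumPy sketch of what I have in mind (my own helper names: each plane built from three points, the two planes intersected into a line, and that line evaluated at a given y, e.g. y = 1 and y = 6) would be:

import numpy as np

def plane_from_points(p1, p2, p3):
    """Plane through three points, returned as (unit normal n, d) with n.x = d."""
    n = np.cross(np.asarray(p2, float) - p1, np.asarray(p3, float) - p1)
    n = n / np.linalg.norm(n)
    return n, np.dot(n, p1)

def plane_plane_intersection(n1, d1, n2, d2):
    """Line of intersection of two planes as (point_on_line, direction)."""
    direction = np.cross(n1, n2)
    if np.linalg.norm(direction) < 1e-12:
        raise ValueError("Planes are parallel")
    # Solve for a point lying on both planes; the third equation pins the point
    # to the plane through the origin perpendicular to the line.
    A = np.array([n1, n2, direction])
    b = np.array([d1, d2, 0.0])
    return np.linalg.solve(A, b), direction

def point_at_y(point, direction, y):
    # Assumes the intersection line is not perpendicular to the y axis
    t = (y - point[1]) / direction[1]
    return point + t * direction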

3D normal vector given 2D corner coordinates of an equilateral triangle

I'm interested in finding the normal vector of a plane given 3 2D projection/pixel coordinates of the corners of an equilateral triangle.
I have to modify my actual use case because I'm not allowed to share the details: In my use case I have a poster of an equilateral triangle pinned to the wall. I can detect the 3 corners in pixel coordinates and want to calculate the normal vector of the triangle coming straight out of the wall.
It's my intuition that there should be an analytical solution for this and I found similar questions but fail to apply them to my use case. Here are a few of my thoughts:
I thought about bringing the 2D pixel coordinates to 3D by adding a third dimension filled with zeros, putting them on the XY-plane. Doing the same with a reference triangle of side length 1 gives me the source and destination for finding their homography. But because all 6 z-values are 0, I don't think this would work.
I know the distance between all 3 points is the same in 3D which I want to use as a constraint to solve for the problem.
The normal can be calculated in 3D by taking the cross product of 2 sides of the triangle
Using openCV's findHomography() function requires at least 4 3D points to solve and I don't believe I can just take the mean coordinates of my 3 points in pixel space to find the center (aka a 4th point), correct?
openCV also comes with a getAffineTransform() function that works with 3 pairs of 2D points. I tried to use it like this:
# equilateral reference triangle
tri_ref = np.array([
    [0.0, 0.0],
    [0.5, 0.866],
    [1.0, 0.0]
], dtype=np.float32)

# detected corners in pixel space
tri_pixel = np.array([
    [0.397, 0.317],
    [0.441, 0.848],
    [0.698, 0.324]
], dtype=np.float32)

A = cv2.getAffineTransform(tri_pixel, tri_ref)
img2 = cv2.warpAffine(img1, A, (400, 300))
However, the transformation looks completely wrong and I still would need to know how I can calculate the normal from the transformation matrix.
You can't really get a 100% correct 3D normal vector from three 2D points.
For example, consider a pinhole camera system: draw a line from a point to the camera, then randomly pick another point on that line; the picked point still produces the same image.
If we add the condition that the distances between all three points are the same, you may still have up to 4 possible answers for the vector. (Imagine two fixed points: the possible positions of the last point that have the same distance to the two fixed points form a circle. Draw a line from the camera towards the circle; if it cuts through the circle there are two possible positions of the last point, and only when the line is tangent to the circle is there a single position, which usually does not happen. Changing which points are fixed gives up to 4 possible configurations of the three points.)
You can't add a 4th point by just taking the mean coordinates of the 3 points, because you do not know where the real centroid projects onto the image plane.

Divide a region into parts efficiently Python

I have a square grid with some points marked off as being the centers of the subparts of the grid. I'd like to be able to assign each location within the grid to the correct subpart. For example, if the subparts of the region were centered on the black dots, I'd like to be able to assign the red dot to the region in the lower right, as it is the closest black dot.
Currently, I do this by iterating over each possible red dot and comparing its distance to each of the black dots. However, the width, length, and number of black dots in the grid are very large, so I'd like to know if there's a more efficient algorithm.
My particular data is formatted as such, where the numbers are just placeholders to correspond with the given example:
black_dots = [(38, 8), (42, 39), (5, 14), (6, 49)]
grid = [[0 for i in range(0, 50)] for j in range(0, 50)]
For reference, in the sample case, I hope to fill grid up with the integers 1, 2, 3, 4, depending on whether each cell is closest to the 1st, 2nd, 3rd, or 4th entry in black_dots, ending up with something like the following picture where each integer corresponds to a color (dots are left on for show).
To summarize, is there / what is the more efficient way to do this?
You can use a breadth-first traversal to solve this problem.
Create a first-in, first-out queue. (A queue makes a traversal breadth-first.)
Create a Visited mask indicating whether a cell in your grid has been added to the queue or not. Set the mask to false.
Create a Parent mask indicating what black dot the cell ultimately belongs to.
Place all the black dots into the queue, flag them in the Visited mask, and assign them unique ids in the Parent mask.
Begin popping cells from the queue one by one. For each cell, iterate over the cell's neighbours. Place each neighbour that has not yet been visited into the queue, flag it in Visited, and set its value in Parent equal to that of the cell you just popped.
Continue until the queue is empty.
The breadth-first traversal makes a wave which expands outward from each source cell (black dot). Since the waves all travel at the same speed across your grid, each wave gobbles up those cells closest to its source.
This solves the problem in O(N) time.
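As an illustration, here is a minimal Python sketch of this multi-source breadth-first traversal, using the black_dots list and grid size from the question (the helper name and the 4-connected neighbourhood are my own choices). Note that a 4-connected BFS measures distance along the grid rather than true Euclidean distance, so assignments near region boundaries can differ slightly from exact nearest-dot assignment.

from collections import deque

def assign_regions(black_dots, width, height):
    """Multi-source BFS: label every cell with the 1-based id of the black dot
    whose wave reaches it first."""
    parent = [[0] * height for _ in range(width)]
    visited = [[False] * height for _ in range(width)]
    queue = deque()

    # Seed the queue with all black dots, each with a unique id
    for idx, (x, y) in enumerate(black_dots, start=1):
        parent[x][y] = idx
        visited[x][y] = True
        queue.append((x, y))

    while queue:
        x, y = queue.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < width and 0 <= ny < height and not visited[nx][ny]:
                visited[nx][ny] = True
                parent[nx][ny] = parent[x][y]
                queue.append((nx, ny))
    return parent

# Example with the data from the question
black_dots = [(38, 8), (42, 39), (5, 14), (6, 49)]
grid = assign_regions(black_dots, 50, 50)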
If I understand correctly, what you really need is to construct a Voronoi diagram of your centers:
https://en.m.wikipedia.org/wiki/Voronoi_diagram
It can be constructed very efficiently, with computational complexity similar to that of computing a convex hull.
The Voronoi diagram gives you the optimal polygons surrounding your centers, which delimit the regions closest to each center.
Having the Voronoi diagram, the task is reduced to detecting in which polygon each red dot lies. Since the Voronoi cells are convex, you need an algorithm to decide whether a point is inside a convex polygon. However, traversing all polygons has complexity O(n).
There are several algorithms to accelerate the point location so it can be done in O(log n):
https://en.m.wikipedia.org/wiki/Point_location
See also
Nearest Neighbor Searching using Voronoi Diagrams
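As a practical shortcut (not the point-location approach described above, but one that produces the same nearest-center labelling the Voronoi diagram encodes), a KD-tree query over all grid cells can be used. This sketch assumes SciPy is available and uses the 50x50 grid and dots from the question:

import numpy as np
from scipy.spatial import cKDTree

black_dots = np.array([(38, 8), (42, 39), (5, 14), (6, 49)])
tree = cKDTree(black_dots)

# All (i, j) cell coordinates of a 50x50 grid, in row-major order
cells = np.argwhere(np.ones((50, 50), dtype=bool))

# Index (0-based) of the nearest black dot for every cell
_, nearest = tree.query(cells)
labels = (nearest + 1).reshape(50, 50)   # 1-based labels, one per cell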
The "8-way" Voronoi diagram can be constructed efficiently (in linear time wrt the number of pixels) by a two-passes scanline process. (8-way means that distances are evaluated as the length of the shortest 8-connected path between two pixels.)
Assign every center a distinct color and create an array of distances of the same size as the image, initialized with 0 at the centers and "infinity" elsewhere.
In a top-down/left-right pass, update the distances of all pixels as being the minimum of the distances of the four neighbors W, NW, N and NE plus one, and assign the current pixel the color of the neighbor that achieves the minimum.
In a bottom-up/right-left pass, update the distances of all pixels as being the minimum of the current distance and the distances of the four neighbors E, SE, S, SW plus one, and assign the current pixel the color of the neighbor that achieves the minimum (or keep the current color).
It is also possible to compute the Euclidean Voronoi diagram efficiently (in linear time), but this requires a more sophisticated algorithm. It can be based on the wonderful paper "A General Algorithm for Computing Distance Transforms in Linear Time" by A. Meijster, J.B.T.M. Roerdink and W.H. Hesselink, which must be enhanced with some accounting of the neighbor that causes the smallest distance.
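A rough Python sketch of the two-pass scanline process described above (plain Python loops for clarity, so it is slow on large images; the function name is my own):

import numpy as np

def chessboard_voronoi(centers, shape):
    """Two-pass scanline labelling: every pixel ends up with the label of a
    nearest center in 8-way (chessboard) distance, as described above."""
    INF = 10**9
    h, w = shape
    dist = np.full(shape, INF, dtype=np.int64)
    label = np.zeros(shape, dtype=np.int64)
    for idx, (r, c) in enumerate(centers, start=1):
        dist[r, c] = 0
        label[r, c] = idx

    def relax(r, c, nr, nc):
        if 0 <= nr < h and 0 <= nc < w and dist[nr, nc] + 1 < dist[r, c]:
            dist[r, c] = dist[nr, nc] + 1
            label[r, c] = label[nr, nc]

    # Top-down / left-right pass: W, NW, N, NE neighbours
    for r in range(h):
        for c in range(w):
            for dr, dc in ((0, -1), (-1, -1), (-1, 0), (-1, 1)):
                relax(r, c, r + dr, c + dc)

    # Bottom-up / right-left pass: E, SE, S, SW neighbours
    for r in range(h - 1, -1, -1):
        for c in range(w - 1, -1, -1):
            for dr, dc in ((0, 1), (1, 1), (1, 0), (1, -1)):
                relax(r, c, r + dr, c + dc)

    return label, dist

# Example: centers given as (row, col) pairs on a 50x50 grid
labels, dist = chessboard_voronoi([(38, 8), (42, 39), (5, 14), (6, 49)], (50, 50))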

Triangulation patterns in .ifc file format using coordinates and indexes

I've posted this in another forum as well due to the mathematical nature of the issue:
forum post
I have an .ifc file in which the raw data exported describes a wall in the xy plane by a set of coordinates and their corresponding indexes according to the link explanation:
Explanation
I have a txt where the data is divided into the coordinates in xyz space, then indexes and some other data.
I was hoping that someone can help me understand how to link the indexes to their corresponding coordinates. There are 164 coordinate pairs and 324 index pairs so it doesn't make sense to me that each index relates to only 1 coordinate pair.
The goal is to establish a relationship between indexes and coordinates such that this type of data can output the wall thickness, which is in this case '10'. I was thinking that (according to the link above) by taking the first triangle described, it should describe the edge of the wall in 3D and therefore give us one of its sides as the shortest segment in the wall which is the thickness.
I received an answer in the mentioned forum post, that I should
"...expanding out each coordinate in terms of X's, Y's, and Z's [instead of (X,Y,Z) triples) and then use every index triple to get the actual coordinate for the individual coordinate instead of one triple.
So for example you have X[], Y[] and Z[] and you have an index (a,b,c) then you find X[a], Y[b], and Z[c] not Point(a,b,c)... "
I didn't quite understand this explanation, and would appreciate any help or further explanation in order to achieve my goal.
Thank you
Let's start with the coordinates (IfcCartesianPointList3D): each one is a triplet, resulting in a point with (x, y, z) coordinates.
Then the IfcTriangulatedFaceSet uses indices to construct triangles. It has 2 indexing modes: direct and indirect via PnIndex. The indexing mode is determined by the existence of an array for PnIndex (attribute number 5). Take note that I call these variants direct and indirect - they aren't mentioned that way in the IFC documentation.
Direct indexing
PnIndex is not set. Let's look at a (simple and constructed) example:
#100=IFCCARTESIANPOINTLIST(((0,0,0),(1,0,0),(1,1,0),(0,1,0)));
#101=IFCTRIANGULATEDFACESET(
    /*reference to the points*/ #100,
    /*no normals*/ $,
    /*no indication if closed or open*/ $,
    /*coordinate indices*/ ((1,2,3),(1,3,4)),
    /*no PnIndex*/ ());
This describes a square lying in the x-y-plane. Each entry in attribute CoordIndex is a triplet giving a one-based index into a point in the IfcCartesianPointList. This means there are two triangles constructed from the following points:
(0,0,0) (1,0,0) (1,1,0)
(0,0,0) (1,1,0) (0,1,0)
Indirect indexing
Let's build further on the previous example:
#100=IFCCARTESIANPOINTLIST(((0,0,0),(1,0,0),(1,1,0),(0,1,0)));
#101=IFCTRIANGULATEDFACESET(
    /*reference to the points*/ #100,
    /*no normals*/ $,
    /*no indication if closed or open*/ $,
    /*coordinate indices*/ ((1,2,3),(1,3,4)),
    /*PnIndex*/ (2,3,4,1));
This time there is PnIndex set. It adds a level of indirection to access the points. Triplets from CoordIndex point into PnIndex (1-based). The value found in PnIndex is then used to access the IfcCartesianPointList.
So for the first triangle we have: (1,2,3) in CoordIndex. These point to 2, 3 and 4 in PnIndex. These result in the following points from the point list: (1,0,0) (1,1,0) (0,1,0)
Repeating the procedure for the second triangle (1,3,4), we get values 2, 4, 1 from PnIndex and the following points: (1,0,0) (0,1,0) (0,0,0)
It is again a square, but this time with a different triangulation.
Now if you want to know your wall thickness you will need to calculate the extents from the resulting geometry. If your wall is aligned with the coordinate system axes this is easy (get the difference between the smallest and largest X, Y and Z). If it is not, you might need to transform the points or look further into 3D-extent calculations (my knowledge ends there).
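As an illustration (my own sketch, not part of the IFC specification), resolving the indices into actual triangle coordinates, for both the direct and the indirect mode described above, and taking axis-aligned extents could look like this in Python:

import numpy as np

def resolve_triangles(point_list, coord_index, pn_index=None):
    """Turn 1-based CoordIndex triples into triangles of (x, y, z) points.
    If pn_index is given, CoordIndex points into PnIndex, which in turn
    points into the point list (the 'indirect' mode described above)."""
    points = np.asarray(point_list, dtype=float)
    triangles = []
    for a, b, c in coord_index:
        if pn_index is not None:
            a, b, c = pn_index[a - 1], pn_index[b - 1], pn_index[c - 1]
        triangles.append(points[[a - 1, b - 1, c - 1]])
    return np.array(triangles)

# Data from the example above
point_list = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
coord_index = [(1, 2, 3), (1, 3, 4)]

tris_direct = resolve_triangles(point_list, coord_index)
tris_indirect = resolve_triangles(point_list, coord_index, pn_index=[2, 3, 4, 1])

# Axis-aligned extents of the geometry (a usable wall thickness only if the
# wall is aligned with the coordinate axes)
all_pts = tris_direct.reshape(-1, 3)
extents = all_pts.max(axis=0) - all_pts.min(axis=0)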
In a triangulation, the number of triangles is roughly twice the number of vertices.
A wall (e.g. a rectangle) may be described by two triangles that share an edge and the two vertices of this edge.
Instead of describing the whole model triangle by triangle (each with its three vertices) or edge by edge, it is cheaper, and avoids repeating vertex data, to give each vertex an index and define a triangle by the three indices of its vertices. This is usually called "indexed rendering".
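A tiny sketch of the idea with made-up data: a rectangle stored as four shared vertices plus two index triples, so no vertex coordinates are repeated.

# Four shared vertices of a rectangle ...
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
# ... and two triangles referring to them by index instead of repeating coordinates
triangles = [(0, 1, 2), (0, 2, 3)]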
