Compute distance between combinations of points in a grid - python

I am looking for an efficient solution to the following problem. It should work with Python, but the answer does not have to be in Python.
I have a 2D matrix; each element of the matrix represents a point in a 2D, orthogonal grid. I want to compute the shortest distance between pairs of points in the grid. This would be trivial if there were no "obstacles" in the grid.
A figure helps to explain:
Each cell in the figure is one element of the matrix (the matrix is square, but it could be rectangular). Gray cells are obstacles; any path between two points must go around them. The green cells are those I am interested in. I am not interested in red cells, but a path can go through them.
The distance between points like A and B is trivial to compute, but how do I compute the path between A and C, as shown in the figure?
I have read about the A* algorithm, but since I am working with a rather big grid, generally (a few hundred) x (a few hundred), I was wondering if there is a smarter alternative. Remember: I have to find the distance between all pairs of "green cells", not just between two of them. If I have n green cells, I will have a number of combinations equal to the binomial coefficient C(n, 2) = n(n-1)/2.
The grid is fixed; I have to compute all the distances once and then use them in further calculations, for example accessing them based on the relevant indices in the matrix.
Note: the problem is NOT this one, where coordinates are in a list. My 2D coordinates are organised in a 2D grid, and the question is about exploiting this structure to obtain a more efficient algorithm.

I suppose the most straightforward solution would be the Floyd-Warshall algorithm, which computes the shortest distances between all pairs of nodes in a graph. This doesn't necessarily exploit the fact that you happen to have a 2D grid (it could work on other kinds of graphs too), but it should work fine. The fact that you do have a 2D grid may enable you to implement it more efficiently than if you had to write an implementation for any arbitrary graph (e.g. you can just store distances as they're computed in a matrix, instead of some less efficient data structure).
The regular version only produces the distances of all shortest paths, not the paths themselves. There's additional info on the Wikipedia page on how to modify the algorithm so that you can efficiently reconstruct the paths if necessary.
Intuitively, I suspect more efficient implementations may be possible which do exploit the fact that you have a 2D grid, probably using ideas from Rectangular Symmetry Reduction and/or Jump Point Search. Both of those ideas are traditionally used with A* for single-pair pathfinding queries though, and I'm not aware of any work using them for all-pairs shortest-path computations. My intuition says they can be exploited there too, but in the time it'll take to figure that out exactly and implement it correctly, you can probably much more easily implement and run Floyd-Warshall.
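For illustration, here is a minimal sketch of that approach using scipy.sparse.csgraph.floyd_warshall on a deliberately small grid; the obstacle layout and "green" cells are made-up placeholders, and on grids a few hundred cells per side the cubic time and quadratic memory of Floyd-Warshall become substantial.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import floyd_warshall

# Hypothetical inputs: an obstacle mask and a few "green" cells of interest.
H, W = 40, 40                        # the real grids are a few hundred per side
obstacle = np.zeros((H, W), dtype=bool)
obstacle[10:30, 20] = True           # an example wall
green = [(2, 3), (35, 5), (7, 38)]

def node(r, c):
    return r * W + c                 # flatten (row, col) into a node index

# Build the 4-connected grid graph with unit edge weights, skipping obstacles.
adj = lil_matrix((H * W, H * W))
for r in range(H):
    for c in range(W):
        if obstacle[r, c]:
            continue
        for dr, dc in ((1, 0), (0, 1)):          # each edge added once
            r2, c2 = r + dr, c + dc
            if r2 < H and c2 < W and not obstacle[r2, c2]:
                adj[node(r, c), node(r2, c2)] = 1

# All-pairs shortest distances (unreachable pairs come out as inf).
dist = floyd_warshall(adj.tocsr(), directed=False, unweighted=True)

a, b = green[0], green[1]
print(dist[node(*a), node(*b)])
```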

Related

Find distribution of 4 points in a given shape, so that areas of Voronoi Diagram have the same and biggest size

I am given an arbitrary shape in which I want to place 4 (or any other number of) dots. The dots should be distributed so that all cells of their Voronoi diagram have the same area, and that area is as large as possible. I want to find an algorithm that I can implement in Python.
Any ideas how to start?
The algorithm should find the best distribution for a swarm of drones that is exploring a room.
A natural approach is to pick some arbitrary starting points and apply Lloyd's algorithm, repeatedly moving the sites to the centroids of their Voronoi cells. It isn't guaranteed to find the optimal configuration, but it generally gives a much nicer (and nearly locally optimal) configuration in a few steps.
In practice the ugliest part of the code is restricting the Voronoi cells to the polygonal domain. See discussion here and here among other duplicates of this question.
Here is an alternative that may be easier to implement quickly. Rasterize your polygonal domain, computing a finite set of points inside the polygon. Now run k-means clustering, which is just the discrete variant of Lloyd's method (and is also available in scipy), on those points to find your locations. This way you avoid the effort of trimming infinite Voronoi cells and only need a geometric inside-outside test for the input polygon to do the rasterization. The result has a clear accuracy-performance trade-off: the finer you discretize the domain, the longer it will take. But in practice, the vast majority of the benefit (a reasonably balanced partition) comes with a coarse approximation and just a few clustering iterations.
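A minimal sketch of that rasterize-then-cluster idea, with a made-up L-shaped polygon standing in for the real domain (matplotlib's Path is used here only for the inside-outside test):

```python
import numpy as np
from matplotlib.path import Path
from scipy.cluster.vq import kmeans

# Hypothetical L-shaped domain given by its vertices.
poly = np.array([(0, 0), (4, 0), (4, 1), (1, 1), (1, 3), (0, 3)], dtype=float)
path = Path(poly)

# Rasterize: keep only the grid points that fall inside the polygon.
xs, ys = np.meshgrid(np.linspace(0, 4, 200), np.linspace(0, 3, 150))
pts = np.column_stack([xs.ravel(), ys.ravel()])
inside = pts[path.contains_points(pts)]

# k-means on the interior points is the discrete analogue of Lloyd's method;
# the 4 centroids are approximately area-balanced positions for the drones.
centroids, _ = kmeans(inside, 4)
print(centroids)
```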
Finally, everything becomes a lot more complex if you need to use geodesic distances, i.e. if the domain is non-convex and sites cannot see directly around its corners. (For example, see Figure 2a here.)

Efficient determination of adjacent cells in a Voronoi diagram from scipy.spatial.Voronoi

I'm working on an algorithm that uses Voronoi diagrams. I need to know for every given cell, which cells it has as neighbors; that is, which neighbors it shares an edge with. This is similar to an existing question. However, I already have an algorithm that computes this, but I'm looking to speed it up and avoid redundant calculations.
Currently I'm doing this with the output from scipy.spatial.Voronoi which gives me arrays of the vertices, points, etc that I can build this mapping with. However, I'm running this algorithm with a lot of points and I'd like to speed up the process.
My understanding is that scipy and Qhull calculate the Delaunay triangulation and then use that to compute the Voronoi diagram. I think (but could be mistaken) that adjacency information can be found from the Delaunay triangulation. I'm wondering if there is a way to extract this information (if it exists) from scipy/Qhull when I generate the Voronoi diagram.
If not, are there any preferred methods to do this? Would I be better off, in the long run, using Qhull directly?
Thanks.
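For reference, a minimal sketch of one way to build such a neighbor mapping from the scipy.spatial.Voronoi output (it may or may not match the asker's existing approach; the sites here are random placeholders):

```python
from collections import defaultdict
import numpy as np
from scipy.spatial import Voronoi

points = np.random.rand(1000, 2)     # placeholder sites
vor = Voronoi(points)

# Every Voronoi ridge separates exactly two input points, and vor.ridge_points
# stores those two point indices per ridge, so edge-sharing neighbors can be
# read off directly without walking vertices or regions.
neighbors = defaultdict(set)
for p, q in vor.ridge_points:
    neighbors[p].add(q)
    neighbors[q].add(p)

print(sorted(neighbors[0]))          # cells sharing an edge with cell 0
```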
I think it's only possible with Fortune's algorithm: https://cs.stackexchange.com/questions/80939/voronoi-diagram-status-structure-in-fortunes-algorithm.
Look for the half-edge data structure.
Maybe you can implement the half-edge approach with another solution, but not with Qhull.

Measurement for intersection of 2 irregular shaped 3d object

I am trying to implement an objective function that minimizes the overlap of two irregularly shaped 3D objects. While the most accurate measurement of the overlap is the intersection volume, it's too computationally expensive, as I am dealing with complex objects that have 1000+ faces and are not convex.
I am wondering if there are other measurements of intersection between 3D objects that are much faster to compute. Two requirements for the measurement are: 1. When the measurement is 0, there should be no overlap; 2. The measurement should be a scalar (not a boolean value) indicating the degree of overlap, but this value doesn't need to be very accurate.
Possible measurements I am considering include some sort of 2D surface area of intersection, or 1D penetration depth. Alternatively, I can estimate the volume with a sample-based method that samples points inside one object and tests the percentage of points that also lie in the other object. But I don't know how computationally expensive it is to sample points inside a complex 3D shape, or to test whether a point is enclosed by such a shape.
I would really appreciate any advice, code, or equations on this matter. Also, if you can suggest any libraries (preferably Python libraries) that accept .obj, .ply, etc. files and perform 3D geometry computations, that would be great! I will also post here if I find a good method.
Update:
I found a good Python library called Trimesh that performs all the computations mentioned by me and others in this post. It computes the exact intersection volume with the Blender backend; it can voxelize meshes and compute the volume of the co-occupied voxels; it can also perform surface and volumetric point sampling within one mesh and test point containment within another mesh. I found surface point sampling plus containment testing (a sort of surface intersection measure) and the grid approach to be the fastest.
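A rough sketch of that surface-sampling-plus-containment variant (the file names are placeholders, and each file is assumed to load as a single watertight Trimesh):

```python
import trimesh
from trimesh.sample import sample_surface

# Placeholder file names; each is assumed to load as a single watertight Trimesh.
mesh_a = trimesh.load("object_a.obj")
mesh_b = trimesh.load("object_b.obj")

# Sample points on A's surface and measure the fraction that lies inside B.
points, _ = sample_surface(mesh_a, 2000)
overlap_score = mesh_b.contains(points).mean()

# 0.0 means none of the sampled surface of A is inside B (note this does not
# catch the corner case where one object completely encloses the other).
print(overlap_score)
```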
By straight voxelization:
If the faces are of similar size (if needed triangulate the large ones), you can use a gridding approach: define a regular 3D grid with a spacing size larger than the longest edge and store one bit per voxel.
Then, for every vertex of the mesh, set the bit of the cell it falls in (this just takes a truncation of the coordinates). By doing this, you will obtain the boundary of the object as a connected surface. You can then obtain an estimate of the volume by means of a 3D flood-filling algorithm, starting either from an inside or an outside voxel. (Outside will be easier, but be sure to leave a one-voxel margin around the object.)
Estimating the volumes of both objects as well as intersection or union is straightforward with this machinery. The cost will depend on the number of faces and the number of voxels.
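A rough sketch of this gridding idea; it substitutes scipy.ndimage.binary_fill_holes for the explicit flood fill from outside (the two are equivalent when the voxelized shell is closed), and the vertex arrays below are random placeholders for the real mesh vertices:

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

def filled_occupancy(vertices, lo, spacing, shape):
    """Set the bit of every voxel containing a mesh vertex (the boundary shell),
    then fill the enclosed interior; assumes spacing exceeds the longest edge so
    that the shell is voxel-connected, as described above."""
    grid = np.zeros(shape, dtype=bool)
    idx = np.floor((vertices - lo) / spacing).astype(int)   # truncate coordinates
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return binary_fill_holes(grid)

# Placeholder vertex arrays standing in for the two real meshes, shape (n, 3).
verts_a = np.random.rand(1000, 3)
verts_b = np.random.rand(1000, 3) + 0.3

spacing = 0.05                                               # should exceed the longest edge
lo = np.minimum(verts_a.min(0), verts_b.min(0)) - spacing    # one-voxel margin
hi = np.maximum(verts_a.max(0), verts_b.max(0)) + spacing
shape = tuple(np.ceil((hi - lo) / spacing).astype(int) + 1)

occ_a = filled_occupancy(verts_a, lo, spacing, shape)
occ_b = filled_occupancy(verts_b, lo, spacing, shape)
voxel_volume = spacing ** 3
print("approx. intersection volume:", np.count_nonzero(occ_a & occ_b) * voxel_volume)
```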
A sample-based approach is what I'd try first. Generate a bunch of points in the union of the bounding AABBs, and divide the number of points that are in both A and B by the number of points that are in A or B. (You can adapt this measure to your use case -- it doesn't work very well when A and B have very different volumes.) To check whether a given point is in a given volume, use a crossing number test, which you can Google. There are acceleration structures that can help with this test, but my guess is that the number of samples that'll give you reasonable accuracy is lower than the number of samples necessary to benefit overall from building the acceleration structure.
As a variant of this, you can check line intersection instead of point intersection: Generate a random (axis-aligned, for efficiency) line, and measure how much of it is contained in A, in B, and in both A and B. This requires more bookkeeping than point-in-polyhedron, but will give you better per-sample information and thus reduce the number of times you end up iterating through all the faces.
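A minimal sketch of the point-sampling estimate, using trimesh's containment test in place of a hand-rolled crossing-number check (file names are placeholders):

```python
import numpy as np
import trimesh

# Placeholder meshes; trimesh's contains() stands in for the crossing-number
# point-in-polyhedron test mentioned above.
mesh_a = trimesh.load("object_a.obj")
mesh_b = trimesh.load("object_b.obj")

rng = np.random.default_rng(0)
lo = np.minimum(mesh_a.bounds[0], mesh_b.bounds[0])
hi = np.maximum(mesh_a.bounds[1], mesh_b.bounds[1])
samples = rng.uniform(lo, hi, size=(5000, 3))    # points in the union AABB

in_a = mesh_a.contains(samples)
in_b = mesh_b.contains(samples)
either = np.count_nonzero(in_a | in_b)
overlap = np.count_nonzero(in_a & in_b) / either if either else 0.0
print(overlap)    # roughly intersection volume / union volume
```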

Optimizing scipy.spatial.Delaunay.find_simplex

I have a set of points in a plane where each point has an associated altitude. I'm thinking of using the scipy.spatial library to compute the Delaunay triangulation of the point set and then use the result to interpolate for the points in between.
The library implements a nice function that, given a point, finds the triangle it lies in. This would be particularly useful when calculating the depth map from the mesh. I assume though (please do correct me if I'm wrong) that the search function searches from the same starting point every time it is called. Since the points I will be looking for will tend to lie either in the triangle the previous one lay in or in an adjacent one, I figure that's unnecessary, but I can't seem to find a way to optimize the search, other than implementing it myself.
Is there a way to set the initial triangle for the search, or to optimize the depth map calculation otherwise?
You can try a point-location structure, for example the Kirkpatrick algorithm/data structure. Basically, you subdivide the mesh along both axes and re-triangulate it. A better and simpler solution is to give each triangle a color, draw the triangulation into a bitmap, and then look up the color of the bitmap pixel containing the query point.
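A rough sketch of the bitmap idea, using scipy itself for the one-off rasterization (the point set and resolution are placeholders):

```python
import numpy as np
from scipy.spatial import Delaunay

points = np.random.rand(500, 2)                  # placeholder (x, y) sites
tri = Delaunay(points)

# "Draw a bitmap": rasterize the triangulation once on a regular grid, storing
# the simplex index (the "color") of every pixel; -1 marks pixels outside the hull.
nx = ny = 512
xs = np.linspace(points[:, 0].min(), points[:, 0].max(), nx)
ys = np.linspace(points[:, 1].min(), points[:, 1].max(), ny)
gx, gy = np.meshgrid(xs, ys)
simplex_map = tri.find_simplex(np.column_stack([gx.ravel(), gy.ravel()])).reshape(ny, nx)

def lookup_simplex(p):
    """O(1) lookup of a containing simplex; near triangle edges the finite pixel
    resolution may return a neighboring triangle instead of the exact one."""
    i = np.clip(np.searchsorted(xs, p[0]), 0, nx - 1)
    j = np.clip(np.searchsorted(ys, p[1]), 0, ny - 1)
    return simplex_map[j, i]

print(lookup_simplex((0.5, 0.5)))
```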

python: sorting two lists of polygons for intersections

I have two big lists of polygons.
Using python, I want to take each polygon in list 1, and find the results of its geometric intersection with the polygons in list 2 (I'm using shapely to do this).
So for polygon i in list 1, there may be several polygons in list 2 that would intersect with it.
The problem is that both lists are big, and if I simply nest two loops and run the intersection command for every possible pair of polygons, it takes a really long time. I'm not sure if preceding the intersection with a boolean test would speed this up significantly (e.g. if intersects: return intersection).
What would be a good way for me to sort or organize these two lists of polygons in order to make the intersections more efficient? Is there a sorting algorithm that would be appropriate to this situation, and which I could implement with Python?
I am relatively new to programming and have no background in discrete mathematics, so if you know an existing algorithm that I should use (which I assume exists for these kinds of situations), please link to it or give some explanation that could assist me in actually implementing it in Python.
Also, if there's a better StackExchange site for this question, let me know. I feel like it kind of bridges general python programming, gis, and geometry, so I wasn't really sure.
Quadtrees are often used for the purpose of narrowing down the sets of polygons that need to be checked against each other - two polygons only need to be checked against each other if they both occupy at least one of the same regions in the quadtree. How deep you make your quadtree (in the case of polygons, as opposed to points) is up to you.
Even just dividing your space up into smaller constant-size areas would speed up the intersection detection (if your polygons are small and sparse enough). You make a grid and mark, for each polygon, which cells of the grid it belongs to. Then you find cells that contain more than one polygon and run the intersection calculations for those polygons only. This optimization is the easiest to code, but the least effective. The second easiest and more effective way would be quadtrees. Then there are BSP trees, KD trees, and BVH trees, which are probably the most effective, but the hardest to code.
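A minimal sketch of that grid approach with shapely (the cell size and the demo squares are placeholders; the cell size should be tuned to roughly the typical polygon size):

```python
from collections import defaultdict
from shapely.geometry import Polygon

CELL = 10.0   # placeholder cell size; pick something close to a typical polygon diameter

def cells_of(poly, cell=CELL):
    """Grid cells covered by a polygon's bounding box."""
    minx, miny, maxx, maxy = poly.bounds
    for cx in range(int(minx // cell), int(maxx // cell) + 1):
        for cy in range(int(miny // cell), int(maxy // cell) + 1):
            yield (cx, cy)

def grid_intersections(polys1, polys2, cell=CELL):
    bins2 = defaultdict(list)
    for j, p2 in enumerate(polys2):
        for key in cells_of(p2, cell):
            bins2[key].append(j)

    results = defaultdict(list)
    tested = defaultdict(set)                 # avoid testing the same pair twice
    for i, p1 in enumerate(polys1):
        for key in cells_of(p1, cell):
            for j in bins2.get(key, []):
                if j in tested[i]:
                    continue
                tested[i].add(j)
                if p1.intersects(polys2[j]):  # cheap check before the real work
                    results[i].append(polys2[j].intersection(p1))
    return results

# Tiny demo with placeholder squares.
squares1 = [Polygon([(x, 0), (x + 5, 0), (x + 5, 5), (x, 5)]) for x in range(0, 100, 20)]
squares2 = [Polygon([(x, 3), (x + 5, 3), (x + 5, 8), (x, 8)]) for x in range(0, 100, 10)]
print(sum(len(v) for v in grid_intersections(squares1, squares2).values()))
```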
Edit:
Another optimization would be the following: find the left-most and right-most vertices of each polygon and put them in a list. Sort the list, then sweep over it from left to right to find polygons whose bounding boxes overlap in their x coordinates, and run the intersection calculations only for those polygons.
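A rough sketch of that sort-by-x pruning, where polys1 and polys2 are assumed to be lists of shapely polygons (the bounding boxes come from their .bounds); it only filters candidate pairs, and the exact intersections still run on what survives:

```python
import bisect

def x_overlap_candidates(polys1, polys2):
    """Candidate (i, j) pairs whose bounding boxes overlap in x, found with one
    sort plus a binary search per polygon (a sketch of the idea above)."""
    order = sorted(range(len(polys2)), key=lambda j: polys2[j].bounds[0])
    left_xs = [polys2[j].bounds[0] for j in order]     # sorted left-most x values
    pairs = []
    for i, p1 in enumerate(polys1):
        minx1, _, maxx1, _ = p1.bounds
        hi = bisect.bisect_right(left_xs, maxx1)       # left edge not past p1's right edge
        for j in order[:hi]:
            if polys2[j].bounds[2] >= minx1:           # right edge not before p1's left edge
                pairs.append((i, j))
    return pairs

# Exact intersections are then computed only for the surviving candidates:
# results = [(i, j, polys1[i].intersection(polys2[j]))
#            for i, j in x_overlap_candidates(polys1, polys2)
#            if polys1[i].intersects(polys2[j])]
```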
