An MST (minimum spanning tree) is necessarily an MBST (minimum bottleneck spanning tree).
Given a set of points in a 2D plane, with edges weighted by the Euclidean distance between each pair of points, I want to find the bottleneck edge of the spanning tree (i.e., its maximum-weight edge).
According to my limited knowledge of this field and my research on the internet, the best way to compute this via the MST approach is to build a Delaunay triangulation of the given points, use the fact that the MST of these points is a subset of the triangulation, and run Kruskal's or Prim's algorithm on the triangulation's edges. Then all I need to do is find the maximum weight in the MST.
I am wondering if I can do this more efficiently by building an MBST directly, as I am only looking for the bottleneck edge.
Thanks in advance.
Edit: my current way of finding it, whether via the MST or the MBST, computes the weights of all V * (V - 1) / 2 edges first, which I consider quite inefficient. I'd like to know if there's a way around this.
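For what it's worth, here is a minimal sketch of the Delaunay-then-MST route you describe, assuming scipy is available (the function name is just a placeholder). Only the O(V) triangulation edges ever get weighted, so the V * (V - 1) / 2 pairwise distances are never computed:

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial import Delaunay

def bottleneck_weight(points):
    """Max-weight edge of the Euclidean MST (== bottleneck of any MBST)."""
    points = np.asarray(points, dtype=float)
    tri = Delaunay(points)
    # Collect the unique undirected edges of the triangulation.
    edges = set()
    for a, b, c in tri.simplices:
        edges.update(tuple(sorted(e)) for e in ((a, b), (b, c), (c, a)))
    rows, cols = np.array(sorted(edges)).T
    weights = np.linalg.norm(points[rows] - points[cols], axis=1)
    n = len(points)
    mst = minimum_spanning_tree(coo_matrix((weights, (rows, cols)), shape=(n, n)))
    return mst.data.max()  # the bottleneck edge weight
```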
I have a list of cities (nodes) plotted in a 2D plane, each given by an X,Y coordinate.
I now want to add roads (edges) to it, but the roads cannot intersect. I want to create the greatest number of roads possible. By count, not by total length.
In more general graph-theory parlance, I think I want the maximum number of edges (or regions? maybe it's the same thing) that do not intersect in two dimensions, for a given set of nodes at X,Y points.
From a brief look at NetworkX, it seems that it generates graphs by making "nodes", but the nodes can be "anywhere", and you cannot force them to be at a certain location with respect to each other (they have abstracted too far!).
Edit: networkx add_node with specific position
suggests that you can place nodes at a given location. @Stef thanks!!
Am I thinking about the problem correctly?
Can I visualize my nodes/edges using some Python package, where the package can automatically calculate the proper edges given a set of nodes?
Is automatically finding the maximum number of non-intersecting edges a thing (and what is it called, so I can find out more about it)?
Very possibly similar to this question, but that question wasn't really answered and is from 8 years ago (Algorithm for finding minimal cycle basis of planar graph).
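Not a full answer, but a sketch of how the pieces could fit together, assuming scipy and networkx are available: any triangulation of a point set is edge-maximal among non-crossing straight-line graphs (with n points, h of them on the convex hull, it has 3n - 3 - h edges, assuming general position), so a Delaunay triangulation gives the maximum number of non-intersecting roads, and networkx can draw the nodes pinned at their true coordinates via its pos argument:

```python
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
from scipy.spatial import Delaunay

pts = np.random.rand(20, 2)  # stand-in city coordinates
tri = Delaunay(pts)

G = nx.Graph()
G.add_nodes_from(range(len(pts)))
for a, b, c in tri.simplices:
    G.add_edges_from([(a, b), (b, c), (c, a)])  # non-crossing by construction

pos = {i: pts[i] for i in range(len(pts))}  # pin each node at its (x, y)
nx.draw(G, pos, node_size=40)
plt.show()
```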
I am given an arbitrary shape, in which I want to place 4 (or any other number of) dots. The dots should be distributed so that all cells of their Voronoi diagram have the same area, and that area is as large as possible. I want to find an algorithm that I can implement in Python.
Any ideas how to start?
The algorithm should find the best distribution for a swarm of drones that is exploring a room.
A natural approach is to pick some arbitrary starting points and apply Lloyd's algorithm, repeatedly moving the sites to the centroids of their Voronoi cells. It isn't guaranteed to reach the optimal configuration, but it generally gives a much nicer (and nearly locally optimal) configuration in a few steps.
In practice the ugliest part of the code is restricting the Voronoi cells to the polygonal domain. See discussion here and here among other duplicates of this question.
Here is an alternative that may be easier to implement quickly. Rasterize your polygonal domain, computing a finite set of points inside the polygon. Now run k-means clustering (which is just the discrete variant of Lloyd's method, and is also in scipy) to find your locations. This way you avoid the effort of trimming infinite Voronoi cells, and you only need a geometric inside-outside test for the input polygon to do the rasterization. The result has a clear accuracy-performance trade-off: the finer you discretize the domain, the longer it will take. But in practice, the vast majority of the benefit (a reasonably balanced partition) comes with a coarse approximation and just a few clustering iterations.
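A minimal sketch of that rasterize-and-cluster idea, assuming the shape is given as a list of polygon vertices (matplotlib's Path is used only for the inside-outside test; the function name and resolution parameter are placeholders):

```python
import numpy as np
from matplotlib.path import Path
from scipy.cluster.vq import kmeans2

def equal_area_sites(polygon_vertices, n_sites, resolution=200):
    """Approximate equal-area site placement via k-means on a raster of the shape."""
    verts = np.asarray(polygon_vertices, dtype=float)
    poly = Path(verts)
    (xmin, ymin), (xmax, ymax) = verts.min(axis=0), verts.max(axis=0)
    # Rasterize: regular grid over the bounding box, keep the points inside.
    xx, yy = np.meshgrid(np.linspace(xmin, xmax, resolution),
                         np.linspace(ymin, ymax, resolution))
    grid = np.column_stack([xx.ravel(), yy.ravel()])
    inside = grid[poly.contains_points(grid)]
    # k-means is the discrete Lloyd iteration: each site moves to the centroid
    # of the raster points nearest to it, which roughly balances cell areas.
    sites, _ = kmeans2(inside, n_sites, minit='++')
    return sites
```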
Finally, implementing things is a lot more complex if you need to use geodesic distances, where sites cannot see directly around non-convex corners of the domain. (For example, see Figure 2a here.)
I'm working on an algorithm that uses Voronoi diagrams. For every given cell, I need to know which cells are its neighbors, that is, which cells it shares an edge with. This is similar to an existing question; however, I already have an algorithm that computes this, and I'm looking to speed it up and avoid redundant calculations.
Currently I'm doing this with the output from scipy.spatial.Voronoi, which gives me arrays of the vertices, points, etc. that I can build this mapping with. However, I'm running this algorithm on a lot of points and I'd like to speed up the process.
My understanding is that scipy and Qhull calculate the Delaunay triangulation and then use that to compute the Voronoi diagram. I think (but could be mistaken) that adjacency information can be read off the Delaunay triangulation. I'm wondering if there is a way to extract this information (if it exists) from scipy/Qhull when I generate the Voronoi diagram.
If not, are there any preferred methods for doing this? Would I be better off, in the long run, using Qhull directly?
Thanks.
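As a possible shortcut (a sketch, not necessarily the fastest option): scipy.spatial.Voronoi exposes ridge_points, an array whose rows are the pairs of input points separated by each Voronoi ridge, which is exactly the shares-an-edge relation, so a neighbor map can be built without touching the vertex arrays at all:

```python
from collections import defaultdict
import numpy as np
from scipy.spatial import Voronoi

points = np.random.rand(1000, 2)
vor = Voronoi(points)

# Each row of ridge_points is a pair of input-point indices whose
# Voronoi cells share an edge (ridge).
neighbors = defaultdict(set)
for p, q in vor.ridge_points:
    neighbors[p].add(q)
    neighbors[q].add(p)

print(sorted(neighbors[0]))  # indices of the cells adjacent to cell 0
```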
I think it's only possible with Fortune's algorithm: https://cs.stackexchange.com/questions/80939/voronoi-diagram-status-structure-in-fortunes-algorithm.
Look for half-edge.
Maybe you can implement the half-edge structure with another solution, but not with Qhull.
I am looking for an efficient solution to the following problem. It should work with Python, but does not have to be in Python.
I have a 2D matrix; each element of the matrix represents a point in a 2D, orthogonal grid. I want to compute the shortest distance between pairs of points in the grid. This would be trivial if there were no "obstacles" in the grid.
A figure helps explain:
Each cell in the figure is one element of the matrix (the matrix is square, but it could be rectangular). Gray cells are obstacles; any path between two points must go around them. The green cells are those I am interested in. I am not interested in red cells, but a path can go through them.
The distance between points like A and B is trivial to compute, but how do I compute the path between A and C, as shown in the figure?
I have read about the A* algorithm, but since I am working with a rather big grid, generally (a few hundred) x (a few hundred), I was wondering if there is a smarter alternative. Remember: I have to find the distance between all pairs of "green cells", not just between two of them. If I have n green cells, I will have a number of pairs equal to the binomial coefficient C(n, 2) = n(n - 1) / 2.
The grid is fixed. I have to compute all the distances once and then use them in further calculations, say, accessing them based on the relevant indices in the matrix.
Note: the problem is NOT this one, where the coordinates are in a list. My 2D coordinates are organised in a 2D grid, and the question is about exploiting this aspect to get a more efficient algorithm.
I suppose the most straightforward solution would be the Floyd-Warshall algorithm, which computes the shortest distances between all pairs of nodes in a graph. This doesn't necessarily exploit the fact that you happen to have a 2D grid (it could work on other kinds of graphs too), but it should work fine. The fact that you do have a 2D grid may enable you to implement it more efficiently than if you had to write an implementation for any arbitrary graph (e.g. you can just store distances as they're computed in a matrix, instead of some less efficient data structure).
The regular version only outputs the distances of all shortest paths; it doesn't store the paths themselves. There's additional info on the Wikipedia page on how to modify the algorithm so that you can efficiently reconstruct the paths if necessary.
Intuitively, I suspect more efficient implementations may be possible which do exploit the fact that you have a 2D grid, probably using ideas from Rectangular Symmetry Reduction and/or Jump Point Search. Both of those ideas are traditionally used with A* for single-pair pathfinding queries, though; I'm not aware of any work using them for all-pairs shortest path computations. My intuition says they can be exploited there too, but in the time it'll take to figure that out exactly and implement it correctly, you can probably much more easily implement and run Floyd-Warshall.
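For completeness, here is a minimal Floyd-Warshall sketch on an obstacle grid, assuming 4-connected, unit-cost moves and a 0/1 encoding (0 = free, 1 = obstacle); the function name is a placeholder. It is O(V^3) in the number of free cells, so mind the grid size:

```python
import numpy as np

def all_pairs_distances(grid):
    """Floyd-Warshall over the free cells of a 0/1 obstacle grid."""
    rows, cols = len(grid), len(grid[0])
    free = [(r, c) for r in range(rows) for c in range(cols) if grid[r][c] == 0]
    idx = {cell: i for i, cell in enumerate(free)}
    n = len(free)
    dist = np.full((n, n), np.inf)
    np.fill_diagonal(dist, 0.0)
    # Unit-cost edges between orthogonally adjacent free cells.
    for (r, c), i in idx.items():
        for nb in ((r + 1, c), (r, c + 1)):
            j = idx.get(nb)
            if j is not None:
                dist[i, j] = dist[j, i] = 1.0
    # Standard triple loop, with the two inner loops vectorized by numpy:
    # dist[i, j] = min(dist[i, j], dist[i, k] + dist[k, j]).
    for k in range(n):
        dist = np.minimum(dist, dist[:, k:k + 1] + dist[k:k + 1, :])
    return dist, idx  # dist[idx[a], idx[b]] is the shortest a-to-b distance
```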
I want to calculate the minimum spanning tree based on the Euclidean distance between a set of points in a 2D plane. My current code stores all the edges and then performs Prim's algorithm to get the minimum spanning tree. However, I am aware that doing this takes O(n^2) space for all the edges.
After doing some research, it became clear that the memory use and runtime can be improved if I first compute the Delaunay triangulation of this set of points, then obtain the minimum spanning tree by running Prim's or Kruskal's algorithm on the edges of the triangulation.
This is part of a programming contest (https://prologin.org/train/2017/qualification/taxi_des_neiges), so I doubt I'd be able to use scipy.spatial. Are there any alternatives for simply getting the edges contained in the Delaunay triangulation?
Thanks in advance.
Would a module help? Here are a few that might work:
delaunator
poly2tri
triangle
matplotlib?
Roll your own? Both of these describe the incremental algorithm, which Wikipedia seems to say is O(n log n):
http://www.geom.uiuc.edu/~samuelp/del_project.html
http://graphics.stanford.edu/courses/cs368-06-spring/handouts/Delaunay_1.pdf
Here's an ActiveState recipe that might help to get started, but it looks like it's not finished.
It looks like SciPy uses Qhull, which, somewhere in this folder, has the code for performing the Delaunay triangulation and getting the edges (albeit implemented in C):
https://github.com/scipy/scipy/tree/master/scipy/spatial/qhull/src
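One more note on the matplotlib option from the list above, in case it happens to be allowed in the contest: its Triangulation class computes a Delaunay triangulation by default and exposes the unique edges directly, which is all Prim's/Kruskal's needs. A minimal sketch:

```python
import numpy as np
from matplotlib.tri import Triangulation

pts = np.random.rand(100, 2)
tri = Triangulation(pts[:, 0], pts[:, 1])  # Delaunay by default

edges = tri.edges  # (n_edges, 2) array of point-index pairs
# Euclidean weights for only these O(n) edges, instead of all n*(n-1)/2:
weights = np.linalg.norm(pts[edges[:, 0]] - pts[edges[:, 1]], axis=1)
```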