I have been trying to find an implementation of Korf's Rubik's cube solving algorithm in Python for a project in which I am comparing algorithm efficiency. Unfortunately, I haven't been able to find any implementation so far. Does anyone have any examples of Korf's algorithm in Python?
Korf's algorithm is not the best method for an optimal Rubik's cube solver. It is far better to implement an algorithm that exploits the symmetries of the cube, which leads to smaller pruning tables. Michael Reid's optimal solver algorithm is best suited. It uses phase 1 of the two-phase algorithm (which can exploit 16 cube symmetries) in three different directions to exploit all 48 cube symmetries. The pruning depth for IDA* is considerably higher than with Korf's method on average.
I am sceptical whether Python is fast enough, though. I may give it a try, since I implemented my two-phase algorithm in Python and it should not be too difficult to adapt this code.
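For context, solvers in this family are IDA* searches guided by pruning-table heuristics. Below is a rough, generic skeleton of that search loop, just to illustrate the idea; the cube-specific pieces (apply_move, moves, is_solved and the pruning-table heuristic) are placeholders and assumptions, not an actual solver.

def ida_star(start, is_solved, apply_move, moves, heuristic):
    # Iterative deepening A*: repeated depth-first searches bounded by f = g + h.
    bound = heuristic(start)

    def search(state, g, bound, path):
        f = g + heuristic(state)
        if f > bound:
            return f                        # too deep for this iteration; report next candidate bound
        if is_solved(state):
            return path                     # solution found: the list of moves so far
        minimum = float('inf')
        for move in moves:
            result = search(apply_move(state, move), g + 1, bound, path + [move])
            if isinstance(result, list):    # a solution bubbled up
                return result
            minimum = min(minimum, result)
        return minimum

    while True:
        result = search(start, 0, bound, [])
        if isinstance(result, list):
            return result                   # optimal move sequence
        if result == float('inf'):
            return None                     # search space exhausted, no solution
        bound = result                      # deepen the bound and retry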
I am calculating the shortest path from one source to one goal on a weighted graph with networkx and single_source_dijkstra.
However, I run into memory problems.
Is there a more efficient way to calculate this? An alternative to Networkx? See my code:
cost, shortestpath = nx.single_source_dijkstra(graph, startpointcoords, secondptcoords, cutoff=10000000)
The bidirectional Dijkstra algorithm should produce a significant improvement. Here is the documentation.
A good analogy would be in 3D: place one balloon at point x and expand it till it reaches point y. The amount of air you put in is proportional to the cube of the distance between them. Now put a balloon at each point and inflate both until they touch. The combined volume of air is only 1/4 of the original. In higher dimensions (which is a closer analogy to most networks), there is even more reduction.
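A minimal, self-contained illustration of that call (the toy graph and weights here are made up; in the question's code the arguments would be graph, startpointcoords and secondptcoords):

import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 1.0), (1, 2, 2.0), (0, 2, 5.0), (2, 3, 1.0)])

# Same return shape as single_source_dijkstra with a target: (cost, path).
cost, path = nx.bidirectional_dijkstra(G, 0, 3, weight='weight')
print(cost, path)   # 4.0 [0, 1, 2, 3]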
Apparently the A* algorithm of networkx is much more efficient. Afterwards I calculate the length of the resulting path with the Dijkstra call I posted.
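For reference, a small sketch of the networkx A* calls. The coordinate-tuple nodes and the Euclidean heuristic are assumptions (they only make sense if the nodes really are coordinates, as startpointcoords suggests); astar_path_length also avoids the extra Dijkstra pass for the length.

import math
import networkx as nx

# Toy graph whose nodes are coordinate tuples, so Euclidean distance is an
# admissible A* heuristic (it never overestimates the remaining path weight
# as long as edge weights are at least the Euclidean edge lengths).
G = nx.Graph()
G.add_weighted_edges_from([((0, 0), (1, 0), 1.0),
                           ((1, 0), (1, 1), 1.0),
                           ((0, 0), (1, 1), 3.0)])

def euclidean(u, v):
    return math.dist(u, v)

path = nx.astar_path(G, (0, 0), (1, 1), heuristic=euclidean, weight='weight')
length = nx.astar_path_length(G, (0, 0), (1, 1), heuristic=euclidean, weight='weight')
print(path, length)   # [(0, 0), (1, 0), (1, 1)] 2.0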
Perhaps try another algorithm? Your graph may have too many vertices but few edges, in which case you could use Bellman-Ford (bellman_ford_path() in networkx; a small sketch follows these suggestions).
Another solution would be to use another Python package; for example, the answers to this question list several possible libraries.
The last solution would be to implement your own algorithm! Perhaps Gabow's algorithm, but you would have to make it very efficient, for example by using numpy with numba.
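A minimal illustration of the bellman_ford_path() suggestion above (the tiny directed graph is made up for the example; the call shape mirrors the Dijkstra variants, and negative weights are allowed as long as there is no negative cycle):

import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([('a', 'b', 4.0), ('b', 'c', -2.0), ('a', 'c', 3.0)])

print(nx.bellman_ford_path(G, 'a', 'c', weight='weight'))          # ['a', 'b', 'c']
print(nx.bellman_ford_path_length(G, 'a', 'c', weight='weight'))   # 2.0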
I am trying to implement this method of edge detection on 3D meshes, and I got fairly impressive results. But it's far from good.
The proposed thinning algorithm is returning funky branched edges that aren't really useful. I think it could be my fault in reproducing the algorithm in Python.
https://s1.postimg.org/87w1ul2zxb/Screen_Shot_20171009161230.png
So I guess there are three ways to solve this right now.
- Find a way to clip those short branches
- Find my error in the algorithm.
- Or try to make another thinning algorithm.
So far I haven't been able to do any of these.
Suppose I have multi-dimensional datasets which have many vectors as data. I am writing an algorithm which needs to do k-nearest-neighbour searches for all those vectors - classical KNN. However, during my algorithm I add new vectors to the overall dataset and need to include those new vectors in my KNN search. I want to do that efficiently. I looked into the KD-tree and ball tree of scikit-learn, but they don't allow inserts (by the nature of the concepts). I am not sure whether SR-trees or R-trees would provide inserts, but in any case, I was not able to find a Python implementation for data beyond 3D.
Regarding the search, I am fine with either the query "give me the closest vector" (so 1-NN) or "give me all vectors that are closer than a given radius".
General comment: I don't quite understand why KD-trees are so popular for high-dimensional kNN queries. In my experience, other trees scale much better with high dimensionality or large datasets (I tested up to 25 million points and (only) up to 40 dimensions). Some more details:
KD-Trees: As far as I know, KD-trees should support insertion at any time, but there is a chance that they get imbalanced. I don't use Python, so I don't know why your KD-tree does not support insertion/deletion on the fly.
Quadtree: Depending on the dimensionality, you could also use quadtrees/octrees, but standard implementations are not good for more than 10 dimensions or so. In the reference above I tested a quadtree with a special 'hypercube' navigation approach. That requires a lot of memory but scales much better with dimensionality in terms of performance.
R-Tree/R*Tree: The original R-trees are not very good with insertion on the fly. However, if you look at R+Trees (R-Plus-Trees), they are quite fast with reinsertion and kNN queries.
PH-Trees have basically the same kNN performance as R+Trees, but much better insertion time, because PH-Trees do not need rebalancing while having inherently limited depth and node size. Unfortunately, implementations get a lot more complicated for >=64 dimensions (the tree uses one bit of a long integer for each dimension). I'm not aware of an implementation that supports more than 63 dimensions.
Python:
R+Trees should be available for Python. If not, you could adapt a normal R-tree (only the insertion algorithm is different).
I heard once of someone starting to implement a PH-Tree in Python, but I haven't seen any open-source variant yet.
If you have some time/interest to do your own implementation, you could look at the Java implementations here and translate them to Python. The library contains various multidimensional indexes, except KD-Trees. KD-Tree implementations that allow on-the-fly insertion can be found here and here.
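For illustration, here is a minimal KD-tree sketch in Python that supports on-the-fly insertion and 1-NN queries. It is an assumption-laden toy (no rebalancing, no deletion, Euclidean distance), not one of the libraries linked above.

import math

class KDNode:
    def __init__(self, point, axis):
        self.point = point
        self.axis = axis
        self.left = None
        self.right = None

class KDTree:
    def __init__(self, k):
        self.k = k                      # dimensionality
        self.root = None

    def insert(self, point):
        # Walk down, comparing one coordinate per level, and attach a new leaf.
        if self.root is None:
            self.root = KDNode(point, 0)
            return
        node = self.root
        while True:
            axis = node.axis
            side = 'left' if point[axis] < node.point[axis] else 'right'
            child = getattr(node, side)
            if child is None:
                setattr(node, side, KDNode(point, (axis + 1) % self.k))
                return
            node = child

    def nearest(self, target):
        best = [None, float('inf')]     # [best point, best squared distance]

        def search(node):
            if node is None:
                return
            d = sum((a - b) ** 2 for a, b in zip(node.point, target))
            if d < best[1]:
                best[0], best[1] = node.point, d
            diff = target[node.axis] - node.point[node.axis]
            near, far = (node.left, node.right) if diff < 0 else (node.right, node.left)
            search(near)
            if diff ** 2 < best[1]:     # the search sphere crosses the splitting plane
                search(far)

        search(self.root)
        return best[0], math.sqrt(best[1])

# usage
tree = KDTree(k=4)
for p in [(1, 2, 3, 4), (0, 0, 0, 0), (5, 5, 5, 5)]:
    tree.insert(p)
print(tree.nearest((0.9, 2.1, 2.8, 4.0)))

For the radius query from the question, the same recursion works with a fixed radius in place of the shrinking best distance, collecting all points within it.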
I have a matrix over Z_2 (a 0/1 matrix) with large dimensions (e.g. 20000 vectors of 200 elements). Each vector contains the same number of ones. I want to find a minimal set of the vectors whose bitwise OR is the all-ones vector. This can be solved by dynamic programming, but the time complexity of that solution is atrocious. I want to apply some optimization like annealing or a genetic algorithm or something else to find a more or less good approximation of the answer. But I have no experience in optimizing such functions and just don't know what to try first and what to start with. I want to learn some optimization in Python by working on this problem, so any advice on a Pythonic way to do discrete optimization here would be appreciated!
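This is the minimum set cover problem in disguise (each vector covers the positions where it has a one), so a natural first baseline before annealing or a genetic algorithm is the greedy heuristic. A sketch, assuming the data is a 0/1 numpy array with one vector per row:

import numpy as np

def greedy_or_cover(vectors):
    # Greedy set-cover baseline: repeatedly pick the vector that covers the
    # most still-uncovered positions until the OR of the chosen rows is all ones.
    vectors = np.asarray(vectors, dtype=bool)
    n, m = vectors.shape
    covered = np.zeros(m, dtype=bool)
    chosen = []
    while not covered.all():
        gains = (vectors & ~covered).sum(axis=1)
        best = int(gains.argmax())
        if gains[best] == 0:            # no vector adds coverage: no full cover exists
            return None
        chosen.append(best)
        covered |= vectors[best]
    return chosen

# toy usage
vecs = [[1, 1, 0, 0],
        [0, 0, 1, 0],
        [0, 1, 1, 1]]
print(greedy_or_cover(vecs))   # e.g. [2, 0]

The greedy answer can then serve as a starting point (and a sanity check) for annealing or a genetic search.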
I need to find the diameter of a point cloud (the two points with maximum distance between them) in 3-dimensional space. As a temporary solution, right now I'm just iterating through all possible pairs and comparing the distance between them, which is a very slow O(n^2) solution.
I believe it can be done in O(n log n). It's a fairly easy task in 2D (just find the convex hull and then apply the rotating calipers algorithm), but in 3D I can't imagine how to use rotating calipers, since there is no way to order the points.
Is there any simple way to do it (or ready-to-use implementation in python or C/C++)?
PS: There are similar questions on StackOverflow, but the answers that I found only refer to rotating calipers (or similar) algorithms, which work fine in 2D but are not really clear to implement in 3D (or higher dimensions).
While O(n log n) expected-time algorithms exist in 3D, they seem tricky to implement (while staying competitive with brute-force O(n^2) algorithms).
An algorithm is described in Har-Peled 2001. The author provides source code that can optionally be used for exact computation. I was not able to download the latest version; the "old" version could be enough for your purpose, or you might want to contact the author for the code.
An alternative approach is presented in Malandain & Boissonnat 2002, and the authors provide code. Although this algorithm is presented as approximate in higher dimensions, it could fit your purpose. Note that their code provides an implementation of Har-Peled's method for exact computation, which you might also check.
In any case, in real-world usage you should always check that your algorithm remains competitive with the naïve O(n^2) approach.
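As a practical sketch of that sanity check (this is not one of the algorithms from the papers above, just the naïve pairwise search restricted to the convex hull vertices, which are usually far fewer than n; assumes numpy and scipy):

import numpy as np
from scipy.spatial import ConvexHull
from scipy.spatial.distance import pdist, squareform

def diameter(points):
    points = np.asarray(points, dtype=float)
    hull = ConvexHull(points)
    candidates = points[hull.vertices]      # the diameter endpoints always lie on the hull
    d = squareform(pdist(candidates))       # pairwise distances between hull vertices only
    i, j = np.unravel_index(d.argmax(), d.shape)
    return d[i, j], candidates[i], candidates[j]

rng = np.random.default_rng(0)
pts = rng.random((10000, 3))
dist, p, q = diameter(pts)
print(dist)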