I am trying to implement this method of edge detection on 3D meshes, and I got fairly impressive results. But it's far from good.
The proposed thinning algorithm is returning funky branched edges that aren't really useful. I think it could be my fault in reproducing the algorithm in Python.
https://s1.postimg.org/87w1ul2zxb/Screen_Shot_20171009161230.png
So I guess there are three ways to solve this right now.
- Find a way to clip those short branches (see the sketch after this list)
- Find my error in the algorithm.
- Or try another thinning algorithm.
So far I haven't been able to do any of these things.
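For the first option, here is a minimal sketch of branch clipping, assuming the thinned edges are stored as a NetworkX graph whose nodes are 3D points; the function name and threshold are my own, not from the paper:

```python
import networkx as nx

def prune_spurs(g, max_spur_len=5):
    """Iteratively remove leaf branches shorter than max_spur_len edges."""
    g = g.copy()
    changed = True
    while changed:
        changed = False
        # Leaves (degree-1 nodes) are the endpoints of candidate spur branches.
        for leaf in [n for n in g.nodes if g.degree(n) == 1]:
            # Walk from the leaf toward the first junction (degree > 2).
            branch = [leaf]
            cur = leaf
            while g.degree(cur) <= 2:
                nbrs = [n for n in g.neighbors(cur) if n not in branch]
                if not nbrs:
                    break
                cur = nbrs[0]
                branch.append(cur)
            # Drop the branch if it is short and actually ends at a junction.
            if g.degree(cur) > 2 and len(branch) - 1 < max_spur_len:
                g.remove_nodes_from(branch[:-1])  # keep the junction node
                changed = True
                break  # degrees changed; restart the scan
    return g
```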
Related
So I would like to make a game in Python where you control the character using your head.
But none of the tutorials I have found so far seem to be 2D. I just want to know the position of the face and its angle. Is there any tutorial/module that can do this without too much coding?
The short answer is no: there is no off-the-shelf way to do head pose estimation in general. This is a nice tutorial on the subject which explains some of the current limitations and constraints. The biggest problem you will have is getting a robust, diverse, well-sampled dataset covering the variety of conditions in your use case.
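One common approach of the kind such tutorials cover is to estimate pose from 2D facial landmarks with OpenCV's solvePnP. A minimal sketch, assuming you already get landmark coordinates from a detector such as dlib; all numeric values below are illustrative placeholders:

```python
import cv2
import numpy as np

# Generic 3D model points of a face (nose tip, chin, eye corners, mouth corners).
model_points = np.array([
    (0.0, 0.0, 0.0),           # nose tip
    (0.0, -330.0, -65.0),      # chin
    (-225.0, 170.0, -135.0),   # left eye, left corner
    (225.0, 170.0, -135.0),    # right eye, right corner
    (-150.0, -150.0, -125.0),  # left mouth corner
    (150.0, -150.0, -125.0),   # right mouth corner
])

# Matching 2D image points from your landmark detector (placeholder values).
image_points = np.array([
    (359, 391), (399, 561), (337, 297),
    (513, 301), (345, 465), (453, 469),
], dtype="double")

# Approximate the camera matrix from the image size, assuming no lens distortion.
height, width = 480, 640
camera_matrix = np.array([
    [width, 0, width / 2],
    [0, width, height / 2],
    [0, 0, 1],
], dtype="double")
dist_coeffs = np.zeros((4, 1))

ok, rotation_vec, translation_vec = cv2.solvePnP(
    model_points, image_points, camera_matrix, dist_coeffs)
# rotation_vec and translation_vec give the head's angle and position.
```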
I have been trying to find an implementation of Korf's Rubik's Cube solving algorithm in Python, as I am working on a project where I compare algorithm efficiency. Unfortunately, I haven't been able to find any implementation so far. Does anyone have any examples of Korf's algorithm in Python?
Korf's algorithm is not the best method for an optimal Rubik's Cube solver. It is far better to implement an algorithm which exploits the symmetries of the cube, which leads to smaller pruning tables. Michael Reid's optimal solver algorithm is best suited. It uses phase 1 of the two-phase algorithm (which can exploit 16 cube symmetries) in three different directions to exploit all 48 cube symmetries. The pruning depth for IDA* is considerably higher than with Korf's method on average.
I am sceptical whether Python is fast enough, though. I may give it a try, since I implemented my two-phase algorithm in Python and it should not be too difficult to adapt that code.
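For reference, the IDA* driver that both Korf's and Reid's approaches build on is quite compact. A minimal sketch, where heuristic stands in for a pruning-table lookup and successors for the face-turn move generator; both are placeholders, not the actual tables, and real solvers also prune redundant move sequences:

```python
def ida_star(root, heuristic, successors, is_goal):
    """Iterative-deepening A*: returns the optimal solution depth, or None."""
    bound = heuristic(root)
    while True:
        t = _search(root, 0, bound, heuristic, successors, is_goal)
        if t == "FOUND":
            return bound              # optimal depth reached
        if t == float("inf"):
            return None               # no solution exists
        bound = t                     # deepen to the next f-threshold

def _search(node, g, bound, heuristic, successors, is_goal):
    f = g + heuristic(node)
    if f > bound:
        return f                      # prune: cannot reach goal within bound
    if is_goal(node):
        return "FOUND"
    minimum = float("inf")
    for child in successors(node):
        t = _search(child, g + 1, bound, heuristic, successors, is_goal)
        if t == "FOUND":
            return "FOUND"
        minimum = min(minimum, t)
    return minimum
```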
I am calculating the shortest path from one source to one goal on a weighted graph with networkx and single_source_dijkstra.
However, I run into memory problems.
Is there a more efficient way to calculate this? An alternative to NetworkX? See my code:
```python
import networkx as nx

cost, shortestpath = nx.single_source_dijkstra(
    graph, startpointcoords, secondptcoords, cutoff=10000000)
```
The bidirectional Dijkstra algorithm should produce a significant improvement. Here is the documentation.
A good analogy would be in 3D: place one balloon at point x and expand it till it reaches point y. The amount of air you put in is proportional to the cube of the distance between them. Now put a balloon at each point and inflate both until they touch. The combined volume of air is only 1/4 of the original. In higher dimensions (which is a closer analogy to most networks), there is even more reduction.
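A minimal sketch of that suggestion, reusing the variable names from the question:

```python
import networkx as nx

# Searches from both endpoints at once and meets in the middle,
# so far fewer nodes are settled than with a single-source search.
length, path = nx.bidirectional_dijkstra(graph, startpointcoords, secondptcoords)
```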
Apparently the A* algorithm of NetworkX is way more efficient. Afterwards I calculate the length of the resulting path with the Dijkstra call I posted above.
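For reference, a sketch of that A* approach, assuming the nodes are coordinate tuples (as startpointcoords suggests) and a recent NetworkX version; note the heuristic is only admissible if edge weights are at least the straight-line distances:

```python
import math
import networkx as nx

def euclidean(u, v):
    # Straight-line lower bound between coordinate-tuple nodes.
    return math.dist(u, v)

path = nx.astar_path(graph, startpointcoords, secondptcoords, heuristic=euclidean)
length = nx.path_weight(graph, path, weight="weight")  # no second Dijkstra needed
```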
Perhaps try using another algorithm? Your graph may have many vertices but few edges, in which case you could use Bellman-Ford (bellman_ford_path() in NetworkX).
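A sketch of that suggestion, again reusing the question's variable names:

```python
import networkx as nx

# Bellman-Ford single-pair shortest path; also tolerates negative edge weights.
path = nx.bellman_ford_path(graph, startpointcoords, secondptcoords, weight="weight")
```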
Another solution would be to use another Python package; for example, the answers to this question list several possible libraries.
The last solution would be to implement your own algorithm! Perhaps Gabow's algorithm, but you would have to make it very efficient, for example by using numpy with numba.
I need to find the diameter of a point cloud (the two points with maximum distance between them) in 3-dimensional space. As a temporary solution I'm just iterating through all possible pairs and comparing the distances, which is a very slow O(n^2) solution.
I believe it can be done in O(n log n). It's a fairly easy task in 2D (just find the convex hull and then apply the rotating calipers algorithm), but in 3D I can't imagine how to use rotating calipers, since there is no way to order the points.
Is there any simple way to do it (or a ready-to-use implementation in Python or C/C++)?
PS: There are similar questions on Stack Overflow, but the answers I found only refer to rotating calipers (or similar) algorithms, which work fine in 2D but aren't clear on how to implement in 3D (or higher dimensions).
While O(n log n) expected-time algorithms exist in 3D, they seem tricky to implement while staying competitive with brute-force O(n^2) algorithms.
An algorithm is described in Har-Peled 2001, and the authors provide source code that can optionally be used for optimal computation. I was not able to download the latest version; the "old" version could be enough for your purpose, or you might want to contact the authors for the code.
An alternative approach is presented in Malandain & Boissonnat 2002, and the authors provide code. Although this algorithm is presented as approximate in higher dimensions, it could fit your purpose. Note that their code also provides an implementation of Har-Peled's method for exact computation that you might check.
In any case, in real-world usage you should always check that your algorithm remains competitive with the naïve O(n^2) approach.
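A practical middle ground, not one of the algorithms from the papers above: since the diameter is always attained by two vertices of the convex hull, you can brute-force only those. A sketch using scipy:

```python
import numpy as np
from scipy.spatial import ConvexHull
from scipy.spatial.distance import pdist

def diameter(points):
    hull = ConvexHull(points)          # roughly O(n log n) in 3D (Qhull)
    hull_pts = points[hull.vertices]   # usually far fewer than n points
    return pdist(hull_pts).max()       # O(h^2), but only over hull vertices

pts = np.random.rand(100_000, 3)
print(diameter(pts))
```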
For a data structures & algorithms class in college we have to implement an algorithm presented in a paper. The paper can be found here.
So I fully implemented the algorithm, though some errors remain (but that's not really why I'm asking this question; if you want to see how I implemented it so far, you can find it here).
The real reason I'm asking a question on Stack Overflow is the second part of the assignment: we have to try to make the algorithm better. I had a few ways in mind, but all of them sound good in theory and wouldn't really do well in practice:
Draw a line between the source and end node, search for the node closest to the middle of that line, and divide the "path" in two recursively. The base case would be a graph small enough that a single Dijkstra run could do the computation. This isn't really an adjustment to the current algorithm, and with some thinking it is clear this wouldn't give an optimal solution.
Try to give the algorithm some sense of direction by giving a higher priority to edges that point toward the end node. This also won't be optimal.
So now I'm all out of ideas and hoping that someone here could give me a little hint for a possible adjustment. It doesn't really have to improve the algorithm; I think the main reason they asked us to do this is so we don't just implement the algorithm from the paper without knowing what's behind it.
(If Stack Overflow isn't the right place to ask this question, my apologies :) )
A short description of the algorithm:
The algorithm tries to select which nodes look promising. By promising I mean that they have a good chance of lying on a shortest path. How promising a node is is represented by its 'reach'. The reach of a vertex on a path is the minimum of its distances to the start and to the end. The reach of a vertex in a graph is the maximum of its reaches over all shortest paths.
To determine whether a node is added to the priority queue in Dijkstra's algorithm, a test() function is added. test() returns true if either the reach of the vertex in the graph is greater than or equal to the weight of the path from the origin to v at the time v is to be inserted into the priority queue, or the reach of the vertex is greater than or equal to the Euclidean distance from v to the end vertex.
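A hypothetical sketch of how that test slots into a plain Dijkstra loop; the adjacency format, reach table, and coordinate lookup are assumptions of mine, not the paper's interface:

```python
import heapq
import math

def reach_test(v, dist_from_source, reach, coords, target):
    # Keep v only if its reach covers the distance already travelled,
    # or covers the straight-line distance still to go.
    return (reach[v] >= dist_from_source
            or reach[v] >= math.dist(coords[v], coords[target]))

def reach_dijkstra(graph, coords, reach, source, target):
    """graph: dict of dicts {u: {v: weight}}; reach: precomputed reach values."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            return d
        if d > dist.get(u, math.inf):
            continue  # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            # Only enqueue v if it improves the distance AND passes test().
            if nd < dist.get(v, math.inf) and reach_test(v, nd, reach, coords, target):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return math.inf
```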
Harm De Weirdt
Your best bet in cases like this is to think like a researcher. Research in general, and computer science research specifically, is about incremental improvement: one person shows that they can compute something faster using Dijkstra's algorithm, and then later they, or someone else, show that they can compute the same thing a little faster using A*. It's a series of small steps.
That said, the best place to look for ways to improve an algorithm presented in a paper is the future directions section. This paper gives you a little to work on in that direction, but your gold mine in this case lies in sections 5 and 6. There are multiple places where the authors admit that different approaches are possible. Try researching some of these approaches; this should lead you to a possible improvement in the algorithm, or at least an arguable one.
Best of luck!