I am new to Python. I have this code:
http://code.activestate.com/recipes/577519-a-star-shortest-path-algorithm/history/1/
It's the A* algorithm code. I wanted to know how exactly it generates the obstacles.
Basically I have a black and white image. My task is to reach the end point from the start point avoiding the white spaces.
Please help me
Thank you!
The path-finding algorithm in function pathFind() is well-commented and decomposed. If there's something specific confusing you, tell us what it is. The important lines are after # generate moves (child nodes) in all possible directions, which admittedly is dense code and not very OO.
I suggest you reduce the parameters (m, n) to a 4x5 or 5x6 grid and then step through its execution, either with the pdb debugger (single-stepping or setting breakpoints at key points) or by hacking in as many print statements as you need:
# map matrix
n = 30 # horizontal size
m = 30 # vertical size
First try giving it your own custom maps to get a feel for how it works (modify the # fillout the map matrix... lines). Then run it interactively in the debugger.
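For example, here is a tiny hand-made map you could step through (this assumes the recipe's convention of a map matrix where 0 is a free cell and 1 is an obstacle; check the exact variable names in the version you downloaded):

n = 5 # horizontal size
m = 4 # vertical size
# 0 = free cell, 1 = obstacle (the recipe's convention)
the_map = [
    [0, 0, 0, 0, 0],
    [1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1],
]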
I'm trying to build a simulation that will take place in a 1000x1000x1000 space. For each point in space, I need to be able to encode 2 or 3 properties.
I also need to be able to do some basic operations on the space, such as, given a point, find the properties of the 26 adjacent 3D neighbors of the point.
All points will lie on vertices in the 1000x1000x1000 space (i.e. every point is discrete).
I wrote up a version in python using numpy and it is much too slow. I've also looked for libraries that would speed it up but couldn't find anything.
Does anyone know of a python library that would provide useful helper functions and would be appropriate for a simulation of this size?
Using NumPy together with the Numba Python compiler for the more intricate algorithms can take you a long way.
Also, I think you are referring to a "stencil" algorithm, and Numba has specific stencil functionality that could help you.
But start with a smaller grid during development!
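For illustration, here is the kind of explicit loop that Numba compiles well (a minimal sketch assuming one float property per point; the grid size and function name are placeholders):

import numpy as np
from numba import njit

@njit
def neighbour_sum(grid, x, y, z):
    # sum one property over the 26 adjacent points of (x, y, z),
    # assuming (x, y, z) is an interior point of the grid
    total = 0.0
    for dx in range(-1, 2):
        for dy in range(-1, 2):
            for dz in range(-1, 2):
                if dx != 0 or dy != 0 or dz != 0:
                    total += grid[x + dx, y + dy, z + dz]
    return total

grid = np.random.rand(64, 64, 64)  # develop on a small grid first
print(neighbour_sum(grid, 10, 10, 10))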
I would like to implement a Maya plugin (this question is independent from Maya) to create 3D Voronoi patterns, something like this:
I just know that I have to start from point sampling (I implemented the adaptive Poisson sampling algorithm described in this paper).
I thought that, from those points, I should create the 3D wire of the mesh by applying Voronoi, but the result was something different from what I expected.
Here are a few examples of what I get when handling the result from scipy.spatial.Voronoi like this (as suggested here):
vor = Voronoi(points)
for vpair in vor.ridge_vertices:
    if all(x >= 0 for x in vpair):  # skip ridges with a vertex at infinity
        for i in range(len(vpair) - 1):
            v0 = vor.vertices[vpair[i]]
            v1 = vor.vertices[vpair[i + 1]]
            create_line(v0.tolist(), v1.tolist())
The grey vertices are the sampled points (the original shape was a simple sphere):
Here is a more complex shape (an arm):
Am I missing something? Can anyone suggest the proper pipeline and algorithms I have to implement to create such patterns?
I've seen your question since you posted it but didn't have a real answer for you; however, since I see you still haven't got any response, I'll at least write down some ideas. Unfortunately it's still not a full solution to your problem.
It seems to me you're mixing a few separate problems in this question, so it would help to break it down into a few pieces:
Voronoi diagram:
The diagram is by definition infinite, so when you draw it directly you should expect a similar mess to the one in your second image; this seems fine. I don't know how SciPy handles it, but the implementation I've used flagged some edge ends as 'infinite' and provided the edge's direction, so I could clip it at some distance myself. You'll need to check the exact data you get from SciPy.
In the 3D world you'll almost always want to remove such infinite areas to get any meaningful rendering, or at least remove the area that contains your camera.
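For what it's worth, SciPy does expose this: in vor.ridge_vertices a vertex index of -1 marks a vertex at infinity. A minimal sketch of filtering them out (the bounding-box margin is an arbitrary choice):

import numpy as np
from scipy.spatial import Voronoi

points = np.random.rand(50, 3)  # stand-in for your sampled points
vor = Voronoi(points)

margin = 0.1
bbox_min = points.min(axis=0) - margin
bbox_max = points.max(axis=0) + margin

for vpair in vor.ridge_vertices:
    if any(v < 0 for v in vpair):
        continue  # ridge extends to infinity: skip it, or clip it yourself
    verts = vor.vertices[vpair]
    # ridges near the hull can have finite vertices far outside the
    # region of interest, so clip those as well
    if np.any(verts < bbox_min) or np.any(verts > bbox_max):
        continue
    # ... draw this ridge's edges here ...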
Points generation:
The Poisson disc is fine as sample data or for early R&D, but it's also the most boring one :). You'll need more ways to generate input points.
I tried to imagine the input needed for your ball-like example and I came up with something like this (see the sketch after these steps):
Create two spheres of points, with the same center but different radii.
When you create a Voronoi diagram out of it and remove the infinite areas, you should end up with something like a football.
If you create both spheres randomly you'll get very irregular boundaries of the 'ball', but if you scale the points of one sphere to use for the second one, you should get a regular, ball-like mesh. You can also use similar points but add some random offset to control the level of surface irregularity.
Take your computed diagram and, for each edge, create a few points along that edge; this will give you small areas building up the edges of the bigger areas. Play with random offsets again. Try ignoring edges that don't touch any infinite region to get a result similar to your image.
Take the points from both stages and compute the diagram once more.
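A minimal sketch of the two-sphere idea (the counts, radii and jitter values are arbitrary):

import numpy as np

def sphere_points(n, radius, jitter=0.0, rng=None):
    # uniformly distributed directions scaled to the sphere surface,
    # with an optional random offset to control irregularity
    rng = rng or np.random.default_rng()
    dirs = rng.normal(size=(n, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    pts = dirs * radius
    if jitter:
        pts += rng.normal(scale=jitter, size=pts.shape)
    return pts

outer = sphere_points(200, radius=1.0)
inner = outer * 0.8   # scaled copy of the same points -> regular cells
# inner = sphere_points(200, radius=0.8)  # independent -> irregular cells
points = np.vstack([outer, inner])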
Mesh generation:
Up to now it still doesn't look like your target images. In fact it may be really hard to achieve with production quality (for a Maya plugin), but I see some tricks that may help.
What I would try first is to take all the edges and extrude a circle along each of them (see the sketch below). You may modulate the circle's size to make it slightly bigger at the ends. Then do a Boolean 'OR' (union) between all those meshes and apply some Mesh Smooth at the end.
This may give you similar results, but you'll need to be careful at mesh intersections; they can get ugly and need some special treatment.
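An untested sketch of the extrusion step using Maya commands (the flag values here are my assumptions; double-check the circle and extrude docs before relying on this):

import maya.cmds as cmd

def tube_along_edge(p0, p1, radius=0.02):
    # sweep a circle profile along a straight path curve between two
    # Voronoi vertices; po=1 asks for polygon output
    path = cmd.curve(d=1, p=[p0, p1])
    profile = cmd.circle(r=radius, nr=(0, 0, 1))[0]
    return cmd.extrude(profile, path, et=2, po=1)[0]

The Boolean union and smoothing would then be applied on top of the generated tubes.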
I'm trying to figure out a way of calculating if a world space point is inside of an arbitrary mesh.
I'm not quite sure of the math on how to calculate it if it's not a cube or sphere.
Any help would be great!
One can use a simple ray-tracing trick to test whether you are inside or outside a shape. It turns out that 2D and 3D objects (and possibly even higher-dimensional ones) have a neat property: if you shoot a ray in an arbitrary direction, you are inside the shape if, and only if, you hit the boundary of the shape an odd number of times. There is no need to know the normal direction or anything else; just count how many intersections you have. This is easy to visualize in 2D, and since 3D is just many 2D slices, the same applies to 3D too.
Figure 1: Shooting a ray from a point in an arbitrary direction produces an odd number of hits if you are inside and an even number if you are outside, so O1 is inside and O2 is not. As a special case, glancing hits need to be tested for curves because they make two hits coincide in one place (O3).
Figure 2: Meshed surfaces have a better boundary condition, as only vertex hits are glancing. However, most tracing engines ignore glancing hits, since totally perpendicular hits (O4) would be problematic, so they behave correctly for the purposes of this test. Maya's tracer is no exception.
Please note this method does not need the surface to be closed; it works nonetheless, it just closes the gap in the direction of the ray, so open surfaces can report weird results. This can still be acceptable in some cases.
Admittedly, ray tracing is a pretty heavy operation without acceleration routines, but it becomes quite fast once acceleration is in place. The Maya API provides a method for this. Note that the accelerator is built first, and then each subsequent call is much cheaper. Here is a quickly written scaffold without acceleration; see the docs for MFnMesh for more info on how to accelerate:
import maya.cmds as cmd
import maya.OpenMaya as om

def test_if_inside_mesh(point=(0.0, 0.0, 0.0), dir=(0.0, 0.0, 1.0)):
    sel = om.MSelectionList()
    dag = om.MDagPath()

    # replace torus with arbitrary shape name
    sel.add("pTorusShape1")
    sel.getDagPath(0, dag)

    mesh = om.MFnMesh(dag)

    point = om.MFloatPoint(*point)
    dir = om.MFloatVector(*dir)
    farray = om.MFloatPointArray()

    mesh.allIntersections(
        point, dir,
        None, None,
        False, om.MSpace.kWorld,
        10000, False,
        None,  # replace None with a mesh lookup accelerator if needed
        False,
        farray,
        None, None,
        None, None,
        None
    )
    # an odd number of intersections means the point is inside
    return farray.length() % 2 == 1

# test
cmd.polyTorus()
print(test_if_inside_mesh())
print(test_if_inside_mesh((1, 0, 0)))
In your specific case this may be overkill. I assume you're doing some kind of rejection sampling. It is also possible to build the body out of prisms and randomize with barycentric-like coordinates; this has the advantage of never wasting results. But the tracing code is much easier to use generally.
If you're attempting to solve this problem for arbitrary meshes, you'll have trouble, because not every mesh is closed. If your mesh can be assumed closed and well-formed, you might have to do something like a 3D flood-fill algorithm to determine whether there's a path, from the point you're testing, to a point that can see outside the object.
If you're willing to take a looser approach that gives you an approximate answer, and assumes that normals are all uniformly pointed outward, there's a code example on this page, written in MEL, that you might be able to convert to Python.
http://forums.cgsociety.org/archive/index.php/t-747732.html
Mark is correct: there's no guaranteed test that works for open meshes. Also, arbitrary mesh tests are going to be slow and expensive, so try the cheap tests (bounding sphere and/or bounding box) first. You can also just tell the user 'sorry, no dice' if the mesh has any open edges -- that guarantees there is no solution for the concept of 'inside'.
If you want an approximate answer that's better than a bounds test but not as expensive as something like a voxel test, you can use qHull or something similar to generate a convex hull for your mesh and test against the convex mesh. That will not handle serious concavities or meshes that twist inside out, but it will catch oddly shaped objects with more grace than a bounds test.
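For instance, a minimal sketch using SciPy, which wraps Qhull (Delaunay.find_simplex returns -1 for points outside the hull):

import numpy as np
from scipy.spatial import Delaunay

def inside_convex_hull(mesh_points, query_points):
    # approximate test: 'inside' means inside the convex hull of the
    # mesh vertices, so concavities are ignored
    hull = Delaunay(np.asarray(mesh_points))       # (n, 3) vertex array
    return hull.find_simplex(np.asarray(query_points)) >= 0  # (m, 3) queries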
If you really need speed or have complex objects, you probably want to voxelize the object and test the voxel data. This is typically too math-heavy for scripting (see this, for example) and it's not trivial to implement.
All that said, here's a very hacky approximation of voxels using the built-in nParticles:
If you have nParticles (Maya 2011+) you can try filling your object (nParticles > Create nParticles > Fill Object) with particles. You can then distance-test your point against the position of each particle in the particle set. If the distance to any particle is less than or equal to the particle's radius, you're 'inside' to within half a particle radius of accuracy. You'll note that some shapes can't be filled by nParticles -- those are the kind of shapes you can't test 'insidedness' for anyway.
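A rough, untested sketch of that distance test (the shape name is a placeholder and the query flags are my assumptions; check the nParticle command docs):

import maya.cmds as cmd

def inside_by_fill(point, particle_shape="nParticleShape1"):
    # after Fill Object, test the point against every particle position
    radius = cmd.getAttr(particle_shape + ".radius")
    count = cmd.nParticle(particle_shape, q=True, count=True)
    px, py, pz = point
    for i in range(count):
        x, y, z = cmd.nParticle(particle_shape, q=True,
                                attribute="position", order=i)
        if (x - px) ** 2 + (y - py) ** 2 + (z - pz) ** 2 <= radius ** 2:
            return True
    return False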
I'm trying to make a (sort-of) clone of Asteroids in Python using Pyglet. I figured I'd try to get a little fancy and implement the separating axis theorem to do collision. I got it to work, but the problem is that it's miserably slow. I test collision between bullets that the player shoots and the asteroids on the screen in a double for-loop, which I believe is quadratic time, but the frame rate drops from about 60 to 30 fps by the time there's about 6 asteroids and 6 bullets on the screen, which seems incredibly slow, even for a non-optimized way of detecting collision.
So I ran a profiler to determine where, exactly, in the code the program is getting hung up. It seems to be hung up in the method where I transform shape vertices into world space (I define the shapes around the origin and use OpenGL code to transform to world space for drawing, which I believe is the right way to do it). I grab the transformation matrix from OpenGL, turn it into a NumPy array, and then multiply each vertex by this matrix to get the transformed vertices. It's worth noting that I do this every collision check: I used to use XNA, and when I implemented the SAT in that (I made an asteroids clone there, too), the vertices were also defined around the origin and then you had to transform them using a world matrix.
Is it best to store the vertices around (0, 0) and transform each call, or just store the transformed vertices? I feel like the algorithm shouldn't be THIS slow, so I'm willing to bet I screwed up implementing something. If I was better at profiling (I'm pretty unfamiliar with it) I might be able to get a more complete picture, but I was hoping you guys might have some idea.
Here's a direct link to the file with the Shape class in it, where all the collision logic happens: shape.py. The specific method that the profiler seemed to mark as the bottleneck was __get_transformed_verts. Obviously you can get to the entire repo from there too, but just be aware that there's still a good deal not commented.
As Nico suggests in the comments, a quick way to get a good speed-up is to check simpler geometry first. For an Asteroids clone I'd guess a circle is a good fit (or a sphere in 3D). If bounding circles (made just large enough to cover the actual shape) don't overlap, there is no need to do the more expensive geometry test.
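A minimal sketch of such a broad-phase check (the x, y and radius attributes are assumptions about your Shape class):

def circles_overlap(a, b):
    # compare squared distance against squared radius sum to avoid sqrt
    dx = a.x - b.x
    dy = a.y - b.y
    r = a.radius + b.radius
    return dx * dx + dy * dy <= r * r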
If you have many objects, you will probably want to avoid doing n*n tests every frame. Take a look at space-partitioning structures and algorithms. The simplest scheme for a lot of moving objects in 2D is a grid: then you only need to test objects belonging to the same, or neighbouring, grid cells for collision, as sketched below.
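A sketch of the grid idea (same assumed attributes; cell_size should be at least the diameter of your largest object):

from collections import defaultdict

def build_grid(objects, cell_size):
    # hash each object into the cell containing its centre
    grid = defaultdict(list)
    for obj in objects:
        grid[int(obj.x // cell_size), int(obj.y // cell_size)].append(obj)
    return grid

def nearby(grid, obj, cell_size):
    # only objects in the same or neighbouring cells can collide
    cx, cy = int(obj.x // cell_size), int(obj.y // cell_size)
    for gx in range(cx - 1, cx + 2):
        for gy in range(cy - 1, cy + 2):
            for other in grid.get((gx, gy), ()):
                if other is not obj:
                    yield other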
Another thing I noticed: you generate the transformed vertices every time you test for collision. It would be quicker to generate them only once per timestep (frame) for each object, and only for objects that aren't rejected by the circle-circle test.
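A minimal sketch of that caching (the attribute and method names are placeholders, not the ones in your shape.py):

import math

class Shape(object):
    def __init__(self, local_verts):
        self.local_verts = local_verts   # defined around the origin
        self.x = self.y = self.angle = 0.0
        self.world_verts = []

    def update(self, dt):
        # transform to world space once per frame instead of inside
        # every collision test
        c, s = math.cos(self.angle), math.sin(self.angle)
        self.world_verts = [(self.x + c * vx - s * vy,
                             self.y + s * vx + c * vy)
                            for vx, vy in self.local_verts]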
Hey everyone. I'm really struggling to figure out the logic on this one and was hoping you could help me out. Before I continue, I just want to let you know that I am an amateur programmer, and a beginner at that, with no formal computer science training of any sort, so please bear with me. :D Also, I'm using Python, but I could use Java or something similar.
Anywho, I am looking to implement region growing for use in a rudimentary Drawbot.
Here is an article on region growing: http://en.wikipedia.org/wiki/Region_growing
The way I envision it, the image the drawing is based upon will meet the following criteria:
The image will be at most 3x3 inches in size at an arbitrary Color Depth
The image will be a black continuous shape on a white background
The shape can be located anywhere on the background.
I've considered the following solutions to this problem. While some work to an extent, each has some considerable flaws in either performance or feasibility (at least they don't seem feasible to me). Furthermore, because this is a Drawbot, the drawing needs to be done with a single continuous line. That doesn't mean, however, that I can't backtrack; it only eliminates the possibility of multiple starting points (seeds).
Considered Approaches:
Random Walk:
Solving this problem with a random walk was my first instinct. A random walk program accomplishing this would, I imagine, look something like this:
# pseudo-Python sketch
cells_to_visit = number_of_black_cells
cells_visited = 0
mark_color = red

while cells_visited < cells_to_visit:
    if current_cell is black:
        mark_current_cell_as_visited()  # change pixel to red
        cells_visited += 1
    neighbors = get_adjacent_cells()  # returns cells either black or red
    next_cell = random.choice(neighbors)
    current_cell = next_cell
While I suppose this is feasible, it seems to me to be highly inefficient and doesn't guarantee good results, but in the interest of actually getting something done I may end up trying it... Is my logic in the pseudocode even vaguely correct?
Sweeping Pattern:
This method seemed to me to be the most trivial to implement. My idea here is to choose a starting point at one extreme of the shape (e.g. the lowest, left-most point). From there it would draw to the right, moving only on the x axis until it hit a white pixel. From there it would move up one pixel on the y axis and then move left on the x axis until it reached a white pixel. If the pixel directly above happened to be white, it would backtrack on the x axis until it found a black pixel above.
Upon further inspection, this method has some major shortcomings.
When faced with a shape such as this:
The result will look like this:
And even if I were to tell it to start sweeping down after a while, the middle leg would still be overlooked.
4/8 Connected Neighborhood:
http://en.wikipedia.org/wiki/8-connected_neighborhood
This method appears to me to be the most powerful and effective, but at this point I can't figure it out fully, nor can I think of how I would implement it without potentially leaving some areas overlooked.
At every cell I would look at the neighboring black cells, devise some method for ranking which one to visit first, visit all of them, and repeat the process until all cells are covered.
The problems I can see here are, first of all, dealing with the data structure necessary to accomplish this, and also merely figuring out the logic behind it.
Those are the best solutions I've been able to think of. Thank you for taking the time to read this, I realize it is long, but I thought that I should make it as explicit as possible. Any and all suggestions will be greatly appreciated... Thanks!
Edit:
I also looked into maze-generating and maze-solving algorithms but wasn't sure how to apply them here. My understanding of maze-solving algorithms is that they rely on the passages of the maze being of equal width. I could of course be wrong about that.
Basic region growing, in pseudocode looks something like:
seed_point  // starting point
visited     // boolean array/matrix, same size as image
point_queue // empty queue

point_queue.enqueue( seed_point )
visited( seed_point ) = true

while( point_queue is not empty ) {
    this_point = point_queue.dequeue()
    for each neighbour of this_point {
        if not visited( neighbour ) and neighbour is black/red/whatever {
            point_queue.enqueue( neighbour )
            visited( neighbour ) = true
        }
    }
}

// we are done. the "visited" matrix tells
// us which pixels are in the region
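In Python, a minimal runnable version might look like this (4-connected for simplicity; is_in_region stands in for your black/red test):

from collections import deque

def region_grow(image, seed, is_in_region):
    # image: 2D list/array of pixels; seed: (row, col) starting point
    h, w = len(image), len(image[0])
    visited = [[False] * w for _ in range(h)]
    queue = deque([seed])
    visited[seed[0]][seed[1]] = True
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w
                    and not visited[ny][nx]
                    and is_in_region(image[ny][nx])):
                queue.append((ny, nx))
                visited[ny][nx] = True
    return visited  # True entries are the grown region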
I don't understand where the ranking that you've mentioned comes into it though. Am I missing something?
I'm confused by the very long question.
Are you sure you aren't just trying to do a flood fill?
Here's a really nice little screencast on writing a recursive maze solver: http://thinkcode.tv/catalog/amazing-python/
I think it might give you some ideas for the problem you are trying to solve.
Also, here's a little recursive maze-solving script that I wrote after watching the screencast: http://pastie.org/1854582. Equal-width passages are not necessary; the only things that are necessary are open space, walls, and some kind of ending condition, in this case finding the end of the maze.
If you don't want to go recursive, the other thing you can do is use a "backtracking" method. You can see a little example of it being used in the random generation of mazes on this page:
http://weblog.jamisbuck.org/2011/2/7/maze-generation-algorithm-recap (First example on the page).
Is this sounding relevant? If it is, let me know if you want me to explain anything in more detail.
Edit:
This seems like a really good discussion of doing flood fills in Python: http://www.daniweb.com/software-development/python/threads/148874
A simple technique that helps with some maze-solving problems might help here too: keeping one hand on the wall.
Note, however, that if you choose a random starting point, you might choose a point from which, whichever way you travel, you block off a portion; i.e. if you were to start in the middle of an hourglass shape, you would only be able to fill in one half.