Querying if a point is within a mesh (Maya Python API) - python

I'm trying to figure out a way of calculating if a world space point is inside of an arbitrary mesh.
I'm not quite sure of the math on how to calculate it if it's not a cube or sphere.
Any help would be great!

One can use a simple ray-casting trick to test whether you are inside or outside a shape. It turns out that 2D, 3D, and possibly even higher-dimensional objects have a neat property: if you shoot a ray from your point in any arbitrary direction, you are inside the shape if, and only if, the ray hits the boundary of the shape an odd number of times. No need to know the normal direction or anything else; just count the intersections. This is easy to visualize in 2D, and since 3D is just many 2D slices, the same applies to 3D too.
figure 1: Shooting a ray from a point in an arbitrary direction produces an odd number of hits if the point is inside and an even number if outside, so O1 is inside and O2 is not. As a special case, glancing hits need to be tested for on curves, because they make two hits coincide in one place (O3).
figure 2: Meshed surfaces have a better-behaved boundary, as only exact vertex hits are glancing. However, most tracing engines ignore glancing hits, since totally perpendicular hits (O4) would be problematic, so they behave correctly for the purposes of this test. Maya's tracer is no exception.
Please note that this method does not need the surface to be closed; it works nonetheless. It effectively closes the gap in the direction of the ray, so open surfaces can report weird results, but that can be acceptable in some cases.
Admittedly, ray tracing is a pretty heavy operation without acceleration routines, but it becomes quite fast once acceleration is in place. The Maya API provides a method for this; note that the accelerator is built first, and each subsequent call is then much cheaper. Here is a quickly written scaffold without acceleration; see the MFnMesh docs for more info on how to accelerate:
import maya.cmds as cmd
import maya.OpenMaya as om

def test_if_inside_mesh(point=(0.0, 0.0, 0.0), dir=(0.0, 0.0, 1.0)):
    sel = om.MSelectionList()
    dag = om.MDagPath()

    # replace torus with an arbitrary shape name
    sel.add("pTorusShape1")
    sel.getDagPath(0, dag)

    mesh = om.MFnMesh(dag)

    point = om.MFloatPoint(*point)
    dir = om.MFloatVector(*dir)
    farray = om.MFloatPointArray()

    # collect every intersection along the ray, then test hit-count parity
    mesh.allIntersections(
        point, dir,
        None, None,
        False, om.MSpace.kWorld,
        10000, False,
        None,  # replace None with a mesh lookup accelerator if needed
        False,
        farray,
        None, None,
        None, None,
        None
    )
    return farray.length() % 2 == 1

# test
cmd.polyTorus()
print(test_if_inside_mesh())
print(test_if_inside_mesh((1, 0, 0)))
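For reference, here is a minimal sketch of the accelerated variant, assuming the same mesh function set as above (check the MFnMesh docs for the exact methods available in your Maya version):

# build the acceleration structure once and reuse it across queries
accel = mesh.autoUniformGridParams()
# ...then pass accel in place of the None marked above in allIntersections,
# and release the cached structure when you are done querying:
mesh.freeCachedIntersectionAccelerator()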
In your specific case this may be overkill. I assume you're doing some kind of rejection sampling. It is also possible to build the body out of prisms and randomize with barycentric-like coordinates; that has the advantage of never wasting samples, but the tracing code is much easier to use generally. A small sketch of the barycentric idea follows below.
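Here is a hedged sketch of barycentric sampling for one tetrahedron (you would first decompose your prisms into tets; the function name is my own illustration). Dirichlet(1, 1, 1, 1) weights are uniform over the simplex, so the resulting points are uniform inside the tet:

import numpy as np

def sample_in_tet(verts, n, rng=None):
    rng = rng or np.random.default_rng()
    w = rng.dirichlet(np.ones(4), size=n)  # uniform barycentric weights
    return w @ verts                       # (n, 4) @ (4, 3) -> (n, 3) points

tet = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
print(sample_in_tet(tet, 3))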

If you're attempting to solve this problem for arbitrary meshes, you'll have trouble, because not every mesh is closed. If your mesh can be assumed closed and well-formed, then you might have to do something like a 3D flood-fill algorithm to determine whether there's a path from the point you're testing that can see outside the object.
If you're willing to take a looser approach that gives you an approximate answer, and assumes that normals are all uniformly pointed outward, there's a code example on this page, written in MEL, that you might be able to convert to Python.
http://forums.cgsociety.org/archive/index.php/t-747732.html

Mark is correct: there's no guaranteed test that works for open meshes. Also, arbitrary mesh tests are going to be slow and expensive, so try the cheap tests (bounding sphere and/or bounding box) first. You can also just tell the user 'sorry, no dice' if the mesh has any open edges -- that guarantees there is no solution for the concept of 'inside'.
If you want an approximate answer that's better than a bounds test but not as expensive as something like a voxel test, you can use Qhull or something similar to generate a convex hull for your mesh and test against the convex mesh. That will not handle serious concavities or meshes that twist inside out, but it will catch oddly shaped objects with more grace than a bounds test.
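As an illustration of the hull test (my own sketch, using scipy's Qhull wrapper rather than anything Maya-specific): a point is inside the hull exactly when it lies on the inner side of every hull facet.

import numpy as np
from scipy.spatial import ConvexHull

def inside_hull(hull, p, tol=1e-9):
    # hull.equations rows are [outward normal | offset]; the point is
    # inside when every facet plane reports it on the negative side
    return bool(np.all(hull.equations[:, :-1] @ p + hull.equations[:, -1] <= tol))

cloud = np.random.rand(500, 3)  # stand-in for your mesh vertices
hull = ConvexHull(cloud)
print(inside_hull(hull, np.array([0.5, 0.5, 0.5])))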
If you really need speed or have complex objects, you probably want to voxelize the object and test the voxel data. This is typically too math-heavy for scripting (see this, for example) and it's not trivial to implement.
All that said, here's a very hacky approximation of voxels using the built-in nParticles:
If you have nParticles (Maya 2011+) you can try filling your object (nParticles > createNParticles > Fill Object) with particles. You can then distance-test your point against the position of each particle in the particle set. If the distance to any particle is less than or equal to the particle radius, you're 'inside' to within half a particle radius of accuracy. You'll note that some shapes can't be filled by nParticles -- those are the kinds of shapes you can't test 'insidedness' for anyway.
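A rough sketch of that distance test (the shape name is hypothetical, and you should double-check the particle query flags against your Maya version's docs):

import math
import maya.cmds as cmds

def inside_by_particles(point, shape="nParticleShape1"):
    radius = cmds.getAttr(shape + ".radius")
    count = cmds.particle(shape, q=True, count=True)
    for i in range(count):
        pos = cmds.getParticleAttr("%s.pt[%d]" % (shape, i),
                                   at="position", array=True)
        d = math.sqrt((point[0] - pos[0]) ** 2 +
                      (point[1] - pos[1]) ** 2 +
                      (point[2] - pos[2]) ** 2)
        if d <= radius:
            return True
    return False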

Related

Is there a way to get matplotlib.path.contains_points to be inclusive of boundary points

We have a hit-test use case that also needs to consider points on the actual boundary of a path. We're using matplotlib.path.contains_points to test a bunch of points en masse, and it's working very well for us. Today I discovered it returns False for points on the boundary; in our case the origin (0, 0), which is a point on the path.
Is there a way to configure the function call to yield True on boundary points, that is, to be inclusive of the boundary? I'm only seeing support for an exclusive test, but according to the documentation there are other parameters, transform and radius; maybe they have some magic value that pushes the result to True for boundary points?
contains_points(points, transform=None, radius=0.0)

Returns a bool array which is True if the path contains the corresponding point.
If transform is not None, the path will be transformed before performing the test.
radius allows the path to be made slightly larger or smaller.
A quick scan of the source code says no. matplotlib isn't really intended for spatial operations; it only has a little support for them because it's an old codebase with a huge API surface.
I suggest using shapely instead, which is built specifically for spatial operations like this one. With shapely you can take advantage of object.intersects, which is inclusive of boundary points, to handle this.
You can roll your points into a pandas.Series of shapely.Point objects and your polygon (given a sequence of coordinates) into a poly = shapely.Polygon(coordinates) object, then do something like points_that_intersect = points[points.map(lambda p: poly.intersects(p))].
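A runnable sketch of that approach (assuming shapely 2.x; on older versions import Point and Polygon from shapely.geometry instead):

import pandas as pd
from shapely import Point, Polygon

poly = Polygon([(0, 0), (2, 0), (2, 2), (0, 2)])
points = pd.Series([Point(0, 0), Point(1, 1), Point(3, 3)])

# intersects() is inclusive: the boundary point (0, 0) tests True
mask = points.map(lambda p: poly.intersects(p))
print(points[mask])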

How to determine and extract surface points of 3D object?

I have a 3D object which is not hollow, so it is made of many 3D points. How would you determine which of these points (especially for an object with a very curvaceous surface) are on the surface? I understand how to extract them, but I need either a function somewhat like libraryUNK.surfacePoint..., which I don't know,
or, better, an understanding of what is considered to be a surface point, which I don't know either and couldn't (yet) develop a proper definition for myself.
I know I can do triangulation to get the surface. But I don't get what to do next, as I will be left with a set of triangles, some of which are on the surface and some of which are not... and again I have no definition of how to decide what's on the surface and what is not...
It seems to me that you want to compute the convex hull of your 3D point cloud.
It's not an easy problem, but there are plenty of solutions (and algorithms) to do it; one of the efficient ones is Quickhull. There's a ConvexHull function in scipy.spatial.
Here are the details, with an example (2D, but it works in any dimension):
http://scipy.github.io/devdocs/generated/scipy.spatial.ConvexHull.html
This function uses the Qhull library:
http://www.qhull.org/
There are plenty of resources on the Qhull page. There's also a Wikipedia page (again, this is not the only method to compute convex hulls; you may want to try others):
https://en.wikipedia.org/wiki/Quickhull
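As a quick sketch of the scipy route (the points here are random stand-ins for your cloud): the hull's vertices attribute indexes exactly the input points that lie on the surface of the hull.

import numpy as np
from scipy.spatial import ConvexHull

points = np.random.rand(1000, 3)        # hypothetical solid point cloud
hull = ConvexHull(points)               # Qhull under the hood
surface_points = points[hull.vertices]  # input points on the hull surface
print(len(surface_points), "of", len(points), "points are on the hull")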
Edit: after re-reading the question, it seems your volume may not be convex. Unfortunately, if it isn't, there's no way to tell whether a point is inside the volume or on the surface without additional information about the volume or the point cloud.

Optimizing scipy.spatial.Delaunay.find_simplex

I have a set of points in a plane where each point has an associated altitude. I'm thinking of using the scipy.spatial library to compute the Delaunay triangulation of the point set and then use the result to interpolate for the points in between.
The library implements a nice function that, given a point, finds the triangle it lies in. This would be particularly useful when calculating the depth map from the mesh. I assume, though (please correct me if I'm wrong), that the search function searches from the same starting point every time it is called. Since the points I will be looking up will tend to lie either in the triangle the previous one was in, or in an adjacent one, I figure that's unnecessary, but I can't seem to find a way to optimize the search other than implementing it myself.
Is there a way to set the initial triangle for the search, or to optimize the depth map calculation otherwise?
You can try a point-location structure, for example the Kirkpatrick algorithm/data structure: basically you subdivide the mesh along both axes and re-triangulate it. A simpler alternative is to give each triangle a color, draw the triangulation into a bitmap, and then look up the color of the bitmap pixel under each query point.
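Another cheap option, sketched below under my own assumptions about the access pattern: check the previously found simplex first via its barycentric coordinates (scipy exposes the affine maps as Delaunay.transform) and only fall back to the full find_simplex search on a miss.

import numpy as np
from scipy.spatial import Delaunay

def locate(tri, p, last=-1, eps=1e-12):
    if last >= 0:
        T = tri.transform[last]      # affine map to barycentric coordinates
        b = T[:2].dot(p - T[2])
        bary = np.append(b, 1.0 - b.sum())
        if (bary >= -eps).all():
            return last              # still inside the previous triangle
    return int(tri.find_simplex(p.reshape(1, -1))[0])  # full search fallback

pts = np.random.rand(100, 2)
tri = Delaunay(pts)
s = locate(tri, np.array([0.5, 0.5]))
s = locate(tri, np.array([0.5, 0.51]), last=s)  # likely a cheap warm hit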

Pipeline to create Voronoi Meshes

I would like to implement a Maya plugin (this question is independent of Maya) to create 3D Voronoi patterns, something like this:
I just know that I have to start from point sampling (I implemented the adaptive poisson sampling algorithm described in this paper).
I thought that, from those points, I should create the 3D wire of the mesh by applying Voronoi, but the result was different from what I expected.
Here are a few examples of what I get when handling the result from scipy.spatial.Voronoi like this (as suggested here):
vor = Voronoi(points)
for vpair in vor.ridge_vertices:
    for i in range(len(vpair) - 1):
        if all(x >= 0 for x in vpair):
            v0 = vor.vertices[vpair[i]]
            v1 = vor.vertices[vpair[i+1]]
            create_line(v0.tolist(), v1.tolist())
The grey vertices are the sampled points (the original shape was a simple sphere):
Here is a more complex shape (an arm)
Am I missing something? Can anyone suggest the proper pipeline and algorithms I have to implement to create such patterns?
I've seen your question since you posted it but didn't have a real answer for you. However, since I see you still haven't received any response, I'll at least write down some ideas. Unfortunately it's still not a full solution to your problem.
It seems to me that you're mixing a few separate problems in this question, so it would help to break it down into a few pieces:
Voronoi diagram:
The diagram is by definition infinite, so when you draw it directly you should expect a mess similar to what you got in your second image; that seems fine. I don't know how SciPy does it, but the implementation I've used flagged some edge ends as 'infinite' and provided the edge direction, so I could clip them at some distance myself. You'll need to check the exact data you get from SciPy.
In the 3D world you’ll almost always want to remove such infinite areas to get any meaningful rendering, or at least remove the area that contains your camera.
Points generation:
The Poisson disc is fine as sample data or for early R&D, but it's also the most boring option :). You'll need more ways to generate input points.
I tried to imagine the input needed for your ball-like example and I came up with something like this:
Create two spheres of points, with the same center but different radii.
When you create a Voronoi diagram out of them and remove the infinite areas, you should end up with something like a football.
If you create both spheres randomly you'll get very irregular boundaries on the 'ball', but if you scale the points of one sphere to use for the second one, you should get a regular mesh similar to a ball. You can also use similar points but add some random offset to control the level of surface irregularity.
Take your computed diagram and, for each edge, create a few points along that edge; this gives you small areas building up the edges of the bigger areas. Play with random offsets again. Try to ignore edges that don't touch any infinite region, to get a result similar to your image.
Get the points from both stages and compute the diagram once more (a rough sketch of the sphere setup is shown below).
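A minimal sketch of the two-spheres idea (my own illustration of the setup above, not production code):

import numpy as np
from scipy.spatial import Voronoi

def sphere_points(n, radius, rng):
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)  # unit directions
    return v * radius

rng = np.random.default_rng(0)
outer = sphere_points(400, 1.0, rng)
inner = outer * 0.8  # scaled copy of the same points keeps the boundary regular
vor = Voronoi(np.vstack([outer, inner]))
# ...then drop regions whose ridges reference the infinite vertex (index -1)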
Mesh generation:
Up to this point the result still won't look like your target images. In fact it may be really hard to reach production quality (for a Maya plugin), but I see some tricks that may help.
What I would try first is to take all the edges and extrude some circle along them. You may modulate the circle size to make it slightly bigger at the ends. Then do a Boolean union between all those meshes and apply a Mesh Smooth at the end.
This may give you similar results, but you'll need to be careful at mesh intersections; they can get ugly and need some special treatment.

Separating Axis Theorem in Python/Pyglet

I'm trying to make a (sort-of) clone of Asteroids in Python using Pyglet. I figured I'd try to get a little fancy and implement the separating axis theorem to do collision. I got it to work, but the problem is that it's miserably slow. I test collision between bullets that the player shoots and the asteroids on the screen in a double for-loop, which I believe is quadratic time, but the frame rate drops from about 60 to 30 fps by the time there's about 6 asteroids and 6 bullets on the screen, which seems incredibly slow, even for a non-optimized way of detecting collision.
So I ran a profiler to determine where, exactly, in the code the program is getting hung up. It seems to be hung up in the method where I transform shape vertices into world space (I define the shapes around the origin and use OpenGL code to transform to world space for drawing, which I believe is the right way to do it). I grab the transformation matrix from OpenGL, turn it into a NumPy array, and then multiply each vertex by this matrix to get the transformed vertices. It's worth noting that I do this every collision check: I used to use XNA, and when I implemented the SAT in that (I made an asteroids clone there, too), the vertices were also defined around the origin and then you had to transform them using a world matrix.
Is it best to store the vertices around (0, 0) and transform each call, or just store the transformed vertices? I feel like the algorithm shouldn't be THIS slow, so I'm willing to bet I screwed up implementing something. If I was better at profiling (I'm pretty unfamiliar with it) I might be able to get a more complete picture, but I was hoping you guys might have some idea.
Here's a direct link to the file with the Shape class in it, where all the collision logic happens: shape.py. The specific method that the profiler seemed to mark as the bottleneck was __get_transformed_verts. Obviously you can get to the entire repo from there too, but just be aware that there's still a good deal not commented.
As Nico suggests in comments, a quick way to get a good speed-up would be to check simpler geometry first. For an Asteroids clone I guess a circle will be a good fit (or sphere for 3D). If the circles (at least large enough to cover your actual shape) don't overlap, then there is no need to do the more expensive geometry test.
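A minimal sketch of that broad-phase test (the Shape fields and sat_test below are hypothetical placeholders for your own classes); comparing squared distances avoids a sqrt per check:

def circles_overlap(a, b):
    dx = a.x - b.x
    dy = a.y - b.y
    r = a.radius + b.radius
    return dx * dx + dy * dy <= r * r

def collide(a, b):
    # only run the expensive SAT when the bounding circles overlap
    return circles_overlap(a, b) and sat_test(a, b)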
If you have many objects, you will probably want to avoid doing n*n tests every frame. Take a look at space partitioning structures/algorithms. The simplest scheme with a lot of moving objects in 2D would be a grid. Then you only need to test objects belonging to the same - or neighbouring - grid cells for collision.
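A sketch of the grid idea (again with hypothetical obj.x/obj.y/obj.radius fields): hash every object into the cells its bounding circle touches, then only test pairs sharing a cell.

from collections import defaultdict
from itertools import product

def build_grid(objects, cell=64.0):
    grid = defaultdict(list)
    for obj in objects:
        x0, x1 = int((obj.x - obj.radius) // cell), int((obj.x + obj.radius) // cell)
        y0, y1 = int((obj.y - obj.radius) // cell), int((obj.y + obj.radius) // cell)
        for key in product(range(x0, x1 + 1), range(y0, y1 + 1)):
            grid[key].append(obj)
    return grid

def candidate_pairs(grid):
    seen = set()
    for bucket in grid.values():
        for i, a in enumerate(bucket):
            for b in bucket[i + 1:]:
                if (id(a), id(b)) not in seen:
                    seen.add((id(a), id(b)))
                    yield a, b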
Another thing I noticed: you generate the transformed vertices every time you test for collision. It would be quicker to generate them only once per timestep (frame), and only for the objects that aren't rejected by the circle-circle test.
