I'm seeing some really weird behaviour when using vedo's intersectWith() function to check intersections between meshes. I'm checking intersections between thousands of simple planar meshes (basically squares of the same size, made of 2 triangles, rotated at different angles about both the z and x axes), and sometimes, when checking some of those pairs, Python crashes. There is no error; it just crashes inside that function.
I found out that the problem happens when the center points of both meshes are exactly the same. I worked around that by adding noise, but some other combinations still cause the same crash.
I was able to isolate one of the cases and put together a small test example that reproduces the crash:
import numpy as np
import vedo as vd

m1v = np.array([[17903.93947473, 11418.12995419, 2572.68075251],[17889.43531656, 11423.09300945, 2546.89320749],[17913.65204344, 11446.51421055, 2572.68075251],[17899.14788527, 11451.47726581, 2546.89320749]])
m2v = np.array([[17897.68174692, 11454.30365475, 2573.22453367],[17917.8546173 , 11442.45883457, 2554.44274633],[17882.4917027 , 11428.43354543, 2573.22453367],[17902.66457308, 11416.58872525, 2554.44274633]])
m1f = [[0,1,2],[1,2,3]]
m1 = vd.Mesh([m1v, m1f])
m2 = vd.Mesh([m2v, m1f])  # note: reuses the same face list as m1
m1.intersectWith(m2, tol=1e-06)  # the Python process dies here, with no traceback
I suspect (though I don't really know) that when some coordinates of the vertices are the same, it crashes for some reason (maybe a division by zero or something similar in the intersection check).
Has anybody working with vedo (or VTK; vedo is a wrapper of VTK) had the same issue, or any idea why it's happening?
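For reference, this is the noise workaround mentioned above in sketch form; the jitter magnitude eps is only an assumption, chosen to be tiny relative to the coordinates involved:

import numpy as np
import vedo as vd

eps = 1e-3  # hypothetical jitter scale, small compared to the coordinates above
m2v_jittered = m2v + np.random.uniform(-eps, eps, m2v.shape)  # perturb the vertices slightly
m2_j = vd.Mesh([m2v_jittered, m1f])
contour = m1.intersectWith(m2_j, tol=1e-06)  # usually avoids the degenerate configuration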
I'm trying to replicate an N-dimensional Delaunay triangulation that is performed by the Matlab delaunayn function in Python using the scipy.spatial.Delaunay function. However, while the Matlab function gives me the result I want and expect, scipy is giving me something different. I find this odd considering both are wrappers of the QHull library. I assume Matlab is implicitly setting different parameters in its call. The situation I'm trying to replicate between the two of them is found in Matlab's documentation.
The setup is a cube with a point in the center, as below. I provided the blue lines to help visualize the shape, but they have no meaning for this problem.
The triangulation I expect from this results in 12 simplices (listed in the Matlab example) and looks like the following.
However, this Python equivalent produces "extra" simplices.
import numpy as np
import scipy.spatial
x = np.array([[-1,-1,-1],[-1,-1,1],[-1,1,-1],[1,-1,-1],[1,1,1],[1,1,-1],[1,-1,1],[-1,1,1],[0,0,0]])
simp = scipy.spatial.Delaunay(x).simplices
The returned variable simp should be an M x N array where M is the number of simplices found (should be 12 for my case) and N is the number of points in the simplex. In this case, each simplex should be a tetrahedron meaning N is 4.
What I'm finding though is that M is actually 18 and that the extra 6 simplices are not tetrahedrons, but rather the 6 faces of the cube.
What's going on here? How can I limit the returned simplices to only be tetrahedrons? I used this simple case to demonstrate the problem so I'd like a solution that isn't tailored to this problem.
EDIT
Thanks to an answer by Amro, I was able to figure this out and I can get a match in simplices between Matlab and Scipy. There were two factors in play. First, as pointed out, Matlab and Scipy use different QHull options. Second, QHull returns simplices with zero volume. Matlab removes these, Scipy doesn't. That was obvious in the example above because all 6 extra simplices were the zero-volume coplanar faces of the cube. These can be removed, in N dimensions, with the following bit of code.
N = 3  # the dimensions of our points (`points` here is the array `x` from above)
options = 'Qt Qbb Qc' if N <= 3 else 'Qt Qbb Qc Qx'  # set the QHull options MATLAB uses
tri = scipy.spatial.Delaunay(points, qhull_options=options).simplices
keep = np.ones(len(tri), dtype=bool)
for i, t in enumerate(tri):
    if abs(np.linalg.det(np.hstack((points[t], np.ones([1, N + 1]).T)))) < 1E-15:
        keep[i] = False  # simplex has zero volume (coplanar points), we don't want to keep it
tri = tri[keep]
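For larger point sets, the same zero-volume test can be done with one batched determinant instead of a Python loop; a rough sketch, assuming points and tri as above:

homog = np.concatenate([points[tri], np.ones((len(tri), N + 1, 1))], axis=2)  # (M, N+1, N+1) stack
vols = np.abs(np.linalg.det(homog))  # proportional to each simplex's volume
tri = tri[vols > 1e-15]              # drop the degenerate (coplanar) simplices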
I suppose the other conditions should be addressed, but my points are already guaranteed to contain no duplicates, and the orientation condition appears to have no effect on the outputs that I can discern.
Some notes comparing MATLAB and SciPy functions:
According to the MATLAB docs, by default it uses the Qhull options Qt Qbb Qc for 3-dimensional input, while SciPy uses Qt Qbb Qc Qz.
Not sure if it matters, but your NumPy array is not in the same order as the points created with ndgrid in MATLAB.
In fact if you look at the MATLAB code in edit delaunayn.m, you can see three extra steps performed:
First it merges duplicate points with mergeDuplicatePoints (this is not an issue in your case).
Then it enforces an orientation convention for the points (see the code).
Finally, after getting the result from Qhull (implemented as the MEX-function qhullmx), there is the following comment above a few lines of code:
Strip the zero volume simplices that may have been created by the presence of degeneracy.
Since the file is copyrighted, I won't post the code here, but you can check it on your end.
I have made a three-way Venn diagram. I have three issues with it that I can't seem to solve.
What is the code to move the circle labels (i.e. "Set1", "Set2", "Set3")? Right now one is too far away from its circle.
What is the code to make the circles three equal sizes / to change the circle size?
What is the code to move the circles around the plot? Right now set2 is inside set3 (but coloured differently); I would like the diagram to look more like the "standard" way of showing a Venn diagram (i.e. 3 separate circles with some overlap in the middle).
On another note, I found it difficult to find what commands such as "set_x" and "set_alpha" should be; if anyone knows of a manual that would answer my questions above I would appreciate it, as I couldn't find one place with all the information I needed.
import sys
import numpy
import scipy
from matplotlib_venn import venn3,venn3_circles
from matplotlib import pyplot as plt
#Build three lists to make 3 way venn diagram with
list_line = lambda x: set([line.strip() for line in open(sys.argv[x])])
set1,set2,set3 = list_line(1),list_line(2),list_line(3)
#Make venn diagram
vd = venn3([set1,set2,set3],set_labels=("Set1","Set2","Set3"))
#Colours: get the HTML codes from the net
vd.get_patch_by_id("100").set_color("#FF8000")
vd.get_patch_by_id("001").set_color("#5858FA")
vd.get_patch_by_id("011").set_color("#01DF3A")
#Move the numbers in the circles
vd.get_label_by_id("100").set_x(-0.55)
vd.get_label_by_id("011").set_x(0.1)
#Strength of colour (alpha): 0 is fully transparent, 1 is fully opaque
vd.get_patch_by_id("100").set_alpha(0.8)
vd.get_patch_by_id("001").set_alpha(0.6)
vd.get_patch_by_id("011").set_alpha(0.8)
plt.title("Venn Diagram",fontsize=14)
plt.savefig("output",format="pdf")
What is the code to move the circle labels (i.e. "Set1", "Set2", "Set3")? Right now one is too far away from the circle.
Something like this:
lbl = vd.get_label_by_id("A")
x, y = lbl.get_position()
lbl.set_position((x+0.1, y-0.2)) # Or whatever
The "A", "B", and "C" are predefined identifiers, denoting the three sets.
What is the code to make the circles be three equal sizes/change the circle size?
If you do not want the circle/region sizes to correspond to your data (not necessarily a good idea), you can get an unweighted ("classical") Venn diagram using the function venn3_unweighted:
from matplotlib_venn import venn3_unweighted
venn3_unweighted(...same parameters you used in venn3...)
You can further cheat and tune the result by providing a subset_areas parameter to venn3_unweighted - this is a seven-element vector specifying the desired relative size of each region. In this case the diagram will be drawn as if the region areas were subset_areas, yet the numbers will be shown from the actual subsets. Try, for example:
venn3_unweighted(...., subset_areas=(10,1,1,1,1,1,1))
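Put together with the sets from the question, a minimal sketch of the unweighted call would be:

from matplotlib_venn import venn3_unweighted
vd = venn3_unweighted([set1, set2, set3], set_labels=("Set1", "Set2", "Set3"))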
What is the code to move the circles around the plot.
The need to "move the circles around" is somewhat unusual - normally you would either want the circles to be positioned so that their intersection sizes correspond to your data, or use the "default" positioning. The functions venn3 and venn3_unweighted cater to those two requirements. Moving circles around arbitrarily is possible, but would require some lower-level coding and I'd advice against that.
I found it difficult to find what the commands such as "set_x", "set_alpha" should be
The object you get when you call vd.get_label_by_id is a Matplotlib Text object. You can read about its methods and properties here. The object returned by vd.get_patch_by_id is a PathPatch; look here and here for reference.
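In practice, any Text or Patch method can be called on the returned objects; a small sketch using the vd object from the question:

lbl = vd.get_label_by_id("A")
lbl.set_fontsize(12)                              # Text method: change the label's font size
vd.get_patch_by_id("100").set_edgecolor("black")  # Patch method: outline one region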
I have written a Python script for ABAQUS to create several parts with many partitions. To get a structured mesh I have to select several edges. Now there is one edge I apparently cannot select in ABAQUS 6.10 & 6.11. Oddly, everything is fine with ABAQUS 6.13+.
p = mdb.models[name_model].parts[name_part_1]
e = p.edges
pickedEdges = e.getByBoundingBox(((cos(alpha_rad)*ri)-delta_p),((sin(alpha_rad)*ri)-delta_p),0.0,
((cos(alpha_rad)*d_core/2)+delta_p),((sin(alpha_rad)*d_core/2)+delta_p),0.0)
p.seedEdgeByBias(biasMethod=SINGLE, end2Edges=pickedEdges, ratio=bias_f, number=elem_num_rad, constraint=FINER)
Here, 'ri' is used to describe a radius, and 'delta_p' (=0.001) is used to get a bounding box slightly bigger than the original edge.
I also tried using a bigger bounding box by increasing delta_p, but nothing works.
Any ideas? Thank you in advance! :)
There is a sketch linked in the original post: the described bounding box is box E, and I am trying to get the orange line.
It's not clear from your post why the method isn't working.
You could determine a point on your edge and use the findAt method instead of getByBoundingBox.
If you set delta_p to a very large number, you should select every edge in your model. It's not clear what you mean by "not working".
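A sketch of that findAt suggestion, assuming the edge is the straight radial segment between radius ri and d_core/2 at angle alpha_rad in the z=0 plane (as the bounding box above implies); r_mid is just a hypothetical point on that edge away from its vertices:

from math import cos, sin

r_mid = (ri + d_core / 2.0) / 2.0  # assumed to lie on the edge, between its end points
pickedEdges = e.findAt(((cos(alpha_rad) * r_mid, sin(alpha_rad) * r_mid, 0.0),))
p.seedEdgeByBias(biasMethod=SINGLE, end2Edges=pickedEdges, ratio=bias_f,
                 number=elem_num_rad, constraint=FINER)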
I am creating a toolset for creating NURBS curves/surfaces inside Maya using Python.
I have a set of dictionaries that include cvPositions, knots, form etc., each of which describes a preset 3d shape (cube, circle, pyramid etc.).
I also have a 3d matrix stored in the node's metadata that is used as an offset for the shape. This allows you to scale/move/rotate the shape without moving the transform.
The problem is that the way I am applying this matrix is very slow:
First I will create a new (edit)transform at the position of the (orig)transform containing the curves.
Next I will transfer cv positions in world space from (orig)transform to (edit)transform
Next I will move the (edit)transform into the matrix position.
Finally I will transfer the cvPositions back to the (orig)transform
When creating hundreds of shapes, this is becoming prohibitively slow...
Can someone describe a mathematical way to apply a matrix to a set of 3d points?
Perhaps using one of the math modules or numpy?
Alternatively,
Is there a way using OpenMaya api functions to do this? Perhaps with MPointArray? This is as far as I have gotten on that front:
import maya.OpenMaya as OpenMaya

crv = OpenMaya.MFnNurbsCurve(self.dagPath)
cvs = OpenMaya.MPointArray()
space = OpenMaya.MSpace.kWorld
crv.getCVs(cvs, space)
positions = []
for i in range(cvs.length()):
    pt = cvs[i]
    positions.append((pt[0], pt[1], pt[2]))
The easiest method is to use pymel's built-in versions of points and matrices (pymel is built into Maya 2011+). The math types are in pymel.datatypes; here's an example of transforming a point by a matrix in pymel:
import pymel.core as pm
pt = pm.datatypes.Point(0,0,0)
mt = pm.datatypes.Matrix(1,0,0,0, 0,1,0,0, 0,0,1,0, 5,5,5,1 )
moved = pt * mt
print moved
# [5,5,5]
Pymel points and matrices will let you do your algorithm. The math is going to be done in the API but the Python <> C++ conversions may still make it feel pretty slow for big data.
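If you would rather do it as plain math, here is a NumPy sketch of the same operation, assuming the usual Maya row-vector convention (translation in the last row of a flat 4x4 matrix, as in the pymel example above):

import numpy as np

def transform_points(points, matrix16):
    # matrix16 is a flat list of 16 numbers, row-major, row-vector convention
    m = np.asarray(matrix16, dtype=float).reshape(4, 4)
    pts = np.asarray(points, dtype=float)              # shape (n, 3)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coordinates, shape (n, 4)
    return pts_h.dot(m)[:, :3]                         # transformed points, shape (n, 3)

# same translation as the pymel example:
moved = transform_points([(0, 0, 0)], [1,0,0,0, 0,1,0,0, 0,0,1,0, 5,5,5,1])  # -> [[5., 5., 5.]]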
It sounds like you're basically re-creating 'freeze transforms' followed by 'zero pivots'. Maybe you should try that as an alternative to doing this in python math...
I'm trying to draw fast lines using pygame that aren't rendered directly to the screen. I've got a Python list as large as the number of pixels for the desired resolution, and it stores integer values corresponding to the number of times each pixel was hit by the line algorithm. Using this, a 2D heat map is built up: rather than drawing a flat pixel value, pixel values are incremented based on the number of times a line runs through them, and "hot" pixels get brighter colours.
The reason for doing it this way is that we don't know in advance how many of these lines are going to get drawn, and what the maximum number of times any given pixel is going to be hit. Since we'd like to scale the output so that each rendering has the correct maximum and minimum RGB values, we can't just draw to the screen.
Is there a better way to draw these lines than a relatively naive Bresenham's algorithm? Here's the critical part of the drawLine function:
# before the loop, to save repeated multiplications
xm = []
for i in range(resolution[0]):
    xm.append(i * resolution[0])

# inside of drawLine, index into the f list, of size resolution[0] * resolution[1]
for x in range(x0, x1 + 1):
    if steep:
        idx = y + xm[x]
        f[idx] += 1
    else:
        idx = x + xm[y]
        f[idx] += 1
The end result is scaled and drawn to the screen based on the maximum value inside of f. For example, if the maximum value is 1000, then you can assume the RGB value of each of the pixels is (f[i] * 255) / 1000.
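That scaling step, written out as a small sketch over the flat f list:

f_max = max(f) or 1                       # guard against dividing by zero if nothing was drawn
shades = [(v * 255) // f_max for v in f]  # one grey level (0-255) per pixel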
The profile information says that runtime is dominated by the index lookups into f. I've used previous questions here to prove that these basic lists are faster than numpy arrays or arrays in Python, but for drawing lines like this, it still seems like there's room to improve.
What's a good and fast method for drawing an unknown number of lines to the screen, knowing that you'll be scaling the output in the end to render to the screen? Is there a good way to get rid of the index overhead?
Try Cython or something similar. (If you do, I would be interested in knowing if/how much that helped)
Cython is a programming language to simplify writing C and C++ extension modules for the CPython Python runtime. Strictly speaking, Cython syntax is a superset of Python syntax, additionally supporting: direct calling of C functions, or C++ functions/methods, from Cython code; strong typing of Cython variables, classes, and class attributes as C types. Cython compiles to C or C++ code rather than Python, and the result is used as a Python extension module or as a stand-alone application embedding the CPython runtime. (http://en.wikipedia.org/wiki/Cython)
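If Cython is not an option, one way to attack the index overhead directly is to keep f as a NumPy array and accumulate a whole line's flat indices in one call; a rough sketch, where the linspace ramp merely stands in for the real Bresenham y-steps and resolution is just an example size:

import numpy as np

resolution = (640, 480)  # example size
f = np.zeros(resolution[0] * resolution[1], dtype=np.int64)

def accumulate_line(f, x0, y0, x1, y1, steep, width):
    xs = np.arange(x0, x1 + 1)
    ys = np.round(np.linspace(y0, y1, len(xs))).astype(np.int64)  # stand-in for Bresenham steps
    idx = ys + xs * width if steep else xs + ys * width           # same indexing as xm above
    np.add.at(f, idx, 1)                                          # correctly counts repeated indices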