NurbsCurve matrix math with the Maya API in Python

I am creating a toolset for creating nurbs curves/surfaces inside maya using python.
I have a set of dictionaries that include cvPositions, knots, form etc. each of which describe a preset 3d shape (cube, circle, pyramid etc).
I also have a 3d matrix stored in the nodes metadata that is used as an offset for the shape. This allows you to scale/move/rotate the shape without moving the transform.
The problem is that the way I am applying this matrix is very slow:
First I create a new (edit) transform at the position of the (orig) transform containing the curves.
Next I transfer the CV positions in world space from the (orig) transform to the (edit) transform.
Next I move the (edit) transform into the matrix position.
Finally I transfer the CV positions back to the (orig) transform.
When creating hundreds of shapes, this is becoming prohibitively slow...
Can someone describe a mathematical way to apply a matrix to a set of 3d points?
Perhaps using one of the math modules or numpy?
Alternatively,
Is there a way using OpenMaya api functions to do this? Perhaps with MPointArray? This is as far as I have gotten on that front:
crv = OpenMaya.MFnNurbsCurve( self.dagPath )
cvs = OpenMaya.MPointArray()
space = OpenMaya.MSpace.kWorld
crv.getCVs(cvs, space)
positions = []
for i in range(cvs.length()):
    pt = cvs[i]
    positions.append( (pt[0], pt[1], pt[2]) )

The easiest method is to use pymel's built-in point and matrix types (pymel ships with Maya 2011+). The math types live in pymel.core.datatypes; here's an example of transforming a point by a matrix in pymel:
import pymel.core as pm
pt = pm.datatypes.Point(0,0,0)
mt = pm.datatypes.Matrix(1,0,0,0, 0,1,0,0, 0,0,1,0, 5,5,5,1 )
moved = pt * mt
print(moved)
# [5,5,5]
Pymel points and matrices will let you implement your algorithm. The math is done in the API, but the Python <-> C++ conversions may still make it feel pretty slow for big data sets.
It sounds like you're basically re-creating 'freeze transforms' followed by 'zero pivots'. Maybe you should try that as an alternative to doing this in python math...
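For completeness, the purely mathematical route the question asks about is easy with numpy (a sketch, not part of the original answer; it assumes the offset matrix is stored row-major in Maya's row-vector convention, i.e. p' = p @ M): promote the CVs to homogeneous row vectors and do a single matrix multiply for the whole batch.

```python
import numpy as np

def transform_points(points, matrix):
    """Apply a 4x4 transform (Maya row-vector convention: p' = p @ M)
    to an (N, 3) array of points in one vectorized operation."""
    pts = np.asarray(points, dtype=float)
    # promote to homogeneous coordinates: (N, 4)
    homo = np.hstack([pts, np.ones((len(pts), 1))])
    out = homo @ np.asarray(matrix, dtype=float)
    return out[:, :3]

# a translation by (5, 5, 5), row-major as Maya stores it
m = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [5, 5, 5, 1]]
print(transform_points([(0, 0, 0), (1, 2, 3)], m))
# [[5. 5. 5.]
#  [6. 7. 8.]]
```

Since the whole CV array is transformed in one call, this avoids the per-point Python <-> C++ round trips that make the pymel version feel slow on hundreds of shapes.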

Related

Problems re-implementing the fipy mesh20x20 example from my own IDE

I am currently using fipy but am still relatively new to the nuances of the package. While I have been able to regenerate the desired heatmap for the mesh20x20 diffusion example from the examples folder using the command line, I have struggled to replicate it within the Spyder IDE. I am using Python version 3.8. It is simple enough to generate it from the examples folder at the command line; however, when I attempt to re-program it I end up with the following result. I am hoping to regenerate the smooth color transition from the examples folder, as opposed to the discrete dichromatic output I have been limited to at present. I believe there is some issue with the viewer; some related issues may have cropped up for others in the past, potentially related to colorbar reformatting, though I have not yet been able to implement those workarounds to generate the desired imagery. Setting datamin and datamax in Viewer() did not work.
I would be greatly indebted for any assistance the community could provide.
from fipy.terms.transientTerm import TransientTerm
from fipy.terms.implicitDiffusionTerm import ImplicitDiffusionTerm
from fipy.terms.explicitDiffusionTerm import ExplicitDiffusionTerm
from fipy.meshes.nonUniformGrid2D import NonUniformGrid2D
from fipy.variables.cellVariable import CellVariable
from fipy.viewers.matplotlibViewer.matplotlib2DViewer import Matplotlib2DViewer
####
#Global Inputs
D=1
steps=10
#Dimensional Inputs
nx=20
dx=1
ny=20
dy=1
L=dx*nx
#Temporal Inputs
#nt=20
#dt=1
#cell variable initial values
value=0
#construct mesh from dimensional pts
mesh=NonUniformGrid2D(nx=nx, dx=dx, ny=ny, dy=dy)
#construct term variables phi with name, mesh design
phi=CellVariable(name="solutionvariable", mesh=mesh, value=0)
#construct boundary conditions
#dirichlet ---> we get an automatic application of neumann to the top right and bottom left
valueTopLeft=0
valueBottomRight=1
#assign boundary conditions to a face or cell
X, Y=mesh.faceCenters
facesTopLeft=((mesh.facesLeft & (Y > L/2 )) | (mesh.facesTop &( X < L/2)))
facesBottomRight=((mesh.facesRight & (Y < L/2)) | (mesh.facesBottom & (X > L/2)))
#constrain variables
phi.constrain(valueTopLeft, facesTopLeft)
phi.constrain(valueBottomRight, facesBottomRight)
#equation construction
eq=TransientTerm()==ExplicitDiffusionTerm(coeff=D)
#equation solving and either viewing and/or extraction
timestepduration=0.9 *(dx**2)/(2*D)
for step in range(steps):
    eq.solve(var=phi, dt=timestepduration)
    print(phi[step])
    viewer=Matplotlib2DViewer(vars=phi, datamin=0, datamax=1)
    viewer.axes.set_title("Solutionvbl(Step %d)" % (step+1,))
Figured it out, I think. I was using ExplicitDiffusionTerm while the example utilizes ImplicitDiffusionTerm. When I first tried this, all I got back was a blank monochromatic image (and zeros for my phi[step] printout at the end). I am happy to report that once a "kickstart" value is provided for the CellVariable (I used 0.001), used in conjunction with ImplicitDiffusionTerm, and the timestepduration is increased from its stability limit of 0.9*dx**2/(2*D) to the 9*dx**2/(2*D) used in the example documentation, it more or less adheres to the image generated when run from the command line. Grateful to have this sorted. Hope this provides assistance to anyone else who might run into a similar problem.

What simple filter could I use to de-noise my data?

I'm processing some experimental data in Python 3. The data (raw_data in my code) is pretty noisy:
One of my goals is to find the peaks, and for this I'd like to filter out the noise. Based on what I found in the documentation of SciPy's signal module, the theory of filtering seems really complicated, and unfortunately I have zero background. Of course I'll have to learn it sooner or later - and I intend to - but right now the payoff isn't worth the time (and learning filter theory isn't the purpose of my work), so I shamefully copied the code in Lyken Syu's answer without a chance of understanding the background:
import numpy as np
from scipy import signal as sg
from matplotlib import pyplot as plt
# [...] code, resulting in this:
raw_data = [arr_of_xvalues, arr_of_yvalues] # xvalues are in decreasing order
# <magic beyond my understanding>
n = 20 # the larger n is, the smoother the curve will be
b = [1.0 / n] * n
a = 2
filt = sg.lfilter(b, a, raw_data)
filtered = sg.lfilter(b, a, filt)
# <\magic>
plt.plot(filtered[0], filtered[1], ".")
plt.show()
It kind of works:
What concerns me is the curve the filter adds from 0 to the beginning of my dataset. I guess it's a property of the IIR filter I used, but I don't know how to prevent it. Also, I couldn't make other filters work so far. I need to use this code on other, similar experimental results, so I need a somewhat more general solution than e.g. cutting out all points with y < 10.
Is there a better (possibly simpler) way, or choice of filter that is easy to implement without serious theoretical background?
How, if, could I prevent my filter adding that curve to my data?
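One simple option (a sketch, not from the original thread): the startup curve appears because lfilter runs causally from a zero initial state, so the first n samples ramp up from nothing. A centered moving average built on np.convolve in 'valid' mode only emits samples where the window fully overlaps the data, so there is no startup ramp at all (at the cost of losing n-1 points at the edges):

```python
import numpy as np

def moving_average(y, n=20):
    """Centered moving average; 'valid' mode only keeps samples where
    the n-point window fully overlaps the data, so no startup ramp."""
    kernel = np.ones(n) / n
    return np.convolve(y, kernel, mode='valid')

noisy = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
print(moving_average(noisy, n=3))
# [2. 3. 4. 5.]
```

Note this only makes sense applied to the y-values; the corresponding x-values would be the original array trimmed by (n-1)//2 points at each end.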

How I could get the actual toposhape data in OpenCascade?

All,
I have a file from CAD (SW) in STEP format and was able to read it via the Python OCC binding:
importer = aocxchange.step.StepImporter(fname)
shapes = importer.shapes
shape = shapes[0]
# promote up
if (shape.ShapeType() == OCC.TopAbs.TopAbs_SOLID):
    sol = OCC.TopoDS.topods.Solid(shape)
I could display it, poke at it, check flags, etc.:
t = OCC.BRepCheck.BRepCheck_Analyzer(sol)
print(t.IsValid())
print(sol.Checked())
print(sol.Closed())
print(sol.Convex())
print(sol.Free())
print(sol.Infinite())
So far so good. It really looks like a small tube bent along some complex path.
Question: how could I extract geometry features from what I have? I really need the tube parameters and the path it follows. Any good example in Python and/or C++ would be great.
In OpenCASCADE there's a separation between topology and geometry. So, usually your first contact will be with the topological entities (e.g. a TopoDS_Wire or a TopoDS_Edge), which can give you access to the geometry (take a look here for more details).
In your case, after reading the STEP file you ended up with a TopoDS_Shape. This is the highest level topological entity and most probably is formed by one or more sub-shapes.
Assuming that your shape is formed by a bspline curve (it seems to be!), you could explore the shape, looking for TopoDS_Edge objects (they are the topological entities that map to geometric curves):
TopExp_Explorer myEdgeExplorer(shape, TopAbs_EDGE);
while (myEdgeExplorer.More())
{
    double u0, u1;
    auto edge = TopoDS::Edge(myEdgeExplorer.Current());
    auto curve = BRep_Tool::Curve(edge, u0, u1);
    // now you have access to the curve ...
    // to get a point lying on it, check
    // the method curve->Value(u);
    myEdgeExplorer.Next();
}

How can I take a list of points that create a line and extend them into polygons inward of a certain length?

I have GPS coordinates. They are just points that create an outline in a few different places. I want a script to convert these points into polygons that extend X distance inwards, and also a way for them to extend both inwards and outwards.
So if I had something like this:
(dots are points)
00000000000000000
00000000000000000
00............000
00.0000000000.000
00.0000000000.000
00.0000000000.000
00.0000000000.000
00.0000000000.000
00............000
00000000000000000
00000000000000000
I could run this program with a distance of 1 and "inwards", and I would end up with a polygon of & shape:
00000000000000000
00000000000000000
00&&&&&&&&&&&&000
00&&&&&&&&&&&&000
00&&00000000&&000
00&&00000000&&000
00&&00000000&&000
00&&&&&&&&&&&&000
00&&&&&&&&&&&&000
00000000000000000
00000000000000000
So far I have tried using circles and then reducing them, but it seems wrong / not really feasible. This isn't being performed on a grid; it actually uses floats for coordinates.
Any libraries that could do this as well are appreciated.
GDAL/OGR is another option. Ultimately what you want to do is a buffer. To expand your polygons shape outward use a buffer with a positive buffer distance, inwards it would be negative buffer distance. The following is a simple example using a shapefile. Not sure what format your data is in, but I would be surprised if GDAL/OGR can't read it.
import osgeo.ogr
# using ESRI Shape file in this example but there are a number
# of different files this lib supports: http://www.gdal.org/ogr/ogr_formats.html
driver = osgeo.ogr.GetDriverByName('ESRI Shapefile')
osgeo.ogr.UseExceptions()
# Create a data source using the driver...
dataSource = driver.Open("/home/user1/data.shp")
# Get the layer
lyr = dataSource.GetLayer()
# Select the feature in this case using an attribute query
lyr.SetAttributeFilter("column = 'value'")
# verify that you have a feature selected
print('features in layer:', lyr.GetFeatureCount())
# get the first feature from the layer
feature = lyr.GetNextFeature()
# get the geometry from the feature
geom = feature.GetGeometryRef()
# perform a 100 unit buffer; not sure what units the coordinates
# of your data are in (a negative distance buffers inward)
bufferGeom = geom.Buffer(100)
# bufferGeom is a geometry object, which is described here:
# <http://cosmicproject.org/OGR/ogr_classes.html#Geometry>
The following is a fantastic resource for getting started with spatial data in GDAL/OGR: http://www.gis.usu.edu/~chrisg/python/2009/
API docs: http://cosmicproject.org/OGR/ogr_classes.html
Finally, here is the link to the GDAL/OGR page: http://www.gdal.org/
https://pypi.python.org/pypi/Shapely
Shapely is a very good 2d computational geometry library; the way I understand it, it reduces your problem to a single line of code.
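For instance (a sketch, not from the original answer; the outline ring below is made up, substitute your ordered GPS points), the offset really is one buffer call, with the sign of the distance choosing the direction:

```python
from shapely.geometry import Polygon

# hypothetical outline ring; replace with your GPS points
outline = [(0, 0), (10, 0), (10, 10), (0, 10)]
poly = Polygon(outline)

inward = poly.buffer(-1)    # negative distance shrinks the polygon
outward = poly.buffer(1)    # positive distance grows it
print(inward.bounds)
# (1.0, 1.0, 9.0, 9.0)
```

A band extending both inwards and outwards, like the & ring in the diagram, would then be outward.difference(inward).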

Image transformation in OpenCV

This question is related to this question: How to remove convexity defects in sudoku square
I was trying to port nikie's answer from Mathematica to OpenCV-Python, but I am stuck at the final step of the procedure.
i.e., I got all the intersection points in the square like below:
Now, I want to transform this into a perfect square of size (450,450) as given below:
(Never mind the brightness difference of two images).
Question:
How can I do this in OpenCV-Python? I am using the cv2 version.
Apart from etarion's suggestion, you could also use the remap function. I wrote a quick script to show how you can do this. As you see coding this is really easy in Python. This is the test image:
and this is the result after warping:
And here is the code:
import cv2
from scipy.interpolate import griddata
import numpy as np
grid_x, grid_y = np.mgrid[0:149:150j, 0:149:150j]
destination = np.array([[0,0], [0,49], [0,99], [0,149],
[49,0],[49,49],[49,99],[49,149],
[99,0],[99,49],[99,99],[99,149],
[149,0],[149,49],[149,99],[149,149]])
source = np.array([[22,22], [24,68], [26,116], [25,162],
[64,19],[65,64],[65,114],[64,159],
[107,16],[108,62],[108,111],[107,157],
[151,11],[151,58],[151,107],[151,156]])
grid_z = griddata(destination, source, (grid_x, grid_y), method='cubic')
map_x = np.append([], [ar[:,1] for ar in grid_z]).reshape(150,150)
map_y = np.append([], [ar[:,0] for ar in grid_z]).reshape(150,150)
map_x_32 = map_x.astype('float32')
map_y_32 = map_y.astype('float32')
orig = cv2.imread("tmp.png")
warped = cv2.remap(orig, map_x_32, map_y_32, cv2.INTER_CUBIC)
cv2.imwrite("warped.png", warped)
I suppose you can google and find what griddata does. In short, it does interpolation; here we use it to convert sparse mappings to dense mappings, as cv2.remap requires dense mappings. We just need to convert the values to float32, as OpenCV complains about the float64 type. Please let me know how it goes.
Update: If you don't want to rely on Scipy, one way is to implement the 2d interpolation function in your own code; for example, see the source code of griddata in Scipy, or a simpler one like this http://inasafe.readthedocs.org/en/latest/_modules/engine/interpolation2d.html which depends only on numpy. I'd suggest using Scipy or another library for this, though I see why requiring only cv2 and numpy may be better for a case like this. I'd like to hear how your final code solves Sudokus.
If you have source points and end points (you only need 4), you can plug them into cv2.getPerspectiveTransform and use that result in cv2.warpPerspective. That gives you a nice flat result.
