Fundamental misunderstanding of translation and rotation in PythonOCC (OpenCascade) - python

Maybe this will help others trying to learn through tutorials/documentation/stackoverflow.
How can I rotate or translate a TopoDS_Shape (or any object), given coordinates, angles, and an axis? For example: if my part is at (5.0, 1.0, 0.0), can I move it to (0.0, 0.0, 0.0)? Or make it face a new direction?
Here are my attempted methods (not including what I think is unimportant code). I've tried to include the things I've spent most of my time on; I can't recall all of the other attempts I've made. Maybe someone experienced with PythonOCC or OpenCascade can see where I'm going wrong.
display, start_display, add_menu, add_function_to_menu = init_display()
aResShape = openFile.open(fileToOpen) #RETURNS SHAPE FROM STEP FILE
aResShape.Orientable(True)
#EXAMPLE
aResShape.Location().Transformation().SetRotation(gp_Quaternion(1., 1., 0., 1.))
#EXAMPLE
aResShape.Location().Transformation().SetTransformation(a,b)
#EXAMPLE
aResShape.Move(TopLoc_Location(gp_Trsf( gp_Trsf2d(1., 0.) )))
#EXAMPLE
aResShape.Reverse()
#EXAMPLE
p1 = gp_Pnt(700., 10., 80.)
d1 = gp_Dir(50., 50., 60.)
a = gp_Ax3(p1, d1)
p2 = gp_Pnt(2., 3., 4.)
d2 = gp_Dir(4., 5., 6.)
b = gp_Ax3(p2, d2)
print(aResShape.Location().Transformation().Transforms())
aResShape.Location().Transformation().SetTransformation(a,b)
print(aResShape.Location().Transformation().Transforms()) #RETURNS SAME VALUES
#EXAMPLE (TRYING TO SEE WHAT WORKS)
transform = gp_Trsf
transform.SetRotation(
    gp_Ax1(
        gp_Pnt(0., 0., 0.),
        gp_Dir(0., 0., 1.)
    ),
    1.570796
)
print(transform)
display.DisplayShape(aResShape, color='Black', update=True)
display.FitAll()
display.SetModeWireFrame()
start_display()
Sometimes I'll get errors like this:
NotImplementedError: Wrong number or type of arguments for overloaded function 'new_gp_Trsf2d'.
Possible C/C++ prototypes are:
gp_Trsf2d::gp_Trsf2d()
gp_Trsf2d::gp_Trsf2d(gp_Trsf const &)
But most of the time I get nothing and the shape doesn't change in the display.
Spent days in here:
https://cdn.rawgit.com/tpaviot/pythonocc-core/804f7f3/doc/apidoc/0.18.1/index.html
https://dev.opencascade.org/doc/refman/html/index.html
https://github.com/tpaviot/pythonocc-demos/tree/master/examples
So I think I know what functions to pass, but nothing seems to work out.
Maybe the display simply isn't showing me changes that are actually happening?
I asked a different PythonOCC question earlier (pythonOCC set default units to inches) but I think I'm really just missing something basic.
Can anyone think of why I'm not managing to make any real changes? Thanks for your time!

I used Open Cascade with C++, where BRepBuilderAPI_Transform(const TopoDS_Shape &S, const gp_Trsf &T, const Standard_Boolean Copy=Standard_False) performs the transformation. See:
https://www.opencascade.com/doc/occt-6.9.1/refman/html/class_b_rep_builder_a_p_i___transform.html
This is how I used it:
gp_Trsf trsf;
trsf.SetTransformation(gp_Quaternion(gp_Mat(gp_XYZ(d.x1, d.y1, d.z1), gp_XYZ(d.x2, d.y2, d.z2), gp_XYZ(d.x1, d.y1, d.z1).Crossed(gp_XYZ(d.x2, d.y2, d.z2)))), gp_Vec(d.x, d.y, d.z));
*d.shape = BRepBuilderAPI_Transform(*d.shape, trsf, true);
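Part of what is probably going wrong in the question is that aResShape.Location().Transformation() returns a copy of the transformation, so calling setters on it never changes the shape itself. Building a gp_Trsf and applying it with BRepBuilderAPI_Transform, as in the C++ snippet above, avoids that. Here is a rough, untested sketch of what that might look like in pythonocc (the import paths are from recent pythonocc-core releases and may be OCC.gp / OCC.BRepBuilderAPI in 0.18.x; the translation and rotation values are only examples):
import math
from OCC.Core.gp import gp_Trsf, gp_Vec, gp_Ax1, gp_Pnt, gp_Dir
from OCC.Core.BRepBuilderAPI import BRepBuilderAPI_Transform

# Translate the shape from (5, 1, 0) to the origin
translation = gp_Trsf()
translation.SetTranslation(gp_Vec(-5.0, -1.0, 0.0))
moved = BRepBuilderAPI_Transform(aResShape, translation, True).Shape()

# Rotate 90 degrees around the Z axis through the origin
rotation = gp_Trsf()
rotation.SetRotation(gp_Ax1(gp_Pnt(0., 0., 0.), gp_Dir(0., 0., 1.)), math.pi / 2.)
rotated = BRepBuilderAPI_Transform(moved, rotation, True).Shape()

display.DisplayShape(rotated, color='Black', update=True)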

Related

Combining Pointdraw and Sample in Holoviews

I'm trying to combine Holoviews' Pointdraw functionality with its Sample functionality (I couldn't find a specific page, but it is shown in action here http://holoviews.org/gallery/demos/bokeh/mandelbrot_section.html)
Specifically, I want to have two subplots with interactivity. The one on the left shows a colormap, and the one on the right shows a sample (a linecut) of the colormap. This is achieved with .sample. Inside this right plot I'd like to have points that can be drawn, moved, and removed, typically done with pointdraw. I'd then also like to access their coordinates once I am done moving, which is possible when following the example from the documentation.
Now, I've got the two working independently, following the examples above. But when combined in the way that I have, this results in a plot that looks like this:
It has the elements I am looking for, except the points cannot be interacted with. This is somehow related to Holoviews' streams, but I am not sure how to solve it. Would anyone be able to help out?
The code that generates the above:
%%opts Points (color='color' size=10) [tools=['hover'] width=400 height=400]
%%opts Layout [shared_datasource=True] Table (editable=True)
import param
import numpy as np
import holoviews as hv
hv.extension('bokeh', 'matplotlib')
from holoviews import streams
def lorentzian(x, x0, gamma):
    return 1/np.pi*1/2*gamma/((x-x0)**2+(1/2*gamma)**2)
xs = np.arange(0,4*np.pi,0.05)
ys = np.arange(0,4*np.pi,0.05)
data = hv.OrderedDict({'x': [2., 2., 2.], 'y': [0.5, 0.4, 0.2], 'color': ['red', 'green', 'blue']})
z = lorentzian(xs.reshape(len(xs),1),2*np.sin(ys.reshape(1,len(ys)))+5,1) + lorentzian(xs.reshape(len(xs),1),-2*np.sin(ys.reshape(1,len(ys)))+5,1)
def dispersions(f0):
    points = hv.Points(data, vdims=['color']).redim.range(x=(xs[0], xs[-1]), y=(np.min(z), np.max(z)))
    point_stream = streams.PointDraw(data=points.columns(), source=points, empty_value='black')
    image = hv.Image(z, bounds=(xs[0], ys[0], xs[-1], ys[-1]))
    return image * hv.VLine(x=f0) + image.sample(x=f0) * points
dmap = hv.DynamicMap(dispersions, kdims=['f0'])
dmap.redim.range(f0=(0,10)).redim.step(f0=(0.1))
I apologize for the weird function that we are plotting, I couldn't immediately come up with a simple one.
Based on your example it's not yet quite clear to me what you will be doing with the points but I do have some suggestions on structuring the code better.
In general it is always better to compose plots from several separate DynamicMaps than to create a single DynamicMap that does everything. Not only is it more composable, but you also get handles on the individual objects, which lets you set up streams to listen to changes on each component. Most importantly, it's more efficient: only the plots that need to be updated will be updated. In your example I'd split up the code as follows:
def lorentzian(x, x0, gamma):
    return 1/np.pi*1/2*gamma/((x-x0)**2+(1/2*gamma)**2)
xs = np.arange(0,4*np.pi,0.05)
ys = np.arange(0,4*np.pi,0.05)
data = hv.OrderedDict({'x': [2., 2., 2.], 'y': [0.5, 0.4, 0.2], 'color': ['red', 'green', 'blue']})
z = lorentzian(xs.reshape(len(xs),1),2*np.sin(ys.reshape(1,len(ys)))+5,1) + lorentzian(xs.reshape(len(xs),1),-2*np.sin(ys.reshape(1,len(ys)))+5,1)
points = hv.Points(data, vdims=['color']).redim.range(x=(xs[0], xs[-1]), y=(np.min(z), np.max(z)))
image = hv.Image(z, bounds=(xs[0], ys[0], xs[-1], ys[-1]))
taps = []
def vline(f0):
    return hv.VLine(x=f0)
def sample(f0):
    return image.sample(x=f0)
dim = hv.Dimension('f0', step=0.1, range=(0,10))
vline_dmap = hv.DynamicMap(vline, kdims=[dim])
sample_dmap = hv.DynamicMap(sample, kdims=[dim])
point_stream = streams.PointDraw(data=points.columns(), source=points, empty_value='black')
(image * vline_dmap + sample_dmap * points)
Since the Image and Points are not themselves dynamic there is no reason to put them inside the DynamicMap and the VLine and the sampled Curve are easily split out. The PointDraw stream doesn't do anything yet but you can now set that up as yet another DynamicMap which you can compose with the rest.
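As a rough sketch of that last step (my own illustration, not part of the original answer; the marked_points callback and the Table view are invented for the example), the drawn points can drive one more DynamicMap through the PointDraw stream:
# Rebuild a Table from the stream's data whenever points are added, moved, or removed
def marked_points(data):
    # `data` is the dictionary of point columns maintained by the PointDraw stream
    return hv.Table(data, kdims=['x', 'y'], vdims=['color'])

table_dmap = hv.DynamicMap(marked_points, streams=[point_stream])

(image * vline_dmap + sample_dmap * points + table_dmap).cols(2)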

python numpy data type error and extremely inefficient use of pyplot :(

[Using windows 10 and python 3.5 with newest modules]
Hello!
I have two slightly different problems that belong together, because one is the buggy solution of the other. The first function here is extremely slow with more than 75000 data points and does not work with 150000. This one does exactly what I want, though.
#I call the functions like this:
plt.plot(logtime[:recmax-(degree*2-1)] - (logtime[0]-degree), smoothListTriangle(cpm, degree), color="green", linewidth=2, label="Smoothed n="+degree)
plt.plot(logtime[:recmax] - logtime[0], smoothListGaussian2(str(cpm), degree), color="lime", linewidth=5, label="")
#And cpm is always:
cpm = cpm.astype(int) #Array of big number of values
def smoothListTriangle(cpm, degree):  #Thank you Scott from swharden.com!
    weight=[]
    window=degree*2-1
    smoothed=[0.0]*(len(cpm)-window)
    for x in range(1,2*degree):
        weight.append(degree-abs(degree-x))
    w=np.array(weight)
    for i in range(len(smoothed)):
        smoothed[i]=sum(np.array(cpm[i:i+window])*w)/float(sum(w))
        #Very, VERY slow...
    return smoothed
The higher "degree" is the longer it takes. But with lesser degree it would not look good.
...
The second function here should be (way?) more efficient, but I can't resolve the data type error:
def smoothListGaussian2(myarray, degree):
    myarray = np.pad(myarray, (degree-1,degree-1), mode='edge')
    window = degree*2-1
    weight = np.arange(-degree+1, degree)/window
    weight = np.exp(-(16*weight**2))
    weight /= sum(weight)
    #weight = weight.astype(int) #Does throw the "invalid literal" error
    smoothed = np.convolve(myarray, weight, mode='valid')
    return smoothed

#TypeError: Cannot cast array data from dtype('float64') to dtype('<U32') according to the rule 'safe'
I'm desperately trying to resolve this data type error with numpy. It's killing me! It seems to be the array "weight" that is the float64 one, but converting it throws more errors, like:
ValueError: invalid literal for int() with base 10: '[31 31 33 ..., 48 49 51]'
So... I'm new to Python and I use this to log data from my Geiger counter. Do you have any idea how to either make the first function WAY more efficient or resolve the error in the second? I'm at a loss here.
I found the scripts here: http://www.swharden.com/wp/2008-11-17-linear-data-smoothing-in-python/#comments (I found Scott's other triangle-smooth function on that site, but I couldn't get it to work either; it's more complicated.)
Note that the number of data points depends on the length of the measurement in seconds, and this length can very well be several days. I guess one million data points or more are not unusual.
Thank you!
I just had a revelation of some sort. All I had to do was convert "myarray" to float before convolving.
I had to do so many conversions to make the whole code work correctly, it's ridiculous! I thought this would be easy in Python, but no... :(( Seems to me that C++ is better in that case.
def smoothListGaussian2(myarray, degree):
    myarray = np.pad(myarray, (degree - 1, degree - 1), mode='edge')
    window = degree * 2 - 1
    weight = np.arange(-degree + 1, degree) / window
    weight = np.exp(-(16 * weight ** 2))
    weight /= sum(weight)
    myarray = myarray.astype(float)
    smoothed = np.convolve(myarray, weight, mode='valid')
    return smoothed
Since this works now, I could test the speed, and it's pretty fast. I can't see a difference in speed between 40k and 150k data points anymore. Cool.
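For what it's worth, the slow smoothListTriangle from the question can be rewritten with the same np.convolve idea. A small sketch of that (my own addition, not from the original thread):
import numpy as np

def smoothListTriangleFast(cpm, degree):
    # Same triangular weights as the loop version: 1, 2, ..., degree, ..., 2, 1
    weight = degree - np.abs(degree - np.arange(1, 2 * degree))
    weight = weight / weight.sum()
    # 'valid' mode keeps only positions where the full window fits
    # (it returns one more sample than the original loop, which stopped one window early)
    return np.convolve(np.asarray(cpm, dtype=float), weight, mode='valid')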

Change spacing in Mayavi

I am creating a surf() plot using Mayavi/mlab, but the resulting picture is not really satisfying since the spacing is not good. Here is my code:
import pygrib
from mayavi.mlab import *
from mayavi import mlab
grbs = pygrib.open("lfff00000000c_1h.grb")
data = grbs.select(name='Geometric Height of the earths surface above sea level')[0].values
# --> data is a simple 2D array
mlab.figure(1, fgcolor=(0,0,0), bgcolor=(1,1,1))
s = surf(data, colormap='gist_earth')
mlab.title("geom. height", size = 0.5)
So actually I want to increase the spacing for the x and y axes in the resulting picture, but I don't know how to do this. I know that I somehow have to work with array_source.spacing = array([5., 5., 1.]) in my Python code, but I don't know how. :(
Actually, I figured out what solves my problem:
I simply added warp_scale to my surf() call. This influences the z-scale, and since I was only interested in changing the x and y axes in the same way relative to z, this solves my problem.
s = surf(data, colormap='gist_earth', warp_scale=0.05)
Perhaps this helps other people with the same issue.
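If someone does want to control the x/y spacing directly instead of rescaling z, mlab.surf also accepts explicit coordinate arrays. A rough sketch (untested against the original GRIB data; the 5.0 spacing is an arbitrary example value):
import numpy as np
from mayavi import mlab

# `data` is the 2D height array from the question;
# its first axis maps to x and its second axis to y in mlab.surf
nx, ny = data.shape
x = np.arange(nx) * 5.0   # 5-unit spacing along x
y = np.arange(ny) * 5.0   # 5-unit spacing along y
s = mlab.surf(x, y, data, colormap='gist_earth', warp_scale=0.05)
mlab.show()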

Defining a gradient with respect to a subtensor in Theano

I have what is conceptually a simple question about Theano but I haven't been able to find the answer (I'll confess upfront to not really understanding how shared variables work in Theano, despite many hours with the tutorials).
I'm trying to implement a "deconvolutional network"; specifically I have a 3-tensor of inputs (each input is a 2D image) and a 4-tensor of codes; for the ith input codes[i] represents a set of codewords which together code for input i.
I've been having a lot of trouble figuring out how to do gradient descent on the codewords. Here are the relevant parts of my code:
idx = T.lscalar()
pre_loss_conv = conv2d(input = codes[idx].dimshuffle('x', 0, 1, 2),
                       filters = dicts.dimshuffle('x', 0, 1, 2),
                       border_mode = 'valid')
loss_conv = pre_loss_conv.reshape((pre_loss_conv.shape[2], pre_loss_conv.shape[3]))
loss_in = inputs[idx]
loss = T.sum(1./2.*(loss_in - loss_conv)**2)
del_codes = T.grad(loss, codes[idx])
delc_fn = function([idx], del_codes)
train_codes = function([input_index], loss, updates = [
    [codes, T.set_subtensor(codes[input_index], codes[input_index] -
                            learning_rate*del_codes[input_index])]])
(here codes and dicts are shared tensor variables). Theano is unhappy with this, specifically with defining
del_codes = T.grad(loss, codes[idx])
The error message I'm getting is: theano.gradient.DisconnectedInputError: grad method was asked to compute the gradient with respect to a variable that is not part of the computational graph of the cost, or is used only by a non-differentiable operator: Subtensor{int64}.0
I'm guessing that it wants a symbolic variable instead of codes[idx]; but then I'm not sure how to get everything connected to get the intended effect. I'm guessing I'll need to change the final line to something like
learning_rate*del_codes) ]])
Can someone give me some pointers as to how to define this function properly? I think I'm probably missing something basic about working with Theano but I'm not sure what.
Thanks in advance!
-Justin
Update: Kyle's suggestion worked very nicely. Here's the specific code I used
current_codes = T.tensor3('current_codes')
current_codes = codes[input_index]
pre_loss_conv = conv2d(input = current_codes.dimshuffle('x', 0, 1, 2),
                       filters = dicts.dimshuffle('x', 0, 1, 2),
                       border_mode = 'valid')
loss_conv = pre_loss_conv.reshape((pre_loss_conv.shape[2], pre_loss_conv.shape[3]))
loss_in = inputs[input_index]
loss = T.sum(1./2.*(loss_in - loss_conv)**2)
del_codes = T.grad(loss, current_codes)
train_codes = function([input_index], loss)
train_dicts = theano.function([input_index], loss, updates = [[dicts, dicts - learning_rate*del_dicts]])
codes_update = ( codes, T.set_subtensor(codes[input_index], codes[input_index] - learning_rate*del_codes) )
codes_update_fn = function([input_index], updates = [codes_update])

for i in xrange(num_inputs):
    current_loss = train_codes(i)
    codes_update_fn(i)
To summarize the findings:
Assigning grad_var = codes[idx], then making a new variable such as:
subgrad = T.set_subtensor(codes[input_index], codes[input_index] - learning_rate*del_codes[input_index])
Then calling
train_codes = function([input_index], loss, updates = [[codes, subgrad]])
seemed to do the trick. In general, I try to make variables for as many things as possible. Sometimes tricky problems can arise from trying to do too much in a single statement, plus it is hard to debug and understand later! Also, in this case I think theano needs a shared variable, but has issues if the shared variable is created inside the function that requires it.
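A stripped-down sketch of that pattern, with toy shapes in place of the real convolutional model (my own simplification, not code from the thread):
import numpy as np
import theano
import theano.tensor as T

# Toy shared variables standing in for the real codes/inputs tensors
codes = theano.shared(np.random.randn(5, 3, 3), name='codes')
inputs = theano.shared(np.random.randn(5, 3, 3), name='inputs')
learning_rate = 0.1

input_index = T.lscalar('input_index')
current_codes = codes[input_index]       # build the graph from this subtensor variable
loss = T.sum(0.5 * (inputs[input_index] - current_codes) ** 2)
del_codes = T.grad(loss, current_codes)  # gradient w.r.t. the same subtensor variable

codes_update = (codes, T.set_subtensor(codes[input_index],
                                       codes[input_index] - learning_rate * del_codes))
update_codes = theano.function([input_index], loss, updates=[codes_update])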
Glad this worked for you!

scipy.optimize.curve_fit : Not able to do a curve fitting

I am still new to Python and I have a problem with curve fitting. The following program is a simplification of a bigger program that I created, but it represents the problem that I have.
The problem is that I have a function, which I called burger, that I cannot fit. The line y=np.sqrt(y) is the problem. When I remove it, I can fit perfectly, but that is not the function I want.
How can I fit this function including y=np.sqrt(y)?
# -*- coding: utf-8 -*-
"""
Created on Wed Dec 11 22:14:54 2013
#author:
"""
import numpy as np
import matplotlib.pyplot as plt
import pdb
import scipy.optimize as optimization
from math import *
from scipy.optimize import curve_fit
import math
import moyenne
####################Function Burger###############################
def burger(t, E1, E2, N, tau):
    nu=0.4      #Poisson's ratio
    P=50        #Peak force
    alpha=70.3  #Tip angle
    y=((((pi/2.)*P*(1.-nu**2.))/(tan(alpha)))*(1./E1 + 1./E2*(1.-np.exp(-t/tau)) + 1./((N)*(1.-nu))*t))
    y=np.sqrt(y)
    return y
#######exemple d'utilisation##########
xlist=np.linspace(0,1,100)
ylist=[ burger(t,3, 2,1,0.1) for t in xlist]
#pdb.set_trace()
pa,j = curve_fit(burger,xlist,ylist)
yfit=[burger(x,*pa) for x in xlist]
plt.figure()
plt.plot(xlist,ylist,marker='o')
plt.plot(xlist,yfit)
plt.show()
So, this probably won't be the best answer you get, but while you wait for others here are some things to think about.
First, since you are new to Python maybe you don't know (or maybe there is a reason to do these things with list comprehensions), but I don't think you need the list comprehensions. You can use numpy's math operations to handle a whole array at a time. Instead of
y=((((pi/2.)*P*(1.-nu**2.))/(tan(alpha)))* ...
You can write
y = ((((np.pi/2.)*P*(1.-nu**2.))/(np.tan(alpha)))* ...
Then instead of
[ burger(t, 3., 2., 1., 0.1) for t in xlist]
you can do
burger(xlist, 3., 2., 1., 0.1)
This will be a lot faster when you are working with arrays.
Secondly, I looked at a couple of things that were happening in the algorithm: it wasn't looking for your parameters in the right ranges. I looked up the algorithm it uses on the scipy.optimize page (here), and Wikipedia says that convergence depends on the initial guess and that it finds a local, not global, minimum. (Sometimes your code hit negative values for the parameters, which made the sqrt of y undefined in some cases.) If there is a way you can give it a good initial guess then it should work ([1., 3., 3., 2] worked for me). The command that solved it for me was: pa,j = curve_fit(burger, xlist, ylist, [1., 3., 3., 2], maxfev=10000).
Thirdly, the first error I got when I used your code was that it reached the max number of fevals. Add maxfev=10000 (or more if you need) as the last argument to curve_fit.
Check it out. If you can give your bigger problem an initial guess then maybe you'll get it to converge. Otherwise maybe a different algorithm could be more suitable?
Update: See this question for a more detailed explanation of why this works, but you can get it to work without a guess if you give it another kwarg, diag.
Use:
pa,j = curve_fit(burger,xlist,ylist, diag=(1./xlist.mean(), 1./ylist.mean()), maxfev=10000)
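Pulling those suggestions together, a minimal sketch of the fitting script with a vectorized burger, an initial guess, and a raised maxfev (my own consolidation, not code from the thread) might look like:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

def burger(t, E1, E2, N, tau):
    nu = 0.4      # Poisson's ratio
    P = 50.       # peak force
    alpha = 70.3  # tip angle
    y = (((np.pi/2.)*P*(1.-nu**2.))/np.tan(alpha)) * (
        1./E1 + 1./E2*(1.-np.exp(-t/tau)) + 1./(N*(1.-nu))*t)
    return np.sqrt(y)

xlist = np.linspace(0, 1, 100)
ylist = burger(xlist, 3., 2., 1., 0.1)   # whole array at once, no list comprehension

pa, cov = curve_fit(burger, xlist, ylist, p0=[1., 3., 3., 2.], maxfev=10000)
yfit = burger(xlist, *pa)

plt.plot(xlist, ylist, marker='o')
plt.plot(xlist, yfit)
plt.show()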
