Problems re-implementing the fipy mesh20x20 example from my own IDE - python

I am currently using FiPy but am still relatively new to the nuances of the package. While I have been able to regenerate the desired heatmap for the mesh20x20 diffusion example in the examples folder using the command line, I have struggled to replicate it within the Spyder IDE. I am using Python 3.8. It is simple enough to generate from the examples folder at the command line; however, when I attempt to re-program it myself, I end up with a discrete, dichromatic image instead of the smooth color transition the example produces. I believe there is some issue with the viewer. Related issues seem to have cropped up for others in the past, possibly involving colorbar reformatting, though I have not yet managed to implement those workarounds to generate the desired imagery. Setting datamin and datamax in Viewer() did not work.
I would be greatly indebted for any assistance the community could provide.
from fipy.terms.transientTerm import TransientTerm
from fipy.terms.implicitDiffusionTerm import ImplicitDiffusionTerm
from fipy.terms.explicitDiffusionTerm import ExplicitDiffusionTerm
from fipy.meshes.nonUniformGrid2D import NonUniformGrid2D
from fipy.variables.cellVariable import CellVariable
from fipy.viewers.matplotlibViewer.matplotlib2DViewer import Matplotlib2DViewer
####
#Global Inputs
D=1
steps=10
#Dimensional Inputs
nx=20
dx=1
ny=20
dy=1
L=dx*nx
#Temporal Inputs
#nt=20
#dt=1
#cell variable initial values
value=0
#construct mesh from dimensional pts
mesh=NonUniformGrid2D(nx=nx, dx=dx, ny=ny, dy=dy)
#construct term variables phi with name, mesh design
phi=CellVariable(name="solutionvariable", mesh=mesh, value=0)
#construct boundary conditions
#dirichlet on top left and bottom right ---> Neumann is applied automatically to the top right and bottom left
valueTopLeft=0
valueBottomRight=1
#assign boundary conditions to a face or cell
X, Y=mesh.faceCenters
facesTopLeft=((mesh.facesLeft & (Y > L/2 )) | (mesh.facesTop &( X < L/2)))
facesBottomRight=((mesh.facesRight & (Y < L/2)) | (mesh.facesBottom & (X > L/2)))
#constrain variables
phi.constrain(valueTopLeft, facesTopLeft)
phi.constrain(valueBottomRight, facesBottomRight)
#equation construction
eq=TransientTerm()==ExplicitDiffusionTerm(coeff=D)
#equation solving and either viewing and/or extraction
timestepduration=0.9 *(dx**2)/(2*D)
for step in range(steps):
    eq.solve(var=phi, dt=timestepduration)
    print(phi[step])
    viewer=Matplotlib2DViewer(vars=phi, datamin=0, datamax=1)
    viewer.axes.set_title("Solutionvbl(Step %d)" % (step+1,))

Figured it out, I think. I was using ExplicitDiffusionTerm while the example utilizes ImplicitDiffusionTerm. When I first tried this, all I got back was a blank monochromatic image (and phi[step] returned zeros at the end). I am happy to report that once a "kickstart" value is provided for the CellVariable (I used 0.001), used in conjunction with ImplicitDiffusionTerm, and the timestepduration is increased from its explicit-scheme limit of 0.9*dx**2/(2*D) to the 9*dx**2/(2*D) used in the example documentation, it more or less adheres to the image generated when run from the command line. Grateful to have this sorted. Hope this helps anyone else who runs into a similar problem.
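For reference, a minimal sketch of the working version described above (the 0.001 starting value, the implicit term, and the larger time step are the changes; the rest follows the snippet in the question, and redrawing once per step via viewer.plot() is my assumption rather than the example's exact code):

phi=CellVariable(name="solutionvariable", mesh=mesh, value=0.001)
phi.constrain(valueTopLeft, facesTopLeft)
phi.constrain(valueBottomRight, facesBottomRight)
eq=TransientTerm()==ImplicitDiffusionTerm(coeff=D)
timestepduration=9*(dx**2)/(2*D)
viewer=Matplotlib2DViewer(vars=phi, datamin=0, datamax=1)
for step in range(steps):
    eq.solve(var=phi, dt=timestepduration)
    viewer.plot()  #redraw with the updated solution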


Questions about the Gmsh Python API

I am trying to write a script that will automatically mesh geometries for CFD analysis using the Gmsh Python API. There are a few issues I am running into:
First of all, I would like to be able to write Gmsh script files (.geo) for debugging purposes. I looked through the source code of the Gmsh API and found that the .geo_unrolled extension is supported for the gmsh.write() function, but not just .geo. This extension does the trick mostly, but it seems that any meshing operations (such as marking curves as transfinite) or transformations (such as dilate) are not written to the output file when using gmsh.write('test.geo_unrolled'). I assume this has something to do with the _unrolled part. But is there any way to get the full Gmsh script out of the API?
Secondly, when I try to make a copy of a spline like in this example:
import gmsh

gmsh.initialize()

p1 = gmsh.model.geo.addPoint(-1, 0.5, 0, 0.1)
p2 = gmsh.model.geo.addPoint(0, 1, 0, 0.1)
p3 = gmsh.model.geo.addPoint(1, 0.5, 0, 0.1)
s1 = gmsh.model.geo.addSpline([p1, p2, p3])
s2 = gmsh.model.geo.copy([s1])
I get ValueError: ('gmshModelGeoCopy returned non-zero error code: ', 1). The error code, 1, seems to indicate that the tag of the original spline (s1) cannot be found when copy() is called. Am I missing something here? I have tried, for example, to call gmsh.model.geo.synchronize() before attempting to call copy(), but this had no effect.
Finally, when I use the dilate transformation in the Gmsh GUI using Modules - Geometry - Elementary entities - Transform - Scale, checking the Apply scaling on copy option in the dialog, on the example spline from above, I indeed get a scaled version of the curve as expected, including the three points. Assuming I was able to accomplish the same with the API, how do I then refer to the three new points that the scaled spline goes through, for example, if I wanted to draw a line between the start point of the original spline and that of the scaled spline?
In the end, what I want to accomplish is the following: draw a spline through a list of points, create a scaled copy of this spline, draw lines between the start and end points, and create a plane surface bounded by the two splines and lines. Is there a better way to do this than what I am trying to do with the dilation?
It's probably too late, but you never know.
I've never had to create .geo files using the API. But I found this discussion in the Gmsh mailing list archive, which may be helpful.
Regarding your error with copy, you have to specify the dimension of the entity to be copied, and not just the tag (check the documentation, which refers to dimTag). It's the same thing with transformations such as rotate, symmetrize etc. Using the following should work:
s2 = gmsh.model.geo.copy([(1, s1)])
NB: when copying only one entity, I think either the inner parentheses or the brackets are superfluous, and otherwise you have to provide a list of tuples of the form [(dim_1, tag_1), (dim_2, tag_2), ..., (dim_n, tag_n)].
Keep in mind that copy will return a variable of the same kind (list of tuples), i.e. in your case the variable s2 will be [(1, tag_s2)]. Therefore you might not want to use the same kind of variable name, since in order to get the tag you'll have to use s2[0][1] instead of simply s2.
This also gives you a partial answer to your next question, since the tags of the copied entities will be contained in the return variable.
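For the dilate part, here is a minimal sketch of what I would try (untested; it assumes a scaling about the origin by a factor of 2, and uses gmsh.model.getBoundary to recover the endpoint tags of each curve):

import gmsh

gmsh.initialize()

p1 = gmsh.model.geo.addPoint(-1, 0.5, 0, 0.1)
p2 = gmsh.model.geo.addPoint(0, 1, 0, 0.1)
p3 = gmsh.model.geo.addPoint(1, 0.5, 0, 0.1)
s1 = gmsh.model.geo.addSpline([p1, p2, p3])
# copy takes and returns a list of (dim, tag) pairs
s2_dimtags = gmsh.model.geo.copy([(1, s1)])
# scale the copy about the origin by a factor of 2
gmsh.model.geo.dilate(s2_dimtags, 0, 0, 0, 2, 2, 2)
gmsh.model.geo.synchronize()
# the endpoints of each curve come back as (0, pointTag) pairs
orig_ends = gmsh.model.getBoundary([(1, s1)], oriented=False)
copy_ends = gmsh.model.getBoundary(s2_dimtags, oriented=False)
# connect the start points of the two splines
gmsh.model.geo.addLine(orig_ends[0][1], copy_ends[0][1])
gmsh.model.geo.synchronize()

From there the two splines and the two connecting lines could go into gmsh.model.geo.addCurveLoop and addPlaneSurface for the surface you describe.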
Hope that helps you or others!

What simple filter could I use to de-noise my data?

I'm processing some experimental data in Python 3. The data (raw_data in my code) is pretty noisy:
One of my goals is to find the peaks, and for this I'd like to filter out the noise. Based on what I found in the documentation of SciPy's signal module, the theory of filtering seems to be really complicated, and unfortunately I have zero background in it. Of course I'll have to learn it sooner or later - and I intend to - but right now the profit isn't worth the time (and learning filter theory isn't the purpose of my work), so I shamefully copied the code in Lyken Syu's answer without a chance of understanding the background:
import numpy as np
from scipy import signal as sg
from matplotlib import pyplot as plt
# [...] code, resulting in this:
raw_data = [arr_of_xvalues, arr_of_yvalues] # xvalues are in decreasing order
# <magic beyond my understanding>
n = 20 # the larger n is, the smoother the curve will be
b = [1.0 / n] * n  # moving-average numerator coefficients
a = 2  # denominator; a = 1 would give a plain moving average
filt = sg.lfilter(b, a, raw_data)
filtered = sg.lfilter(b, a, filt)  # filtered twice for extra smoothing
# </magic>
plt.plot(filtered[0], filtered[1], ".")
plt.show()
It kind of works:
What concerns me is the curve the filter adds from 0 to the beginning of my dataset. I guess it's a property of the IIR filter I used, but I don't know how to prevent it. Also, I couldn't make other filters work so far. I need to use this code on other, similar experimental results, so I need a somewhat more general solution than e.g. cutting out all points with y < 10.
Is there a better (possibly simpler) way, or a choice of filter that is easy to implement without a serious theoretical background?
How, if at all, could I prevent my filter from adding that curve to my data?
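One way the startup transient could be avoided is by filtering forward and backward with scipy.signal.filtfilt, which pads the edges instead of starting from zero initial conditions (a minimal sketch with made-up data standing in for the experimental arrays):

import numpy as np
from scipy import signal as sg

# made-up noisy signal in place of the experimental y-values
rng = np.random.default_rng(0)
y = np.sin(np.linspace(0, 10, 500)) + rng.normal(scale=0.3, size=500)

n = 20
b = [1.0 / n] * n  # moving-average numerator
a = 1  # FIR filter: the denominator is just 1
smoothed = sg.filtfilt(b, a, y)  # forward-backward pass: no startup curve, no phase lag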

Abaqus Python getByBoundingBox: 6.10 vs 6.13+

I have written a Python script for ABAQUS to create several parts with many partitions. To get a structured mesh I have to select several edges. Now there is one edge I apparently cannot select in ABAQUS 6.10 & 6.11. Oddly, everything is fine with ABAQUS 6.13+.
p = mdb.models[name_model].parts[name_part_1]
e = p.edges
pickedEdges = e.getByBoundingBox(((cos(alpha_rad)*ri)-delta_p), ((sin(alpha_rad)*ri)-delta_p), 0.0,
                                 ((cos(alpha_rad)*d_core/2)+delta_p), ((sin(alpha_rad)*d_core/2)+delta_p), 0.0)
p.seedEdgeByBias(biasMethod=SINGLE, end2Edges=pickedEdges, ratio=bias_f, number=elem_num_rad, constraint=FINER)
Here, 'ri' describes a radius, and 'delta_p' (=0.001) is used to get a bounding box slightly bigger than the original edge.
I also tried a bigger bounding box by increasing delta_p, but nothing works.
Any ideas? Thank you in advance! :)
For a sketch:
click me
The described bounding box is box E, and I am trying to get the orange line.
It's not clear from your post why the method isn't working.
You could determine a point on your edge and use the findAt method instead of getByBoundingBox.
By setting delta_p to a very large number, you should select every edge in your model. It's not clear what you mean by "not working".
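For illustration, a sketch of the findAt route (the midpoint below is hypothetical - it assumes the edge runs radially from ri to d_core/2 at angle alpha_rad, as the bounding box in the question suggests):

from math import cos, sin

# hypothetical midpoint of the radial edge between ri and d_core/2
r_mid = (ri + d_core/2)/2
pickedEdges = e.findAt(((cos(alpha_rad)*r_mid, sin(alpha_rad)*r_mid, 0.0),))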

Python's implementation of Mutual Information

I am having some issues implementing the Mutual Information Function that Python's machine learning libraries provide, in particular :
sklearn.metrics.mutual_info_score(labels_true, labels_pred, contingency=None)
(http://scikit-learn.org/stable/modules/generated/sklearn.metrics.mutual_info_score.html)
I am trying to implement the example I find in the Stanford NLP tutorial site:
The site is found here : http://nlp.stanford.edu/IR-book/html/htmledition/mutual-information-1.html#mifeatsel2
The problem is I keep getting different results, without figuring out the reason yet.
I get the concept of Mutual Information and feature selection, I just don't understand how it is implemented in Python. What I do is provide the mutual_info_score method with two arrays based on the NLP site example, but it outputs different results. The other interesting fact is that however you play around and change the numbers in those arrays, you are most likely to get the same result. Am I supposed to use another data structure specific to Python, or what is the issue behind this? If anyone has used this function successfully in the past, it would be of great help to me; thank you for your time.
I encountered the same issue today. After a few trials I found the real reason: you take log2 if you strictly follow the NLP tutorial, but sklearn.metrics.mutual_info_score uses the natural logarithm (base e, Euler's number). I didn't find this detail in the sklearn documentation...
I verified this by:
import numpy as np

def computeMI(x, y):
    sum_mi = 0.0
    x_value_list = np.unique(x)
    y_value_list = np.unique(y)
    Px = np.array([len(x[x==xval])/float(len(x)) for xval in x_value_list])  #P(x)
    Py = np.array([len(y[y==yval])/float(len(y)) for yval in y_value_list])  #P(y)
    for i in range(len(x_value_list)):
        if Px[i] == 0.:
            continue
        sy = y[x == x_value_list[i]]
        if len(sy) == 0:
            continue
        pxy = np.array([len(sy[sy==yval])/float(len(y)) for yval in y_value_list])  #P(x,y)
        t = pxy[Py>0.]/Py[Py>0.]/Px[i]  # P(x,y)/(P(x)*P(y))
        sum_mi += sum(pxy[t>0]*np.log2(t[t>0]))  # sum of P(x,y)*log2(P(x,y)/(P(x)*P(y)))
    return sum_mi
If you change np.log2 to np.log, I think it will give you the same answer as sklearn. The only difference is that when this method returns 0, sklearn will return a number very close to 0. (And of course, use sklearn if you don't care about the log base; my piece of code is just for demo, it gives poor performance...)
FYI: 1) sklearn.metrics.mutual_info_score takes lists as well as np.array; 2) sklearn.metrics.cluster.entropy also uses log, not log2.
Edit: as for "same result", I'm not sure what you really mean. In general, the values in the vectors don't really matter; it is the "distribution" of values that matters. You care about P(X=x), P(Y=y) and P(X=x, Y=y), not the values x, y.
The code below should produce the result 0.00011053558610110256:
c=np.concatenate([np.ones(49), np.zeros(27652), np.ones(141), np.zeros(774106) ])
t=np.concatenate([np.ones(49), np.ones(27652), np.zeros(141), np.zeros(774106)])
computeMI(c,t)
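To check the log-base claim directly, one could convert sklearn's result from nats to bits by dividing by ln 2 (a quick sketch on the same arrays):

import numpy as np
from sklearn.metrics import mutual_info_score

mi_nats = mutual_info_score(c, t)  # sklearn: natural log
mi_bits = computeMI(c, t)  # the function above: log base 2
print(mi_bits, mi_nats/np.log(2))  # the two should agree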

NurbsCurve MatrixMath Maya api Python

I am creating a toolset for creating NURBS curves/surfaces inside Maya using Python.
I have a set of dictionaries that include cvPositions, knots, form etc., each of which describes a preset 3d shape (cube, circle, pyramid etc.).
I also have a 3d matrix stored in the node's metadata that is used as an offset for the shape. This allows you to scale/move/rotate the shape without moving the transform.
The problem is that the way I am applying this matrix is very slow:
First I create a new (edit) transform at the position of the (orig) transform containing the curves.
Next I transfer the cv positions in world space from the (orig) transform to the (edit) transform.
Next I move the (edit) transform into the matrix position.
Finally I transfer the cvPositions back to the (orig) transform.
When creating hundreds of shapes, this is becoming prohibitively slow...
Can someone describe a mathematical way to apply a matrix to a set of 3d points?
Perhaps using one of the math modules or numpy?
Alternatively,
Is there a way using OpenMaya api functions to do this? Perhaps with MPointArray? This is as far as I have gotten on that front:
import maya.OpenMaya as OpenMaya

crv = OpenMaya.MFnNurbsCurve(self.dagPath)
cvs = OpenMaya.MPointArray()
space = OpenMaya.MSpace.kWorld
crv.getCVs(cvs, space)
positions = []
for i in range(cvs.length()):
    pt = cvs[i]
    positions.append((pt[0], pt[1], pt[2]))
The easiest method is to use pymel's built-in versions of points and matrices (pymel is built into maya 2011+). The math types are in pymel.datatypes; here's an example of transforming a point by a matrix in pymel:
import pymel.core as pm
pt = pm.datatypes.Point(0,0,0)
mt = pm.datatypes.Matrix(1,0,0,0, 0,1,0,0, 0,0,1,0, 5,5,5,1 )
moved = pt * mt
print(moved)
# [5,5,5]
Pymel points and matrices will let you implement your algorithm. The math is done in the API, but the Python <-> C++ conversions may still make it feel pretty slow for big data.
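If you want to skip pymel and do the math in bulk, numpy can apply the matrix to all the points at once (a sketch, assuming the points as an Nx3 array and a Maya-style 4x4 row-vector matrix; numpy isn't shipped with older Maya versions, so this assumes it is available):

import numpy as np

def transform_points(points, matrix):
    # points: Nx3; matrix: 4x4, row-vector convention (p' = p * M), as Maya uses
    pts = np.asarray(points, dtype=float)
    ones = np.ones((pts.shape[0], 1))
    homogeneous = np.hstack([pts, ones])  # promote to [x, y, z, 1]
    transformed = homogeneous.dot(np.asarray(matrix, dtype=float))
    return transformed[:, :3]

# example: translate two points by (5, 5, 5)
M = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [5, 5, 5, 1]]
print(transform_points([(0, 0, 0), (1, 2, 3)], M))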
It sounds like you're basically re-creating 'freeze transforms' followed by 'zero pivots'. Maybe you should try that as an alternative to doing this in python math...
