ABAQUS - create Contact Damping Object in Python Script

So I was trying to create a damping object under an interaction property.
Below are the damping object's arguments, as described in the manual.
**definition**
A SymbolicConstant specifying the method used to define the damping. Possible values are DAMPING_COEFFICIENT and CRITICAL_DAMPING_FRACTION. The default value is DAMPING_COEFFICIENT.
**tangentFraction**
The SymbolicConstant DEFAULT or a Float specifying the tangential damping coefficient divided by the normal damping coefficient. The default value is DEFAULT.
**clearanceDependence**
A SymbolicConstant specifying the variation of the damping coefficient or fraction with respect to clearance. Possible values are STEP, LINEAR, and BILINEAR. The default value is STEP.
If definition=CRITICAL_DAMPING_FRACTION, the only possible value is STEP.
**table**
A sequence of pairs of Floats specifying the damping properties. The items in the table data are described below.
**Table data**
If definition=DAMPING_COEFFICIENT and clearanceDependence=STEP, the table data specify the following:
• Damping coefficient.
If definition=DAMPING_COEFFICIENT and clearanceDependence=LINEAR or BILINEAR, the table data specify the following:
• Damping coefficient.
• Clearance.
Two pairs must be given for clearanceDependence=LINEAR and three pairs for clearanceDependence=BILINEAR. The first pair must have clearance=0.0, and the last pair must have coefficient=0.0.
If definition=CRITICAL_DAMPING_FRACTION, the table data specify the following:
• Critical damping fraction.
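For illustration, the three layouts described above might look like this in a script (the numbers are made up; only the shapes of the tuples matter):
# DAMPING_COEFFICIENT with clearanceDependence=STEP: a single coefficient
table_step = ((0.05,),)
# DAMPING_COEFFICIENT with clearanceDependence=LINEAR: (coefficient, clearance) pairs;
# the first pair has clearance 0.0 and the last pair has coefficient 0.0
table_linear = ((0.05, 0.0), (0.0, 0.001))
# CRITICAL_DAMPING_FRACTION: a single critical damping fraction
table_fraction = ((0.06,),)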
The definition I'm using is CRITICAL_DAMPING_FRACTION. The only difficulty I ran into is how to write the "table" part. Below is my code:
myModel.interactionProperties['Prop-1'].Damping(definition = CRITICAL_DAMPING_FRACTION, table = ((6,),))
From the manual, the table should be a sequence of pairs of Floats, and the call expects a tuple. Since for a critical damping fraction only one number is needed, I passed a single one-element pair. The error message I get is "invalid damping table".
I really couldn't figure out what I did wrong with the table part. I hope someone here can see where I went wrong. Thanks!

Your table definition is correct, but you're missing the clearanceDependence argument. To make your command work, write the following:
myModel.interactionProperties['Prop-1'].Damping(definition = CRITICAL_DAMPING_FRACTION, table = ((6,),), clearanceDependence=STEP)
There is only one possible value of clearanceDependence when definition=CRITICAL_DAMPING_FRACTION, which is STEP, but you need to pass it anyway. Unfortunately, the documentation is not very clear about that.
In the future, you can modify the interaction property manually in Abaqus/CAE and read the resulting values back with Python. That way you'll see what the arguments should look like. The abaqus.rpy file will also contain the correct command.
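A minimal sketch of that read-back approach, assuming the property was first set up interactively in CAE (the damping member and its attribute names follow the ContactDamping description above, so treat them as assumptions):
prop = myModel.interactionProperties['Prop-1']
damping = prop.damping                 # assumed member holding the damping definition
print(damping.definition)              # e.g. CRITICAL_DAMPING_FRACTION
print(damping.clearanceDependence)     # STEP
print(damping.table)                   # e.g. ((6.0,),)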

Related

plot results from user defined ACT Extension

As a result of my simulation, I want the volume of a surface body (computed using a convex hull algorithm). This calculation is done in seconds, but the plotting of the results takes a long time, which becomes a problem for the future design of experiments. I think the main problem is that a matrix (size = number of nodes, over 33,000) is filled with the same volume value just so it can be plotted. Is there any other way to obtain that value without creating this matrix? (The value retrieved must be selectable as an output parameter afterwards.)
It must be noted that the volume value is computed in Python in an intermediate script, then saved to an output file that is later read by IronPython in the main Ansys ACT script.
Thanks!
The matrix creation in the intermediate script (myICV is the computed volume):
import numpy as np

NodeNo = np.array(Col_1)           # Col_1 holds the node numbers
ICV = np.full_like(NodeNo, myICV)  # repeat the single volume value for every node
np.savetxt(outputfile, (NodeNo, ICV), delimiter=',', fmt='%f')  # two rows: node IDs, then volumes
Plot of the results in the main script:
import csv  # after the CPython function

resfile = opfile
reader = csv.reader(open(resfile, 'rb'), quoting=csv.QUOTE_NONNUMERIC)  # read the node numbers and the volume values
NodeNos = next(reader)
ICVs = next(reader)
#ScaledUxs = next(reader)
a = int(NodeNos[1])
b = ICVs[1]
ExtAPI.Log.WriteMessage(a.GetType().ToString())
ExtAPI.Log.WriteMessage(b.GetType().ToString())
userUnit = ExtAPI.DataModel.CurrentUnitFromQuantityName("Length")
DispFactor = units.ConvertUnit(1, userUnit, "mm")
for id in collector.Ids:
    collector.SetValues(int(NodeNos[NodeNos.index(id)]), {ICVs[NodeNos.index(id)] * DispFactor})  # plot results
ExtAPI.Log.WriteMessage("ICV read")
So far the result looks like this
Considering that your 'CustomPost' object is not relevant for visualization but only for passing the volume calculation as a parameter, and without adding many changes to the workflow, I suggest changing the 'Scoping Method' to 'Geometry' and then selecting a single node (if the extension result type is 'Node'; you can check this in the xml file) instead of 'All Bodies'.
If your code runs slowly because of the plotting, this should fix it, since you will be requesting just one node.
As you are referring to a DoE, I understand you expect to run this model iteratively and read the parameter result. An easy trick might be to generate a 'NamedSelection' by 'Worksheet' and select 'Mesh Node' (Entity Type) with 'NodeID' as the criterion, equal to '1' for example. Even if you change the mesh between iterations, node ID 1 should always exist, so the NamedSelection is guaranteed to be generated successfully in each iteration.
Then you can scope your 'CustomPost' to 'NamedSelection' and select the one you created. This should work.
If your extension does not accept 'NamedSelection' as a 'Scoping Method' and you are changing the mesh in each iteration (if you are not, you can directly scope a node), I think it is time to write the parameter manually as an 'Input Parameter' in the 'Parameter Set'. But in that case you will have to control the execution of the model from the Workbench platform.
I am curious to see how it goes.

Constraint on parameters in lmfit

I am trying to fit 3 peaks using lmfit with a skewed Voigt profile (this is not that important for my question). I want to set a constraint on the peak centers of the form:
peak1 = SkewedVoigtModel(prefix='sv1_')
pars.update(peak1.make_params())
pars['sv1_center'].set(x)
peak2 = SkewedVoigtModel(prefix='sv2_')
pars.update(peak2.make_params())
pars['sv2_center'].set(1000+x)
peak3 = SkewedVoigtModel(prefix='sv3_')
pars.update(peak3.make_params())
pars['sv3_center'].set(2000+x)
Basically I want them to be 1000 apart from each other, but I need to fit for the actual shift x. I know that I can force some parameters to be equal using pars['sv2_center'].set(expr='sv1_center'), but what I would need is something like pars['sv2_center'].set(expr='sv1_center'+1000) (which doesn't work as written). How can I achieve what I need? Thank you!
Just do:
pars['sv2_center'].set(expr='sv1_center+1000')
pars['sv3_center'].set(expr='sv1_center+2000')
The constraint expression is a Python expression that will be evaluated every time the constrained parameter needs to get its value.
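Putting it together with the question's setup, a minimal sketch might look like this (x, y and xdata stand in for the question's data):
from lmfit.models import SkewedVoigtModel

peak1 = SkewedVoigtModel(prefix='sv1_')
peak2 = SkewedVoigtModel(prefix='sv2_')
peak3 = SkewedVoigtModel(prefix='sv3_')
model = peak1 + peak2 + peak3

pars = model.make_params()
pars['sv1_center'].set(value=x)                  # initial guess for the shift
pars['sv2_center'].set(expr='sv1_center+1000')   # always 1000 above peak 1
pars['sv3_center'].set(expr='sv1_center+2000')   # always 2000 above peak 1

# only sv1_center is varied in the fit; the other two centers follow automatically
result = model.fit(y, pars, x=xdata)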

Jagged Quote of IV for BlackVarianceSurface

I am following this example and trying to adapt it to my needs.
In the section of code:
implied_vols = ql.Matrix(len(strikes), len(expiration_dates))
for i in range(implied_vols.rows()):
    for j in range(implied_vols.columns()):
        implied_vols[i][j] = data[j][i]
[1]: http://gouthamanbalaraman.com/blog/volatility-smile-heston-model-calibration-quantlib-python.html
This assumes the IV matrix has a quote at every strike for every expiry. In practice, the quotes are often stored in a dictionary rather than an array for exactly this reason.
For example, SPX options have different strike increments at different expirations, so some strikes are quoted at one expiry but not another. I realize I can force the matrix to be square by giving every cell a numerical value, but I am assuming that inserting a 0 at a given strike/expiry is a bad idea. Alternatively, forcing all expiries down to the common set of strikes throws out lots of data.
What happens if the volatility quotes you have are not square and you don't want to throw out data when building a ql.Matrix to hand to BlackVarianceSurface?
Unfortunately, there's no ready-made solution. As you say, filling the missing cells with 0 is a bad idea; but filling them by manually interpolating the missing values should work. The best way to do it probably depends on how sparse your data is...
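A minimal sketch of the manual interpolation, assuming the quotes live in a dict keyed by expiry date that maps strike to vol (the names and the choice of linear interpolation with flat extrapolation are assumptions, not part of the original example):
import numpy as np
import QuantLib as ql

def build_surface(calc_date, calendar, day_count, quotes, strikes, expiration_dates):
    # quotes: {expiry_date: {strike: implied_vol}} with possibly missing strikes
    implied_vols = ql.Matrix(len(strikes), len(expiration_dates))
    for j, expiry in enumerate(expiration_dates):
        quoted = sorted(quotes[expiry].items())      # the (strike, vol) pairs we do have
        xs = [k for k, v in quoted]
        ys = [v for k, v in quoted]
        for i, strike in enumerate(strikes):
            # fill the gaps by interpolating across the strikes quoted for this expiry
            implied_vols[i][j] = float(np.interp(strike, xs, ys))
    return ql.BlackVarianceSurface(calc_date, calendar, expiration_dates,
                                   strikes, implied_vols, day_count)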

Python subset Sum with floats?

I'm trying to see if a float can be constructed by adding values from a list of floats. Since the values are experimental, there is another list that contains error room values, such that the float can be constructed from the float list plus 0 or 1 of every value in the error room list, plus some additional error margin. The float may or may not be constructible from these parameters.
The lengths of the lists, the error margin, and the maximum size of the combination values are user defined.
I thought an efficient way to solve this problem would be to have a function create and store every possible combination for the given parameters in an array, then check each float to see if it matches a combination, and print out how the combination was obtained if a match exists.
For example, given a numlist [132.0423, 162.0528, 176.0321] (3-10 values total),
an errorlist [2.01454, 18.0105546] (0-4 values total),
an error room of 2, and a maximum combination size of 1000,
the float 153.0755519 can be constructed as 132.0423 + 2.01454 + 18.0105546, i.e. numlist[0] + errorlist[0] + errorlist[1], and be within the error room.
I have no idea how to go about solving such a problem. Perhaps using dynamic programming? I was thinking it would be computationally efficient to create the combinations array once via a separate function, then continuously pass it into a comparison function; a brute-force sketch of this idea is included below.
The background: the large floats are fragment masses from a mass spectrometer output, and my team is attempting to analyze which fragments came from our initial protein, which can only fragment into the pieces defined in numlist but can occasionally lose a small functional group (water, alcohol, hydrogen, etc.).
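A minimal brute-force sketch of the approach described above (the function name is hypothetical; it enumerates subsets of numlist plus 0 or 1 of each errorlist value, and reports combinations that land within the error room):
from itertools import chain, combinations

def find_combinations(target, numlist, errorlist, error_room, max_value):
    # every non-empty subset of numlist (use combinations_with_replacement if
    # the same fragment may appear more than once)
    num_subsets = chain.from_iterable(
        combinations(numlist, k) for k in range(1, len(numlist) + 1))
    # 0 or 1 of each errorlist value, i.e. every subset including the empty one
    error_subsets = list(chain.from_iterable(
        combinations(errorlist, k) for k in range(len(errorlist) + 1)))
    matches = []
    for nums in num_subsets:
        if sum(nums) > max_value:   # "maximum combination size" read here as a cap on the sum
            continue
        for errs in error_subsets:
            total = sum(nums) + sum(errs)
            if abs(total - target) <= error_room:
                matches.append((nums, errs, total))
    return matches

# the example from the question: finds 132.0423 + 2.01454 + 18.0105546
print(find_combinations(153.0755519,
                        [132.0423, 162.0528, 176.0321],
                        [2.01454, 18.0105546],
                        2, 1000))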

What exactly does the "returned value" in langid.py mean?

Besides the correct language ID, langid.py returns a certain value: "The value returned is a score for the language. It is not a probability estimate, as it is not normalized by the document probability since this is unnecessary for classification."
But what does the value mean?
I'm actually the author of langid.py. Unfortunately, I've only just spotted this question now, almost a year after it was asked. I've tidied up the handling of the normalization since this question was asked, so all the README examples have been updated to show actual probabilities.
The value that you see there (and that you can still get by turning normalization off) is the un-normalized log-probability of the document. Because log/exp are monotonic, we don't actually need to compute the probability to decide the most likely class. The actual value of this log-probability is not of any use to the user. I should probably never have included it, and I may remove its output in the future.
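For reference, a small usage sketch showing both forms of the score, using the packaged model string as in the project README (the printed numbers are only illustrative):
from langid.langid import LanguageIdentifier, model

# normalized: confidence in [0, 1], as in the updated README examples
norm = LanguageIdentifier.from_modelstring(model, norm_probs=True)
print(norm.classify("This is a test"))    # e.g. ('en', 0.99...)

# normalization turned off: the raw un-normalized log-probability described above
raw = LanguageIdentifier.from_modelstring(model, norm_probs=False)
print(raw.classify("This is a test"))     # e.g. ('en', -54.4...)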
I think this is the important chunk of langid.py code:
def nb_classify(fv):
    # compute the log-factorial of each element of the vector
    logfv = logfac(fv).astype(float)
    # compute the probability of the document given each class
    pdc = np.dot(fv, nb_ptc) - logfv.sum()
    # compute the probability of the document in each class
    pd = pdc + nb_pc
    # select the most likely class
    cl = np.argmax(pd)
    # turn the pd into a probability distribution
    pd /= pd.sum()
    return cl, pd[cl]
It looks to me like the author is calculating something like the multinomial log-posterior of the data for each of the possible languages. logfv calculates the logarithm of the denominator of the PMF (x_1!...x_k!). np.dot(fv,nb_ptc) calculates the logarithm of the p_1^x_1...p_k^x_k term. So pdc looks like the list of language-conditional log-likelihoods (except that it's missing the n! term). nb_pc looks like the prior probabilities, so pd would be the log-posteriors. The normalization line, pd /= pd.sum(), confuses me, since one usually normalizes probability-like values (not log-probability values); also, the examples in the documentation (('en', -55.106250761034801)) don't look like they've been normalized. Maybe they were generated before the normalization line was added?
Anyway, the short answer is that this value, pd[cl], is a confidence score. My understanding based on the current code is that it should be a value between 0 and 1/97 (since there are 97 languages), with a smaller value indicating higher confidence.
Looks like a value that tells you how certain the engine is that it guessed the correct language for the document. I think generally the closer the number is to 0, the more sure it is, but you should be able to test that by mixing languages together and passing them in to see what values you get out. It allows you to fine-tune your program when using langid, depending on what you consider 'close enough' to count as a match.
