Plot results from a user-defined ACT Extension - Python

As a result of my simulation, I want the volume of a surface body (computed using a convex hull algorithm). This calculation is done in seconds, but the plotting of the results takes a long time, which becomes a problem for the future design of experiments. I think the main problem is that a matrix (size = number of nodes, over 33,000 nodes) is filled with the same volume value just so it can be plotted. Is there any other way to obtain that value without creating this matrix? (The retrieved value must be selectable as an output parameter afterwards.)
It should be noted that the volume value is computed in Python in an intermediate script, saved to an output file, and later read by IronPython in the main script in Ansys ACT.
Thanks!
The matrix creation in the intermediate script (myICV is the computed volume):
import numpy as np

NodeNo = np.array(Col_1)           # node numbers from the first column
ICV = np.full_like(NodeNo, myICV)  # every node gets the same volume value
np.savetxt(outputfile, (NodeNo, ICV), delimiter=',', fmt='%f')  # two rows: node IDs, values
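If only the single value is needed downstream (the answer below ends up scoping the result to one node anyway), here is a minimal sketch of an alternative that keeps the same two-row CSV layout but writes a single node. Node ID 1 is an assumption; use an ID that exists in your mesh:

import numpy as np

# Two rows, one column each: node ID on the first row, volume on the second.
np.savetxt(outputfile, (np.array([1.0]), np.array([myICV])), delimiter=',', fmt='%f')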
Plot of the results in the main script:
import csv  # after the CPython function

resfile = opfile
# Read the node numbers and the values written by the intermediate script
reader = csv.reader(open(resfile, 'rb'), quoting=csv.QUOTE_NONNUMERIC)
NodeNos = next(reader)
ICVs = next(reader)
#ScaledUxs = next(reader)
a = int(NodeNos[1])
b = ICVs[1]
ExtAPI.Log.WriteMessage(a.GetType().ToString())
ExtAPI.Log.WriteMessage(b.GetType().ToString())
userUnit = ExtAPI.DataModel.CurrentUnitFromQuantityName("Length")
DispFactor = units.ConvertUnit(1, userUnit, "mm")
for id in collector.Ids:
    # Every node gets the same value, so this lookup runs for all 33,000+ nodes
    collector.SetValues(int(NodeNos[NodeNos.index(id)]), {ICVs[NodeNos.index(id)] * DispFactor})  # plot results
ExtAPI.Log.WriteMessage("ICV read")
So far the result looks like this (screenshot omitted).

Considering that your 'CustomPost' object is not relevant for visualization but only for passing the volume calculation as a parameter, and without many changes to the workflow, I suggest you change the 'Scoping Method' to 'Geometry' and then select a single node (if the extension result type is 'Node'; you can check this in the XML file), instead of 'All Bodies'.
If your code runs slowly because of the plotting, this should fix it, because you will be requesting just one node.
As you are referring to a DoE, I understand you expect to run this model iteratively and read the parameter result. An easy trick might be to generate a 'NamedSelection' by 'Worksheet' and select 'Mesh Node' (Entity Type) with 'NodeID' as Criterion, equal to '1', for example. Even if you change the mesh between iterations, node ID 1 should always exist, so your NamedSelection is guaranteed to be generated successfully in each iteration.
Then you can scope your 'CustomPost' to 'NamedSelection' and pick the one you created. This should work.
If your extension does not accept 'NamedSelection' as 'Scoping Method' and you are changing the mesh in each iteration (if you are not, you can directly scope a node), I think it is time to manually write the parameter as an 'Input Parameter' in the 'Parameter Set'. But in that case you will have to control the execution of the model from the Workbench platform.
I am curious to see how it goes.

Is there another way to convert ee.Number to float besides getInfo()?

Hello friends!
Summary:
I have an ee.FeatureCollection containing around 8,500 ee.Point objects. I would like to calculate the distance of these points to a given coordinate, let's say (0.0, 0.0).
For this I use the function geopy.distance.distance() (ref: https://geopy.readthedocs.io/en/latest/#module-geopy.distance). As input, the function takes two coordinates in the form of two tuples containing two floats.
Problem: When I try to convert the coordinates from an ee.List to float, I always use the getInfo() function. I know this is a callback and very time-intensive, but I don't know another way to extract them. Long story short: extracting the data as ee.Number takes less than a second; getting it as float takes more than an hour. Is there any trick to fix this?
Code:
import ee

ee.Initialize()

fc_containing_points = ee.FeatureCollection('projects/ee-philadamhiwi/assets/Flensburg_100')  # ee.FeatureCollection
list_containing_points = fc_containing_points.toList(fc_containing_points.size())  # ee.List
fc_containing_points_length = fc_containing_points.size()  # ee.Number
for index in range(fc_containing_points_length.getInfo()):  # I need to convert ee.Number to int
    point_tmp = list_containing_points.get(index)  # ee.ComputedObject
    point = ee.Feature(point_tmp)  # transform ee.ComputedObject to ee.Feature
    coords = point.geometry().coordinates()  # ee.List containing 2 ee.Numbers
    # When I run the loop without the next line, I get all the data I want
    # as ee.Number in under 1 sec; when I add it, the loop takes hours.
    coords_as_tuple_of_ints = (coords.getInfo()[1], coords.getInfo()[0])  # tuple containing 2 floats
PS: This is my first question, please be patient with me.
I would use .map instead of your loop. This stays server-side until you export the table (or possibly do a .getInfo on the whole thing).
fc_containing_points = ee.FeatureCollection('projects/ee-philadamhiwi/assets/Flensburg_100')
fc_with_distance = fc_containing_points.map(
    lambda feature: feature.set(
        "distance_to_point",
        feature.distance(ee.Feature(ee.Geometry.Point([0.0, 0.0])))
    )
)
# Then export using ee.batch.Export.table.toDrive / toAsset, or call getInfo
(An alternative might be to use ee.Image.paint to convert the target point to an image, then use ee.Image.distance to calculate the distance to the point (as an image), then use reduceRegions over the feature collection with all the points. But 1) you can only calculate distance out to a certain range, and 2) I don't think it would be any faster.)
To comment on your code: you are probably aware that loops (especially client-side loops) are frowned upon in GEE (primarily for the performance reasons you've run into), but also note that any time you call .getInfo on a server-side object it incurs a performance cost. So this line
coords_as_tuple_of_ints = (coords.getInfo()[1],coords.getInfo()[0])
would take roughly double the time of this:
coords_client = coords.getInfo()
coords_as_tuple_of_ints = (coords_client[1],coords_client[0])
Finally, you could always just export your entire feature collection to a shapefile (using ee.batch.Export.table.... as above) and do all the operations locally using geopy.
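As a rough sketch of that last idea without the export step, assuming the collection is small enough to pull client-side in one call (larger collections should go through the export path), you can fetch the whole collection with a single getInfo and do the distance math locally:

import ee
import geopy.distance

ee.Initialize()
fc = ee.FeatureCollection('projects/ee-philadamhiwi/assets/Flensburg_100')

# One getInfo call for the entire collection instead of two per point.
features = fc.getInfo()['features']
origin = (0.0, 0.0)  # (lat, lon)
distances = [
    geopy.distance.distance(
        origin,
        (f['geometry']['coordinates'][1], f['geometry']['coordinates'][0])  # GeoJSON is [lon, lat]
    ).km
    for f in features
]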

Why does manipulating xarrays take up so much memory permanently?

I'm new to Xarray (using it inside Jupyter notebooks), and up to now everything has worked like a charm, except when I started to look at how much RAM my functions use (e.g. with htop), which confuses me (I didn't find anything on Stack Exchange).
I am combining monthly data into yearly means, taking into account month lengths, masking NaN values, and also using specific months only, which requires the use of groupby and resample.
As I can see from the memory profiler, these operations temporarily take up ~15 GB of RAM, which as such is not a problem because I have 64 GB of RAM at hand.
Nonetheless, it seems like some memory stays blocked permanently, even though I call these methods inside a function. For the function below it blocks ~4 GB of memory, although the resulting xarray only has a size of ~440 MB (55*10**6 float64 entries); with more complex operations it blocks more memory.
Explicitly using del, gc.collect(), or DataArray.close() inside the function did not change anything.
A basic function to compute a weighted yearly mean from monthly data looks like this:
import xarray as xr

test = xr.open_dataset(path)['pr']

def weighted_temporal_mean(ds):
    """
    Taken from https://ncar.github.io/esds/posts/2021/yearly-averages-xarray/
    Compute the yearly average from monthly data, taking into account month
    length and masking NaN values.
    """
    # Determine the month length
    month_length = ds.time.dt.days_in_month
    # Calculate the weights
    wgts = month_length.groupby("time.year") / month_length.groupby("time.year").sum()
    # Set up the masking for NaN values
    cond = ds.isnull()
    ones = xr.where(cond, 0.0, 1.0)
    # Calculate the numerator
    obs_sum = (ds * wgts).resample(time="AS").sum(dim="time")
    # Calculate the denominator
    ones_out = (ones * wgts).resample(time="AS").sum(dim="time")
    # Return the weighted average
    return obs_sum / ones_out

wm = weighted_temporal_mean(test)
print("nbytes in MB:", wm.nbytes / (1024 * 1024))
Any idea how to ensure that the memory is freed up, or am I overlooking something and this behavior is actually expected?
Thank you!
The only hypothesis I have for this behavior is that some of the operations involving the passed-in ds modify it in place, increasing its size, since, apart from the returned object, this is the only object that should survive the function execution.
That can easily be verified by using del on the ds structure used as input after the function has run. (If you need the data afterwards, re-read it, or make a deepcopy before calling the function.)
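A minimal sketch of that check (assuming path points at your dataset, as in the question's code):

import gc
import xarray as xr

test = xr.open_dataset(path)['pr']
wm = weighted_temporal_mean(test)
del test      # drop the only surviving reference to the input
gc.collect()  # force a collection pass
# Re-check RSS in htop: if usage drops, the input array is what kept the memory alive.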
If that does not resolve the problem, then this is an issue with the xarray project, and I'd advise you to open an issue in their project.

Abaqus Python command to insert a keyword into the .inp file

The whole objective behind this question stems from trying to multi-process the creation of Linear Constraint Equations (http://abaqus.software.polimi.it/v6.14/books/usb/default.htm?startat=pt08ch35s02aus129.html#usb-cni-pequation) in Abaqus/CAE for applying periodic boundary conditions to a meshed model. Since my model has over a million elements and I need to perform a Monte Carlo simulation of 1000 such models, I would like to parallelize the procedure, for which I haven't found a solution due to the licensing and multi-threading restrictions associated with Abaqus/CAE. Some discussion of this is here: Python multiprocessing from Abaqus/CAE
I am currently trying to perform the equation definitions outside of Abaqus using the node sets already created, as I know the syntax of equations in the input file:
** Constraint: <name>
*Equation
<number of terms>
<set1>, <dof>, <coefficient1>
<set2>, <dof>, <coefficient2>
<set3>, <dof>, <coefficient3>
e.g.
** Constraint: Corner_c1_Constraint-1-pair1
*Equation
3
All-1.c1_Node-1, 1, 1.
All-1.c5_Node-1, 1, -1.
RefPoint-3.SetRefPoint3, 1, -1.
Instead of directly writing these lines into the .inp file, I can also write these commands to a separate file and link it to the .inp file of the model using
*EQUATION, INPUT=file_name
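As a minimal sketch of that route (the file name, set names, and coefficients here are illustrative, mirroring the example above), the external file can be generated with plain Python outside Abaqus/CAE, which is what makes this step parallelizable:

# Write the *EQUATION data lines to a separate file that the .inp file
# references via *EQUATION, INPUT=constraint_equations.inp
terms = [
    ("All-1.c1_Node-1", 1, 1.0),
    ("All-1.c5_Node-1", 1, -1.0),
    ("RefPoint-3.SetRefPoint3", 1, -1.0),
]
with open("constraint_equations.inp", "w") as f:
    f.write("%d\n" % len(terms))  # number of terms in the equation
    for setname, dof, coeff in terms:
        f.write("%s, %d, %g\n" % (setname, dof, coeff))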
I am looking for the Abaqus Python command to write a keyword such as the above to the .inp file instead of specifying the equation constraints themselves.
The User's Guide linked above instructs specifying this via the GUI, but I haven't been able to do that in my version of Abaqus/CAE 2018.
Abaqus/CAE Usage:
Interaction module: Create Constraint: Equation: click mouse button 3 while holding the cursor over the data table, and select Read from File.
So I am looking for a command from the Scripting Reference Manual to do this instead. There are commands to parse input files (http://abaqus.software.polimi.it/v6.14/books/ker/pt01ch24.html), but nothing to directly write to the input file rather than doing everything via scripting. I know I can hard-code this into the input file, but the sheer number of simulations I would like to perform calls for every bit of automation possible. I have already optimised the code using appropriate algorithms and numpy arrays, yet the pre-processing itself takes hours for a single model.
p.s. This is my first post on SO, so I'm not sure if this question is phrased in the appropriate format. I would appreciate any answers to the actual question, or any other solutions for the intended outcome of parallelizing the pre-processing steps in Abaqus/CAE.
You are looking for the KeywordBlock object. This allows you to write anything to the input file (keywords, comments, etc) as a formatted string. I believe this approach is much less error-prone than alternatives that require you to programmatically write an input file (or update an existing one) on your own.
The KeywordBlock object is created when a Part Instance is created at the Assembly level. It stores everything you do in the CAE with the corresponding keywords.
Note that any changes you make to the KeywordBlock object will be synced to the Job input file when it is written, but will not update the CAE Model database. So for example, if you use the KeywordBlock to store keywords for constraint equations, the constraint definitions will not show up in the CAE model tree. The rest of the Mdb does not know they exist.
As you know, keywords must be written to the appropriate section of an input file. For example, the *equation keyword may be defined at the Part, Part Instance, or Assembly level (see the Keywords Reference Manual). This must also be considered when you store keywords in the KeywordBlock object (it will not auto-magically put your keywords in the right spot, unfortunately!). A side effect is that it is not always safe to write to the KeywordBlock - that is, whenever it may conflict with subsequent changes made via the CAE GUI. I believe that the Abaqus docs recommend that you add your keywords as a last step.
So basically, read the KeywordBlock object, find the right place, and insert your new keyword. Here's an example snippet to get you started:
partname = "Part-1"
model = mdb.models["Model-1"]
modelkwb = model.keywordBlock
assembly = model.rootAssembly
if assembly.isOutOfDate:
assembly.regenerate()
# Synch edits to modelkwb with those made in the model. We don't need
# access to *nodes and *elements as they would appear in the inp file,
# so set the storeNodesAndElements arg to False.
modelkwb.synchVersions(storeNodesAndElements=False)
# Search the modelkwb for the desired insertion point. In this example,
# we are looking for a line that indicates we are beginning the Part-Level
# block for the specific Part we are interested in. If it is found, we
# break the loop, storing the line number, and then write our keywords
# using the insert method (which actually inserts just below the specified
# line number, fyi).
line_num = 0
for n, line in enumerate(modelkwb.sieBlocks):
if line.replace(" ","").lower() == "*part,name={0}".format(partname.lower()):
line_num = n
break
if line_num:
kwds = "your keyword as a string here (may be multiple lines)..."
modelkwb.insert(position=line_num, text=kwds)
else:
e = ("Error: Part '{}' was not found".format(partname),
"in the Model KeywordBlock.")
raise Exception(" ".join(e))
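To tie this back to the original question, a hypothetical follow-up (the file name is made up, and the insertion point must be a block where *EQUATION is valid: Part, Part Instance, or Assembly level) would insert the keyword pointing at your externally generated terms file:

# Pull equation terms from an external file generated by your own
# pre-processing code, instead of storing thousands of data lines here.
kwds = "*EQUATION, INPUT=constraint_equations.inp"
modelkwb.insert(position=line_num, text=kwds)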

ABAQUS - create Contact Damping Object in Python Script

So I was trying to create a damping object under an interaction property.
Below are the damping object's arguments:
**definition**
A SymbolicConstant specifying the method used to define the damping. Possible values are DAMPING_COEFFICIENT and CRITICAL_DAMPING_FRACTION. The default value is DAMPING_COEFFICIENT.
**tangentFraction**
The SymbolicConstant DEFAULT or a Float specifying the tangential damping coefficient divided by the normal damping coefficient. The default value is DEFAULT.
**clearanceDependence**
A SymbolicConstant specifying the variation of the damping coefficient or fraction with respect to clearance. Possible values are STEP, LINEAR, and BILINEAR. The default value is STEP.
If definition=CRITICAL_DAMPING_FRACTION, the only possible value is STEP.
**table**
A sequence of pairs of Floats specifying the damping properties. The items in the table data are described below.
**Table data**
If definition=DAMPING_COEFFICIENT and clearanceDependence=STEP, the table data specify the following:
• Damping coefficient.
If definition=DAMPING_COEFFICIENT and clearanceDependence=LINEAR or BILINEAR, the table data specify the following:
• Damping coefficient.
• Clearance.
Two pairs must be given for clearanceDependence=LINEAR and three pairs for clearanceDependence=BILINEAR. The first pair must have clearance=0.0, and the last pair must have coefficient=0.0.
If definition=CRITICAL_DAMPING_FRACTION, the table data specify the following:
• Critical damping fraction.
The definition I'm using is CRITICAL_DAMPING_FRACTION. The only difficulty I run into is how to write the "table" part. Below is my code:
myModel.interactionProperties['Prop-1'].Damping(definition = CRITICAL_DAMPING_FRACTION, table = ((6,),))
The manual says the table should be a sequence of pairs of floats, expecting a tuple, and for the critical damping fraction only one number is needed. The error message I got is "invalid damping table".
I really couldn't find out what I did wrong with the table part. I hope someone here knows where I'm wrong. Thanks!
Your table definition is correct, but you're missing the definition of clearanceDependence. To make your command work, write the following:
myModel.interactionProperties['Prop-1'].Damping(definition = CRITICAL_DAMPING_FRACTION, table = ((6,),), clearanceDependence=STEP)
There is only one possible value for the clearanceDependence property when definition=CRITICAL_DAMPING_FRACTION, which is STEP, but you need to define it anyway. Unfortunately, the documentation is not very clear about that.
In the future, you can modify the interaction property manually in Abaqus and just read the values back using Python; that way you'll see what the command should look like. Plus, the abaqus.rpy file will contain the correct command.
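As a quick sketch of that read-back approach (assuming, per the scripting reference, that the ContactProperty exposes its damping settings as a damping member):

# Inspect the damping object to confirm what the GUI-created command looks like.
damping = myModel.interactionProperties['Prop-1'].damping
print(damping.definition, damping.clearanceDependence, damping.table)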

Pattern for associating pyparsing results with a linked-list of nodes

I have defined a pyparsing rule to parse this text into a syntax-tree...
TEXT COMMANDS:
add Iteration name = "Cisco 10M/half"
append Observation name = "packet loss 1"
assign Observation results_text = 0.0
assign Observation results_bool = True
append DataPoint
assign DataPoint metric = txpackets
assign DataPoint units = packets
append DataPoint
assign DataPoint metric = txpackets
assign DataPoint units = packets
append Observation name = "packet loss 2"
append DataPoint
assign DataPoint metric = txpackets
assign DataPoint units = packets
append DataPoint
assign DataPoint metric = txpackets
assign DataPoint units = packets
SYNTAX TREE:
['add', 'Iteration', ['name', 'Cisco 10M/half']]
['append', 'Observation', ['name', 'packet loss 1']]
['assign', 'Observation', ['results_text', '0.0']]
['assign', 'Observation', ['results_bool', 'True']]
['append', 'DataPoint']
['assign', 'DataPoint', ['metric', 'txpackets']]
['assign', 'DataPoint', ['units', 'packets']]
...
I'm trying to associate all the nested key-value pairs in the syntax tree above into a linked list of objects... the hierarchy looks something like this (each word is a namedtuple; children in the hierarchy are on the parent's list of children):
Log: [
Iteration: [
Observation:
[DataPoint, DataPoint],
Observation:
[DataPoint, DataPoint]
]
]
The goal of all this is to build a generic test data-acquisition platform to drive the flow of tests against network gear and record the results. After the data is in this format, the same data structure will be used to build a test report. To answer the question in the comments below: I chose a linked list because it seemed like the easiest way to sequentially dequeue the information when writing the report. However, I would rather not assign Iteration or Observation sequence numbers before finishing the tests, in case we find problems and insert more Observations in the course of conducting the test. My theory is that the position of each element in the list is sufficient, but I'm willing to change that if it's part of the problem.
The problem is that I'm getting lost trying to assign Key-Values to objects in the linked list after it's built. For instance, after I insert an Observation namedtuple into the first Iteration, I have trouble reliably handling the update of assign Observation results_bool = True in the example above.
Is there a generalized design pattern to handle this situation? I have googled this for a while, but I can't seem to make the link between parsing the text (which I can do) and managing the data hierarchy (the main problem). Hyperlinks or small demo code are fine... I just need pointers to get on the right track.
I am not aware of an actual design pattern for what you're looking for, but I have a great passion for the issue at hand. I work heavily with network devices, and parsing and organizing the data is a large ongoing challenge for me.
It's clear that the problem is not parsing the data, but what you do with it afterwards. This is where you need to think about the meaning you are attaching to the data you have parsed. The nested-list method might work well for you if the objects containing the lists are also meaningful.
Namedtuples are great for quick-and-dirty class-ish behavior, but they fall flat when you need them to do anything outside of basic attribute access, especially considering that as tuples they are immutable. It seems to me that you'll want to replace certain namedtuple objects with full-blown classes. This way you can highly customize the behavior and methods available.
For example, you know that an Iteration will always contain 1 or more Observation objects which will then contain 1 or more DataPoint objects. If you can accurately describe the relationships, this sets you on the path to handling them.
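Here is a minimal sketch of that idea (all names are illustrative, and it assumes the parsed commands arrive as (verb, object, optional key/value) triples like the syntax tree above): one small mutable class for every level, plus a cursor to the most recently created node of each kind, so a later "assign Observation ..." always updates the right Observation.

class Node:
    """A mutable tree node, replacing the namedtuples."""
    def __init__(self, kind):
        self.kind = kind
        self.attrs = {}
        self.children = []

    def append(self, child):
        self.children.append(child)
        return child

log = Node("Log")
current = {"Log": log}  # cursor: most recently created node of each kind
parent_kind = {"Iteration": "Log", "Observation": "Iteration", "DataPoint": "Observation"}

def handle(verb, kind, kv=None):
    if verb in ("add", "append"):
        # Attach the new node under the latest node of the parent kind.
        current[kind] = current[parent_kind[kind]].append(Node(kind))
    if kv is not None:
        current[kind].attrs[kv[0]] = kv[1]

# Feed it the parsed commands from the syntax tree:
handle("add", "Iteration", ("name", "Cisco 10M/half"))
handle("append", "Observation", ("name", "packet loss 1"))
handle("assign", "Observation", ("results_bool", "True"))
handle("append", "DataPoint")
handle("assign", "DataPoint", ("metric", "txpackets"))

Because position in each children list is preserved, you can still dequeue Iterations, Observations, and DataPoints in order when writing the report, without assigning sequence numbers up front.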
I wound up using textfsm, which allows me to keep state between different lines while parsing the configuration file.
