Is it possible to import materials from a material library in Abaqus CAE using scripts?

I'm working on an Abaqus 6.14 plugin that would help me with my Engineer's Thesis, which I'm writing in Python. According to the Abaqus Scripting Reference Guide, it is possible to import materials from Output Databases (*.odb files) by calling:
from abaqus import mdb
mdb.models[name].materialsFromOdb(filename)
However, since Abaqus allows the user to export/import materials to/from relatively lightweight Material Libraries (*.lib files) and share them between models, I would like to import the materials from these rather than from the often bulky *.odb files.
Of course this can be done manually with ease, but I want to reduce the amount of repetitive work with my plugin, as I need to run dozens of simulations on pretty similar models with varying materials and some other parameters. I am aware I could also provide the necessary materials in a template *.cae file, but this would be quite inconvenient if I had to manually import new materials into dozens of models or update an existing material's properties.
What I am looking for is a workaround that allows importing materials from material libraries into Abaqus mdb models with Python scripts, without implementing a custom *.lib file parser, if such a workaround exists.

The first thing you need to know is that an Abaqus material library is just a pickled file. No special parser is needed to work with it; you can just use the standard Python libraries pickle or cPickle. Of course, you need to figure out the exact structure of the objects inside. This is not difficult, as you'll see it's just a list of simple tuples.
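For example, a minimal sketch of inspecting a library this way (the file name is a placeholder, and the exact layout of the unpickled objects should be verified against your own files):

import pickle

# An Abaqus material library is an ordinary pickled file, so the standard
# library can read it; the payload is reportedly a list of simple tuples.
with open('my_materials.lib', 'rb') as f:
    library = pickle.load(f)

for entry in library:
    print(entry)  # inspect each tuple to locate material names/definitions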
However, if you have an existing material library and you want to import a material into your Abaqus CAE database, there is an existing method to do it.
There is a method in Abaqus which takes a material string from a material database and creates a material object from it. I can't remember the exact name, but if you import one material manually and look into the abaqus.rpy file, you will see it there.
One tricky thing here is that in order to use this method, you need the material string from the material library. You can get it by reading the material database file which, as already mentioned, is in pickle format.
As you know, Abaqus already has a way of reading data from the material library and importing it into a CAE model. There is a Python module which you could use, but it can only be used inside the GUI process, not the kernel one. If you want to spend some time, you can figure out which module does that: inside the Abaqus installation folder you will find some .pyc files, and if you use a Python decompiler you can recover the source code for those modules. Look for the ones whose names begin with mat.
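As a hedged sketch of that last step (the installation path is hypothetical, and the uncompyle6 decompiler is assumed to be installed and on PATH):

import glob
import subprocess

# Hypothetical location; adjust to your Abaqus version and installation layout.
abaqus_lib = r'C:\SIMULIA\Abaqus\6.14-1\code\python\lib'

for pyc in glob.glob(abaqus_lib + r'\mat*.pyc'):
    # uncompyle6 prints the recovered source to stdout; save it next to the .pyc.
    with open(pyc[:-1], 'w') as out:
        subprocess.call(['uncompyle6', pyc], stdout=out)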

Related

How do I include a data-extraction module into my python project?

I am currently starting a somewhat larger project in Python and I am unsure about how best to structure it. Or, to put it in different terms, how to build it in the most "pythonic" way. Let me try to explain the main functionality:
It is supposed to be a tool or toolset for extracting data from different sources, at the moment mainly SQL databases, and in the future maybe also data from files stored on network locations. It will probably consist of three main parts:
A data model which will hold all the data extracted from files / SQL. This will be some combination of classes / instances thereof. No big deal here.
One or more scripts which will control everything (Should the data be displayed? Written to another file? Which data exactly needs to be fetched? etc.). Also pretty straightforward.
And some module/class (or multiple modules) which will handle the data extraction. This is where I mainly struggle.
So for the actual questions:
Should I place the classes of the data model and the "extractor" into one folder/package and access them from outside the package via my "control script"? Or should I place everything together?
How should I build the "extractor"? I already tried three different approaches for a SqlReader module/class: I tried making it just a simple module, not a class, but I didn't really find a clean way of how and where to initialize it (the SQL connection needs to be set up). I tried making it a class and creating one instance, but then I need to pass this instance around into the different classes of the data model, because each needs to be able to extract data. And I tried making it a static class (defining everything as a @classmethod), but again, I didn't like setting it up and it also kind of felt wrong.
Should the main script "know" about the extractor module? Or should it just interact with the data model itself? If not, again the question: where, when and how to initialize the SqlReader?
And last but not least, how do I make sure I close the SQL connection whenever my script ends, even if it ends through an error? I am using cx_Oracle, by the way (a sketch follows below).
I am happy about any hints / suggestions / answers etc. :)
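Regarding the last question, here is a minimal sketch of an extractor that guarantees the connection is closed even when an error occurs (the connection parameters and the query are placeholders):

import cx_Oracle

class SqlReader(object):
    """Thin wrapper around a cx_Oracle connection, usable as a context manager."""

    def __init__(self, user, password, dsn):
        self.conn = cx_Oracle.connect(user, password, dsn)

    def fetch_all(self, query):
        cursor = self.conn.cursor()
        try:
            cursor.execute(query)
            return cursor.fetchall()
        finally:
            cursor.close()

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Runs on normal exit and on exceptions alike.
        self.conn.close()

# The connection is closed even if an exception is raised inside the block.
with SqlReader('scott', 'tiger', 'localhost/XEPDB1') as reader:
    rows = reader.fetch_all('SELECT * FROM employees')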
For this project you will need the basic data science toolkit: pandas, Matplotlib, and maybe NumPy. You will also need sqlite3 (built-in) or another SQL module to work with the databases.
pandas: Used to extract, manipulate, and analyze data.
Matplotlib: Visualize data and make human-readable graphs for further analysis.
NumPy: Build fast, stable arrays of data that work much faster than Python's lists.
Now, this is just a guideline; you will need to dig deeper into their documentation, then use what you need in your project.
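As a tiny, hedged illustration of the pandas side (the database file and table name are invented):

import sqlite3
import pandas as pd

# Pull a whole table into a DataFrame, then let pandas do the manipulation.
conn = sqlite3.connect('measurements.db')
df = pd.read_sql('SELECT * FROM readings', conn)
conn.close()

print(df.describe())  # quick statistical summary of the extracted data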
Hope that this is what you were looking for!
Cheers

How to get code-completion for COM programming in PyCharm?

When using app = win32com.client.Dispatch('Some.Application'), is there any feasible way to get code completion in PyCharm? It is rather tedious having to retype (or copy-paste) everything from the API documentation, and creating skeletons by hand would be just as tedious. Is there no other way to let PyCharm know about the interface provided via COM, especially if I can provide a .tlb file? Or is there at least some way to automatically generate such a skeleton (or a wrapper module?) from the TypeLib?
Since there is no way for PyCharm to know the runtime type of app, you shouldn't expect to get code completion on app directly; at least not until they decide to add built-in support for generating code from type libraries.
However, you can exploit the fact that win32com implicitly generates code based on the type library as described in the first part of this answer, together with PyCharm's support for type hinting, to get code completion on COM methods.
Make sure that the Python types have been generated; their location is determined by the GUID of the COM object. For example, the types for Microsoft Word 2016 on my machine are available in
C:\Users\username\appdata\local\temp\gen_py\3.6\00020905-0000-0000-c000-000000000046x0x8x7\.
Add this folder to the path of your PyCharm Python interpreter; see e.g. this answer.
Import the modules for which you want code completion.
As an example, this approach can be used to get completion on Word's Find.
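A short sketch of triggering the wrapper generation and locating the gen_py folder (using Word, as in the example above):

import win32com
import win32com.client

# EnsureDispatch forces makepy to generate the static wrapper modules for
# the type library if they do not exist yet.
word = win32com.client.gencache.EnsureDispatch('Word.Application')

# Root of the generated packages; the GUID-named folder mentioned above lives here.
print(win32com.__gen_path__)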
Now, besides feeling dirty, this approach relies on the relevant types having been generated, and the code completion is limited to the methods published by the object, so its usefulness in practice might be somewhat limited; in particular, anybody working on the code will have to generate the code, or the annotations will cause NameErrors. Personally, I would probably prefer using Jupyter for the exploratory part of the implementation process; with the minimal tweaks outlined in the answer mentioned above, Jupyter can be extended to have full code completion with win32com.

What Python accessible tools can you use to generate XSD from an XML document?

I'm looking for a tool that will play nicely with Python.
Except for my Python requirement, my question is the same as this one:
"I am looking for a tool which will take an XML instance document and output a corresponding XSD schema."
According to the PyCharm docs, PyCharm has a facility for this, but it is not exactly accessible to a program as an API. You are probably better off using XML Schema Learner as a separate program, since it is a command-line program (subprocess friendly!).
Are you looking for something like pyxsd (primarily used for validation against a schema)? Or maybe PyXB (which can generate classes based on XML)? Otherwise, I don't think there's a tool [yet] that will generate the schema from within Python. Can you do it on demand using something like xsd.exe? Does it have to be programmatic/repeatable?
Currently, there is no module that will run within your Python program and do this conversion. But I see creating an XSD schema from XML as a tooling problem: it's the kind of functionality I'll use once, to get a schema started, but after that I'll be maintaining the schema myself. From reading a single XML file, the XSD generator can only create a starting point for a real schema; it cannot infer all the functionality and options offered by XSD.
Basically, I don't see the need to have this conversion run as a module inside my code, generating new XSDs every time the XML changes. After all, it's the schema that defines the XML, not the other way around.
As end-user pointed out, you could use xsd.exe, but you might also want to look at other tools such as trang (a bit old) for Java and Stylus Studio (an XML tool).
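Since trang is also a plain command-line tool, it can be driven from Python via subprocess; a sketch, assuming the jar location:

import subprocess

# Infer an XSD schema from an instance document; trang chooses the input and
# output formats based on the file extensions.
subprocess.check_call([
    'java', '-jar', '/path/to/trang.jar', 'document.xml', 'document.xsd',
])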

Visualize the structure of a Python module

Is there a tool out there which can be used to graphically represent the structure of a Python module?
I'm thinking of a graph of sub-modules and classes connected by arrows representing imports.
I think you want the Python library snakefood
sfood:
Given a set of input files or root directories, generate a list of dependencies between the files;
sfood-graph:
Read a list of dependencies and produce a Graphviz dot file. (This file can be run through the Graphviz dot tool to produce a viewable/printable PDF file);
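The usual pipeline chains the two tools with Graphviz; a sketch of driving it from Python (the project path is a placeholder):

import subprocess

# sfood lists dependencies, sfood-graph turns them into a Graphviz dot file,
# and Graphviz's dot renders the result as a PDF.
deps = subprocess.run(['sfood', 'myproject/'], check=True,
                      stdout=subprocess.PIPE).stdout
dot = subprocess.run(['sfood-graph'], input=deps, check=True,
                     stdout=subprocess.PIPE).stdout
subprocess.run(['dot', '-Tpdf', '-o', 'deps.pdf'], input=dot, check=True)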
Not sure if there's a tool, but you could always parse the package you're working with, walking the directory subtree and recording the classes for each module, as well as the imports. (If you're working with a single module, it's just checking the classes and imports; no path-walking required.) Then you'd have to iteratively or recursively (Python recommends iteration) check each imported module for anything it imports, until there are no more imports to follow (be careful about circular imports!). A rough sketch follows below.
You'll probably find pydot or something similar quite useful for graphically displaying the structure.
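Here is that rough sketch, using only the standard library (a single directory tree is assumed):

import ast
import os

def scan(root):
    # Walk a package tree and record, per module, its classes and its imports.
    result = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith('.py'):
                continue
            path = os.path.join(dirpath, name)
            with open(path) as f:
                tree = ast.parse(f.read(), filename=path)
            classes = [n.name for n in ast.walk(tree)
                       if isinstance(n, ast.ClassDef)]
            imports = [a.name for n in ast.walk(tree)
                       if isinstance(n, ast.Import) for a in n.names]
            imports += [n.module for n in ast.walk(tree)
                        if isinstance(n, ast.ImportFrom) and n.module]
            result[path] = {'classes': classes, 'imports': imports}
    return result

# Feed the result into pydot (or just print it) to draw the graph.
for module, info in scan('myproject').items():
    print(module, info)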
An IDE for Python called ERIC used to have this very functionality.
I know this question is very old, but if someone is searching for a tool to visualise the structure of a Python module, you can try Sourcetrail.
This tool helps you visualise large codebases such as PyTorch or TensorFlow.
It starts with a nice overview of the project.
If you open the Classes view and select any class, it will show you the connections between that specific class and other classes and functions.
All the buttons are clickable and give you a nice overview. This gives you a kickstart for understanding the structure of unknown source code very easily.

Is it reasonable to save data as python modules?

This is what I've done for a project. I have a few data structures that are basically dictionaries with some methods that operate on the data. When I save them to disk, I write them out to .py files as code that, when imported as a module, will load the same data into such a data structure.
Is this reasonable? Are there any big disadvantages? The advantage I see is that when I want to operate on the saved data, I can quickly import the modules I need. Also, the modules can be used separately from the rest of the application, because you don't need a separate parser or loader.
By operating this way, you may gain some modicum of convenience, but you pay many kinds of price for that. The space it takes to save your data, and the time it takes to both save and reload it, go up substantially; and your security exposure is unbounded -- you must ferociously guard the paths from which you reload modules, as they would provide an easy avenue for any attacker to inject code of their choice to be executed under your userid (pickle itself is not rock-solid, security-wise, but, compared to this arrangement, it shines;-).
All in all, I prefer a simpler and more traditional arrangement: executable code lives in one module (on a typical code-loading path that does not need to be R/W once the module's compiled) -- it gets loaded just once and from an already-compiled form. Data live in their own files (or portions of a DB, etc.) in any of the many suitable formats, mostly standard ones (possibly including multi-language ones such as JSON, CSV, XML, &c, if I want to keep open the option of easily loading those data from other languages in the future).
It's reasonable, and I do it all the time. Obviously it's not a format you use to exchange data, so it's not a good format for anything like a save file.
But for example, when I do migrations of websites to Plone, I often get data about the site (such as a list of which pages should be migrated, a list of how old URLs should be mapped to new ones, or lists of tags). These you typically get in Word or Excel format. The data also often needs a bit of massaging, and I end up with what are, for all intents and purposes, dictionaries mapping one URL to some other information.
Sure, I could save that as CSV and parse it into a dictionary. But instead I typically save it as a Python file containing a dictionary. Saves code.
So, yes, it's reasonable; no, it's not a format you should use for any sort of save file. It is, however, often used for data that straddles the border with configuration, as above.
The biggest drawback is that it's a potential security problem, since it's hard to guarantee that the files won't contain arbitrary code, which could be really bad. So don't use this approach if anyone other than you has write access to the files.
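For concreteness, a minimal sketch of the pattern being discussed (file and variable names invented):

# Writing: dump a dictionary as importable Python source.
url_map = {'/old/about.html': '/about', '/old/news.html': '/news'}
with open('url_map.py', 'w') as f:
    f.write('url_map = %r\n' % url_map)

# Reading: a plain import, no parser needed, which is exactly why this is
# only safe when nobody else can write to the file.
import url_map as data
print(data.url_map['/old/about.html'])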
A reasonable alternative might be to use the pickle module, which is specifically designed to save and restore Python structures to disk.
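A minimal sketch of that alternative:

import pickle

data = {'pages': ['index.html', 'about.html'], 'tags': {'news': 3}}

# Binary mode matters on Windows; the resulting file is not human-readable.
with open('site_data.pkl', 'wb') as f:
    pickle.dump(data, f)

with open('site_data.pkl', 'rb') as f:
    restored = pickle.load(f)
assert restored == data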
Alex Martelli's answer is absolutely insightful and I agree with him. However, I'll go one step further and make a specific recommendation: use JSON.
JSON is simple, and Python's data structures map well into it; and there are several standard libraries and tools for working with JSON. The json module has been in the standard library since Python 2.6 and is based on simplejson, so I would use simplejson on older Pythons and the built-in json otherwise.
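A short sketch of the JSON route (the file name is invented):

import json

data = {'pages': ['index.html', 'about.html'], 'tags': {'news': 3}}

# Human-readable, editable in any text editor, and loadable from other languages.
with open('site_data.json', 'w') as f:
    json.dump(data, f, indent=2)

with open('site_data.json') as f:
    restored = json.load(f)
assert restored == data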
Second choice is XML. XML is more complicated and harder to just look at (or edit with a text editor), but there is a vast wealth of tools to validate it, filter it, edit it, etc.
Also, if your data storage and retrieval needs become at all nontrivial, consider using an actual database. SQLite is terrific: it's small, and for small databases runs very fast, but it is a real actual SQL database. I would definitely use a Python ORM instead of learning SQL to interact with the database; my favorite ORM for SQLite would be Autumn (small and simple), or the ORM from Django (you don't even need to learn how to create tables in SQL!) Then if you ever outgrow SQLite, you can move up to a real database such as PostgreSQL. If you find yourself writing lots of loops that search through your saved data, and especially if you need to enforce dependencies (such as if foo is deleted, bar must be deleted too) consider going to a database.
