I am pretty new to post-processing using Python scripts. Until now I have used ParaView to look at my results (images at different time steps), but as my mesh resolution increases, the image for the next time step takes forever to load. I would therefore like to create a Python script that saves the results at every timestep as images (PNG or JPEG) and, ideally, also merges the images into a video file.
I have a folder SavingsforParaview which contains a single .pvd file and 217 .vtu files, one per time step. In ParaView, we load the .pvd file and then visualize everything. Now I would like to build a script that does the same. I don't want to use the built-in Python console in ParaView, but rather create a separate file that I can run from a terminal with Python.
The files can be found here.
https://filesender.renater.fr/?s=download&token=6aad92fb-dde3-41e0-966d-92284aa5884e
You can use the Python Trace, found in the Tools menu.
Usage:
start trace
use ParaView as usual (load files, set up filters and views, take screenshots, ...)
stop trace
It generates the Python version of your actions and displays it. You can then save it as a Python file and modify it by hand.
For instance, you can do the visualization for the first two timesteps and then edit the trace file to add a loop covering every timestep.
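If you prefer to write the script from scratch instead of tracing it, a standalone script for ParaView's bundled pvpython interpreter could look roughly like the sketch below. It is untested against your data, and the file names (results.pvd, the frames/ directory) are placeholders; the saved frames can then be merged into a video with an external tool such as ffmpeg.

```python
# Sketch of a batch-rendering script; run it with ParaView's pvpython,
# e.g.  pvpython render_frames.py
import os

def frame_name(out_dir, index):
    """Zero-padded frame name so ffmpeg picks the files up in order."""
    return os.path.join(out_dir, "frame_%04d.png" % index)

def render_all_timesteps(pvd_path, out_dir, size=(1920, 1080)):
    # Import inside the function: paraview.simple is only available
    # when the script runs under pvpython, not a plain Python install.
    from paraview.simple import (PVDReader, Show, Render,
                                 GetActiveViewOrCreate, SaveScreenshot)

    reader = PVDReader(FileName=pvd_path)
    view = GetActiveViewOrCreate("RenderView")
    Show(reader, view)
    view.ViewSize = list(size)

    os.makedirs(out_dir, exist_ok=True)
    for i, t in enumerate(reader.TimestepValues):
        view.ViewTime = t            # jump to this timestep
        Render(view)
        SaveScreenshot(frame_name(out_dir, i), view)

# render_all_timesteps("SavingsforParaview/results.pvd", "frames")
```

Afterwards, something like `ffmpeg -framerate 10 -i frames/frame_%04d.png movie.mp4` merges the frames into a video.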
How can I save print output as well as plots in Python to a single file, in whatever output format (.txt, .html, .pdf, etc.), in an automated fashion? Since I will be doing this for thousands of outputs and plots, is there a Python command I can use?
I know we can save them separately using Python commands, but is there a way to save them together, in the same order they are output, for example the way they appear together in a Jupyter notebook? The format of the file in which they are saved does not matter, as long as both are saved together (ideally the file format should not be very memory-intensive, but that is secondary).
This is so that I can open the file later from a folder and always have access to the saved output. If there is a lot of output, Jupyter notebook unfortunately crashes, corrupting the file and making the code irrecoverable.
Jupyter notebook has the option File → Download as.
You can save the notebook as HTML, which keeps the code cells and their output fragments together.
So I found out that this is in fact a much bigger question and problem than I had originally thought, and the answer is much bigger and less conventional than I had expected.
The open-source platform MLflow (https://mlflow.org/) for the machine-learning lifecycle does this, and it does a better job than just keeping the plots and text output: it stores the runs themselves and saves the plots and outputs as artifacts. Furthermore, many of my outputs were performance metrics and hyperparameters, and MLflow provides a simple way to store these for the different runs.
This is in fact what I was trying to do and kind of solves the underlying problem I was having of storing the output and the plots in one central location where they could be later accessed.
Thanks for all the help everyone. I appreciate it.
In Jupyter notebook you can save in multiple formats, such as PDF or .ipynb. Saving your edits is simple: there is a disk icon in the upper left of the Jupyter toolbar. Click the save icon and your notebook edits are saved. It's important to realize that this only saves the edits you've made to the text sections and the code cells.
On the course website in Chromium, right-click on the .ipynb file you want to download and select Save link as...
In the Save File dialog that appears, make sure to save it as a .ipynb file.
If you want to save it as a PDF, try opening the Jupyter notebook in Chrome, right-clicking, choosing Print, and saving as PDF.
I currently have a Python script that uses the FreeCAD API to read a STEP file and send some data (volume, dimensions, etc.) to an Excel file with xlwings. I would like to also add a view of the STEP file. Is there a way, using Python, to get an image from a STEP file?
I've seen in the freeCAD docs that you can use something like this:
FreeCADGui.ActiveDocument.ActiveView.saveImage(filePath)
But I am not running the GUI, so this option currently does not work for me.
I am using VBA macros to automate several data-processing steps in Excel, such as data reduction and visualization. But since Excel has no fit appropriate for my purposes, I use a Python script that relies on SciPy's least-squares cubic B-spline function. Input and output are exchanged via .txt files, since I adapted the script from a manual one I got from a friend.
VBA calls Python:
Call common.callLSQCBSpline(targetrng, ThisWorkbook) ' calls Python, which works
Call common.callLoadFitTxt(targetrng, ThisWorkbook)  ' loads Python's output
Now the funny business:
This works in debug mode but not when running at full speed. The fix is simply to wait until the directory the .txt is written to has refreshed, so that the current rather than the previous output file is loaded. My current solution looks like this:
Call common.callLSQCBSpline(targetrng, ThisWorkbook)
Application.Wait (Now + 0.00005)
Call common.callLoadFitTxt(targetrng, ThisWorkbook)
This is slow and annoying, but it works. Is there a way to speed this up? The Python script works fine and writes the output.txt file properly; VBA just needs a second or two before it can load it. The .txt files are very small, under 1 kB.
Thanks in advance!
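One way to avoid the fixed Application.Wait is to make the Python side publish its result atomically: write to a temporary file in the same directory and then rename it into place, so the output file either does not exist yet or is already complete when VBA looks for it. A minimal sketch, with the file name and contents as placeholders:

```python
import os
import tempfile

def write_atomically(path, text):
    """Write to a temp file in the target directory, then rename it into
    place. os.replace is atomic on the same filesystem, so a reader
    polling for `path` never sees a partially written file."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory, suffix=".txt")
    with os.fdopen(fd, "w") as f:
        f.write(text)
    os.replace(tmp, path)  # atomic rename into the final name

write_atomically("output.txt", "fitted spline values")
```

On the VBA side you can then poll with Dir() in a short loop until the file appears, instead of waiting a fixed interval.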
I am currently running evaluations with multiple parameter configurations in a medium sized project.
I set certain parameters and change some code parts and run the main file with python.
Since the execution takes several hours, after starting it I make changes to some files (comment out some lines, change parameters) and start it again in a new tmux session.
While doing this, I observed behaviour where the first execution used configuration options from the second execution, so it seems Python was not done parsing the code files, or perhaps loads them lazily.
I therefore wonder how Python loads modules / code files, and whether changing them after starting the execution can affect a run that is already in progress.
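Python reads and compiles a module's source once, at the moment it is first imported; the running process then keeps the resulting module object in sys.modules and never looks at the file again unless it is explicitly reloaded. So edits can only leak into a running execution if the edited file is imported lazily after your change (or re-read via importlib.reload). A small self-contained demonstration, using a throwaway module name (config_demo) chosen for illustration:

```python
import importlib
import pathlib
import sys
import tempfile

sys.dont_write_bytecode = True  # keep the demo free of stale .pyc caches

# Throwaway module standing in for one of the project's config files.
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "config_demo.py").write_text("VALUE = 1\n")
sys.path.insert(0, str(tmp))

import config_demo
print(config_demo.VALUE)        # 1 — source was read once, at import time

# Edit the file on disk, as you would between two tmux runs.
(tmp / "config_demo.py").write_text("VALUE = 42\n")

print(config_demo.VALUE)        # still 1: the running process keeps the
                                # module object it already built
importlib.reload(config_demo)   # a reload (or a NEW process) re-reads it
print(config_demo.VALUE)        # 42
```

This matches the behaviour you observed if the long-running job only imports some configuration modules partway through the run, after your edits landed.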
I'm using the PsychoPy 1.82.01 Coder and its iohub functionality (on Ubuntu 14.04 LTS). It is working, but I was wondering if there is a way to dynamically rename the hdf5 file it produces during an experiment (so that, in the end, I know which participant it belongs to, and two participants get two files without one overwriting the other).
It seems to me that the filename is determined in this file: https://github.com/psychopy/psychopy/blob/df68d434973817f92e5df78786da313b35322ae8/psychopy/iohub/default_config.yaml
But is there a way to change this dynamically?
If you want to create a different hdf5 file for each experiment run, then the options depend on how you are starting the ioHub process. Assuming you are using the psychopy.iohub.launchHubServer() function to start ioHub, then you can pass the 'experiment_code' kwarg to the function and that will be used as the hdf5 file name.
For example, if you created a script with the following code and ran it:
import psychopy.iohub as iohub
io = iohub.launchHubServer(experiment_code="exp_sess_1")
# your experiment code here ....
# ...
io.quit()
An ioHub hdf5 file called 'exp_sess_1.hdf5' will be created in the same folder as the script file.
As a side note, you do not have to save each experiment session's data in a separate hdf5 file. The ioHub hdf5 file structure is designed to save data from multiple participants / sessions in a single file. Each time the experiment is run, a unique session code is required, and the data from each run is saved in the hdf5 file with a session id associated with that session code.
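Building on the answer above, the experiment_code passed to launchHubServer can be constructed at runtime from the participant's details, so each run gets its own file name. A minimal sketch; the naming scheme (p%02d plus a timestamp) is just an illustration:

```python
import datetime

def make_experiment_code(participant_id):
    """Build a unique code such as 'p07_20240131_1412' so each run
    writes its own hdf5 file (the scheme itself is a placeholder)."""
    stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M")
    return "p%02d_%s" % (participant_id, stamp)

code = make_experiment_code(7)
# io = psychopy.iohub.launchHubServer(experiment_code=code)
# ... would create '<code>.hdf5' next to the script, per the answer above
print(code)
```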