Good afternoon,
I am trying to use the filter "Plot Over Line" of Paraview in a Python script. Basically, I want to:
Open the file ".vtu";
Use the filter PlotOverLine for the velocity;
Save the data in a ".csv" file.
On the internet, I found a possible way to do this, but it gives an error when run with pvpython (even when prefixing the commands with "simple"):
from paraview import simple
import csv
flow = GetActiveSource()
plotOverLine1 = PlotOverLine(Input=flow, Source='High Resolution Line Source')
passArrays1 = PassArrays(Input=plotOverLine1)
passArrays1.PointDataArrays = ['U']
plotOverLine1.Source.Point1 = [0, 0, 0]
plotOverLine1.Source.Point2 = [0, 0.4, 0]
writer = CreateWriter('data.csv')
writer.UpdatePipeline()
First, you should report your errors here.
As you suggest, your script cannot work as is: you should change the import to from paraview.simple import *.
Also, your writer does not have an explicit input. I recommend using CreateWriter(filename='path', input=myInput) or, for a one-shot write, SaveData(filename='path', input=myInput).
Finally, one way to produce such scripts is to use the Tools / Start Trace menu option (with the default config). Then perform your actions in the interface. Finally, Tools / Stop Trace gives you the Python script corresponding to your actions.
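For reference, here is a minimal pvpython sketch along those lines. The .vtu file name is just a placeholder, the array name 'U' and the line endpoints are taken from your snippet, and depending on your ParaView version the line endpoints may live directly on the filter (Point1/Point2) rather than on its Source:
from paraview.simple import *

# open the .vtu file (placeholder path, replace with your own)
flow = OpenDataFile('flow.vtu')

# probe the velocity along the line from (0, 0, 0) to (0, 0.4, 0)
plotOverLine1 = PlotOverLine(Input=flow, Source='High Resolution Line Source')
plotOverLine1.Source.Point1 = [0.0, 0.0, 0.0]
plotOverLine1.Source.Point2 = [0.0, 0.4, 0.0]

# keep only the velocity array 'U'
passArrays1 = PassArrays(Input=plotOverLine1)
passArrays1.PointDataArrays = ['U']

# one-shot write to CSV; passing the source positionally sidesteps the
# input=/proxy= keyword difference between ParaView versions
SaveData('data.csv', passArrays1)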
I’m using rhino.compute to calculate a mesh.
How could I convert the 3dm decoded mesh to an STL file?
Currently, I can only save it as a 3dm:
import compute_rhino3d.Grasshopper as gh
import rhino3dm
import json
output = gh.EvaluateDefinition(definition_path, trees)
mesh = output['values'][0]['InnerTree']['{0}'][0]["data"]
mesh = rhino3dm.CommonObject.Decode(json.loads(mesh))
doc = rhino3dm.File3dm()
doc.Objects.AddMesh(mesh)
doc.Write("model.3dm", version=0)
Thank you very much!
You can use Rhino.RhinoDoc.WriteFile to write to all file types that rhino conventionally supports for exporting.
import Rhino

def ExportStl():
    # Path to save the file.
    filepath = r"C:\Temp\TestExport.stl"
    # Create write options to specify file info.
    write_options = Rhino.FileIO.FileWriteOptions()
    # Export all geometry, not just selected geometry.
    write_options.WriteSelectedObjectsOnly = False
    # Write the file.
    result = Rhino.RhinoDoc.ActiveDoc.WriteFile(filepath, write_options)

ExportStl()
In this case, I'm using the Rhino 'RunPythonScript' command with the open ActiveDoc, but in your example you could use doc.WriteFile(filepath, write_options) instead.
When you first run this, there is a .stl export dialogue that has export options. This window can be suppressed to the command line with write_options.SuppressDialogBoxes = True.
Or you can check the 'Always use these settings. Do not show this dialogue again.' option and it will not interrupt export in the future.
Your example suggests you may be working in a headless environment so I'm not sure how these dialogues would be handled in that scenario.
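Putting those pieces together, here is a sketch of the export with the dialogue suppressed. This assumes the script runs inside Rhino, where RhinoCommon and an ActiveDoc are available; it is not something rhino3dm alone can do:
import Rhino

def export_stl(filepath):
    # Write options: export everything and skip the STL options dialogue.
    write_options = Rhino.FileIO.FileWriteOptions()
    write_options.WriteSelectedObjectsOnly = False
    write_options.SuppressDialogBoxes = True
    # The file extension selects the exporter, .stl in this case.
    return Rhino.RhinoDoc.ActiveDoc.WriteFile(filepath, write_options)

export_stl(r"C:\Temp\TestExport.stl")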
After building and installing the Python engine shipped with Matlab 2019b in Anaconda
(TestEnvironment) PS C:\Program Files\MATLAB\R2019b\extern\engines\python> C:\Users\USER\Anaconda3\envs\TestEnvironment\python.exe .\setup.py build -b C:\Users\USER\MATLAB\build_temp install
for Python 3.7 I wrote a simple script to test a couple of features I'm interested in:
import matlab.engine as ml_e
# Start Matlab engine
eng = ml_e.start_matlab()
# Load MAT file into engine. The result is a dictionary
mat_file = "samples/lena.mat"
lenaMat = eng.load(mat_file)
print("Variables found in \"" + mat_file + "\"")
for key in lenaMat.keys():
    print(key)
# print(lenaMat["lena512"])
# Use the variable from the MAT file to display it as an image
eng.imshow(lenaMat["lena512"], [])
I have a problem with imshow() (or any similar function that displays a figure in the Matlab GUI on the screen) namely that it shows quickly and then disappears, which - I guess - at least confirms that it is possible to use it. The only possibility to keep it on the screen is to add an infinite loop at the end:
while True:
    continue
For obvious reasons this is not a good solution. I am not looking for a conversion of Matlab data to NumPy or similar and displaying it using matplotlib or similar third party libraries (I am aware that SciPy can load MAT files for example). The reason is simple - I would like to use Matlab (including loading whole environments) and for debugging purposes I'd like to be able to show this and that result without having to go through loops and hoops of converting the data manually.
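If the only goal is to keep the figure on screen while you inspect it, one lighter-weight option than the busy loop is to block the Python script on console input and shut the engine down afterwards. A minimal sketch reusing the calls from the question:
import matlab.engine as ml_e

eng = ml_e.start_matlab()
lenaMat = eng.load("samples/lena.mat")
eng.imshow(lenaMat["lena512"], [])

# Block here instead of spinning in "while True"; the figure stays open
# because the MATLAB process started by the engine is still alive.
input("Press Enter to close the figure and quit MATLAB...")
eng.quit()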
EDIT: Figured THIS part out, but see 2nd post below for another question.
(a little backstory here, skip ahead for the TLDR :) )
I'm currently trying to write a few scripts for Blender to help improve the level creation workflow for a game that I play (Natural Selection 2).
Currently, to move geometry from the level editor to Blender, I have to 1) save a file from the editor as an .obj, 2) import the obj into Blender and make my changes, 3) export to the game's level format using an exporter script I wrote, 4) re-open the file in a new instance of the editor, 5) copy the level data from the new instance, and 6) paste it into the main level file. This is quite a pain to do, and quite clearly discourages using the tool at all except for major edits.
My idea for an improved workflow: 1) copy data to the clipboard in the editor, 2) run an importer script in Blender to load the data, 3) run an exporter script in Blender to save the data, 4) paste back into the original file. This not only cuts out two whole steps of the tedious process, but also eliminates the extra files cluttering up my desktop.
Currently though, I haven't found a way to read clipboard data from the Windows clipboard into Blender... at least not without having to go through some really elaborate installation steps (e.g. install Python 3.1, install pywin32, move x, y, z to the Blender directory, uninstall Python 3.1... etc...)
TLDR
I need help finding a way to write/read BINARY data to/from the clipboard in Blender. I'm not concerned about cross-platform capability -- the game tools are Windows only.
Ideally -- though obviously beggars can't be choosers here -- the solution would not make it too difficult to install the script for the layman. I'm (hopefully) not the only person who is going to be using this, so I'd like to keep the installation instructions as simple as possible. If there's a solution available in the python standard library, that'd be awesome!
Things I've looked at already/am looking at now
Pyperclip -- plaintext ONLY. I need to be able to read BINARY data off the clipboard.
pywin32 -- Kept getting missing DLL file errors, so I'm sure I'm doing something wrong. Need to take another stab at this, but the steps I had to take were pretty involved (see last sentence above TLDR section :) )
TKinter -- didn't read too far into this one as it seemed to only read plain-text.
ctypes -- actually just discovered this in the process of writing this post. Looks scary as hell, but I'll give it a shot.
Okay I finally got this working. Here's the code for those interested:
from ctypes import *
from binascii import hexlify

kernel32 = windll.kernel32
user32 = windll.user32

user32.OpenClipboard(0)
CF_SPARK = user32.RegisterClipboardFormatW("application/spark editor")
if user32.IsClipboardFormatAvailable(CF_SPARK):
    data = user32.GetClipboardData(CF_SPARK)
    size = kernel32.GlobalSize(data)
    data_locked = kernel32.GlobalLock(data)
    text = string_at(data_locked, size)
    kernel32.GlobalUnlock(data)
else:
    print('No spark data in clipboard!')
user32.CloseClipboard()
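That only covers the read direction; for the write half of the TLDR, the same Win32 calls work in reverse. A rough, untested sketch (the payload bytes are just a placeholder):
from ctypes import windll, memmove, c_void_p, c_size_t, c_uint

kernel32 = windll.kernel32
user32 = windll.user32

# Declare pointer-sized types so handles are not truncated on 64-bit Python
kernel32.GlobalAlloc.restype = c_void_p
kernel32.GlobalAlloc.argtypes = [c_uint, c_size_t]
kernel32.GlobalLock.restype = c_void_p
kernel32.GlobalLock.argtypes = [c_void_p]
kernel32.GlobalUnlock.argtypes = [c_void_p]
user32.SetClipboardData.argtypes = [c_uint, c_void_p]

GMEM_MOVEABLE = 0x0002

def set_clipboard_bytes(fmt, payload):
    # Copy the raw bytes into a movable global memory block
    handle = kernel32.GlobalAlloc(GMEM_MOVEABLE, len(payload))
    locked = kernel32.GlobalLock(handle)
    memmove(locked, payload, len(payload))
    kernel32.GlobalUnlock(handle)
    # Hand the block to the clipboard; the system owns it after SetClipboardData
    user32.OpenClipboard(0)
    user32.EmptyClipboard()
    user32.SetClipboardData(fmt, handle)
    user32.CloseClipboard()

CF_SPARK = user32.RegisterClipboardFormatW("application/spark editor")
set_clipboard_bytes(CF_SPARK, b"\x01\x00\x00\x00")  # placeholder payload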
Welp... this is a new record for me (posting a question and almost immediately finding an answer).
For those interested, I found this: How do I read text from the (windows) clipboard from python?
It's exactly what I'm after... sort of. I used that code as a jumping-off point.
Instead of CF_TEXT = 1
I used CF_SPARK = user32.RegisterClipboardFormatW("application/spark editor")
Here's where I got that function name from: http://msdn.microsoft.com/en-us/library/windows/desktop/ms649049(v=vs.85).aspx
The 'W' is there because for whatever reason, Blender doesn't see the plain-old "RegisterClipboardFormat" function, you have to use "...FormatW" or "...FormatA". Not sure why that is. If somebody knows, I'd love to hear about it! :)
Anyways, haven't gotten it actually working yet: still need to find a way to break this "data" object up into bytes so I can actually work with it, but that shouldn't be too hard.
Scratch that, it's giving me quite a bit of difficulty.
Here's my code
from ctypes import *
from binascii import hexlify

kernel32 = windll.kernel32
user32 = windll.user32

user32.OpenClipboard(0)
CF_SPARK = user32.RegisterClipboardFormatW("application/spark editor")
if user32.IsClipboardFormatAvailable(CF_SPARK):
    data = user32.GetClipboardData(CF_SPARK)
    data_locked = kernel32.GlobalLock(data)
    print(data_locked)
    text = c_char_p(data_locked)
    print(text)
    print(hexlify(text))
    kernel32.GlobalUnlock(data_locked)
else:
    print('No spark data in clipboard!')
user32.CloseClipboard()
There aren't any errors, but the output is wrong. The line print(hexlify(text)) yields b'e0cb0c1100000000', when I should be getting something that's 946 bytes long, the first 4 of which should be 01 00 00 00. (Here's the clipboard data, saved out from InsideClipboard as a .bin file: https://www.dropbox.com/s/bf8yhi1h5z5xvzv/testLevel.bin?dl=1 )
I've seen some posts and answers about how to get the terminal size in numbers of columns and rows. Can I get the terminal size, or equivalently, the size of the font used in the terminal, in pixels?
(I wrote "equivalently" because terminal width [px] = font width [px] × number of columns, or at least that is what I mean by terminal width.)
I'm looking for a way that works with python 2 on linux, but I do appreciate answers that works only with python 3. Thanks!
Maybe. If your terminal software supports XTerm Control Sequences, then the sequence \e[14t will give you the text area size, width and height, in pixels (see the sketch below).
Related:
xtermctl - Put standard xterm/dtterm window control codes in shell parameters for easy use. Note that some terminals do not support all combinations.
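A rough sketch of issuing that query from Python and parsing the reply. It assumes stdin is a tty and the emulator answers CSI 14 t with ESC [ 4 ; height ; width t; if the terminal never answers, the read below blocks:
import os, re, sys, termios, tty

def text_area_size_px():
    fd = sys.stdin.fileno()
    old = termios.tcgetattr(fd)
    try:
        tty.setcbreak(fd)              # unbuffered input so the reply is not line-buffered
        sys.stdout.write('\x1b[14t')   # XTerm "report text area size in pixels"
        sys.stdout.flush()
        reply = ''
        while not reply.endswith('t'): # reply looks like ESC [ 4 ; height ; width t
            reply += os.read(fd, 1).decode()
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old)
    m = re.search(r'\[4;(\d+);(\d+)t', reply)
    return (int(m.group(2)), int(m.group(1))) if m else None  # (width, height)

print(text_area_size_px())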
The data structure that stores terminal info in linux is terminfo. This is the structure that any general terminal query would be reading from. It does not contain pixel information, since that is not relevant for the text-only terminals it was designed to specify.
If you're running the code in an X compatible terminal, it is probably possible with control codes, but that would very likely not be portable.
So, you already know how to get terminal size (here) in characters.
I'm afraid it is not possible.
A TTY is a text terminal and has no control over where it is rendered.
So if your console program is executed in a terminal, you can't know where it is being displayed.
However, you can use graphical mode to take control of fonts, display, etc.
But why a terminal? You can use a GUI for that.
Another possible approach, with limited support, is checking the ws_xpixel and ws_ypixel fields of struct winsize, as returned by the TIOCGWINSZ ioctl.
A python snippet to query these values:
import array, fcntl, termios
# struct winsize holds four unsigned shorts: ws_row, ws_col, ws_xpixel, ws_ypixel
buf = array.array('H', [0, 0, 0, 0])
fcntl.ioctl(1, termios.TIOCGWINSZ, buf)
print(buf[2], buf[3])  # ws_xpixel, ws_ypixel
This only works in certain terminal emulators, others always report 0 0. See e.g. the VTE feature request to set these fields for a support matrix.
tput cols tells you the number of columns.
tput lines tells you the number of rows.
so
from subprocess import check_output
cols = int(check_output(['tput', 'cols']))
lines = int(check_output(['tput', 'lines']))
I just started to use ArcPy to analyse geo-data with ArcGIS. The analysis has different steps, which are to be executed one after the other.
Here is some pseudo-code:
import arcpy
# create a masking variable
mask1 = "mask.shp"
# create a list of raster files
files_to_process = ["raster1.tif", "raster2.tif", "raster3.tif"]
# step 1 (e.g. clipping of each raster to study extent)
for index, item in enumerate(files_to_process):
    raster_i = "temp/ras_tem_" + str(index) + ".tif"
    arcpy.Clip_management(item, '#', raster_i, mask1)
# step 2 (e.g. change projection of raster files)
...
# step 3 (e.g. calculate some statistics for each raster)
...
etc.
This code works amazingly well so far. However, the raster files are big and some steps take quite long to execute (5-60 minutes). Therefore, I would like to execute those steps only if the input raster data changes. From the GIS-workflow point of view, this shouldn't be a problem, because each step saves a physical result on the hard disk which is then used as input by the next step.
I guess if I want to temporarily disable e.g. step 1, I could simply put a # in front of every line of this step. However, in the real analysis, each step might have a lot of lines of code, and I would therefore prefer to outsource the code of each step into a separate file (e.g. "step1.py", "step2.py",...), and then execute each file.
I experimented with execfile(step1.py), but received the error NameError: global name 'files_to_process' is not defined. It seems that the variables defined in the main script are not automatically passed to scripts called by execfile.
I also tried this, but I received the same error as above.
I'm a total Python newbie (as you might have figured out by the misuse of any Python-related expressions), and I would be very thankful for any advice on how to organize such a GIS project.
I think what you want to do is build each step into a function. These functions can be stored in the same script file or in their own module that gets loaded with the import statement (just like arcpy). The pseudo code would be something like this:
# file 1: steps.py

def step1(input_files):
    # step 1 code goes here
    print 'step 1 complete'
    return

def step2(input_files):
    # step 2 code goes here
    print 'step 2 complete'
    return output  # optionally return a derivative here

# ...and so on
Then in a second file in the same directory, you can import and call the functions passing the rasters as your inputs.
#file 2: analyze.py
import steps
files_to_process = ["raster1.tif", "raster2.tif", "raster3.tif"]
steps.step1(files_to_process)
#steps.step2(files_to_process) # uncomment this when you're ready for step 2
Now you can selectively call different steps of your code, and it only requires commenting out or excluding one line instead of a whole chunk of code. Hopefully I understood your question correctly.
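Since the original goal was to re-run a step only when its input raster data changes, one simple option is to compare modification times before calling a step. A sketch using only the standard library; the file names are just the ones from the pseudo-code:
import os

def needs_update(input_path, output_path):
    # Re-run a step if its output is missing or older than its input.
    if not os.path.exists(output_path):
        return True
    return os.path.getmtime(input_path) > os.path.getmtime(output_path)

if needs_update("raster1.tif", "temp/ras_tem_0.tif"):
    steps.step1(files_to_process)
else:
    print 'step 1 is up to date, skipping'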