I'm new to the world of Python. Recently I was asked to build an interface between XFoil (an aerodynamics program) and Python. After researching a little, I found the subprocess module. As the documentation says, it is used to "Spawn new processes, connect to their input/output/error pipes, and obtain their return codes."
The problem is that I need some output files that XFoil creates while it is running. If I close the program, the files are accessible, but if I try to open or read them while the subprocess is still open, I get the following error (although I can see the file in the folder):
OSError: save not found.
Here is the code:
import subprocess
import numpy as np
import os
process = subprocess.Popen(['<xfoil_path>'], stdin=subprocess.PIPE, universal_newlines=True, creationflags = subprocess.CREATE_NEW_PROCESS_GROUP)
airfoil_path = '<path to airfoil>'
process.stdin.write(f'\nload\n{airfoil_path}')
process.stdin.write('\n\n\noper\nalfa\n2\ncpwr\nsave\n')
process.stdin.tell()
print(os.listdir())
c = np.loadtxt('save', skiprows=1)
print(c)
process.stdin.write('\n\n\noper\nalfa\n3\ncpwr\nsave2\n')
process.stdin.tell() is used to get these output files, but they are not accessible.
Does anyone know why this could be happening?
Why do you imagine process.stdin.tell() should "get this output archives"? It retrieves the file pointer's position.
I'm imagining that the actual problem here is that the subprocess doesn't write the files immediately. Maybe just time.sleep(1) before you try to open them, or figure out a way for it to tell you when it's done writing (some OSes let you tell whether another process has a file open for writing, but I have no idea whether this is possible on Windows, let alone reliable).
Sleeping for an arbitrary time is obviously not very robust; you can't predict how long it takes for the subprocess to write out the files. But if that solves your immediate problem, at least you have some idea of what caused it and maybe where to look next.
As an aside, maybe look into the input= keyword parameter for subprocess.run(). If having the subprocess run in parallel is not crucial, that might be more pleasant as well as more robust.
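For example, a minimal sketch of the subprocess.run() approach (a throwaway Python one-liner stands in for XFoil here, since the real binary isn't available, and the input text is a placeholder for the XFoil command script):

```python
import subprocess
import sys

# A throwaway child process stands in for XFoil; in the real case the
# input string would be the sequence of XFoil commands from the question.
result = subprocess.run(
    [sys.executable, "-c", "print(input().upper())"],
    input="hello\n",       # written to the child's stdin, which is then closed
    capture_output=True,
    text=True,
)
# run() only returns after the child has exited, so any files it wrote
# are guaranteed to be complete at this point.
print(result.stdout.strip())
```

Because run() waits for the process to exit before returning, any output files can be read immediately afterwards, with no sleeping or guessing involved.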
(Converted into an answer from a comment thread.)
Related
I am trying to write a little program in Python (version 3.7.3) with which I can get the output stream of another program while it is running. To emulate this condition, I wrote a trivial Python program that prints a string every 10 seconds.
writecycle.py
import time

while(1):
    print("test process")
    time.sleep(10)
In the program I am trying to write, I run this process and try to read its output:
mainproc.py
import time
import subprocess
proc = subprocess.Popen(["python", "writecycle.py"], stdout=subprocess.PIPE, encoding='UTF-8')
print("start reading output")
while(1):
    strout = proc.stdout.read()
    print("_" + strout)
    time.sleep(10)
but I never get past the "start reading output" message. The program gets "stuck" on the proc.stdout.read() call.
I read some solutions that suggest using subprocess.communicate(), but I think that command does not fit my needs, since it waits for the process to terminate before returning the output stream.
Someone else suggested subprocess.poll(), but I still get stuck on the proc.stdout.read() call.
I tried using bufsize=1 or 0 in the Popen call with no results, and readline() instead, but nothing.
I don't know if this helps, but I am using a Raspberry Pi 4 with Raspbian Buster.
I have come to the conclusion that this problem is unsolvable. I have given myself an explanation for it, but I don't know if it is the right one.
The idea came to me when I tried to redirect the output stream into a file and then read the file. The problem with this approach is that you cannot read the file while it is still open, and I cannot close the file while the process is still running. If I understood correctly, Linux (and so Raspbian) is a file-based OS, so reading from an open stdout is like reading from a file opened by another process.
Again, this is the explanation I have given myself, and I do not know if it is correct. Maybe someone with more knowledge of the Linux OS can say whether this explanation makes sense or whether it is wrong.
So far I don't think this is actually possible, but basically what I am trying to do is have one Python program call another and run it, the way you would use import.
But then I need to be able to go from the second file back to the beginning of the first.
Doing this with import doesn't work, because the first program never closes and is still running, so importing it again only returns to where it left off when it ran the second file.
Without understanding a bit more about what you want to do, I would suggest looking into the threading or multiprocessing libraries. These should allow you to create multiple instances of a program or function.
This is vague and I'm not quite sure what you're trying to do, but you can also explore the Subprocess module for Python. It will allow you to spawn new processes similarly to if you were starting them from the command-line, and your processes will also be able to talk to the child processes via stdin and stdout.
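A minimal sketch of that route (the second file is replaced by an inline one-liner so the example is self-contained; in practice you would pass the script's filename instead of "-c"):

```python
import subprocess
import sys

# Spawn a second interpreter as a child process; "-c" plus an inline
# one-liner stands in for running a separate .py file.
child = subprocess.Popen(
    [sys.executable, "-c", "print('child says hi')"],
    stdout=subprocess.PIPE,
    text=True,
)
out, _ = child.communicate()   # parent resumes here once the child exits
print(out.strip())
```

Unlike import, the child runs as a separate process, so the parent can launch it again from scratch as many times as needed.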
If you don't want to import any modules:
exec(open("file.py").read())
Otherwise:
import os
os.system('python file.py')
Or:
import subprocess
subprocess.call(['python', 'file.py'])
I am using Python 2.7.3 on Ubuntu 12.04. I have an external program, say 'xyz', whose input is a single file and whose output is two files, say 'abc.dat' and 'gef.dat'.
When I used os.system, subprocess.check_output, or os.popen, none of them produced the output files in the working directory.
I need these output files for further calculations.
Plus, I have to call the 'xyz' program 'n' times, getting the outputs 'abc.dat' and 'gef.dat' from it each time. Please help.
Thank you
I cannot comment on your question because my reputation is too low.
If you use os.system, subprocess.check_output, or os.popen, you will just get the standard output of your xyz program (if it prints something to the screen). To see the files in a directory, you can use os.listdir(). Then you can use those files later in your script. It may also be worth using subprocess.check_call.
There may be other better and more efficient solutions.
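For instance, a rough sketch of that idea ("touch abc.dat gef.dat" is only a stand-in for the real xyz invocation, which isn't available here):

```python
import os
import subprocess

# "touch" stands in for the xyz program; check_call raises CalledProcessError
# if the command exits with a non-zero status, instead of failing silently.
subprocess.check_call("touch abc.dat gef.dat", shell=True)

# list the output files the program left in the working directory
produced = sorted(f for f in os.listdir('.') if f.endswith('.dat'))
print(produced)
```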
First, run the program you invoke in the Python script directly and see if it generates those two files.
Assuming it does, the problem is in your Python script. Try using subprocess.Popen, then call communicate().
Here's an example:
from subprocess import Popen
p = Popen(["xyz",])
p.communicate()
communicate() waits for the process to terminate. You should be able to get the output files in code executed after p.communicate().
Thank you for answering my question, but the answer to my question is this:
import subprocess
subprocess.call("/path/to/software/xyz abc.dat", shell=True)
which gave me the desired the output.
I tried the subprocess-related commands, but they returned the error "No such file or directory". The shell=True argument worked like a charm.
Thank you all again for taking your time to answer my question.
I'm working on a project where I will be running potentially malicious code. Its basic organization is that there is a master and a slave process. The slave process runs the potentially malicious code and has seccomp enabled:
import prctl
prctl.set_seccomp(True)
This is how seccomp is turned on. I can communicate fine FROM the slave TO the master, but not the other way around. When I don't turn on seccomp, I can use:
import sys
lines = sys.stdin.read()
or something along those lines. I found this quite odd; I should have access to read and write given the default parameters of seccomp, especially for stdin/stdout. I have even tried opening stdin before I turn on seccomp. For example:
stdinFile = sys.stdin
prctl.set_seccomp(True)
lines = stdinFile.read()
But still to no avail. I have also tried readlines(), which doesn't work. A friend suggested I try Unix domain sockets, opening one before seccomp goes on and then just using the write() call. This didn't work either. If anyone has any suggestions on how to combat this problem, please post them! I have seen some C code along the lines of
seccomp_add_rule(stuff)
but I have been unsuccessful at using this from Python with the cffi module.
sys.stdin is not a raw file handle; you need to open one and get a file object before calling set_seccomp. You could use os.fdopen for this. The file descriptor for stdin/stdout is available as sys.stdin.fileno().
There is a program written and compiled in C that normally takes its input from a Unix shell; I, on the other hand, am using Windows.
I need to send input to this program from the output of my own code written in Python.
What is the best way to go about doing this? I've read about pexpect, but I'm not really sure how to implement it; can anyone explain?
I recommend you use the Python subprocess module.
It is the replacement for the os.popen() function call, and it allows you to execute a program while interacting with its standard input/output/error streams through pipes.
example use:
import subprocess
process = subprocess.Popen("test.exe", stdin=subprocess.PIPE, stdout=subprocess.PIPE)
process.stdin.write(b"hello world !")          # the pipes are binary, so write bytes
process.stdin.close()                          # signal end of input to the child
print(process.stdout.read().decode('latin1')) # read after closing stdin to avoid deadlock
process.stdout.close()
status = process.wait()
If you don't need to deal with responding to interactive questions and prompts, don't bother with pexpect; just use Popen.communicate(), as suggested by Adrien Plisson.
However, if you do need pexpect, the best way to get started is to look through the examples on its home page, then start reading the documentation once you've got a handle on what exactly you need to do.
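For the non-interactive case, a minimal communicate() sketch (a Python one-liner stands in for the compiled C program here):

```python
import subprocess
import sys

# The child reverses whatever it reads on stdin; it stands in for the C program.
proc = subprocess.Popen(
    [sys.executable, "-c", "import sys; sys.stdout.write(sys.stdin.read()[::-1])"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
)
# communicate() writes the input, closes stdin, reads stdout to EOF, and
# waits for the process, in an order that cannot deadlock.
out, _ = proc.communicate(b"hello")
print(out)   # b'olleh'
```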