This question already has answers here:
How do I execute a program or call a system command?
(65 answers)
Closed 4 years ago.
I have a Python program that generates a string, say fileName. fileName holds something like "video1.avi" or "sound1.wav". Then I make an OS call to start a program, ! program arg1 arg2, where my fileName is arg2. How can I achieve that on the fly, without having the whole program return a single string (the fileName) and then passing it to the shell line? How can I do that during execution? The script executes in Jupyter.
P.S. I am looping and changing the file name, and I have to run that script on every iteration of the loop.
If you want your script to run some outside program, passing in an argument, the way to do that is the subprocess module.
Exactly which function to call depends on exactly what you want to do. Just start it in the background and ignore the result? Wait for it to finish, ignore any output, but check that it returned success? Collect output so you can log it? I'm going to pick one of the many options arbitrarily, but read the linked docs to see how to do whichever one you actually want.
import subprocess

for thingy in your_loop:
    fileName = your_filename_creating_logic(thingy)
    try:
        subprocess.run(['program', 'arg1', fileName],
                       check=True)
        print(f'program ran on {fileName} successfully')
    except subprocess.CalledProcessError as e:
        print(f'program failed on {fileName} with {e.returncode}')
Notice that I'm passing a list of arguments, with the program name (or full path) as the first one. You can throw in hardcoded strings like arg1 or --breakfast=spam, and variables like fileName. Because it's a list of strings, not one big string, and because it's not going through the shell at all (at least on Mac and Linux; things are a bit more complicated on Windows, but mostly it "just works" anyway), I don't have to worry about quoting fileName in case it has spaces or other funky characters.
If you're using Python 3.4 or 2.7, you won't have that run function; just change it to check_call (and without that check=True argument).
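A minimal sketch of that older-API variant (echo stands in here for your hypothetical program, so the example is runnable as-is):

```python
import subprocess

fileName = 'video1.avi'  # produced by your loop logic
try:
    # check_call raises CalledProcessError on a nonzero exit status,
    # just like run(..., check=True); echo stands in for your program
    subprocess.check_call(['echo', 'arg1', fileName])
    print('program ran on', fileName, 'successfully')
except subprocess.CalledProcessError as e:
    print('program failed on', fileName, 'with', e.returncode)
```

check_call works identically on Python 2.7, 3.4, and every later version, so it is the safe choice if your code has to run on older interpreters.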
So I was messing around with a script that is supposed to restart itself using os.execl.
It works a few times, but after ~30 or 40 calls it crashes:
Traceback (most recent call last):
File "C:\Users\Admin#\PycharmProjects\DiscordBot_Fred_the_Financier\test_suit.py", line 9, in <module>
os.execl(sys.executable, sys.executable, *(*sys.argv, code))
File "C:\Users\Admin#\AppData\Local\Programs\Python\Python37\lib\os.py", line 540, in execl
execv(file, args)
OSError: [Errno 12] Not enough space
So this is the whole code I'm running actually:
import sys
import os
print(sys.argv) # print args
code = "" # placeholder for mutable args
os.execl(sys.executable, sys.executable, *(*sys.argv, code)) # passing new args and replacing process
I have literally no idea why and how this error occurs.
All my drives have >200 GB of free storage, and I have more than 17 GB of free RAM as well.
I'm running this code via the terminal on Win10 64-bit, Python 3.7.
Thank you very much for your help!
P.S. I apologize if there is already an answer to this problem but I could not find one.
Are you opening any huge files in your script? Most likely you are not closing those file handles, and they keep accumulating. Once the script crashes, all the handles are released and you see the 200 GB again.
While running the script, can you keep an eye on disk usage? Do you see it rising continuously (at least after the subsequent calls to exec)?
EDIT: I see now that the question was asked on Mar 28 '20. I do not know why I saw it in the main list. The question can be closed if the OP does not reply or give more info.
I haven't found why this error happens, but I am assuming it has something to do with the parent process not being able to fully close (maybe due to references left or something similar). I actually found a workaround which I am leaving here in case it helps you or someone else with the same problem.
The workaround is to use subprocess.Popen instead of os.execl, immediately followed by os._exit(1). The latter shuts down the parent process right away and thus frees all its resources, so you won't get the 'Not enough space' error no matter how many times you restart the process. To verify this, I ran these two lines in an infinite loop (in retrospect unnecessary, since a second iteration is impossible after the os._exit(1) statement), left it running for a while, and had each replacement process write to a .txt file. After a considerable amount of time I stopped the final process with Ctrl+C. The file held the value 15127, meaning the process had been replaced 15127 times, and no exception was ever raised due to space issues.
Additionally, as with os.execl, you can pass command-line arguments via sys.argv, for example:
import os
import sys
import subprocess

print(sys.argv)  # print args
code = ""        # placeholder for mutable args
subprocess.Popen([sys.executable, *(*sys.argv, code)])  # start the replacement process
os._exit(1)      # terminate the parent immediately, freeing all its resources
I omitted the second sys.executable because I am not sure what you are trying to do and whether it would work with subprocess.Popen, but given the similarity of these two calls, I'd say subprocess.Popen([sys.executable, sys.executable, *(*sys.argv, code)]) would work the same way.
EDIT: Actually, with subprocess.Popen you need to pass all arguments as a single list, as otherwise the second positional argument you pass will be interpreted as bufsize according to the method signature (see https://docs.python.org/3/library/subprocess.html#popen-constructor for further details). Alternatively, you can pass one concatenated string, if possible (for example, subprocess.Popen(sys.executable + " " + code)). Also, I added a missing parenthesis above.
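To illustrate the bufsize pitfall, here is a minimal sketch (a trivial Python child process stands in for the real script):

```python
import subprocess
import sys

# Correct: the program and all of its arguments go in one list.
p = subprocess.Popen([sys.executable, '-c', 'print("ok")'])
p.wait()

# Wrong: Popen's second positional parameter is bufsize, so a second
# string is not passed to the child as an argument; instead Popen
# raises TypeError because bufsize must be an integer.
try:
    subprocess.Popen(sys.executable, '-c', 'print("ok")')
except TypeError as e:
    print('TypeError:', e)
```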
This question already has answers here:
Using greater than operator with subprocess.call
(2 answers)
Closed 7 years ago.
I want to redirect the output of a Python script to a file using the greater-than operator. The code below is not working properly. Can someone please help me with this?
proc = subprocess.Popen(['python', 'countmapper.py', file],
                        cwd="C:\\pythonPrograms\\",
                        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
countReducer = subprocess.Popen(['python', 'countreducer.py'],
                                cwd="C:\\pythonPrograms\\",
                                stdout=subprocess.PIPE, stdin=proc.stdout,
                                stderr=subprocess.STDOUT)
countpostprocesser = subprocess.Popen(['python', 'countpostprocesser.py', '>', 'output.json'],
                                      cwd="C:\\pythonPrograms\\",
                                      stdout=subprocess.PIPE, stdin=countReducer.stdout,
                                      stderr=subprocess.STDOUT)
'file' is the name of the log file that I want to process. The last line (starting with countpostprocesser) is failing.
Your call is failing because the redirection operator is being passed to your script as an argument, not being acted on by a shell to redirect your output to a file. See the Popen documentation.
This answer to another SO question shows a good example of opening a file, then redirecting the subprocess' output to the file.
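A minimal sketch of that approach (a trivial inline child stands in here for countpostprocesser.py):

```python
import subprocess
import sys

# Open the target file yourself and hand it to the subprocess as stdout;
# this does in Python what the shell's `> output.json` would have done.
with open('output.json', 'w') as outfile:
    subprocess.call([sys.executable, '-c', 'print("{}")'], stdout=outfile)
```

In the question's pipeline, you would pass this file object as the stdout of the final Popen instead of subprocess.PIPE, and drop the '>' and 'output.json' entries from the argument list.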
Also, as shx2 mentioned in another answer, passing the shell=True argument to your Popen constructor should accomplish what you're looking for as well. It will cause the process to be opened in its own shell, allowing the shell program to interpret the arguments you pass. Note an important line in the Popen documentation though: "If shell is True, it is recommended to pass args as a string rather than as a sequence."
Use the shell=True flag of Popen.
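For example (sketched with echo in place of the real pipeline, so it is runnable as-is):

```python
import subprocess

# With shell=True the command is a single string, and the shell itself
# performs the redirection, exactly as it would at an interactive prompt.
subprocess.call('echo hello > output.txt', shell=True)
```

For the question's case, the string would be something like 'python countpostprocesser.py > output.json'.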
Also, as I mentioned in the comments, your task can be done simply and elegantly using plumbum.
On Linux, I have a C program that reads a 2048-byte text file as input. I'd like to launch the C program from a Python script and have the Python script hand the C program the text string as an argument, instead of writing the text string to a file for the C program to then read.
How can a Python program launch a C program, handing it a ~2K (text) data structure?
Also note, I cannot use "subprocess.check_output()"; I have to use "os.system()". That's because the latter allows my C program direct access to terminal input/output, while the former does not.
You can pass it as an argument by just… passing it as an argument. Presumably you want to quote it rather than passing it as an arbitrary number of arguments that need to be escaped and so on, but that's easy with shlex.quote. For example:
import os
import shlex

# open in text mode, since shlex.quote expects a str, not bytes
with open('bigfile.txt') as infile:
    biginput = infile.read(2048)
os.system('cprogram {}'.format(shlex.quote(biginput)))
If you get an error about the argument or the command line being too long for the shell… then you can't do it. Python can't make the shell do things it can't do, and you refuse to go around the shell (I think because of a misunderstanding, but let's ignore that for the moment). So, you will need some other way to pass the data.
But that doesn't mean you have to store it in a file. You can use the shell from subprocess just as easily as from os.system, which means you can pass it to your child process's stdin:
import subprocess

with subprocess.Popen('cprogram', shell=True,
                      stdin=subprocess.PIPE) as p:
    p.communicate(biginput.encode())  # send the data on stdin instead of argv
Since you're using shell=True, and not replacing either stdout or stderr, it will get the exact same terminal that it would get with os.system. So, for example, if it's doing, say, isatty(fileno(stdout)), it will be true if your Python script is running in a tty, false otherwise.
As a side note, storing it in a tempfile.NamedTemporaryFile may not cost nearly as much as you expect it to. In particular, the child process will likely be able to read the data you wrote right out of the in-memory disk cache instead of waiting for it to be flushed to disk (and it may never get flushed to disk).
I suspect that the reason you thought you couldn't use subprocess is that you were using check_output when you wanted check_call.
If you use check_output (or if you explicit pass stdout=PIPE to most other subprocess functions), the child process's stdout is the pipe that you're reading from, so it's obviously not a tty.
This makes sense: either you want to capture the output, in which case the C program can't output to the tty, or you want to let the C program output to the tty, in which case you can't capture it.* So, just don't capture the output, and everything will be fine.
If I'm right, this means you have no reason to use the shell in the first place, which makes everything a whole lot easier. Of course your data might still be larger than the maximum system argument size** or resource limits***, even without the shell. On most modern systems, you can count on at least 64KB, so definitely try it first:
subprocess.check_call(['cprogram', biginput])
But if you get an E2BIG error:
with subprocess.Popen(['cprogram'], stdin=subprocess.PIPE) as p:
    p.communicate(biginput.encode())  # fall back to stdin for the big data
* Unless, of course, you want to fake a tty for your child process, in which case you need to look at os.forkpty and related functions, or the pty module.
** On most *BSD and related systems, sysctl kern.argmax and/or getconf ARG_MAX will give you the system limit, or sysconf(_SC_ARG_MAX) from C. There may also be a constant ARG_MAX accessible through <limits.h>. On Linux, things are a bit more complicated, because there are a number of different limits (most of which are very, very high) rather than just one single limit. Check your platform's manpage for execve for the details.
*** On some platforms, including recent Linux, RLIMIT_STACK affects the max arg size that you can pass. Again, see your platform's execve manpage.
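On POSIX systems you can also query that limit from Python itself, without shelling out to getconf:

```python
import os

# sysconf exposes the same value as `getconf ARG_MAX` on the shell
arg_max = os.sysconf('SC_ARG_MAX')
print(arg_max)
```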
I am attempting to wrap a program that is routinely used at work. When called with an insufficient number of arguments, or with a misspelled argument, the program issues a prompt to the user asking for the needed input. As a consequence, when the routine is called with subprocess.Popen, it never sends any information to stdout or stderr when wrong parameters are passed. subprocess.Popen.communicate() and p.stdout.read(1) both wait for a newline character before any information becomes available.
Is there any way to retrieve information from subprocess.Popen.stdout before the newline character is issued? If not, is there any method that can be used to determine whether the subprocess is waiting for input?
First thing to try: use the bufsize argument to Popen, and set it to 0:
subprocess.Popen(args, bufsize=0, ...)
Unfortunately, whether or not this works also depends upon how the subprocess flushes its output, and I presume you don't have much control over that.
On some platforms, when data written to stdout actually gets flushed changes depending on whether the underlying I/O library detects an interactive terminal or a pipe. So while you might think the data is there waiting to be read — because that's how it works in a terminal window — it might actually be line buffered when you're running the same program as a subprocess from within Python.
Added: I just realised that bufsize=0 is the default anyway. Nuts.
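When you do control the child, one workaround is to disable its buffering too. Here is a sketch using a second Python interpreter started with -u as a stand-in subprocess, so the prompt bytes become readable before any newline arrives:

```python
import subprocess
import sys

# A child that prints a prompt with no trailing newline, then waits for input.
child_code = ("import sys; sys.stdout.write('Name? '); "
              "sys.stdout.flush(); sys.stdin.readline()")

p = subprocess.Popen([sys.executable, '-u', '-c', child_code],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE, bufsize=0)
prompt = p.stdout.read(6)      # the 6 prompt bytes arrive without a newline
p.stdin.write(b'Fred\n')       # answer the prompt so the child can exit
p.stdin.close()
p.wait()
print(prompt)
```

This only helps because the child flushes explicitly (or is run unbuffered); with an uncooperative child you are back to the pexpect approach in the accepted answer.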
After asking around quite a bit, someone pointed me to the solution. Use pexpect.spawn and pexpect.expect. For example:
Bash "script" in a file titled prompt.sh to emulate the problem - read cannot be called directly from pexpect.spawn.
#!/bin/bash
read -p "This is a prompt: "
This will hang when called by subprocess.Popen. It can be handled by pexpect.spawn, though:
import pexpect

child = pexpect.spawn('./prompt.sh')
index = child.expect('This is a prompt: ')  # blocks until the prompt appears
print(index)        # 0, the index of the matched pattern
print(child.after)  # the matched text: 'This is a prompt: '
A list, compiled regex, or list of compiled regex can also be used in place of the string in pexpect.expect to deal with differing prompts.
I have one piece of Cocoa code I wrote that takes in an XML file containing bounding boxes that are then drawn on top of a video (each box has an associated frame). The Cocoa program is meant to be run from the command line and takes all its parameters as command-line arguments.
I can run the program just fine with any XML document. However, I run into problems when I try to run the program from within a Python script. For example:
with file("test.xml", "w") as temp:
    temp.write(doc.toprettyxml())
    # cval is my cocoa program to call; the other arguments are given to
    # the Python script and parsed with optparse
    command = ["./cval", "-o", options.output, "-i", str(options.interval),
               "-s", "%dx%d" % (options.width, options.height),
               "-f", str(options.frames), "-x", temp.name]
    subprocess.call(command)
Sometimes this causes cval to fail, other times not (changing one number in the XML document can change its behavior). I can also verify that it breaks when trying to read an XML element that isn't there. Yet when I open test.xml myself, the element does in fact exist.
However, if I then run 'cval' myself (outside of the Python script) with 'test.xml', it works fine. This leads me to believe that there is something strange happening when I do 'subprocess.call', but I'm not sure what it could be. I have other Cocoa/Python mixes that do completely different tasks (i.e. not using XML) that also arbitrarily exhibit weird behavior, but are more complex in nature.
I was hoping someone might have run into this problem as well, or might know the next step in debugging this weirdness.
Because the code originally used temporary files, I couldn't close the file before passing it to the subprocess. What I should have done instead is flush the file before subprocess.call was invoked. The inconsistent behavior likely resulted from the size of the input triggering automatic flushing at different thresholds.
The code should read:
with file("test.xml", "w") as temp:
    temp.write(doc.toprettyxml())
    temp.flush()  # push the buffered XML out to the file before cval reads it
    command = ["./cval", "-o", options.output, "-i", str(options.interval),
               "-s", "%dx%d" % (options.width, options.height),
               "-f", str(options.frames), "-x", temp.name]
    subprocess.call(command)
Perhaps try placing a "print command" statement in there for when the return code of subprocess.call indicates an error. On failure, see whether there's any difference between what's being executed by subprocess and what you might run from the command line. Also, try calling subprocess.call(command, shell=True), so your command is executed as it would be in the shell (with string formatting, etc.).
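A sketch of that debugging aid (run_logged is a hypothetical helper name, and true/false below are stand-ins for the real ./cval invocation):

```python
import shlex
import subprocess

def run_logged(command):
    """Run command; on failure, print the exact argv so it can be
    copy-pasted into a shell and compared with what works there."""
    ret = subprocess.call(command)
    if ret != 0:
        print('failed (%d): %s'
              % (ret, ' '.join(shlex.quote(c) for c in command)))
    return ret

run_logged(['true'])   # succeeds silently
run_logged(['false'])  # prints the reproducible command line
```

Calling run_logged(["./cval", "-x", temp.name, ...]) in the original script would then show exactly what to retry manually whenever cval fails.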