I have been scripting with Windows for many years and have only started to look at Python as an alternative in the last few weeks. I'm trying to write a native Python script to back up a MySQL database with mysqldump. I normally do this from the command line by redirecting the output with > without issue.
I see many answers using subprocess.Popen and shell=True; equally, I see many statements saying I should avoid shell=True.
So I'm trying to get the following code to redirect my stdout to a file, all without success:
import subprocess
import sys

sys.stdout = open("mysqldump.txt", 'w')
print("testing line1")
subprocess.check_output(["mysqldump", "-u", "username", "-ppassword",
                         "-h", "dbserver_name", database_name])
If I comment out the sys.stdout line, I see the mysqldump output on my screen, so I know I have the syntax correct for that part. I added the print statement and can see it gets written to the file mysqldump.txt. But when run in full there is no dump to the screen or to the file.
Any ideas? I'm trying to avoid a shell solution.
What you tried doesn't work because reassigning sys.stdout only affects Python-level writes such as print, not lower-level writes from C, and particularly not those performed by an external program. You need to tell subprocess to redirect the child's stdout, like you did with the > redirection, which goes like this:
with open("mysqldump.txt",'w') as out:
subprocess.check_call(["mysqldump", "-u", "usernmae", "-ppassword",
"-h", "dbserver_name", database_name],
stdout=out)
Could you use the --result-file=FILE argument of mysqldump? You might have to change check_output to subprocess.call for this to complete.
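For example, a rough sketch of that approach (the connection details are the placeholders from the question):

import subprocess

# mysqldump writes the dump itself via --result-file, so nothing needs
# to be captured or redirected on the Python side.
subprocess.call(["mysqldump", "-u", "username", "-ppassword",
                 "-h", "dbserver_name", "--result-file=mysqldump.txt",
                 database_name])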
or
subprocess.call(["mysqldump", "-u", "usernmae", "-ppassword", "-h", "dbserver_name", database_name],stdout=open('myfile.txt','w'))
edit: in CPython the file object for myfile.txt is closed (by reference counting) once the call returns; the with block above closes it explicitly.
Related
I am running a Python program in which I execute a command to run another Python file using the subprocess.call() method.
The file I execute inside subprocess.call() has some print statements in it, but those statements are not printed to the console when I run this code.
Is there a way I can get them printed to the console? Everything works fine when I execute this file standalone rather than through subprocess.call().
If I redirect the output of this command to a file, then it gets printed in the file, but I could not find a way to see it on the console at runtime.
Any help would be appreciated.
subprocess.call has the default parameters stdout=None, stderr=None, which you may need to configure, or you can use capture_output=True with subprocess.run; see the docs: https://docs.python.org/3/library/subprocess.html
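As a minimal sketch, assuming the other file is named child.py (a placeholder), capturing its output with subprocess.run and echoing it afterwards would look like this:

import subprocess

result = subprocess.run(
    ["python", "child.py"],  # hypothetical name of the other script
    capture_output=True,     # collect the child's stdout and stderr (Python 3.7+)
    text=True,               # decode bytes to str
)
print(result.stdout, end="")  # printed here once the child has finished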
You might want to use subprocess.PIPE with Popen:
p = subprocess.Popen(..., stdout=subprocess.PIPE)
output, _ = p.communicate()
print(output)
(Written in the browser, but should work)
Btw, it's hard to understand your problem and suggest a solution when you don't provide any code (neither the code you run with subprocess nor the code that uses it). Please add it to get more specific and/or relevant answers.
Normally you can automate answers to an interactive prompt by piping stdin:
import subprocess as sp

cmd = 'rpmbuild --sign --buildroot {}/BUILDROOT -bb {}'.format(TMPDIR, specfile)
p = sp.Popen(cmd, stdout=sp.PIPE, stderr=sp.PIPE, stdin=sp.PIPE,
             universal_newlines=True, shell=True)
for out in p.communicate(input='my gpg passphrase\n'):
    print(out)
For whatever reason, this is not working for me. I've tried writing to p.stdin before executing p.communicate(), I've tried flushing the buffer, I've tried using bytes without universal_newlines=True, I've hard-coded things, etc. In all scenarios, the command executes and hangs on:
Enter pass phrase:
My first hunch was that stdin was not the correct file descriptor and that rpmbuild was internally calling a gpg command, so maybe my input isn't piped to it. But when I do p.stdin.close() I get an OSError about subprocess trying to write to the closed descriptor.
What is the rpmbuild command doing to stdin that prevents me from writing to it?
Is there a hack I can do? I tried echo "my passphrase" | rpmbuild .... as the command but that doesn't work.
I know I can do something with gpg-like commands and sign packages without a passphrase, but I kind of want to avoid that.
EDIT:
After some more reading, I realize this issue is common to commands that require password input, typically via some form of getpass.
I see that a solution would be to use a library like pexpect, but I want something from the standard library. I am going to keep looking, but I think maybe I can try writing to something similar to /dev/tty.
rpm uses getpass(3), which reopens /dev/tty.
There are two approaches to automating this:
1) create a pseudo-tty
2) (Linux) find the reopened file descriptor in /proc
If scripting, expect(1) has (or had) a short example with pseudo-ttys that can be used; a sketch of approach 1) with Python's standard-library pty module is shown below.
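A minimal sketch of the pseudo-tty approach, assuming the spec file name and passphrase shown are placeholders and that the prompt text matches what rpmbuild prints:

import os
import pty

# pty.fork() makes the new pseudo-terminal the child's controlling terminal,
# so getpass(3)'s reopening of /dev/tty ends up reading from our master fd.
pid, master_fd = pty.fork()
if pid == 0:
    # Child: exec rpmbuild; its /dev/tty is now the pty slave.
    os.execvp("rpmbuild", ["rpmbuild", "--sign", "-bb", "mypackage.spec"])

# Parent: wait for the prompt, then send the passphrase.
output = b""
while b"Enter pass phrase:" not in output:
    output += os.read(master_fd, 1024)
os.write(master_fd, b"my gpg passphrase\n")

# Drain remaining output until the child exits, then clean up.
try:
    while os.read(master_fd, 1024):
        pass
except OSError:
    pass  # EIO on Linux once the child side is closed
os.waitpid(pid, 0)
os.close(master_fd)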
I need to run an external exe file inside a Python script. I need two things out of this:
Get whatever the exe outputs to stdout (stderr).
The exe stops executing only after I press the Enter key. I can't change this behavior. I need the script to send the Enter key press after it gets the output from the previous step.
This is what I have done so far and I am not sure how to proceed from here.
import subprocess

first = subprocess.Popen(["myexe.exe"], shell=True, stdout=subprocess.PIPE)
from subprocess import Popen, PIPE, STDOUT

first = Popen(['myexe.exe'], stdout=PIPE, stderr=STDOUT, stdin=PIPE)
while first.poll() is None:
    data = first.stdout.read()
    if b'press enter to' in data:
        first.stdin.write(b'\n')
first.stdin.close()
first.stdout.close()
This pipes stdin as well; do not forget to close your open file handles (stdin and stdout are also file handles, in a sense).
Also avoid shell=True if at all possible. I use it a lot myself, but best practice says you shouldn't.
I assumed Python 3 here; stdin and stdout expect and produce bytes.
first.poll() polls for an exit code from your exe; if it returns None, the process is still running.
Some other tips
One tedious thing can be passing arguments to Popen; one neat thing to do is:
import shlex
Popen(shlex.split(cmd_str), shell=False)
It preserves space-separated inputs with quotes around them; for instance, python myscript.py debug "pass this parameter somewhere" would result in three parameters in sys.argv: ['myscript.py', 'debug', 'pass this parameter somewhere']. This might be useful in the future when working with Popen.
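A quick illustration of that splitting, using the example command from above:

import shlex

cmd_str = 'python myscript.py debug "pass this parameter somewhere"'
print(shlex.split(cmd_str))
# -> ['python', 'myscript.py', 'debug', 'pass this parameter somewhere']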
Another thing that would be good is to check whether there is output in stdout before reading from it, otherwise it might hang the application. To do this you could use select, as in the sketch below.
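A rough sketch of that check, reusing the placeholder names from above (note this applies to POSIX pipes; on Windows select() only works on sockets):

import select
from subprocess import Popen, PIPE, STDOUT

first = Popen(['myexe.exe'], stdout=PIPE, stderr=STDOUT, stdin=PIPE)
while first.poll() is None:
    # Wait up to one second for data to become readable on stdout.
    readable, _, _ = select.select([first.stdout], [], [], 1.0)
    if first.stdout in readable:
        data = first.stdout.read1(1024)  # read what is available without waiting for EOF
        if b'press enter to' in data:
            first.stdin.write(b'\n')
            first.stdin.flush()
first.stdin.close()
first.stdout.close()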
Or you could use pexpect, which is often used with SSH. Since the prompt lives in another user space than your application when it asks for input, you need to either fork your exe manually and read from that specific pid with os.read(), or use pexpect.
I have a piece of Cocoa code I wrote that takes in an XML file containing bounding boxes that are then drawn on top of a video (each box has an associated frame). The Cocoa program is meant to be run from the command line (and takes all its parameters as command line arguments).
I can run the program just fine with any XML document. However, I run into problems when I try to run the program from within a Python script. For example:
with file("test.xml") as temp:
temp.write(doc.toprettyxml())
# cval is my cocoa program to call, the other arguments are given to the Python script and parsed with optparser
command = ["./cval", "-o", options.output, "-i", str(options.interval), "-s", "%dx%d" % (options.width, options.height), "-f", str(options.frames), "-x", temp.name]
subprocess.call(command)
Sometimes this will cause 'cval' to fail, other times not (changing one number in the XML document can change its behavior). I can also verify it's breaking when trying to read an XML element that isn't there. Yet I can open up 'test.xml' and verify the element does in fact exist.
However, if I then run 'cval' myself (outside of the Python script) with 'test.xml', it works fine. This leads me to believe that something strange is happening when I do 'subprocess.call', but I'm not sure what it could be. I have other Cocoa/Python mixes that do completely different tasks (i.e. not using XML) that also arbitrarily exhibit weird behavior, but they are more complex in nature.
I was hoping someone might have run into this problem as well, or might know the next step in debugging this weirdness.
Because the code originally used temporary files, I couldn't close the file before passing it to the subprocess. What I should have done instead is flush the file before subprocess.call was invoked. The inconsistent behavior likely resulted from the size of the input: larger writes crossed the buffer threshold and were flushed automatically, smaller ones were not.
The code should read:
with file("test.xml") as temp:
temp.write(doc.toprettyxml())
temp.flush()
command = ["./cval", "-o", options.output, "-i", str(options.interval), "-s", "%dx%d" % (options.width, options.height), "-f", str(options.frames), "-x", temp.name]
subprocess.call(command)
Perhaps try placing a "print command" statement in there for when the return code of subprocess.call indicates an error. On failure, see if there's any difference between what's being executed by subprocess and what you might run from the command line. Also, try calling subprocess.call(command, shell=True), so your command is executed as it would be in the shell (with string formatting, etc.).
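A small sketch of both suggestions, in the same Python 2 style as the question (pipes.quote is used so that arguments containing spaces survive the shell round-trip):

import pipes
import subprocess

ret = subprocess.call(command)
if ret != 0:
    print command  # see exactly what subprocess tried to run

# shell=True variant: the command must be a single, properly quoted string.
shell_cmd = " ".join(pipes.quote(arg) for arg in command)
subprocess.call(shell_cmd, shell=True)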
I am running a sub-program using subprocess.Popen. When I start my Python program from the command window (cmd.exe), the program writes some info and dates in the window as it progresses.
When I run my Python code not from a command window, it opens a new command window for this sub-program's output, and I want to avoid that. When I used the following code, it doesn't show the cmd window, but it also doesn't print the status:
p = subprocess.Popen("c:/flow/flow.exe", shell=True, stdout=subprocess.PIPE)
print p.stdout.read()
How can I show the sub-program's output in my program's output as it occurs?
Use this:
cmd = subprocess.Popen(["c:/flow/flow.exe"], stdout=subprocess.PIPE)
for line in cmd.stdout:
    print line.rstrip("\n")
cmd.wait()  # you may already be handling this in your current code
Note that you will still have to wait for the sub-program to flush its stdout buffer (which is commonly buffered differently when not writing to a terminal window), so you may not see each line instantaneously as the sub-program prints it (this depends on various OS details and details of the sub-program).
Also notice how I've removed the shell=True and replaced the string argument with a list, which is generally recommended.
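There is also buffering on the Python 2 side: iterating over cmd.stdout uses an internal read-ahead buffer, so lines can arrive in bursts. A variant that avoids that (still subject to the sub-program's own buffering mentioned above) would be:

cmd = subprocess.Popen(["c:/flow/flow.exe"], stdout=subprocess.PIPE)
for line in iter(cmd.stdout.readline, ""):
    print line.rstrip("\n")
cmd.wait()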
Looking for a recipe to process Popen data asynchronously, I stumbled upon http://code.activestate.com/recipes/576759-subprocess-with-async-io-pipes-class/
This looks quite promising; however, I got the impression that there might be some typos in it. I haven't tried it yet.
It is an old post, but this is a common problem with a hard-to-find solution. Try this: http://code.activestate.com/recipes/440554-module-to-allow-asynchronous-subprocess-use-on-win/