Passing output from subprocess.Popen to an if/elif statement - Python

I'm creating a Python script to check for suPHP. I'm trying to write an if/else statement that determines whether suPHP is on the server, using output from subprocess.Popen.
I tested the variable's output with print before creating this post, and it passes the correct output to the variable suphp. This is what I have so far:
# check for suPHP
suphp = subprocess.Popen("/usr/local/cpanel/bin/rebuild_phpconf --current", shell=True, stdout=subprocess.PIPE,).communicate()[0]
if suphp = "/bin/sh: /usr/local/cpanel/bin/rebuild_phpconf: No such file or directory"
    print "suPHP is not installed on the server"
elif
    print suphp
Please note I am new to coding and Python, and decided to try using Python to admin some servers.

You don't appear to be doing anything useful with shell=True, so you can probably safely skip it altogether:
try:
    suphp = subprocess.Popen(["/usr/local/cpanel/bin/rebuild_phpconf", "--current"],
                             stdout=subprocess.PIPE).communicate()[0]
except OSError:
    print "Couldn't start subprocess, suPHP is not installed on the server"
Note that you'll have to split the command into its separate arguments, since you won't have a shell to do it for you. You should always avoid using the shell for subprocesses unless you absolutely require it (say, because you have to set up your environment by sourcing a script).

Off the top of my head:
The comparison operator is ==, not =, and output is almost always followed by a newline character.
so try something like this:
if "No such file or directory" in suphp:
...
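Putting the two answers together, a rough sketch (Python 3 syntax; the rebuild_phpconf path is taken from the question, so on most machines the lookup will simply fail) might look like:

```python
import subprocess

def check_suphp(binary="/usr/local/cpanel/bin/rebuild_phpconf"):
    """Return the current PHP config, or None when suPHP is absent."""
    try:
        proc = subprocess.Popen([binary, "--current"],
                                stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
    except OSError:
        return None          # the executable itself could not be found
    out, err = proc.communicate()
    if b"No such file or directory" in err:
        return None
    return out.decode()

if check_suphp() is None:
    print("suPHP is not installed on the server")
```

Since the command is run without a shell, a missing binary raises OSError instead of printing a shell error message, which is why both cases are handled.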

In Unix, you sometimes need to consider that subprocesses can output text to two different output streams. When there are no problems, like with echo hello, the text gets sent to the "standard output" stream.
On the other hand, it's considered good manners for a process to send all of its error messages to the "standard error" stream; for example stat /this-file-does-not-exist. You can verify this by sending all standard output to /dev/null.
When you run this command, you'll get no output on your console:
stat . > /dev/null
When you run this, an error message will appear on your console (because the text is from the standard error stream):
sh /this-program-does-not-exist > /dev/null
Getting back to your question: the "standard error" stream is usually called "stderr". Text from this stream can be captured with Python's subprocess library via the Popen.stderr attribute.
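As a small sketch (reusing the stat example from above), the two streams can be read separately:

```python
import subprocess

# stat on a missing file writes its complaint to standard error.
proc = subprocess.Popen(["stat", "/this-file-does-not-exist"],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out = proc.stdout.read()    # empty: nothing was sent to standard output
err = proc.stderr.read()    # the error message lands here
proc.wait()
print(err)
```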

Related

Unix: why do I need to close the input FIFO to my program before I can read from its output FIFO?

I've got a problem: I have a program running in a shell that does some calculations based on user input. I can launch this program interactively, so it keeps asking for input and outputs its calculations after the user presses Enter, and it remains open in the shell until the user types the exit word.
What I want to do is create an interface such that the user types input somewhere outside the shell and, using pipes, FIFOs and so on, the input is carried to that program and its output comes back to this interface.
In a few words: I have a long-running process, and I need to attach my interface to its stdin and stdout when needed.
For this kind of problem, I was thinking of using a FIFO file made with the mkfifo command (we are on Unix, macOS in particular) and redirecting the program's stdin and stdout to this file:
my_program < fifofile > fifofile
But I found some difficulties reading from and writing to this FIFO file, so I decided to use two FIFO files, one for input and one for output:
exec my_program < fifofile_in > fifofile_out
(I don't know why I use exec for the redirection, but it works... and I'm okay with exec ;) )
If I launch this command in one shell, and in another one I write:
echo -n "date()" > fifofile_in
The echo succeeds, and if I do:
cat fifofile_out
I'm able to see my_program's output. OK! But I don't want to deal with the shell; instead I want to use a program written by me, like this Python script:
import os, time

text = ""
# Open the output FIFO non-blocking, otherwise open() would wait for a writer.
OUT = os.open("sample_out", os.O_RDONLY | os.O_NONBLOCK)
out = os.fdopen(OUT)
while 1:
    #IN = open("sample_in", 'w')
    IN = os.open("sample_in", os.O_WRONLY)
    #OUT = os.fdopen(os.open("sample_out", os.O_RDONLY | os.O_NONBLOCK | os.O_APPEND))
    #OUT = open("sample_out", "r")
    print "Write your mess:"
    text = raw_input()
    if text == "exit":
        break
    os.write(IN, text)
    os.close(IN)
    #os.fsync(IN)
    time.sleep(0.05)
    try:
        while True:
            #c = os.read(OUT, 1)
            c = out.readline()
            print "Read: ", c
            if not c:
                print "End of file"
                quit()
            #break
    except OSError as e:
        continue
    except IOError as e:
        continue
Where:
sample_in and sample_out are the FIFO files used for the redirections to stdin and stdout respectively (so I write to sample_in to give input to my_program, and I read from sample_out to get my_program's output)
out is my os.fdopen file object, used for reading lines with out.readline() instead of os.read(OUT, 1) (char by char)
time.sleep(0.05) delays a little before reading my_program's output (needed for the calculations; otherwise I have nothing to read yet).
With this script and my_program running in the background from the shell, I'm able to write to stdin and read from stdout correctly, but the journey to this code wasn't easy: after reading all the posts about FIFOs and reading/writing from/to FIFO files, I arrived at this solution of closing the IN fd before reading from OUT, even though the FIFO files are different! From what I read around the internet and in Stack Overflow articles, I thought this procedure was only needed when handling a single FIFO file, but here I'm dealing with two different ones. I think it's related to how I write into sample_in: I tried to flush, to mimic the echo -n command, but it seems useless.
So I'd like to ask whether this behaviour is normal, and how I can achieve the same thing as echo -n "...." > sample_in in one shell and cat sample_out in another. In particular, cat outputs data continuously as soon as I echo input into sample_in, but my way of reading is in blocks of data.
Thanks so much, I hope everything is clear enough!
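For what it's worth, one way to sidestep named FIFOs entirely is to let Python's subprocess module own both pipes of the long-running process. A minimal sketch, with cat standing in for my_program (cat simply echoes every line back):

```python
import subprocess

# cat echoes each line straight back, standing in for the interactive program.
proc = subprocess.Popen(["cat"], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE)

proc.stdin.write(b"date()\n")    # send one request
proc.stdin.flush()               # push it through the pipe now
reply = proc.stdout.readline()   # block until one line of output arrives
print(reply)

proc.stdin.close()               # the equivalent of typing the exit word / Ctrl-D
proc.wait()
```

Because flush() pushes each line through immediately, there is no need to close the input before reading the output, which is the behaviour the FIFO approach was fighting against.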

Start a subprocess, wait for it to complete and then retrieve data in Python

I'm struggling to get a Python script to start a subprocess, wait until it completes, and then retrieve the required data. I'm quite new to Python.
The command I wish to run as a subprocess is
./bin.testing/Eva -t --suite="temp0"
Running that command by hand in the Linux terminal produces:
in terminal mode
Evaluation error = 16.7934
I want to run the command as a python sub-process, and receive the output back. However, everything I try seems to skip the second line (ultimately, it's the second line that I want.) At the moment, I have this:
def job(self, fen_file):
    from subprocess import Popen, PIPE
    from sys import exit
    try:
        eva = Popen('{0}/Eva -t --suite"{0}"'.format(self.exedir, fen_file),
                    shell=True, stdout=PIPE, stderr=PIPE)
        stdout, stderr = eva.communicate()
    except:
        print('Error running test suite ' + fen_file)
        exit("Stopping")
    print(stdout)
    .
    .
    .
    return 0
All this seems to produce is
in terminal mode
0
with the important line missing. The print statement is just so I can see what I'm getting back from the sub-process; the intention is that it will be replaced with code that processes the number from the second line and returns it. (Here I'm returning 0 just so I can get this particular bit working first; the caller of this function prints the result, which is why there is a zero at the end of the output.) exedir is just the directory of the executable for the sub-process, and fen_file is just an ASCII file that the sub-process needs. I have tried removing the 'in terminal mode' line from the source code of the sub-process and recompiling, but that doesn't help; the important second line is still missing.
Thanks in advance; I expect what I am doing wrong is really very simple.
Edit: I ought to add that the subprocess Eva can take a second or two to complete.
Since the 2nd line is an error message, it's probably stored in your stderr variable!
To know for sure you can print your stderr in your code, or you can run the program on the command line and see if the output is split into stdout and stderr. One easy way is to do ./bin.testing/Eva -t --suite="temp0" > /dev/null. Any messages you get are stderr since stdout is redirected to /dev/null.
Also, typically with Popen the shell=True option is discouraged unless really needed. Instead pass a list:
[os.path.join(self.exedir, 'Eva'), '-t', '--suite=' + fen_file], shell=False, ...
This can avoid problems down the line if one of your arguments would normally be interpreted by the shell. (Note, I removed the ""'s, because the shell would normally eat those for you!)
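Putting it together, a rough sketch of the whole flow, using sh -c as a stand-in for the Eva binary (whose output lines we only know from the question):

```python
import subprocess

# Stand-in for ./Eva: one line to stdout, the interesting line to stderr.
cmd = ["sh", "-c",
       'echo "in terminal mode"; echo "Evaluation error = 16.7934" >&2']
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = proc.communicate()          # waits for the process to finish

# The number lives on stderr, so parse it from there.
value = float(stderr.decode().split("=")[1])
print(value)
```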
Try using subprocess.check_output:
output_lines = subprocess.check_output(['./bin.testing/Eva', '-t', '--suite=temp0'])
for line in output_lines.splitlines():
print(line)

Save a Python script's output with > in bash while using raw_input()

This is a weird one that's so general I can't properly narrow the search terms to find an answer.
My Python script uses raw_input to prompt the user for values. But when I try to run the script and redirect its output into a file, it crashes.
Something like script.py > save.txt
won't work. It doesn't even properly prompt me at the command line for my input. There doesn't seem to be anything indicating why this doesn't work as intuitively as it should.
raw_input prints its prompt to stdout, which you are redirecting to a file. So your prompt ends up in the file and the program does not appear to show a prompt. One solution is to output your prompt to stderr.
import sys
sys.stderr.write('prompt> ')
value = raw_input()
print('value was: ', value)
You could also avoid using both pipes and interactive input with the same script. Either take input from command line flags using argparse and use pipes, or create an interactive program that saves output to a file itself.
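For the argparse route, a minimal sketch (the --value flag name is made up for illustration):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--value", required=True,
                    help="the value previously collected via raw_input")
args = parser.parse_args(["--value", "42"])   # normally just parse_args()
print("value was:", args.value)
```

Run as script.py --value 42 > save.txt, the prompt problem disappears because there is no longer any prompt.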
Depending on your program's logic, you can also check whether stdout is connected to a live console or not:
is_tty = os.isatty(sys.stdout.fileno())
Dolda2000 also has a good point about writing to /dev/tty, which will write to the controlling terminal of the script being run even if both stdin and stderr are redirected. The deal there, though, is that you can't use it if you're not running in a terminal.
import errno
try:
    with open('/dev/tty', 'w') as tty:
        tty.write('prompt> ')
except IOError as exc:
    if exc.errno == errno.ENXIO:
        pass  # no /dev/tty available
    else:
        pass  # something else went wrong

Subprocess Popen not capturing wget --spider command result

My understanding of capturing the output of a subprocess command as a string was to set stdout=subprocess.PIPE and use command.communicate() to capture result, error.
For example, typing the following:
command = subprocess.Popen(["nmcli", "con"], stdout=subprocess.PIPE)
res, err = command.communicate()
produces no output to the terminal and stores all my connection information as a byte literal in the variable res. Simple.
It falls apart for me here though:
url = "http://torrent.ubuntu.com/xubuntu/releases/trusty/release/desktop/xubuntu-14.04.1-desktop-amd64.iso.torrent"
command = subprocess.Popen(["wget", "--spider", url], stdout=subprocess.PIPE)
This prints the output of the command to the terminal, then pauses execution until a keystroke is input by user. Subsequently running command.communicate() returns an empty bytes literal, b''.
Particularly odd to me is the pause in execution as issuing the command in bash just prints the command result and directly returns to the prompt.
All my searches just find Q&A about how to capture subprocess results in general, not anything about certain commands having to be captured in a different manner or anything particular about wget and subprocess.
Additional note, I have been able to use the wget command with subprocess to download files (no --spider option) without issue.
Any help greatly appreciated, this one has me stumped.
wget writes its output to stderr, so because you are not piping stderr you see the output when you run the command, while stdout stays empty:
url = "http://torrent.ubuntu.com/xubuntu/releases/trusty/release/desktop/xubuntu-14.04.1-desktop-amd64.iso.torrent"
command = Popen(["wget", "--spider", url],stdout=PIPE,stderr=PIPE)
out,err = command.communicate()
print("This is stdout: {}".format(out))
print("This is stderr: {}".format(err))
This is stdout: b''
This is stderr: b'Spider mode enabled. Check if remote file exists.\n--2015-02-09 18:00:28-- http://torrent.ubuntu.com/xubuntu/releases/trusty/release/desktop/xubuntu-14.04.1-desktop-amd64.iso.torrent\nResolving torrent.ubuntu.com (torrent.ubuntu.com)... 91.189.95.21\nConnecting to torrent.ubuntu.com (torrent.ubuntu.com)|91.189.95.21|:80... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 37429 (37K) [application/x-bittorrent]\nRemote file exists.\n\n'
I've never been asked anything by wget before, but some processes (e.g. ssh) do capture the terminal device (tty) directly to get a password, short-cutting the process pipe you've set up.
To automate cases like this, you need to fake a terminal instead of a normal pipe. There are recipes out there using termios and stuff, but my suggestion would be to use the module "pexpect" which is written to do exactly that.
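pexpect is built on the standard library's pty module; as a rough sketch of the underlying trick, a child attached to a pseudo-terminal believes it is talking to a real terminal:

```python
import os
import pty
import subprocess

# Allocate a master/slave pseudo-terminal pair and attach the child to the
# slave end, so it sees a terminal rather than an ordinary pipe.
master, slave = pty.openpty()
proc = subprocess.Popen(
    ["python3", "-c", "import sys; print(sys.stdout.isatty())"],
    stdin=slave, stdout=slave, stderr=slave)
proc.wait()
os.close(slave)                        # drop our copy of the slave end
output = os.read(master, 1024).decode()
os.close(master)
print(output)                          # the child reports it has a tty
```

A process that grabs the controlling terminal (like ssh asking for a password) can then be driven through the master end.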

How to get two python processes talking over pipes?

I'm having trouble getting this to work. Basically, I have a Python program that expects some data on stdin and reads it with sys.stdin.readlines(). I have tested this and it works without problems with things like echo "" | myprogram.py.
I have a second program that using the subprocess module calls on the first program with the following code
proc = subprocess.Popen(final_shell_cmd,
                        stderr=subprocess.PIPE, stdout=subprocess.PIPE,
                        shell=False, env=shell_env)
f = ' '.join(shell_cmd_args)
#f.append('\4')
return proc.communicate(f)
The second program is a daemon, and I have discovered that the second program works well as long as I hit Ctrl-D after calling it from the first program.
So it seems subprocess is not closing the file, and my first program keeps expecting more input when nothing more should be sent.
Does anyone have any idea how I can get this working?
The main problem here is that shell_cmd_args may contain passwords and other sensitive information that we do not want to pass as part of the command name, since it would show up in tools like ps.
You want to redirect the subprocess's stdin, so you need stdin=subprocess.PIPE.
You should not need to write Control-D ('\4') to the file object. Control-D tells the shell to close the standard input that's connected to the program. The program doesn't see a Control-D character in that context.
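A minimal sketch of the fix, with cat standing in for the stdin-reading program: communicate() writes the data and then closes the pipe, giving the child the end-of-file that Ctrl-D would otherwise supply.

```python
import subprocess

proc = subprocess.Popen(["cat"],          # stand-in for myprogram.py
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
# communicate() sends the input, closes stdin (so readlines() in the
# child returns), waits for exit, and gathers both output streams.
out, err = proc.communicate(b"sensitive args go here\n")
print(out)
```

Because the sensitive data travels over the pipe rather than on the command line, it never appears in ps output.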
