Change DenyHosts report: Call external command from Python - python

To begin with, I don't know the first thing about Python ... so I could really use any pointers you have. I do know some Perl, Bash scripting and a bit of C++.
I'm running DenyHosts (http://denyhosts.sourceforge.net/), which every now and then sends me an email that an IP address was added to /etc/hosts.deny. E.g.:
Added the following hosts to /etc/hosts.deny:
87.215.133.109 (unknown)
----------------------------------------------------------------------
So far so good, but I want to add the country of the IP address to this message. To do this I created a small Perl script that spits out the country:
/usr/local/bin/geo-ip.pl --short 87.215.133.109
Netherlands
So all I want to do is to call this Perl script from Python and then fill the result in the message string. I located the source code which I suspect I need to change, but as announced at the top of this message, I don't know the first thing about Python.
This is a snippet from the main program calling a subroutine in report.py.
deny_hosts.py:
#print deny_hosts
new_denied_hosts, status = self.update_hosts_deny(deny_hosts)
if new_denied_hosts:
    if not status:
        msg = "WARNING: Could not add the following hosts to %s" % self.__prefs.get('HOSTS_DENY')
    else:
        msg = "Added the following hosts to %s" % self.__prefs.get('HOSTS_DENY')
    self.__report.add_section(msg, new_denied_hosts)
    if self.__sync_server: self.sync_add_hosts(new_denied_hosts)
    plugin_deny = self.__prefs.get('PLUGIN_DENY')
    if plugin_deny: plugin.execute(plugin_deny, new_denied_hosts)
I think the change should go somewhere in here.
report.py defines add_section:
def add_section(self, message, iterable):
    self.report += "%s:\n\n" % message
    for i in iterable:
        if type(i) in (TupleType, ListType):
            extra = ": %d\n" % i[1]
            i = i[0]
        else:
            extra = ""
        if self.hostname_lookup:
            hostname = self.get_hostname(i)
            debug("get_host: %s", hostname)
        else:
            hostname = i
        self.report += "%s%s\n" % (hostname, extra)
        if self.use_syslog:
            syslog.syslog("%s - %s%s" % (message, hostname, extra))
    self.report += "\n" + "-" * 70 + "\n"
Please help me change the code in such a way that it'll spit out a message like:
Added the following hosts to /etc/hosts.deny:
87.215.133.109 (Netherlands, unknown)
----------------------------------------------------------------------
EDIT3:
This is how I solved it. The output is identical to the original message. After changing the sources, the daemon needs to be restarted (sudo /etc/init.d/denyhosts restart)
def add_section(self, message, iterable):
    # added geo-ip
    # moving this from-import to the top of the file makes pycheck generate
    # a lot of errors, so I left it here.
    from subprocess import Popen, PIPE
    # end geo-ip hack import
    self.report += "%s:\n\n" % message
    for i in iterable:
        if type(i) in (TupleType, ListType):
            extra = ": %d\n" % i[1]
            i = i[0]
        else:
            extra = ""
        if self.hostname_lookup:
            hostname = self.get_hostname(i)
            debug("get_host: %s", hostname)
        else:
            hostname = i
        # self.report += "%s%s\n" % (hostname, extra)
        # JPH: added geo-ip
        geocmd = "/usr/local/bin/geo-ip.pl --short %s" % i
        country = Popen(geocmd, shell=True, stdout=PIPE).communicate()[0]
        country = country.strip()
        self.report += "%s%s\n%s\n" % (hostname, extra, country)
        # end geo-ip hack
        if self.use_syslog:
            syslog.syslog("%s - %s%s" % (message, hostname, extra))
    self.report += "\n" + "-" * 70 + "\n"
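A small side note on that hack: because it interpolates each address into a shell command, a list-argument form avoids the shell entirely. A minimal sketch (untested against DenyHosts itself), assuming the same geo-ip.pl interface:
# list form: no shell involved, so nothing in `i` can be
# interpreted as shell syntax
country = Popen(["/usr/local/bin/geo-ip.pl", "--short", i],
                stdout=PIPE).communicate()[0].strip()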
Also, help me understand what I changed, so I can learn a bit of Python today too.
EDIT2: For the sake of sharing, a link to the geo-ip.pl script: http://wirespeed.xs4all.nl/mediawiki/index.php/Geo-ip.pl
EDIT1: Recompilation is done automatically when the source changes, so that answers the question below.
The second problem I have with this is that I found two matching files on my system:
/usr/share/denyhosts/DenyHosts/report.py
/usr/share/denyhosts/DenyHosts/report.pyc
where the .py file is the source code and I suspect the .pyc is what actually gets executed. So when I change the source code, I wouldn't be surprised if nothing changed unless I somehow compiled it afterwards.
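For completeness: CPython regenerates the .pyc automatically whenever the .py file's timestamp is newer, which is why EDIT1 above found no manual step was needed. If you ever want to force it, a minimal sketch:
# force regeneration of report.pyc from the changed source
import py_compile
py_compile.compile('/usr/share/denyhosts/DenyHosts/report.py')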

I'm only going to answer the specific part of your question about how to call your Perl script via Python and get the output. The part about where to slot in this info is a little too vague for me to guess from your snippets...
from subprocess import Popen, PIPE

hostIP = "87.215.133.109"
cmd = "/usr/local/bin/geo-ip.pl --short %s" % hostIP
output = Popen(cmd, shell=True, stdout=PIPE).communicate()[0]

## alternate form ##
# cmd = ["/usr/local/bin/geo-ip.pl", "--short", hostIP]
# output = Popen(cmd, stdout=PIPE).communicate()[0]

print output.strip()
# Netherlands
Update
Since I am doing a few things at once on that Popen line, and you are new to Python (based on your comments below), I wanted to break that line down a bit for you...
# this call to Popen actually returns a
# Popen object with a number of methods and attributes
# to interact with the process that was just created
p = Popen(cmd, shell=True, stdout=PIPE)
# communicate() is a method of a Popen object which
# allows you to wait for the return output of the pipes
# that you named (or send data to stdin)
# It blocks until data is ready and returns a tuple (stdout, stderr)
stdout, stderr = p.communicate()
# We only wanted the stdout in this case, so we took the first index.
# (This is an alternative to the unpacking above -- communicate() can
# only be called once per process, so you would do one or the other.)
output = p.communicate()[0]
# output is a string, and strings have the strip() method to remove
# surrounding whitespace
stripped_output = output.strip()
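After communicate() returns, the Popen object also records the child's exit status in p.returncode, so you can tell whether the script actually succeeded. A small addition on top of the breakdown above (p.returncode is standard; the error message is just illustrative):
# communicate() waits for the process, so returncode is set by now
if p.returncode != 0:
    print "geo-ip.pl failed with exit code %d" % p.returncode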

This could do the trick (subprocess.check_output is available from Python 2.7 onward):
import subprocess
country = subprocess.check_output(
    ["/usr/local/bin/geo-ip.pl", "--short", "87.215.133.109"])

Related

Catch prints in Python from a long process activated via os.system [duplicate]

I am trying to find a way in Python to run other programs in such a way that:

The stdout and stderr of the program being run can be logged separately.

The stdout and stderr of the program being run can be viewed in near-real time, such that if the child process hangs, the user can see it (i.e. we do not wait for execution to complete before printing the stdout/stderr to the user).

Bonus criterion: the program being run does not know it is being run via Python, and thus will not do unexpected things (like chunk its output instead of printing it in real time, or exit because it demands a terminal to view its output). This criterion pretty much means we will need to use a pty, I think.
Here is what I've got so far...
Method 1:
def method1(command):
    ## subprocess.communicate() will give us the stdout and stderr separately,
    ## but we will have to wait until the end of command execution to print anything.
    ## This means if the child process hangs, we will never know....
    proc = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            shell=True, executable='/bin/bash')
    stdout, stderr = proc.communicate()  # record both, but no way to print stdout/stderr in real-time
    print ' ######### REAL-TIME ######### '
    ######## Not Possible
    print ' ########## RESULTS ########## '
    print 'STDOUT:'
    print stdout
    print 'STDERR:'
    print stderr
Method 2:
def method2(command):
    ## Using pexpect to run our command in a pty, we can see the child's stdout in real-time,
    ## however we cannot see the stderr from "curl google.com", presumably because it is not connected to a pty?
    ## Furthermore, I do not know how to log it beyond writing out to a file (p.logfile). I need the stdout and stderr
    ## as strings, not files on disk! On the upside, pexpect would give a lot of extra functionality (if it worked!)
    proc = pexpect.spawn('/bin/bash', ['-c', command])
    print ' ######### REAL-TIME ######### '
    proc.interact()
    print ' ########## RESULTS ########## '
    ######## Not Possible
Method 3:
def method3(command):
    ## This method is very much like method1, and would work exactly as desired
    ## if only proc.xxx.read(1) wouldn't block waiting for something. Which it does. So this is useless.
    proc = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            shell=True, executable='/bin/bash')
    print ' ######### REAL-TIME ######### '
    out, err, outbuf, errbuf = '', '', '', ''
    firstToSpeak = None
    while proc.poll() is None:
        stdout = proc.stdout.read(1)  # blocks
        stderr = proc.stderr.read(1)  # also blocks
        if firstToSpeak is None:
            if stdout != '':
                firstToSpeak = 'stdout'; outbuf, errbuf = stdout, stderr
            elif stderr != '':
                firstToSpeak = 'stderr'; outbuf, errbuf = stdout, stderr
        else:
            if (stdout != '') or (stderr != ''):
                outbuf += stdout; errbuf += stderr
            else:
                out += outbuf; err += errbuf
                if firstToSpeak == 'stdout':
                    sys.stdout.write(outbuf + errbuf); sys.stdout.flush()
                else:
                    sys.stdout.write(errbuf + outbuf); sys.stdout.flush()
                firstToSpeak = None
    print ''
    print ' ########## RESULTS ########## '
    print 'STDOUT:'
    print out
    print 'STDERR:'
    print err
To try these methods out, you will need to import sys, subprocess, pexpect.
pexpect is pure Python and can be had with
sudo pip install pexpect
I think the solution will involve Python's pty module - which is somewhat of a black art; I cannot find anyone who knows how to use it. Perhaps SO knows :)
As a heads-up, I recommend you use 'curl www.google.com' as a test command, because it prints its status out on stderr for some reason :D
UPDATE-1:
OK so the pty library is not fit for human consumption. The docs, essentially, are the source code.
Any presented solution that is blocking and not async is not going to work here. The Threads/Queue method by Padraic Cunningham works great, although adding pty support is not possible - and it's 'dirty' (to quote Freenode's #python).
It seems like the only solution fit for production-standard code is using the Twisted framework, which even supports pty as a boolean switch to run processes exactly as if they were invoked from the shell.
But adding Twisted into a project requires a total rewrite of all the code. This is a total bummer :/
UPDATE-2:
Two answers were provided, one of which addresses the first two criteria and will work well where you just need both the stdout and stderr, using Threads and Queue. The other answer uses select, a non-blocking method for reading file descriptors, and pty, a method to "trick" the spawned process into believing it is running in a real terminal just as if it was run from Bash directly - but may or may not have side-effects. I wish I could accept both answers, because the "correct" method really depends on the situation and why you are subprocessing in the first place, but alas, I could only accept one.
The stdout and stderr of the program being run can be logged separately.
You can't use pexpect because both stdout and stderr go to the same pty and there is no way to separate them after that.
The stdout and stderr of the program being run can be viewed in near-real time, such that if the child process hangs, the user can see. (i.e. we do not wait for execution to complete before printing the stdout/stderr to the user)
If the output of a subprocess is not a tty then it is likely that it uses block buffering, and therefore if it doesn't produce much output then it won't be "real time"; e.g., if the buffer is 4K then your parent Python process won't see anything until the child process prints 4K chars and the buffer overflows, or it is flushed explicitly (inside the subprocess). This buffer is inside the child process and there are no standard ways to manage it from outside. For a command1 | command2 shell pipeline, picture each command's stdio buffer sitting inside its own process, with the kernel's pipe buffer between them.
The program being run does not know it is being run via python, and thus will not do unexpected things (like chunk its output instead of printing it in real-time, or exit because it demands a terminal to view its output).
It seems you meant the opposite, i.e., it is likely that your child process chunks its output instead of flushing each output line as soon as possible if the output is redirected to a pipe (when you use stdout=PIPE in Python). It means that the default threading or asyncio solutions won't work as-is in your case.
There are several options to work around it:
the command may accept a command-line argument such as grep --line-buffered or python -u to disable block buffering;
stdbuf works for some programs, i.e., you could run ['stdbuf', '-oL', '-eL'] + command using the threading or asyncio solution above, and you should get stdout and stderr separately, with lines appearing in near-real time:
#!/usr/bin/env python3
import os
import sys
from select import select
from subprocess import Popen, PIPE

with Popen(['stdbuf', '-oL', '-e0', 'curl', 'www.google.com'],
           stdout=PIPE, stderr=PIPE) as p:
    readable = {
        p.stdout.fileno(): sys.stdout.buffer,  # log separately
        p.stderr.fileno(): sys.stderr.buffer,
    }
    while readable:
        for fd in select(readable, [], [])[0]:
            data = os.read(fd, 1024)  # read available
            if not data:  # EOF
                del readable[fd]
            else:
                readable[fd].write(data)
                readable[fd].flush()
Finally, you could try a pty + select solution with two ptys:
#!/usr/bin/env python3
import errno
import os
import pty
import sys
from select import select
from subprocess import Popen

masters, slaves = zip(pty.openpty(), pty.openpty())
with Popen([sys.executable, '-c', r'''import sys, time
print('stdout', 1) # no explicit flush
time.sleep(.5)
print('stderr', 2, file=sys.stderr)
time.sleep(.5)
print('stdout', 3)
time.sleep(.5)
print('stderr', 4, file=sys.stderr)
'''],
           stdin=slaves[0], stdout=slaves[0], stderr=slaves[1]):
    for fd in slaves:
        os.close(fd)  # no input
    readable = {
        masters[0]: sys.stdout.buffer,  # log separately
        masters[1]: sys.stderr.buffer,
    }
    while readable:
        for fd in select(readable, [], [])[0]:
            try:
                data = os.read(fd, 1024)  # read available
            except OSError as e:
                if e.errno != errno.EIO:
                    raise  # XXX cleanup
                del readable[fd]  # EIO means EOF on some systems
            else:
                if not data:  # EOF
                    del readable[fd]
                else:
                    readable[fd].write(data)
                    readable[fd].flush()
for fd in masters:
    os.close(fd)
I don't know what the side-effects of using different ptys for stdout and stderr are. You could try whether a single pty is enough in your case, e.g., set stderr=PIPE and use p.stderr.fileno() instead of masters[1]. A comment in the sh source suggests that there are issues if stderr is not in {STDOUT, pipe}.
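Since that suggestion is easy to get subtly wrong, here is an untested sketch of the single-pty variant, under the assumption that only stdout needs the pty treatment; the pty end reports EOF as EIO, while the plain stderr pipe reports it as an empty read:
#!/usr/bin/env python3
import errno
import os
import pty
import sys
from select import select
from subprocess import Popen, PIPE

master, slave = pty.openpty()
p = Popen(['curl', 'www.google.com'], stdout=slave, stderr=PIPE)
os.close(slave)  # only the child keeps its copy of the slave end

readable = {
    master: sys.stdout.buffer,             # pty end: EOF shows up as EIO
    p.stderr.fileno(): sys.stderr.buffer,  # plain pipe: EOF is an empty read
}
while readable:
    for fd in select(readable, [], [])[0]:
        try:
            data = os.read(fd, 1024)
        except OSError as e:
            if e.errno != errno.EIO:
                raise
            data = b''  # treat EIO on the pty as EOF
        if not data:
            del readable[fd]
        else:
            readable[fd].write(data)
            readable[fd].flush()
os.close(master)
p.wait()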
If you want to read from stderr and stdout and get the output separately, you can use a Thread with a Queue. This is not overly tested, but something like the following:
import threading
import queue
from subprocess import Popen, PIPE

def run(fd, q):
    for line in iter(fd.readline, ''):
        q.put(line)
    q.put(None)

def create(fd):
    q = queue.Queue()
    t = threading.Thread(target=run, args=(fd, q))
    t.daemon = True
    t.start()
    return q, t

process = Popen(["curl", "www.google.com"], stdout=PIPE, stderr=PIPE,
                universal_newlines=True)

std_q, std_thread = create(process.stdout)
err_q, err_thread = create(process.stderr)

while std_thread.is_alive() or err_thread.is_alive():
    for line in iter(std_q.get, None):
        print(line)
    for line in iter(err_q.get, None):
        print(line)
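One small follow-up (my addition, not part of the original answer): once both reader threads have exited, the child still needs to be reaped to collect its exit status:
# both queues are drained and the threads are done; reap the child
rc = process.wait()
print("curl exited with return code", rc)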
While J.F. Sebastian's answer certainly solves the heart of the problem, I'm running Python 2.7 (which wasn't in the original criteria) so I'm just throwing this out there for any other weary travellers who just want to cut/paste some code.
I haven't tested this thoroughly yet, but on all the commands I have tried it seems to work perfectly :)
You may want to change .decode('ascii') to .decode('utf-8') - I'm still testing that bit out.
#!/usr/bin/env python2.7
import errno
import os
import pty
import sys
from select import select
import subprocess

stdout = ''
stderr = ''
command = 'curl google.com ; sleep 5 ; echo "hey"'

masters, slaves = zip(pty.openpty(), pty.openpty())
p = subprocess.Popen(command, stdin=slaves[0], stdout=slaves[0], stderr=slaves[1],
                     shell=True, executable='/bin/bash')
for fd in slaves:
    os.close(fd)

readable = {masters[0]: sys.stdout, masters[1]: sys.stderr}

try:
    print ' ######### REAL-TIME ######### '
    while readable:
        for fd in select(readable, [], [])[0]:
            try:
                data = os.read(fd, 1024)
            except OSError as e:
                if e.errno != errno.EIO:
                    raise
                del readable[fd]  # EIO means EOF on some systems
            else:  # only look at data when os.read() actually succeeded
                if not data:  # EOF
                    del readable[fd]
                else:
                    if fd == masters[0]:
                        stdout += data.decode('ascii')
                    else:
                        stderr += data.decode('ascii')
                    readable[fd].write(data)
                    readable[fd].flush()
except:
    print "Unexpected error:", sys.exc_info()[0]
    raise
finally:
    p.wait()
    for fd in masters:
        os.close(fd)

print ''
print ' ########## RESULTS ########## '
print 'STDOUT:'
print stdout
print 'STDERR:'
print stderr

subprocess.Popen communicate method automatically opens the file and stops the program execution until I manually close the file

I have one problem with the Python subprocess module.
import os, subprocess
BLEU_SCRIPT_PATH = os.path.join(os.path.abspath(os.path.dirname(__file__)), 'multi-bleu.perl')
command = BLEU_SCRIPT_PATH + ' %s < %s'
ref = "ref.en-fr.test.txt"
hyp = "hyp100.en-fr.test.txt"
p = subprocess.Popen(command % (ref, hyp), stdout=subprocess.PIPE, shell=True)
result = p.communicate()[0].decode("utf-8")
# ...
# ...
The multi-bleu.perl file does the evaluation and returns a real number or an error if any; but that's not my concern.
The last line of code automatically opens the multi-bleu.perl file with my default text editor and stops the program execution until I manually close the file.
How can I disable this behavior?
I don't think subprocess.Popen interprets the shebang in the file here, so you need to name the Perl interpreter explicitly in the command. Since the command also relies on shell redirection (< %s), keep shell=True and pass a single command string:
command = '/path/to/perl %s %s < %s' % (BLEU_SCRIPT_PATH, ref, hyp)
p = subprocess.Popen(command, stdout=subprocess.PIPE, shell=True)
You might also want to have a look at subprocess.check_output, which will make the code a bit easier.
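A hedged sketch of that check_output version, reusing BLEU_SCRIPT_PATH, ref and hyp from the question (the perl path is a placeholder, as above):
import subprocess
# raises subprocess.CalledProcessError if multi-bleu.perl exits non-zero
result = subprocess.check_output(
    '/path/to/perl %s %s < %s' % (BLEU_SCRIPT_PATH, ref, hyp),
    shell=True).decode('utf-8')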

Python 3 - Issues writing to files using the stdout argument of subprocess.call

I'm trying to automate running snmpwalk against several hosts in my penetration testing lab. Basically what I want to do is give my Python script a list of target IPs (in the form of a text file), have it run snmpwalk against them, and store the results in separate files that I create (one per target IP). Here's a portion of my code that runs the tool against the target IPs contained in the live_list file object:
def run_snmpwalk(selection):
    # Rewind file
    live_list.seek(0)
    if selection == '1':
        i = 0
        for line in live_list:
            tgt_host = line.strip("/\n")
            file_obj_array[i].write('[+] SNMPWalk user enumeration for IP: ' + tgt_host + ' \n')
            print('[+] Attempting to enumerate users from IP: ' + tgt_host)
            exit_code = subprocess.call(['snmpwalk', '-c', 'public', '-v1', tgt_host,
                                         '1.3.6.1.4.1.77.1.2.25'], stdout=file_obj_array[i])
            i += 1
            if exit_code == 0:
                print('[+] Success')
            else:
                print('[+] Something went wrong while executing snmpwalk')
As crappy as it might be, the code above works as I intended, except for one little detail that I can't seem to fix.
The line below uses subprocess.call with the stdout parameter set to the file I previously created to contain the output of the command:
subprocess.call(['snmpwalk', '-c', 'public', '-v1', tgt_host, '1.3.6.1.4.1.77.1.2.25'], stdout=file_obj_array[i])
And this next line is supposed to write a header in the file to which the output of the previous command is dumped:
file_obj_array[i].write('[+] SNMPWalk user enumeration for IP: ' + tgt_host + ' \n')
However, instead of ending up as a header, the line above lands at the bottom of the file, despite being executed before the subprocess.call line. Here's a sample output file from the function above:
iso.3.6.1.4.1.77.1.2.25.1.1.5.71.117.101.115.116 = STRING: "Guest"
iso.3.6.1.4.1.77.1.2.25.1.1.6.97.117.115.116.105.110 = STRING: "austin"
iso.3.6.1.4.1.77.1.2.25.1.1.9.73.85.83.82.95.83.82.86.50 = STRING: "IUSR_SRV2"
iso.3.6.1.4.1.77.1.2.25.1.1.9.73.87.65.77.95.83.82.86.50 = STRING: "IWAM_SRV2"
iso.3.6.1.4.1.77.1.2.25.1.1.13.65.100.109.105.110.105.115.116.114.97.116.111.114 = STRING: "Administrator"
iso.3.6.1.4.1.77.1.2.25.1.1.14.84.115.73.110.116.101.114.110.101.116.85.115.101.114 = STRING: "TsInternetUser"
[+] SNMPWalk user enumeration for IP: 10.11.1.128
I can't figure out why subprocess.call manages to write lines to the file before file_obj_array[i].write does, even though it comes after it in the for loop.
Any ideas would help.
Thanks!
You have to flush buffers: your write() goes into the Python file object's internal buffer, while the child process writes straight to the underlying file descriptor, so the header only reaches the file when the buffer is flushed (often at close):
def run_snmpwalk(selection, live_list, file_obj_array):
    # Rewind file
    live_list.seek(0)
    if selection == '1':
        for line, file_obj in zip(live_list, file_obj_array):
            tgt_host = line.strip("/\n")
            file_obj.write('[+] SNMPWalk user enumeration for IP: {}\n'.format(tgt_host))
            file_obj.flush()
            print('[+] Attempting to enumerate users from IP: {}'.format(tgt_host))
            exit_code = subprocess.call(['snmpwalk', '-c', 'public', '-v1', tgt_host,
                                         '1.3.6.1.4.1.77.1.2.25'], stdout=file_obj)
            if exit_code == 0:
                print('[+] Success')
            else:
                print('[+] Something went wrong while executing snmpwalk')
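As an alternative sketch (my suggestion, not part of the original answer): in Python 3 you can open the per-host files line-buffered, so every write() that ends in a newline reaches the OS immediately and no explicit flush() is needed. The filename here is just illustrative:
# buffering=1 means line buffering for text-mode files in Python 3
file_obj = open('snmpwalk-10.11.1.128.txt', 'w', buffering=1)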

Python subprocess.Popen: read only what's returned

I'm fairly new to Python and need to understand more about subprocess.Popen.
I have a script that executes another Python script. Below is the part where my script tries to execute the other script.
cmd = ['python %s %s %s %s %s' % (runscript, steps, part_number, serial_number, self.operation)]
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
p.wait()
result = p.stdout.readline()
The problem is that the script being executed has to print its result in order for me to read it back through "result = p.stdout.readline()". Below is the script that gets executed:
def Main():
    if sys.argv[1] == "Initiate":
        doFunc = Functions_obj.Initiate()
        if doFunc != 0:
            print doFunc
        else:
            print "Initiate PASS"
    elif sys.argv[1] == "Check":
        getDrive = Functions_obj.initialize()
        if getDrive == "NoDevice":
            print getDrive
            sys.exit()
        doFunc = Functions_obj.Identify_Drive()
        if doFunc != 0:
            print doFunc
        else:
            print "Check PASS"
My question is: I want to "return" results from the script that gets executed rather than print them. How do I do this with subprocess.Popen, and how do I use subprocess to get what's returned rather than what's printed?
A separate process can't return data to you the way a function call returns to its caller. You have to use one of the many IPC mechanisms, like pipes, shared memory, message passing, or sockets. Pipes are generally the simplest, and that's what you're already using here. You can send binary data through the pipe, though; you could try pickling your data, assuming it's pickleable.
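A minimal, hedged sketch of that pickling idea (the child script name and the result dict are hypothetical, and this assumes Python 2 to match the question's print statements):
# child.py -- "returns" an object by pickling it to stdout
import pickle
import sys
result = {"status": "Check PASS", "errors": 0}  # whatever Main() computed
pickle.dump(result, sys.stdout)

# parent -- unpickles the child's stdout instead of parsing printed text
import pickle
import subprocess
p = subprocess.Popen(['python', 'child.py'], stdout=subprocess.PIPE)
result = pickle.load(p.stdout)
p.wait()
print result["status"]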

Getting output from and giving commands to a Python subprocess

I am trying to get output from a subprocess and then give commands to that process based on the preceding output. I need to do this a variable number of times, when the program needs further input. (I also need to be able to hide the subprocess command prompt if possible).
I figured this would be an easy task given that I have seen this problem being discussed in posts from 2003 and it is nearly 2012; it appears to be a pretty common need and really seems like it should be a basic part of any programming language. Apparently I was wrong, and somehow almost 9 years later there is still no standard way of accomplishing this task in a stable, non-destructive, platform-independent way!
I don't really understand much about file I/O and buffering or threading, so I would prefer a solution that is as simple as possible. If there is a module that accomplishes this that is compatible with Python 3.x, I would be very willing to download it. I realize that there are multiple questions that ask basically the same thing, but I have yet to find an answer that addresses the simple task that I am trying to accomplish.
Here is the code I have so far based on a variety of sources; however I have absolutely no idea what to do next. All my attempts ended in failure and some managed to use 100% of my CPU (to do basically nothing) and would not quit.
import subprocess
from subprocess import Popen, PIPE

p = Popen(r'C:\postgis_testing\shellcomm.bat', stdin=PIPE, stdout=PIPE,
          stderr=subprocess.STDOUT, shell=True)
stdout, stderr = p.communicate(b'command string')
In case my question is unclear, I am posting the text of a sample batch file that demonstrates a situation in which it is necessary to send multiple commands to the subprocess (if you type an incorrect command string, the program loops).
#echo off
:looper
set INPUT=
set /P INPUT=Type the correct command string:
if "%INPUT%" == "command string" (echo you are correct) else (goto looper)
If anyone can help me I would very much appreciate it, and I'm sure many others would as well!
EDIT: here is the functional code using eryksun's code (next post):
import subprocess
import threading
import time
import sys

try:
    import queue
except ImportError:
    import Queue as queue

def read_stdout(stdout, q, p):
    it = iter(lambda: stdout.read(1), b'')
    for c in it:
        q.put(c)
        if stdout.closed:
            break

_encoding = getattr(sys.stdout, 'encoding', 'latin-1')

def get_stdout(q, encoding=_encoding):
    out = []
    while 1:
        try:
            out.append(q.get(timeout=0.2))
        except queue.Empty:
            break
    return b''.join(out).rstrip().decode(encoding)

def printout(q):
    outdata = get_stdout(q)
    if outdata:
        print('Output: %s' % outdata)

if __name__ == '__main__':
    # setup
    p = subprocess.Popen(['shellcomm.bat'], stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                         bufsize=0, shell=True)  # I put shell=True to hide prompt
    q = queue.Queue()
    encoding = getattr(sys.stdin, 'encoding', 'utf-8')

    # for reading stdout
    t = threading.Thread(target=read_stdout, args=(p.stdout, q, p))
    t.daemon = True
    t.start()

    # command loop
    while p.poll() is None:
        printout(q)
        cmd = input('Input: ')
        cmd = (cmd + '\n').encode(encoding)
        p.stdin.write(cmd)
        time.sleep(0.1)  # I added this to give some time to check for closure (otherwise it doesn't work)

    # tear down
    for n in range(4):
        rc = p.poll()
        if rc is not None:
            break
        time.sleep(0.25)
    else:
        p.terminate()
        rc = p.poll()
        if rc is None:
            rc = 1

    printout(q)
    print('Return Code: %d' % rc)
However when the script is run from a command prompt the following happens:
C:\Users\username>python C:\postgis_testing\shellcomm7.py
Input: sth
Traceback (most recent call last):
File "C:\postgis_testing\shellcomm7.py", line 51, in <module>
p.stdin.write(cmd)
IOError: [Errno 22] Invalid argument
It seems that the program closes out when run from a command prompt. Any ideas?
This demo uses a dedicated thread to read from stdout. If you search around, I'm sure you can find a more complete implementation written up in an object-oriented interface. At least I can say this is working for me with your provided batch file in both Python 2.7.2 and 3.2.2.
shellcomm.bat:
#echo off
echo Command Loop Test
echo.
:looper
set INPUT=
set /P INPUT=Type the correct command string:
if "%INPUT%" == "command string" (echo you are correct) else (goto looper)
Here's what I get for output based on the sequence of commands "wrong", "still wrong", and "command string":
Output:
Command Loop Test
Type the correct command string:
Input: wrong
Output:
Type the correct command string:
Input: still wrong
Output:
Type the correct command string:
Input: command string
Output:
you are correct
Return Code: 0
For reading the piped output, readline might work sometimes, but set /P INPUT in the batch file naturally isn't writing a line ending. So instead I used lambda: stdout.read(1) to read a byte at a time (not so efficient, but it works). The reading function puts the data on a queue. The main thread gets the output from the queue after it writes a command. Using a timeout on the get call here makes it wait a small amount of time to ensure the program is waiting for input. Instead you could check the output for prompts to know when the program is expecting input.
All that said, you can't expect a setup like this to work universally because the console program you're trying to interact with might buffer its output when piped. In Unix systems there are some utility commands available that you can insert into a pipe to modify the buffering to be non-buffered, line-buffered, or a given size -- such as stdbuf. There are also ways to trick the program into thinking it's connected to a pty (see pexpect). However, I don't know a way around this problem on Windows if you don't have access to the program's source code to explicitly set the buffering using setvbuf.
import subprocess
import threading
import time
import sys

if sys.version_info.major >= 3:
    import queue
else:
    import Queue as queue
    input = raw_input

def read_stdout(stdout, q):
    it = iter(lambda: stdout.read(1), b'')
    for c in it:
        q.put(c)
        if stdout.closed:
            break

_encoding = getattr(sys.stdout, 'encoding', 'latin-1')

def get_stdout(q, encoding=_encoding):
    out = []
    while 1:
        try:
            out.append(q.get(timeout=0.2))
        except queue.Empty:
            break
    return b''.join(out).rstrip().decode(encoding)

def printout(q):
    outdata = get_stdout(q)
    if outdata:
        print('Output:\n%s' % outdata)

if __name__ == '__main__':
    ARGS = ["shellcomm.bat"]  ### Modify this

    # setup
    p = subprocess.Popen(ARGS, bufsize=0, stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    q = queue.Queue()
    encoding = getattr(sys.stdin, 'encoding', 'utf-8')

    # for reading stdout
    t = threading.Thread(target=read_stdout, args=(p.stdout, q))
    t.daemon = True
    t.start()

    # command loop
    while 1:
        printout(q)
        if p.poll() is not None or p.stdin.closed:
            break
        cmd = input('Input: ')
        cmd = (cmd + '\n').encode(encoding)
        p.stdin.write(cmd)

    # tear down
    for n in range(4):
        rc = p.poll()
        if rc is not None:
            break
        time.sleep(0.25)
    else:
        p.terminate()
        rc = p.poll()
        if rc is None:
            rc = 1

    printout(q)
    print('\nReturn Code: %d' % rc)
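For Unix users, a minimal, untested sketch of the pexpect route mentioned above (the script name is a hypothetical Unix port of the batch file):
import pexpect

# spawn gives the child a real pty, so prompt-driven programs behave
# as if run from a terminal
child = pexpect.spawn('./shellcomm.sh', encoding='utf-8')
child.expect('Type the correct command string:')
print(child.before)                 # everything printed before the prompt
child.sendline('command string')
child.expect(pexpect.EOF)
print(child.before)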
