Testing interactive Python programs

I would like to know which testing tools for Python support the testing of interactive programs. For example, I have an application launched by:
$ python dummy_program.py
>> Hi whats your name? Joseph
I would like to automate supplying "Joseph" at the prompt so I can emulate that interactive behaviour.

If you are testing an interactive program, consider using expect. It's designed specifically for interacting with console programs, though it is geared more toward automating tasks than testing.
If you don't like the language expect is built on (Tcl), you can try pexpect, which also makes it easy to interact with a console program.
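For instance, a minimal pexpect sketch for the program in the question might look like this (the prompt text is copied from the example above; note that expect() treats its argument as a regular expression, so we match a substring without metacharacters):
import pexpect

# Spawn the program under test and wait for its prompt
child = pexpect.spawn('python dummy_program.py')
child.expect('whats your name')

# Type the answer, as a user would, then let the program run to completion
child.sendline('Joseph')
child.expect(pexpect.EOF)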

Your best bet is probably dependency injection, so that what you'd ordinarily pick up from sys.stdin (for example) is actually an object passed in. So you might do something like this:
import sys

def myapp(stdin, stdout):
    print >> stdout, "Hi, what's your name?"
    name = stdin.readline()
    print >> stdout, "Hi,", name

# This might be in a separate test module
def test_myapp():
    # Any objects with .readline() and .write() methods will do;
    # StringIO instances are a convenient stand-in.
    from StringIO import StringIO
    mock_stdin = StringIO("Joseph\n")
    mock_stdout = StringIO()
    myapp(mock_stdin, mock_stdout)
    assert "Joseph" in mock_stdout.getvalue()

if __name__ == '__main__':
    myapp(sys.stdin, sys.stdout)
Fortunately, Python makes this pretty easy. Here's a more detailed link for an example of mocking stdin: http://konryd.blogspot.com/2010/05/mockity-mock-mock-some-love-for-mock.html

A good example might be the file test_embed.py of the IPython package.
Two different approaches are used there:
subprocess
import subprocess
# ...
p = subprocess.Popen(cmd, env=env, stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate(_exit_cmd_string)
pexpect (as already mentioned by Brian Oakley)
import pexpect
# ...
child = pexpect.spawn(sys.executable, ['-m', 'IPython', '--colors=nocolor'],
                      env=env)
# ...
child.sendline("some_command")
child.expect(ipy_prompt)

Related

Python subprocess always waits for program [duplicate]

I'm trying to port a shell script to the much more readable Python version. The original shell script starts several processes (utilities, monitors, etc.) in the background with "&". How can I achieve the same effect in Python? I'd like these processes not to die when the Python script completes. I am sure it's somehow related to the concept of a daemon, but I couldn't find an easy way to do this.
While jkp's solution works, the newer way of doing things (and the way the documentation recommends) is to use the subprocess module. For simple commands it's equivalent, but it offers more options if you want to do something complicated.
Example for your case:
import subprocess
subprocess.Popen(["rm","-r","some.file"])
This will run rm -r some.file in the background. Note that calling .communicate() on the object returned from Popen will block until it completes, so don't do that if you want it to run in the background:
import subprocess
ls_output = subprocess.Popen(["sleep", "30"])
ls_output.communicate() # Will block for 30 seconds
See the documentation here.
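If you instead want to check on a background child without blocking, its poll() method is one option; a minimal sketch:
import subprocess
import time

p = subprocess.Popen(["sleep", "30"])
# poll() returns None while the child is still running,
# and its exit code once it has finished
while p.poll() is None:
    time.sleep(1)
print("exit code: %d" % p.returncode)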
Also, a point of clarification: "Background" as you use it here is purely a shell concept; technically, what you mean is that you want to spawn a process without blocking while you wait for it to complete. However, I've used "background" here to refer to shell-background-like behavior.
Note: This answer is less current than it was when posted in 2009. Using the subprocess module shown in other answers is now recommended in the docs:
(Note that the subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using these functions.)
If you want your process to start in the background you can either use system() and call it in the same way your shell script did, or you can spawn it:
import os
os.spawnl(os.P_DETACH, 'some_long_running_command')
(or, alternatively, you may try the os.P_NOWAIT flag; note that os.P_DETACH is Windows-only, while os.P_NOWAIT also works on Unix).
See the documentation here.
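For comparison, a minimal sketch of the P_NOWAIT variant, which returns the child's PID immediately instead of waiting (spawnlp is Unix-only, since it searches PATH):
import os

# P_NOWAIT makes the call return at once with the child's PID
pid = os.spawnlp(os.P_NOWAIT, 'sleep', 'sleep', '30')
print('spawned pid %d' % pid)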
You probably want the answer to "How to call an external command in Python".
The simplest approach is to use the os.system function, e.g.:
import os
os.system("some_command &")
Basically, whatever you pass to the system function will be executed the same as if you'd passed it to the shell in a script.
I found this here:
On Windows (Win XP), the parent process will not finish until longtask.py has finished its work. It is not what you want in a CGI script. The problem is not specific to Python; in the PHP community the problems are the same.
The solution is to pass the DETACHED_PROCESS creation flag to the underlying CreateProcess call in the Win32 API. If you happen to have pywin32 installed, you can import the flag from the win32process module; otherwise you should define it yourself:
import subprocess
import sys

DETACHED_PROCESS = 0x00000008
pid = subprocess.Popen([sys.executable, "longtask.py"],
                       creationflags=DETACHED_PROCESS).pid
Use subprocess.Popen() with the close_fds=True parameter, which will allow the spawned subprocess to be detached from the Python process itself and continue running even after Python exits.
https://gist.github.com/yinjimmy/d6ad0742d03d54518e9f
import os, time, sys, subprocess

if len(sys.argv) == 2:
    time.sleep(5)
    print 'track end'
    if sys.platform == 'darwin':
        subprocess.Popen(['say', 'hello'])
else:
    print 'main begin'
    subprocess.Popen(['python', os.path.realpath(__file__), '0'], close_fds=True)
    print 'main end'
Both capture output and run in the background with threading
As mentioned in this answer, if you capture the output with stdout= and then try to read(), the process blocks.
However, there are cases where you need this. For example, I wanted to launch two processes that talk over a port between them, and save their stdout both to log files and to my own stdout.
The threading module allows us to do that.
First, have a look at how to do the output redirection part alone in this question: Python Popen: Write to stdout AND log file simultaneously
Then:
main.py
#!/usr/bin/env python3
import os
import subprocess
import sys
import threading
def output_reader(proc, file):
    while True:
        byte = proc.stdout.read(1)
        if byte:
            sys.stdout.buffer.write(byte)
            sys.stdout.flush()
            file.buffer.write(byte)
        else:
            break

with subprocess.Popen(['./sleep.py', '0'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc1, \
     subprocess.Popen(['./sleep.py', '10'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc2, \
     open('log1.log', 'w') as file1, \
     open('log2.log', 'w') as file2:
    t1 = threading.Thread(target=output_reader, args=(proc1, file1))
    t2 = threading.Thread(target=output_reader, args=(proc2, file2))
    t1.start()
    t2.start()
    t1.join()
    t2.join()
sleep.py
#!/usr/bin/env python3
import sys
import time
for i in range(4):
    print(i + int(sys.argv[1]))
    sys.stdout.flush()
    time.sleep(0.5)
After running:
./main.py
stdout gets updated every 0.5 seconds, two lines at a time, to eventually contain:
0
10
1
11
2
12
3
13
and each log file contains the respective log for a given process.
Inspired by: https://eli.thegreenplace.net/2017/interacting-with-a-long-running-child-process-in-python/
Tested on Ubuntu 18.04, Python 3.6.7.
You probably want to start by investigating the os module for forking a separate process (open an interactive session and issue help(os)). The relevant functions are fork and the exec family. To give you an idea of how to start, put something like this in a function that performs the fork (the function needs to take a list or tuple 'args' as an argument, containing the program's name and its parameters; you may also want to define stdin, stdout and stderr for the new process):
import os
import sys

try:
    pid = os.fork()
except OSError, e:
    ## some debug output
    sys.exit(1)
if pid == 0:
    ## eventually use os.putenv(..) to set environment variables
    ## os.execv uses args[0] as the program and args as its argument list
    os.execv(args[0], args)
You can use
import os

pid = os.fork()
if pid == 0:
    # child process: continue with other code ...
This will make the Python process run in the background.
I haven't tried this yet, but using .pyw files instead of .py files should help. .pyw files don't get a console, so in theory the program should not show a window and should work like a background process.

Controlling a terminal application with Python

I have a server program that runs in a console. (Specifically, a Bukkit Minecraft server.) I'd like to be able to control this program and read its output. There is no GUI, so it shouldn't be too hard, right?
Anyway, I have never controlled a console in python and am totally stuck. Any suggestions?
P.S. I'm using Debian Linux, so that should simplify things a bit.
I've gotten a pretty good answer, but I need one more thing: I want some way to print the FULL output of the program to the Python console (line by line is fine), and I need some way for commands typed in the console to be forwarded to the program.
The canonical answer for a task like this is to use Pexpect.
Pexpect is a Python module for spawning child applications and controlling
them automatically. Pexpect can be used for automating interactive applications
such as ssh, ftp, passwd, telnet, etc. It can be used to automate setup
scripts for duplicating software package installations on different servers. It
can be used for automated software testing. Pexpect is in the spirit of Don
Libes' Expect, but Pexpect is pure Python. Other Expect-like modules for Python
require TCL and Expect or require C extensions to be compiled. Pexpect does not
use C, Expect, or TCL extensions. It should work on any platform that supports
the standard Python pty module. The Pexpect interface focuses on ease of use so
that simple tasks are easy.
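As a rough sketch (the launch command and the expected response text here are assumptions, not Bukkit specifics; adjust both to your server), driving a console server with pexpect might look like:
import sys
import pexpect

# Hypothetical launch command; substitute your actual server invocation
child = pexpect.spawn('java -jar craftbukkit.jar nogui', encoding='utf-8')
child.logfile_read = sys.stdout  # echo everything the server prints, line by line

# Forward a command to the server and wait for some expected output
child.sendline('list')
child.expect('players online')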
You can try to create an interactive shell inside python, something like:
import sys
import os
import subprocess
from subprocess import Popen, PIPE
import threading

class LocalShell(object):
    def __init__(self):
        pass

    def run(self):
        env = os.environ.copy()
        p = Popen('open -a Terminal -n', stdin=PIPE, stdout=PIPE,
                  stderr=subprocess.STDOUT, shell=True, env=env)
        sys.stdout.write("Started Local Terminal...\r\n\r\n")

        def writeall(p):
            while True:
                # print("read data: ")
                data = p.stdout.read(1).decode("utf-8")
                if not data:
                    break
                sys.stdout.write(data)
                sys.stdout.flush()

        writer = threading.Thread(target=writeall, args=(p,))
        writer.start()
        try:
            while True:
                d = sys.stdin.read(1)
                if not d:
                    break
                self._write(p, d.encode())
        except EOFError:
            pass

    def _write(self, process, message):
        process.stdin.write(message)
        process.stdin.flush()
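Note that 'open -a Terminal -n' is macOS-specific; on Debian you would substitute the command that starts your server process. Usage is then simply:
shell = LocalShell()
shell.run()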

Interactive Python script output stored in some file

How do I perform logging of all activities that are done by a Python script and all scripts that are called from it?
I had several Bash scripts, but now I have written a Python script which calls all of these Bash scripts. I would like to have all output produced by these scripts stored in some file.
The script is an interactive Python script, i.e. it contains raw_input lines, so I couldn't just do 'python script.py | tee log.txt' for the whole script, since for some reason the questions are then not shown on the screen.
Here is an excerpt from the script which calls one of the shell scripts.
cmd = "somescript.sh"
try:
retvalue = subprocess.check_call(cmd, shell=True)
except subprocess.CalledProcessError:
print ("script command has been failed")
sys.exit("exit from script")
What do you think could be done here?
Edit
Two subquestions based on Alex's answer:
How do I make the answers to the questions get stored in the output file as well? For example, on the line ok = raw_input(prompt) the user will be asked a question, and I would like the answer logged as well.
I read about Popen and communicate and didn't use them, since communicate buffers the data in memory. Here the amount of output is big, and I need to take care of standard error along with standard output as well. Do you know if this can be handled with Popen and communicate as well?
Making Python's own prints go to both the terminal and a file is not hard:
>>> import sys
>>> class tee(object):
... def __init__(self, fn='/tmp/foo.txt'):
... self.o = sys.stdout
... self.f = open(fn, 'w')
... def write(self, s):
... self.o.write(s)
... self.f.write(s)
...
>>> sys.stdout = tee()
>>> print('hello world!')
hello world!
>>>
$ cat /tmp/foo.txt
hello world!
This should work both in Python 2 and Python 3.
To similarly direct the output from subcommands, don't use
retvalue = subprocess.check_call(cmd, shell=True)
which lets cmd's output go to its regular "standard output", but rather grab and re-emit it yourself, as follows:
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
so, se = p.communicate()
print(so)
retvalue = p.returncode
assuming you don't care about standard-error (only standard-output) and the amount of output from cmd is reasonably small (since .communicate buffers that data in memory) -- it's easy to tweak if either assumption doesn't correspond to what you exactly want.
Edit: the OP has now clarified the specs in a long comment to this answer:
How do I make the answers to the questions get stored in the output file as well? For example, on the line ok = raw_input(prompt) the user will be asked a question, and I would like the answer logged as well.
Use a function such as:
def echoed_input(prompt):
    response = raw_input(prompt)
    sys.stdout.f.write(response)
    return response
instead of just raw_input in your application code (of course, this is written specifically to cooperate with the tee class I showed above).
I read about Popen and communicate and didn't use them, since communicate buffers the data in memory. Here the amount of output is big, and I need to take care of standard error along with standard output as well. Do you know if this can be handled with Popen and communicate as well?
communicate is fine as long as you don't get more output (and standard-error) than comfortably fits in memory, say a few gigabytes at most depending on the kind of machine you're using.
If this hypothesis is met, just recode the above as, instead:
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)
so, se = p.communicate()
print(so)
retvalue = p.returncode
i.e., just redirect the subcommand's stderr to get mixed into its stdout.
If you DO have to worry about gigabytes (or whatever) coming at you, then
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)
for line in p.stdout:
    sys.stdout.write(line)
p.wait()
retvalue = p.returncode
(which gets and emits one line at a time) may be preferable (this depends on cmd not expecting anything from its standard input, of course... because, if it is expecting anything, it's not going to get it, and the problem starts to become challenging;-).
Python has a tracing module: trace. Usage: python -m trace --trace file.py
If you want to capture the output of any script, then on a *nix-y system you can redirect stdout and stderr to a file:
./script.py >> /tmp/outputs.txt 2>> /tmp/outputs.txt
If you want everything done by the scripts, not just what they print, then the python trace module won't trace things done by external scripts that your python executes. The only thing that can trace every action done by a program would be something like DTrace, if you are lucky enough to have a system that supports it. (OS X Instruments are based on DTrace)

How to call a Perl script from Python, piping input to it?

I'm hacking some support for DomainKeys and DKIM into an open source email marketing program, which uses a Python script to send the actual emails via SMTP. I decided to go the quick and dirty route and just write a Perl script that accepts an email message from STDIN, signs it, then returns it signed.
What I would like to do, from the Python script, is pipe the email text that's in a string to the Perl script, and store the result in another variable, so I can send the email signed. I'm not exactly a Python guru, however, and I can't seem to find a good way to do this. I'm pretty sure I can use something like os.system for this, but piping a variable to the Perl script is something that seems to elude me.
In short: How can I pipe a variable from a python script, to a perl script, and store the result in Python?
EDIT: I forgot to mention that the system I'm working with only has Python 2.3.
Use subprocess. Here is the Python script:
#!/usr/bin/python
import subprocess
var = "world"
pipe = subprocess.Popen(["./x.pl", var], stdout=subprocess.PIPE)
result = pipe.stdout.read()
print result
And here is the Perl script:
#!/usr/bin/perl
use strict;
use warnings;
my $name = shift;
print "Hello $name!\n";
os.popen2() will return a tuple with the stdin and stdout of the subprocess (plain os.popen() returns just a single pipe object).
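Since the question mentions being stuck on Python 2.3 (the subprocess module was only added in 2.4), a sketch of the os.popen2 route follows; the script name and variable are placeholders:
import os

# sign.pl and email_text stand in for your actual script and message
child_stdin, child_stdout = os.popen2('./sign.pl')
child_stdin.write(email_text)
child_stdin.close()
signed = child_stdout.read()
child_stdout.close()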
from subprocess import Popen, PIPE
p = Popen(['./foo.pl'], stdin=PIPE, stdout=PIPE)
p.stdin.write(the_input)
p.stdin.close()
the_output = p.stdout.read()
"I'm pretty sure I can use something like os.system for this, but piping a variable to the perl script is something that seems to elude me."
Correct. The subprocess module is like os.system, but provides the piping features you're looking for.
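For example, a minimal sketch using communicate(), which writes the input and collects the output in one call (the script name and variable are placeholders):
from subprocess import Popen, PIPE

# Pipe the message to the Perl script's STDIN and capture its STDOUT
p = Popen(['./sign.pl'], stdin=PIPE, stdout=PIPE)
signed, _ = p.communicate(email_text)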
I'm sure there's a reason you're going down the route you've chosen, but why not just do the signing in Python?
How are you signing it? Maybe we could provide some assistance in writing a Python implementation?
I also tried to do that, and this is how I got it to work:
import subprocess

pipe = subprocess.Popen(
    ['someperlfile.perl', 'param(s)'],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE
)
response = pipe.communicate()[0]
I hope this helps you make it work.

How to capture Python interpreter's and/or CMD.EXE's output from a Python script?

Is it possible to capture Python interpreter's output from a Python script?
Is it possible to capture Windows CMD's output from a Python script?
If so, which librar(y|ies) should I look into?
If you are talking about the Python interpreter or CMD.exe that is the 'parent' of your script, then no, it isn't possible. In every POSIX-like system (you seem to be running Windows now, and that might have some quirk I don't know about, YMMV) each process has three streams: standard input, standard output and standard error. By default (when running in a console) these are directed to the console, but redirection is possible using the pipe notation:
python script_a.py | python script_b.py
This ties the standard output stream of script A to the standard input stream of script B. Standard error still goes to the console in this example. See the article on standard streams on Wikipedia.
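A minimal sketch of what script_b.py could look like on the receiving end (the 'got:' prefix is just for illustration):
import sys

# script_b.py: consume whatever script_a.py writes to its stdout
for line in sys.stdin:
    sys.stdout.write('got: ' + line)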
If you're talking about a child process, you can launch it from Python like so (stdin is also an option if you want two-way communication):
import subprocess
# Of course you can open things other than python here :)
process = subprocess.Popen(["python", "main.py"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
x = process.stderr.readline()
y = process.stdout.readline()
process.wait()
See the Python subprocess module for information on managing the process. For communication, the process.stdin and process.stdout pipes are considered standard file objects.
For use with pipes, reading from standard input as lassevk suggested, you'd do something like this:
import sys
x = sys.stderr.readline()
y = sys.stdin.readline()
sys.stdin and sys.stdout are standard file objects as noted above, defined in the sys module. You might also want to take a look at the pipes module.
Reading data with readline() as in my example is a pretty naïve way of getting data, though. If the output is not line-oriented or is nondeterministic, you probably want to look into polling, which unfortunately does not work on Windows, but I'm sure there's some alternative out there.
I think I can point you to a good answer for the first part of your question.
1. Is it possible to capture the Python interpreter's output from a Python script?
The answer is "yes", and personally I like the following lifted from the examples in the PEP 343 -- The "with" Statement document.
from contextlib import contextmanager
import sys

@contextmanager
def stdout_redirected(new_stdout):
    saved_stdout = sys.stdout
    sys.stdout = new_stdout
    try:
        yield None
    finally:
        sys.stdout.close()
        sys.stdout = saved_stdout
And used like this:
with stdout_redirected(open("filename.txt", "w")):
    print "Hello world"
A nice aspect of it is that it can be applied selectively around just a portion of a script's execution, rather than its entire extent, and stays in effect even when unhandled exceptions are raised within its context. If you re-open the file in append-mode after its first use, you can accumulate the results into a single file:
with stdout_redirected(open("filename.txt", "w")):
    print "Hello world"
print "screen only output again"
with stdout_redirected(open("filename.txt", "a")):
    print "Hello world2"
Of course, the above could also be extended to redirect sys.stderr to the same or another file. Also see this answer to a related question.
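A minimal sketch of that extension, following the same pattern (this is the obvious analogue, not something taken from the PEP):
from contextlib import contextmanager
import sys

@contextmanager
def stderr_redirected(new_stderr):
    # Same idea as stdout_redirected, applied to sys.stderr
    saved_stderr = sys.stderr
    sys.stderr = new_stderr
    try:
        yield None
    finally:
        sys.stderr.close()
        sys.stderr = saved_stderr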
Actually, you definitely can, and it's beautiful, ugly, and crazy at the same time!
You can replace sys.stdout and sys.stderr with StringIO objects that collect the output.
Here's an example, save it as evil.py:
import sys
import StringIO
s = StringIO.StringIO()
sys.stdout = s
print "hey, this isn't going to stdout at all!"
print "where is it ?"
sys.stderr.write('It actually went to a StringIO object, I will show you now:\n')
sys.stderr.write(s.getvalue())
When you run this program, you will see that:
nothing went to stdout (where print usually prints to)
the first string that gets written to stderr is the one starting with 'It'
the next two lines are the ones that were collected in the StringIO object
Replacing sys.stdout/err like this is an application of what's called monkeypatching. Opinions vary on whether or not this is 'supported', and it is definitely an ugly hack, but it has saved my bacon when trying to wrap around external stuff once or twice.
Tested on Linux, not on Windows, but it should work just as well. Let me know if it works on Windows!
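One caution worth adding: if you use this trick outside a throwaway script, restore the real stdout afterwards (ideally in a finally block), or everything you print later disappears too. A sketch:
import sys
import StringIO

s = StringIO.StringIO()
old_stdout = sys.stdout
sys.stdout = s
try:
    print "captured"
finally:
    sys.stdout = old_stdout  # always put the real stdout back
print "the captured text was: %r" % s.getvalue()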
You want subprocess. Look specifically at Popen in 17.1.1 and communicate in 17.1.2.
In which context are you asking?
Are you trying to capture the output from a program you start on the command line?
If so, then this is how to execute it:
somescript.py | your-capture-program-here
and to read the output, just read from standard input.
If, on the other hand, you're executing that script or cmd.exe or similar from within your program, and want to wait until the script/program has finished, and capture all its output, then you need to look at the library calls you use to start that external program, most likely there is a way to ask it to give you some way to read the output and wait for completion.
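For instance, a minimal sketch of that wait-and-capture pattern with subprocess (the dir command is just an illustration; check_output needs Python 2.7 or later):
import subprocess

# Run a CMD.EXE command, wait for it to finish, and collect all of its output
output = subprocess.check_output(['cmd.exe', '/c', 'dir'])
print(output)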
