I am writing a program to interact with a Linux machine through the serial port, and I am using pexpect.spawn as my main communication channel as follows:
proc = pexpect.spawn("cu dir -l /dev/ttyUSB0 -s 115200", logfile=someFile)
and I am sending commands to the machine with the sendline("cmd") method, and at the end of each session I parse the log file to see how the commands behaved.
I would like to be able to distinguish between lines that were printed to stdout and stderr from my log file, but currently I have no way of doing that.
Is there a way to globally prepend each line printed to stderr with a given string?
You don't mention how you capture stdout and stderr, but one simple way to distinguish them is to place stdout and stderr in different files. For example:
./command.py >stdout-log 2>stderr-log
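If you are launching the command from Python rather than a shell, a minimal sketch of the same idea (reusing the file names from the example above):

import subprocess

# Each stream goes to its own file, so the logs stay separate.
with open("stdout-log", "w") as out, open("stderr-log", "w") as err:
    subprocess.run(["./command.py"], stdout=out, stderr=err)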
I think this is a limitation of pexpect. You're basically dealing with a black box command prompt, so pexpect has no knowledge about whether a string returned to the console (effectively) is stdout or stderr, just that something came back. Can you safely assume a limited set of message and error formats in your system so that you could write some regex-based post-processor?
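For instance, a minimal sketch of such a post-processor; the error patterns and log file name are hypothetical and would have to match what your system actually emits:

import re

# Hypothetical patterns for error lines; adjust to your system's real formats.
ERROR_RE = re.compile(r"command not found|No such file|Permission denied")

with open("session.log") as log:  # hypothetical log file name
    for line in log:
        tag = "[stderr?] " if ERROR_RE.search(line) else "[stdout]  "
        print(tag + line, end="")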
A standard feature of Python's subprocess API is to combine STDERR and STDOUT by using the keyword argument
stderr = subprocess.STDOUT
But currently I need the opposite: Redirect the STDOUT of a command to STDERR. Right now I am doing this manually using subprocess.getoutput or subprocess.check_output and printing the result, but is there a more concise option?
Ouroborus mentions in the comments that we ought to be able to do something like
subprocess.run(args, stdout = subprocess.STDERR)
However, the docs don't mention the existence of subprocess.STDERR, and at least on my installation (3.8.10) that doesn't actually exist.
According to the docs,
stdin, stdout and stderr specify the executed program’s standard input, standard output and standard error file handles, respectively. Valid values are PIPE, DEVNULL, an existing file descriptor (a positive integer), an existing file object with a valid file descriptor, and None.
Assuming you're on a UNIX-type system (and if you're not, I'm not sure what you're planning on doing with stdout / stderr anyway), the file descriptors for stdin/stdout/stderr are always the same:
0 is stdin
1 is stdout
2 is stderr
3+ are used for fds you create in your program
So we can do something like
subprocess.run(args, stdout = 2)
to run a process and redirect its stdout to stderr.
Of course I would recommend you save that as a constant somewhere instead of just leaving a raw number 2 there. And if you're on Windows or something you may have to do a little research to see if things work exactly the same.
Update:
A subsequent search suggests that this numbering convention is part of POSIX, and that Windows explicitly obeys it.
Update:
@kdb points out in the comments that sys.stderr will typically satisfy the "an existing file object with a valid file descriptor" condition, making it an attractive alternative to using a raw fd here.
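Putting both options together, a minimal sketch:

import subprocess
import sys

STDERR_FD = 2  # POSIX fd numbering: 0 = stdin, 1 = stdout, 2 = stderr

# Redirect the child's stdout to our stderr via the raw file descriptor...
subprocess.run(["echo", "hello"], stdout=STDERR_FD)

# ...or via the sys.stderr file object, provided it has a real fileno().
subprocess.run(["echo", "hello"], stdout=sys.stderr)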
To elaborate on what I'm doing:
I want to create a web-based CLI for my Raspberry Pi. I want to take a websocket and connect it to this Raspberry Pi script, so that the text I type into the webpage is entered directly into the CLI on the Raspberry Pi, and the response returns to me on the webpage.
My first goal is creating the python script that can properly send a user-inputted command to the CLI and return all responses in the CLI back.
If you just need the return value you can use os.system, but then you won't get the output of stdout and stderr. So you probably have to use the subprocess module, which requires you to split the input text into command and parameters first.
Sounds like you are looking for the Python subprocess module in the standard library. This will allow you to interact with the CLI from a Python script.
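For instance, a minimal sketch of running one user-supplied command and collecting everything it prints back (the function name and example command are illustrative):

import shlex
import subprocess

def run_command(cmd_str):
    # shlex.split() turns the typed string into an argument list;
    # capture_output=True collects both stdout and stderr (Python 3.7+).
    result = subprocess.run(shlex.split(cmd_str), capture_output=True, text=True)
    return result.returncode, result.stdout, result.stderr

code, out, err = run_command("ls -l /tmp")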
The subprocess module will do this for you but has a few quirks. You can pass in file objects to the various calls to bind to stderr and stdout, but they have to be real file objects. StringIO doesn't cut it.
The below uses check_output() as it grabs stdout for us and saves us opening a file. I'm sure there's a fancier way of doing this.
from tempfile import TemporaryFile
from subprocess import check_output, CalledProcessError

def shell(command):
    stdout = None
    # stderr must go to a real file object with a file descriptor,
    # so use a temporary file rather than StringIO.
    with TemporaryFile('w+') as fh:
        try:
            stdout = check_output(command, shell=True, stderr=fh, text=True)
        except CalledProcessError:
            pass
        # Rewind the file handle to read from the beginning
        fh.seek(0)
        stderr = fh.read()
    return stdout, stderr

print(shell("echo hello")[0])
# hello
print(shell("not_a_shell_command")[1])
# /bin/sh: 1: not_a_shell_command: not found
As one of the other posters mentions, you should really sanitize your input to prevent security exploits (and drop the shell=True). To be honest though, your project sounds like you are purposefully building a remote execution exploit for yourself, so it probably doesn't matter.
I need to run an external exe file inside a Python script. I need two things out of this.
Get whatever the exe outputs to stdout (and stderr).
The exe stops executing only after I press the Enter key. I can't change this behavior. I need the script to pass the Enter key input after it gets the output from the previous step.
This is what I have done so far, and I am not sure how to go on after this.
import subprocess
first = subprocess.Popen(["myexe.exe"], shell=True, stdout=subprocess.PIPE)
from subprocess import Popen, PIPE, STDOUT

first = Popen(['myexe.exe'], stdout=PIPE, stderr=STDOUT, stdin=PIPE)
while first.poll() is None:
    # read() would block until EOF, so read one line at a time instead
    data = first.stdout.readline()
    if b'press enter to' in data:
        first.stdin.write(b'\n')
        first.stdin.flush()
first.stdin.close()
first.stdout.close()
This pipes stdin as well; do not forget to close your open file handles (stdin and stdout are also file handles, in a sense).
Also avoid shell=True if at all possible. I use it a lot myself, but best practice says you shouldn't.
I assumed Python 3 here, where stdin and stdout expect and produce bytes as input and output.
first.poll() checks for an exit code from your exe; if it returns None, the process is still running.
Some other tips
Passing arguments to Popen can be tedious; one neat thing to do is:
import shlex
Popen(shlex.split(cmd_str), shell=False)
It preserves space-separated inputs that have quotes around them. For instance, python myscript.py debug "pass this parameter somewhere" results in three parameters in sys.argv: ['myscript.py', 'debug', 'pass this parameter somewhere']. This might be useful in the future when working with Popen.
Another good idea is to check whether there's output in stdout before reading from it; otherwise the read might hang the application. To do this you could use select.
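For example, a minimal sketch using select (POSIX only), reusing the first process from the snippet above:

import select

# Wait up to one second for the child's stdout to have data before reading,
# so that readline() cannot block the whole application.
ready, _, _ = select.select([first.stdout], [], [], 1.0)
if ready:
    data = first.stdout.readline()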
Or you could use pexpect, which is often used with SSH, since programs like ssh ask for input directly on the terminal rather than through your application's pipes. To handle that yourself you would need to fork your exe on a pseudo-terminal manually and read from that specific pid with os.read(); pexpect does this for you.
My understanding of capturing the output of a subprocess command as a string was to set stdout=subprocess.PIPE and use command.communicate() to capture result, error.
For example, typing the following:
command = subprocess.Popen(["nmcli", "con"], stdout=subprocess.PIPE)
res, err = command.communicate()
produces no output to the terminal and stores all my connection information as a bytes literal in the variable res. Simple.
It falls apart for me here though:
url = "http://torrent.ubuntu.com/xubuntu/releases/trusty/release/desktop/xubuntu-14.04.1-desktop-amd64.iso.torrent"
command = subprocess.Popen(["wget", "--spider", url], stdout=subprocess.PIPE)
This prints the output of the command to the terminal, then pauses execution until a keystroke is input by user. Subsequently running command.communicate() returns an empty bytes literal, b''.
Particularly odd to me is the pause in execution as issuing the command in bash just prints the command result and directly returns to the prompt.
All my searches just find Q&A about how to capture subprocess results in general, not anything about certain commands having to be captured in a different manner or anything particular about wget and subprocess.
Additional note, I have been able to use the wget command with subprocess to download files (no --spider option) without issue.
Any help greatly appreciated, this one has me stumped.
wget writes its progress information to stderr, so because you are not piping stderr you see that output when you run the command, while stdout is empty:
from subprocess import Popen, PIPE

url = "http://torrent.ubuntu.com/xubuntu/releases/trusty/release/desktop/xubuntu-14.04.1-desktop-amd64.iso.torrent"
command = Popen(["wget", "--spider", url], stdout=PIPE, stderr=PIPE)
out, err = command.communicate()
print("This is stdout: {}".format(out))
print("This is stderr: {}".format(err))
This is stdout: b''
This is stderr: b'Spider mode enabled. Check if remote file exists.\n--2015-02-09 18:00:28-- http://torrent.ubuntu.com/xubuntu/releases/trusty/release/desktop/xubuntu-14.04.1-desktop-amd64.iso.torrent\nResolving torrent.ubuntu.com (torrent.ubuntu.com)... 91.189.95.21\nConnecting to torrent.ubuntu.com (torrent.ubuntu.com)|91.189.95.21|:80... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 37429 (37K) [application/x-bittorrent]\nRemote file exists.\n\n'
I've never been asked anything by wget before, but some processes (e.g. ssh) do capture the terminal device (tty) directly to get a password, short-cutting the process pipe you've set up.
To automate cases like this, you need to fake a terminal instead of a normal pipe. There are recipes out there using termios and stuff, but my suggestion would be to use the module "pexpect" which is written to do exactly that.
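A minimal sketch with pexpect, where the host, password, and prompt are hypothetical:

import pexpect

# spawn() runs the child on a pseudo-terminal, so prompts that bypass
# ordinary stdin/stdout pipes (like ssh's password prompt) can be automated.
child = pexpect.spawn("ssh user@example.com")  # hypothetical host
child.expect("password:")
child.sendline("secret")                       # hypothetical password
child.expect(r"\$ ")                           # hypothetical shell prompt
child.sendline("exit")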
I have this little script that puts your wireless device into monitor mode. It does an airodump scan and then after terminating the scan dumps the output to file.txt or a variable, so then I can scrape the BSSID and whatever other info I may need.
I feel I haven't grasped the concept or difference between subprocess.call() and subprocess.Popen().
This is what I currently have:
import subprocess
import time

def setup_device():
    try:
        output = open("file.txt", "w")
        put_device_down = subprocess.call(["ifconfig", "wlan0", "down"])
        put_device_mon = subprocess.call(["iwconfig", "wlan0", "mode", "monitor"])
        put_device_up = subprocess.call(["ifconfig", "wlan0", "up"])
        start_device = subprocess.call(["airmon-ng", "start", "wlan0"])
        scanned_networks = subprocess.Popen(["airodump-ng", "wlan0"], stdout=output)
        time.sleep(10)
        scanned_networks.terminate()
    except Exception as e:
        print("Error:", e)
I am still clueless about where, when, and in which way to use subprocess.call() and subprocess.Popen().
The thing that I think is confusing me most is the stdout and stderr args. What is PIPE?
Another thing that I could possibly fix myself once I get a better grasp is this:
When running subprocess.Popen() and running airodump, the console window pops up showing the scan. Is there a way to hide this from the user to sort of clean things up?
You don't have to use Popen() if you don't want to; the other functions in the module, such as call(), use Popen() under the hood and give you a simpler API to do what you want.
All console applications have 3 'file' streams: stdin for input, and stdout and stderr for output. The application decides what to write where; usually error and diagnostic information to stderr, the rest to stdout. If you want to capture the output for either of these outputs in your Python program, you specify the subprocess.PIPE argument so that the 'stream' is redirected into your program. Hence the name.
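As an illustration, a minimal sketch that captures the two streams separately:

import subprocess

# ls writes the listing for /tmp to stdout and the error for the missing
# path to stderr, so the two variables end up with different content.
proc = subprocess.Popen(["ls", "/tmp", "/nonexistent"],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()
print("stdout:", out.decode())
print("stderr:", err.decode())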
If you want to capture the output of the airodump-ng wlan0 command, it's easiest to use the subprocess.check_output() function; it takes care of the PIPE argument for you:
scanned_networks = subprocess.check_output(["airodump-ng", "wlan0"])
Now scanned_networks contains whatever airodump-ng wrote to its stdout stream.
If you need to have more control over the process, then you do need to use the Popen() class:
proc = subprocess.Popen(["airodump-ng", "wlan0"], stdout=subprocess.PIPE)
for line in proc.stdout:
    pass  # do something with line
proc.terminate()