Popen subprocessing problems - python

I'm trying to learn about the subprocess module and am therefore making an hlds server administrator.
My goal is to be able to start server instances and send all commands through dispatcher.py to administrate multiple servers, e.g. send commands to the subprocesses' stdin.
Here's what I've got so far for some initial testing, but I got stuck already :]
#dispatcher.py
import subprocess
RUN = '/home/daniel/hlds/hlds_run -game cstrike -map de_dust2 -maxplayers 11'
#RUN = "ls -l"
hlds = subprocess.Popen(RUN.split(), stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.PIPE)
print hlds.communicate()[0]
print hlds.communicate()[1]
hlds.communicate('quit')
I am not getting any stdout from the hlds server, but it works fine if I don't set stdout to PIPE. And hlds.communicate('quit') does not seem to reach the hlds process's stdin either. The ls -l command returns stdout correctly, but hlds does not.
All help appreciated! :)

See the Popen.communicate docs (emphasis mine):
Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate. The optional input argument should be a string to be sent to the child process, or None, if no data should be sent to the child.
So you can only call communicate once per run of a process, since it waits for the process to terminate. That's why ls -l seems to work -- it terminates immediately, while hlds doesn't.
You'd need to do:
out, error = hlds.communicate('quit')
if you want to send in quit and get all output until it terminates.
If you need more interactivity, you'll need to use hlds.stdout, hlds.stdin, and hlds.stderr directly.
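If you go the interactive route, a minimal sketch might look like the following. Here cat stands in for hlds_run, since it echoes each stdin line back on stdout; the hlds binary itself and its commands are assumptions I can't test:

```python
import subprocess

# `cat` stands in for the hlds process: it echoes stdin back on stdout.
proc = subprocess.Popen(
    ["cat"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    universal_newlines=True,
    bufsize=1,  # line-buffered
)
proc.stdin.write("status\n")  # send a command without closing stdin
proc.stdin.flush()
response = proc.stdout.readline().strip()  # read one line of the response
print(response)
proc.stdin.write("quit\n")
proc.stdin.close()  # EOF on stdin; cat exits
proc.wait()
```

The key points are flushing after each write and reading line by line, so neither side blocks waiting for the other.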


python sending argument to a running process

I have a Flutter project called zed. My goal is to monitor the output of flutter run: each time I press r, the output grows.
To automate this workflow, my implementation is:
import subprocess
bash_commands = f'''
cd ../zed
flutter run --device-id web-server --web-hostname 192.168.191.6 --web-port 8352
'''
process = subprocess.Popen('/bin/bash', stdin=subprocess.PIPE, stdout=subprocess.PIPE, shell=False)
output, err= process.communicate(bash_commands.encode('utf-8'))
print(output, err)
output, _ = process.communicate('r'.encode('utf-8'))
print(output)
It's not working as I expected, there is nothing printed on the screen.
Use process.stdin.write() instead of process.communicate():
process.stdin.write(bash_commands.encode('utf-8'))
process.stdin.flush()
But why, you ask?
Popen.communicate(input=None, timeout=None)
Interact with process:
Send data to stdin. Read data from stdout and stderr, until
end-of-file is reached
https://docs.python.org/3/library/subprocess.html#subprocess.Popen.communicate
communicate(...) doesn't return until the pipes are closed, which typically happens when the subprocess exits -- and it can only be called once per process. Great for ls -l, not so good for long-running subprocesses.
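If you still want communicate()'s convenience with a long-running child, Python 3.3+ accepts a timeout. A small sketch, with sleep standing in for flutter run:

```python
import subprocess

# `sleep` stands in for a long-running child like `flutter run`.
proc = subprocess.Popen(["sleep", "100"], stdout=subprocess.PIPE)
timed_out = False
try:
    out, _ = proc.communicate(timeout=1)  # raises if the child outlives the timeout
except subprocess.TimeoutExpired:
    timed_out = True
    proc.kill()
    out, _ = proc.communicate()  # reap the killed child, collect any buffered output
```

This doesn't give you interactive back-and-forth (each communicate() still closes stdin), but it does stop a long-running child from hanging your script forever.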

No stdout from killed subprocess

I have a homework assignment to capture a 4-way handshake between a client and an AP using Scapy. I'm trying to use "aircrack-ng capture.pcap" to check for valid handshakes in the capture file I created with Scapy.
I launch the program using Popen. The program waits for user input, so I have to kill it. When I try to get stdout after killing it, the output is empty.
I've tried stdout.read(), I've tried communicate(), I've tried reading stderr, and I've tried it both with and without a shell:
check = Popen("aircrack-ng capture.pcap", shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE)
check.kill()
print(check.stdout.read())
While you shouldn't do this (relying on hardcoded delays is inherently race-condition-prone), you can demonstrate that the issue is your kill() being delivered while sh is still starting up: the problem is "solved" (not reliably, but well enough for a demonstration) by a tiny sleep, just long enough for the shell to start and the echo to run:
import time
from subprocess import Popen, PIPE
check=Popen("echo hello && sleep 1000", shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE)
time.sleep(0.01) # BAD PRACTICE: Race-condition-prone, use one of the below instead.
check.kill()
print(check.stdout.read())
That said, a much better-practice solution is to close the stdin descriptor so that reads on it immediately return 0 bytes. On Python 3.3+, you can do that with DEVNULL:
from subprocess import Popen, PIPE, DEVNULL
check = Popen("echo hello && read input && sleep 1000",
              shell=True, stdin=DEVNULL, stdout=PIPE, stderr=PIPE)
print(check.stdout.read())
...or, on Python 2.x, a similar effect can be achieved by passing an empty string to communicate(), which close()s the stdin pipe immediately:
from subprocess import Popen, PIPE
check = Popen("echo hello && read input && sleep 1000",
              shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE)
print(check.communicate('')[0])
Never, and I mean never, kill a process as part of normal operation. There's no guarantee whatsoever how far it has proceeded by the time you kill it, so you cannot expect any specific results from it in such a case.
To explicitly pass nothing to a subprocess as input, to prevent it hanging when it tries to read stdin:
connect its stdin to /dev/null (nul on Windows), as per "run a process to /dev/null in python":
p = Popen(<...>, stdin=open(os.devnull))  # or stdin=subprocess.DEVNULL in Python 3.3+
or use stdin=PIPE and <process>.communicate() without arguments -- this passes an empty stream.
To read the output reliably, use <process>.communicate(), or subprocess.check_output() instead of Popen.
A process, in the general case, is not guaranteed to output any data at any particular moment due to I/O buffering. So you need to read the output stream after the process completes to be sure you've got everything.
At the same time, you need to keep reading the stream in the meantime if the process can produce enough output to fill an I/O buffer1. Otherwise, it will hang waiting for you to read the buffered data. If both stdout and stderr are PIPEs, you need to read them both, in parallel -- e.g. from different threads.
communicate() and check_output() (which uses the former under the hood) achieve this by reading stdout and stderr in parallel -- via threads or a select() loop, depending on the platform.
Prefer convenience functions to Popen for common use cases -- in your case, check_output -- as they take care of all the aforementioned caveats for you.
1 Pipes are fully buffered, and a typical buffer size is 64 KB.
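For the common run-it-and-collect-the-output case, the convenience function is a one-liner. A minimal sketch, with echo standing in for aircrack-ng:

```python
import subprocess

# check_output() runs the command, waits for it to exit, and returns its
# stdout -- draining the pipe internally so the child can never block on it.
out = subprocess.check_output(["echo", "hello"])
```

It also raises CalledProcessError on a non-zero exit status, so failures don't pass silently.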

Running a nohup command through SSH with Python's Popen then logging out

I'm trying to use this answer to issue a long-running process with nohup using subprocess.Popen, then exit the connection.
From the login shell, if I run
$ nohup sleep 20 &
$ exit
I get the expected behavior, namely that I can log out while the sleep process is still running, and connect back later to check on its status. If I try to do the same through Python, however, it seems as though the exit command does not get executed until the sleep is over.
import subprocess
HOST=<host>
sshProcess = subprocess.Popen(['ssh', HOST],
stdin = subprocess.PIPE,
stdout = subprocess.PIPE,
universal_newlines = True,
bufsize = 0)
sshProcess.stdin.write("nohup sleep 20 &\n")
sshProcess.stdin.write("exit\n")
sshProcess.stdin.close()
What am I missing?
From the docs: Python Docs
Warning: Use communicate() rather than .stdin.write, .stdout.read or .stderr.read to avoid deadlocks due to any of the other OS pipe buffers filling up and blocking the child process.
Alright, pretty sure I've got it now.
About communicate(): [...] Wait for process to terminate
So I guess your earlier solution was better. But if you call communicate() at the end, it shouldn't be a problem -- or just don't call it at all if you don't need the stdout or stderr output.
However, according to this StackOverflow comment, setting preexec_fn=os.setpgrp in your Popen call should make it work.
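A rough sketch of that suggestion -- here plain sh stands in for the ssh process, since I can't assume a reachable host:

```python
import os
import subprocess

# os.setpgrp runs in the child just before exec, putting it in its own
# process group so signals aimed at this script's group don't reach it.
# `sh` stands in for `ssh HOST` here.
proc = subprocess.Popen(
    ["sh"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    universal_newlines=True,
    preexec_fn=os.setpgrp,
)
out, _ = proc.communicate("echo detached\nexit\n")
```

Note that preexec_fn is POSIX-only; on Python 3.2+ the start_new_session=True argument is the documented way to get a similar effect.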

Python subprocesses (ffmpeg) only start once I Ctrl-C the program?

I'm trying to run a few ffmpeg commands in parallel, using Cygwin and Python 2.7.
This is roughly what I have:
import subprocess
processes = set()
commands = ["ffmpeg -i input.mp4 output.avi", "ffmpeg -i input2.mp4 output2.avi"]
for cmd in commands:
    processes.add(
        subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
    )
for process in processes:
    if process.poll() is None:
        process.wait()
Now, once I am at the end of this code, the whole program waits. All the ffmpeg processes are created, but they're idle, i.e., using 0% CPU. And the Python program just keeps waiting. Only when I hit Ctrl-C, it suddenly starts encoding.
What am I doing wrong? Do I have to "send" something to the processes to start them?
This is only a guess, but ffmpeg usually produces a lot of status messages and output on stderr or stdout. You're using subprocess.PIPE to redirect stdout and stderr to pipes, but you never read from them, so once a pipe buffer is full, the ffmpeg process blocks trying to write to it.
When you kill the parent process, the pipes are closed on its end, and probably (I haven't checked) ffmpeg handles the resulting error by simply not writing to the pipe anymore, so it unblocks and starts working.
So either consume the process.stdout and process.stderr pipes in your parent process, or redirect the output to os.devnull if you don't care about it.
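A minimal sketch of the os.devnull option -- yes stands in for ffmpeg here, since it likewise produces endless output:

```python
import os
import subprocess

# `yes` stands in for ffmpeg: it writes output forever.  Sending its
# output to /dev/null means no pipe buffer can ever fill up and block it.
with open(os.devnull, "wb") as devnull:
    proc = subprocess.Popen(["yes"], stdout=devnull, stderr=devnull)
    proc.terminate()              # stop the stand-in child again
    returncode = proc.wait()      # negative: killed by a signal
```

On Python 3.3+ you can pass stdout=subprocess.DEVNULL instead of opening os.devnull yourself.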
In addition to what @mata says, ffmpeg may also be asking whether you want to overwrite output.avi and waiting for you to type "y". To force-overwrite, use the -y command-line option (ffmpeg -i $input -y $output).

python subprocess and reading stdout

What is the proper way of reading a subprocess's stdout?
Here are my files:
traffic.sh
code.py
traffic.sh:
sudo tcpdump -i lo -A | grep Host:
code.py:
import subprocess

proc = subprocess.Popen(['./traffic.sh'], stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
# Do some network stuff like ping places, send an email, open a few web pages and wait for them to finish loading
# Stop all traffic and make sure its over
data = proc.stdout.read()
proc.kill()
The code above sometimes works and sometimes doesn't.
When it fails, it is due to getting stuck on proc.stdout.read().
I have followed a bunch of examples that recommend setting up a thread and a queue for the proc and reading the queue as the proc writes. However, this turned out to work only intermittently.
I feel like I'm doing something wrong with the kill and the read, because I can guarantee that there is no communication happening on lo when I make that call, and therefore traffic.sh should not be printing anything at all.
Then why is the read blocking?
Any clean alternative to the thread?
Edit
I have also tried this, in the hope that the read would no longer block since the process is terminated:
proc = subprocess.Popen(['./traffic.sh'], stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
# Do some network stuff like ping places, send an email, open a few web pages and wait for them to finish loading
# Stop all traffic and make sure its over
proc.kill()
data = proc.stdout.read()
