Since input() and raw_input() block the program while they wait for input, I want to use a subprocess to run this program...
while True: print raw_input()
and get its output.
This is what I have as my reading program:
import subprocess
import sys

process = subprocess.Popen('python subinput.py', stdout=subprocess.PIPE, stderr=subprocess.PIPE)
while True:
    output = process.stdout.read(12)
    if output == '' and process.poll() != None:
        break
    if output != '':
        sys.stdout.write(output)
        sys.stdout.flush()
When I run this, the subprocess exits almost as fast as it started. How can I fix this?
I'm afraid it won't work this way.
You assume that the subprocess will attach to your console (your special case of stdin). This does not work; the module only has two options for specifying the streams: PIPE and STDOUT.
When nothing is specified, the subprocess won't be able to use the corresponding stream: its output goes nowhere, or it receives no input. The raw_input() ends because of EOF.
The way to go is to have your input in the "main" program,
and the work done in a subprocess.
EDIT:
Here's an example in multiprocessing
from multiprocessing import Process, Pipe
import time

def child(conn):
    while True:
        print "Processing..."
        time.sleep(1)
        if conn.poll(0):
            output = conn.recv()
            print output
        else:
            print "I got nothing this time"

def parent():
    parent_conn, child_conn = Pipe()
    p = Process(target=child, args=(child_conn,))
    p.start()
    while True:
        data = raw_input()
        parent_conn.send(data)
    # p.join() - you have to find some way to stop all this...
    # like a specific message to quit etc.

if __name__ == '__main__':
    parent()
You of course need to make it more robust by finding a way to stop
this cooperation (see the sketch at the end of this answer). In my example both processes are in the same file,
but you may organize it differently.
This example works on Linux, you may have some problems with pipes on Windows,
but it should be altogether solvable.
The "Processing" is the part where you want to do something else, not just
wait for the data from the parent.
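One way to stop the cooperation is a sentinel message. This is a minimal sketch of that variant of the example above; the string 'quit' is an arbitrary sentinel chosen here:

from multiprocessing import Process, Pipe

SENTINEL = 'quit'   # arbitrary message chosen for this sketch

def child(conn):
    while True:
        if conn.poll(1):            # wait up to a second for data
            msg = conn.recv()
            if msg == SENTINEL:     # the parent asked us to stop
                break
            print msg

def parent():
    parent_conn, child_conn = Pipe()
    p = Process(target=child, args=(child_conn,))
    p.start()
    while True:
        data = raw_input()
        parent_conn.send(data)
        if data == SENTINEL:        # forward the sentinel, then stop ourselves
            break
    p.join()                        # now the join is safe

if __name__ == '__main__':
    parent()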
I think the problem is that subprocesses are not directly hooked up to stdout and stdin, and therefore cannot receive keyboard input. Presumably raw_input() is throwing an exception.
If this is a practical issue and not an experiment, I recommend you use a library such as curses or pygame to handle your input. If you're experimenting and want to do it yourself, then I suppose you'll have to look at threads instead of subprocesses, though this is a fairly complex thing to try to do so you're certain to run into other issues.
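If you do experiment with threads, a minimal sketch (Python 2, to match the question) keeps raw_input() in the main thread, which owns the console, and runs the busy work in a daemon thread:

import threading
import time

def worker():
    # the "work" loop, running alongside the input prompt
    while True:
        time.sleep(1)
        print "working..."

t = threading.Thread(target=worker)
t.daemon = True      # dies when the main thread exits
t.start()

while True:
    line = raw_input()   # the main thread keeps the console
    print "you typed:", line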
Well, try a different architecture. You can use zeromq.
The producer produces all the items (here: the output that would be sent via stdout) and broadcasts them via zmq.
The consumer listens on the port the producer publishes on and processes the messages accordingly.
Here is the Example http://code.saghul.net/implementing-a-pubsub-based-application-with
Note
Use gevent or multiprocessing to spawn these processes.
You will have a master program which takes care of spawning the producer and the consumer.
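A minimal PUB/SUB sketch with pyzmq; port 5556 and the message text are arbitrary choices for this sketch, and the producer and consumer would normally run as separate processes:

import zmq

def producer():
    ctx = zmq.Context()
    pub = ctx.socket(zmq.PUB)
    pub.bind("tcp://*:5556")          # port chosen arbitrarily
    while True:
        pub.send_string(u"some output line")

def consumer():
    ctx = zmq.Context()
    sub = ctx.socket(zmq.SUB)
    sub.connect("tcp://localhost:5556")
    sub.setsockopt_string(zmq.SUBSCRIBE, u"")  # subscribe to everything
    while True:
        print(sub.recv_string())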
Related
I'm trying to manage a game server (a server for players to join, I didn't create the game) through a Python module. I noticed, however, that the server stops when the Python script stops to ask for input (from input()). Is there any way around this?
The server is run as a subprocess:
server = subprocess.Popen("D:\Windows\System32\cmd.exe", stdin=subprocess.PIPE, stdout=subprocess.PIPE) followed by server.stdin.write calls to run the server exe file
The server seems to work fine if ran without a stdout pipe, but I still need to receive output from it without it stopping if possible.
I apologize for the vague question and my lack of python knowledge.
It sounds like you want to do two things:
Service a subprocess's stdout.
Wait for user input on input.
And you need to do them both simultaneously, and in something close to real time: while you block reading from the subprocess, the user can't enter any commands, and while you block reading user input, the subprocess hangs on a stalled pipe.
The simplest way to do this is to just use a thread for each.
Without seeing any code, it's hard to show a good example, but something like this:
def service_proc_stdout(proc):
    while True:
        buf = proc.stdout.readline()  # readline() rather than a bare read(), which would block until EOF
        if not buf:
            break                     # EOF: the subprocess exited
        do_proc_stuff(buf)

proc = subprocess.Popen(…)
t = threading.Thread(target=service_proc_stdout, args=(proc,))
t.start()

while True:
    command = input()
    do_command_stuff(command)
It sounds like your do_command_stuff is writing to proc.stdin. That may just work, but it's possible that proc.stdin may block if you push input into it too fast, preventing you from reading user input. If you need to solve that, just start a third thread:
def service_proc_stdin(q, proc):
    while True:
        msg = q.get()
        proc.stdin.write(msg)

q = queue.Queue()
tstdin = threading.Thread(target=service_proc_stdin, args=(q, proc))
tstdin.start()
… and now, instead of directly calling proc.stdin.write(…), you call q.put(…).
Threads aren't the only way to handle the concurrency here. For example, you could use an asyncio event loop, or a manual selectors loop around non-blocking pipes. But it's probably the simplest change, at least if you don't need to share or pass anything between the threads beyond messages you push onto a queue.
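For reference, a minimal asyncio sketch (Python 3.7+) that services the subprocess's stdout without a thread; 'some_server' and '--flag' are placeholders, and wiring user input into the same loop would still need extra work:

import asyncio

async def service_proc_stdout():
    proc = await asyncio.create_subprocess_exec(
        'some_server', '--flag',               # hypothetical command
        stdout=asyncio.subprocess.PIPE)
    while True:
        line = await proc.stdout.readline()
        if not line:                           # EOF: the process exited
            break
        do_proc_stuff(line)                    # same hook as in the threaded version

asyncio.run(service_proc_stdout())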
Is there an easy way of gathering the output of a subprocess without actually waiting for it?
I can think of creating a subprocess.Popen() that captures its stdout and then calling p.communicate(), but that would block until the subprocess terminates.
I can think of using subprocess.check_output() or similar, but that also would block.
I need something which I can start, then do other stuff, then check the subprocess for being terminated, and in case it is, takes its output.
I can think of two rather complicated ways to achieve this:
Redirect the output into a file, then after termination I can read the output from that file.
Implement and start a handler thread(!) which constantly tries to read data from the stdout of the subprocess and adds it to a buffer.
The first one needs temporary files and disk I/O which I do not really like in my case. The second one means implementing quite a bit.
I guess there might be a simpler way I couldn't think of yet, or a ready-to-be-used solution in some library I didn't find yet.
What's wrong with calling check_output in a thread?
import threading, subprocess

output = ""

def f():
    global output
    output = subprocess.check_output("ls")  # ["cmd", "/c", "dir"] for Windows

t = threading.Thread(target=f)
t.start()
print('Started')
t.join()
print(output)
Note that one could be tempted to use p = subprocess.Popen(cmd, stdout=subprocess.PIPE), wait for p.poll() to become not None, and try to read p.stdout afterwards: that only works when the output is small; otherwise you get a deadlock, because the stdout buffer fills up and you have to read it from time to time.
Using p.stdout.readline() would work, but it also blocks if the process doesn't print on a regular basis. If your application prints to the output all the time, then you can consider it non-blocking and the solution is acceptable.
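On POSIX you can also poll the pipe with select before reading, so a quiet subprocess never blocks you. A minimal sketch, assuming 'ls -l' as a placeholder command:

import os
import select
import subprocess

p = subprocess.Popen(['ls', '-l'], stdout=subprocess.PIPE)  # placeholder command
chunks = []
while True:
    # wait up to 0.1 s for data; select on pipe objects works on POSIX only
    ready, _, _ = select.select([p.stdout], [], [], 0.1)
    if ready:
        data = os.read(p.stdout.fileno(), 4096)  # read only what is there
        if not data:       # EOF: the process closed its end
            break
        chunks.append(data)
    # meanwhile you are free to do other work here
print(b''.join(chunks))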
I think what you want is an unbuffered stdout stream.
With that you will be able to capture the output of your process without waiting for it to finish.
You can achieve that with the subprocess.Popen() function and the parameter stdout=subprocess.PIPE.
Try something like this
proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
line = proc.stdout.readline()
while line:
    print line
    line = proc.stdout.readline()
Update 2: So I piped the output of stderr, and it looks like when I include shell=True, I just get the help file for omxplayer (it lists all the command line switches and such). Is it possible that shell=True might not play nicely with omxplayer?
Update: I came across that link before, but it failed on me, so I moved on without digging deeper. After Tshepang suggested it again, I looked into it further. I have two problems, and I'm hoping the first is caused by the second. The first problem is that when I include shell=True as an arg, the video never plays. If I don't include it, the video plays but is never killed. Updated code below.
So I am trying to write a Python app for my Raspberry Pi that plays a video on a loop (I came across Popen as a good way to accomplish this using omxplayer) and then, on keyboard interrupt, kills that process and opens another process (playing a different video). My eventual goal is to be able to use vid1 as a sort of "screensaver" and have vid2 play when a user interacts with the system, but for now I'm simply trying to kill vid1 on keyboard input and running into quite the hard time doing it. I'm hoping someone can tell me where my code is falling down.
Forewarning that I'm extremely new to Python, and Linux-based systems in general, so if I'm doing this terribly wrong, please feel free to redirect me, but this seemed to be the fastest way to get there.
Here is my code as it stands:
import subprocess
import os
import signal

vid1 = ['omxplayer', '--loop', '/home/pi/Vids/2779832.mp4']
while True:
    #vid = subprocess.Popen(['omxplayer', '--loop', '/home/pi/Vids/2779832.mp4'], stdout=subprocess.PIPE, shell=True, preexec_fn=os.setsid)
    vid = subprocess.Popen(vid1, stdout=subprocess.PIPE, preexec_fn=os.setsid)
    print 'SID is: ', vid.pid  # after os.setsid, the child's pid doubles as its session/group id
    #vid = subprocess.Popen(['omxplayer', '--loop', '/home/pi/Vids/2779832.mp4'])
    id = raw_input()
    if not id:
        break
    os.killpg(vid.pid, signal.SIGTERM)
    print "your input: ", id
print "While loop has exited"
So I am trying to write a python app for my raspberry pi that plays a video on a loop (I came across Popen as a good way to accomplish this using OMXplayer) and then on keyboard interrupt, it kills that process and opens another process (playing a different video).
By default, SIGINT is propagated to all processes in the foreground process group, see "How Ctrl+C works". preexec_fn=os.setsid (or os.setpgrp) actually prevents it: use it only if you do not want omxplayer to receive Ctrl+C, i.e., use it if you manually call os.killpg when you need to kill a process tree (assuming omxplayer's children do not change their process group); a sketch of that variant follows after the example below.
A "keyboard interrupt" (the SIGINT signal) is visible as the KeyboardInterrupt exception in Python. Your code should catch it:
#!/usr/bin/env python
from subprocess import call, check_call

try:
    rc = call(['omxplayer', 'first file'])
except KeyboardInterrupt:
    check_call(['omxplayer', 'second file'])
else:
    if rc != 0:
        raise RuntimeError('omxplayer failed to play the first file, '
                           'return code: %d' % rc)
The above assumes that omxplayer exits on Ctrl+C.
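If you do go the preexec_fn=os.setsid route instead, a minimal sketch of killing the whole process group by hand (Ctrl+C will then no longer reach omxplayer; the video path is copied from the question):

import os
import signal
import subprocess

vid = subprocess.Popen(['omxplayer', '--loop', '/home/pi/Vids/2779832.mp4'],
                       preexec_fn=os.setsid)  # new session: the child's pgid == vid.pid
try:
    raw_input('Press Enter to stop the video...')
finally:
    os.killpg(vid.pid, signal.SIGTERM)        # signal omxplayer and any children it spawned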
You could see the help message for several reasons, e.g., omxplayer does not support the --loop option (run it manually to check), or you mistakenly used shell=True while passing the command as a list. Always pass the command as a single string if you need shell=True and, conversely, always (on POSIX) pass the command as a list of arguments if shell=False (the default).
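A sketch of the two valid combinations, reusing the omxplayer command from the question:

import subprocess

# shell=True: pass one string; the shell does the word splitting
subprocess.Popen('omxplayer --loop /home/pi/Vids/2779832.mp4', shell=True)

# shell=False (the default, on POSIX): pass a list of arguments, no shell involved
subprocess.Popen(['omxplayer', '--loop', '/home/pi/Vids/2779832.mp4'])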
I am using the multiprocessing module in Python to launch a few processes in parallel. These processes are independent of each other. They generate their own output and write out the results in different files. Each process calls an external tool using the subprocess.call method.
It was working fine until I discovered an issue in the external tool where, due to some error condition, it goes into a 'prompt' mode and waits for user input. Now in my Python script I use the join method to wait till all the processes finish their tasks. This is causing the whole thing to wait for this erroneous subprocess call. I can put a timeout on each of the processes, but I do not know in advance how long each one is going to run, and hence this option is ruled out.
How do I figure out if any child process is waiting for user input, and how do I send an 'exit' command to it? Any pointers or suggestions to relevant modules in Python will be really appreciated.
My code here:
import subprocess
import sys
import os
import multiprocessing

def write_script(fname, e):
    f = open(fname, 'w')
    f.write("Some useful command calling the external tool")
    f.close()
    subprocess.call(['chmod', '+x', os.path.abspath(fname)])
    return os.path.abspath(fname)

def run_use(mname, script):
    print "ssh " + mname + " " + script
    subprocess.call(['ssh', mname, script])

if __name__ == '__main__':
    dict1 = {}
    dict1['mod1'] = ['pp1', 'ext2', 'les3', 'pw4']
    dict1['mod2'] = ['aaa', 'bbb', 'ccc', 'ddd']
    machines = ['machine1', 'machine2', 'machine3', 'machine4']
    log_file.write(str(dict1.keys()))  # assumes log_file was opened earlier
    for key in dict1.keys():
        arr = []
        for mod in dict1[key]:
            d = {}
            arr.append(mod)
            if (mod == dict1[key][-1]) | (len(arr) % 4 == 0):
                for i in range(0, len(arr)):
                    e = arr.pop()
                    script = write_script(e + "_temp.sh", e)
                    d[i] = multiprocessing.Process(target=run_use, args=(machines[i], script,))
                    d[i].daemon = True
                for pp in d:
                    d[pp].start()
                for pp in d:
                    d[pp].join()
Since you're writing a shell script to run your subcommands, can you simply tell them to read input from /dev/null?
#!/bin/bash
# ...
my_other_command -a -b arg1 arg2 < /dev/null
# ...
This may stop them blocking on input and is a really simple solution. If this doesn't work for you, read on for some other options.
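The same redirection can also be done from the Python side when launching the command; a sketch, where the machine name and script path are placeholders:

import os
import subprocess

# Python 2 compatible; on Python 3.3+ you could pass stdin=subprocess.DEVNULL instead
devnull = open(os.devnull, 'rb')
subprocess.call(['ssh', 'machine1', '/path/to/script.sh'], stdin=devnull)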
The subprocess.call() function is simply shorthand for constructing a subprocess.Popen instance and then calling the wait() method on it. So, your spawned processes could instead create their own subprocess.Popen instances and poll them with the poll() method on the object instead of wait() (in a loop with a suitable delay). This leaves them free to remain in communication with the main process so you can, for example, allow the main process to tell the child process to terminate the Popen instance with the terminate() or kill() methods and then itself exit.
So, the question is how does the child process tell whether the subprocess is awaiting user input, and that's a trickier question. I would say perhaps the easiest approach is to monitor the output of the subprocess and search for the user input prompt, assuming that it always uses some string that you can look for. Alternatively, if the subprocess is expected to generate output continually then you could simply look for any output and if a configured amount of time goes past without any output then you declare that process dead and terminate it as detailed above.
Since you're reading the output, actually you don't need poll() or wait() - the process closing its output file descriptor is good enough to know that it's terminated in this case.
Here's an example of a modified run_use() method which watches the output of the subprocess:
def run_use(mname, script):
    print "ssh " + mname + " " + script
    proc = subprocess.Popen(['ssh', mname, script], stdout=subprocess.PIPE)
    for line in proc.stdout:
        if "UserPrompt>>>" in line:
            proc.terminate()
            break
In this example we assume that the process either hangs at UserPrompt>>> (replace it with the appropriate string) or terminates naturally. If it were to get stuck in an infinite loop, for example, then your script would still not terminate; you can only really address that with an overall timeout, but you didn't seem keen to do that. Hopefully your subprocess won't misbehave in that way, however.
Finally, if you don't know in advance the prompt your process will give, then your job is rather harder. Effectively what you're asking to do is monitor an external process and know when it's blocked reading on a file descriptor, and I don't believe there's a particularly clean solution to this. You could consider running the process under strace or similar, but that's quite an awful hack and I really wouldn't recommend it. Things like strace are great for manual diagnostics, but they really shouldn't be part of a production setup.
Most of the examples I've seen with os.fork and the subprocess/multiprocessing modules show how to fork a new instance of the calling Python script or a chunk of Python code. What would be the best way to spawn a set of arbitrary shell commands concurrently?
I suppose I could just use subprocess.call or one of the Popen commands and pipe the output to a file, which I believe will return immediately, at least to the caller. I know this is not that hard to do; I'm just trying to figure out the simplest, most Pythonic way to do it.
Thanks in advance
All calls to subprocess.Popen return immediately to the caller. It's the calls to wait and communicate which block. So all you need to do is spin up a number of processes using subprocess.Popen (set stdin to /dev/null for safety), and then one by one call communicate until they're all complete.
Naturally I'm assuming you're just trying to start a bunch of unrelated (i.e. not piped together) commands.
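A minimal sketch of that pattern, with placeholder commands:

import os
import subprocess

commands = [['ls', '-l'], ['uname', '-a']]  # placeholder commands
devnull = open(os.devnull, 'rb')
procs = [subprocess.Popen(cmd, stdout=subprocess.PIPE, stdin=devnull)
         for cmd in commands]               # all of them start immediately
for p in procs:
    out, _ = p.communicate()                # block on each in turn until all are done
    print(out)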
I like to use PTYs instead of pipes. For a bunch of processes where I only wanted to capture error messages, I did this.
import sys
import pty
from subprocess import Popen

RNULL = open('/dev/null', 'r')
WNULL = open('/dev/null', 'w')
logfile = open("myprocess.log", "a", 1)
REALSTDERR = sys.stderr
sys.stderr = logfile
This next part was in a loop spawning about 30 processes.
sys.stderr = REALSTDERR
master, slave = pty.openpty()
self.subp = Popen(self.parsed, shell=False, stdin=RNULL, stdout=WNULL, stderr=slave)
sys.stderr = logfile
After this I had a select loop which collected any error messages and sent them to the single log file. Using PTYs meant that I never had to worry about partial lines getting mixed up because the line discipline provides simple framing.
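The select loop itself might look roughly like this; a sketch, assuming masters is a list of the PTY master descriptors from pty.openpty(), one per child:

import os
import select

def collect_errors(masters, logfile):
    # watch all PTY masters; the line discipline delivers framed output
    while masters:
        ready, _, _ = select.select(masters, [], [])
        for fd in ready:
            try:
                data = os.read(fd, 4096)
            except OSError:          # on Linux a closed slave raises EIO
                data = ''
            if not data:
                masters.remove(fd)   # that child is done
                os.close(fd)
            else:
                logfile.write(data)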
There is no best for all possible circumstances. The best depends on the problem at hand.
Here's how to spawn a process and save its output to a file combining stdout/stderr:
import os
import subprocess
import sys

def spawn(cmd, output_file):
    on_posix = 'posix' in sys.builtin_module_names
    return subprocess.Popen(cmd, close_fds=on_posix, bufsize=-1,
                            stdin=open(os.devnull, 'rb'),
                            stdout=output_file,
                            stderr=subprocess.STDOUT)
To spawn multiple processes that can run in parallel with your script and each other:
processes, files = [], []
try:
    for i, cmd in enumerate(commands):
        files.append(open('out%d' % i, 'wb'))
        processes.append(spawn(cmd, files[-1]))
finally:
    for p in processes:
        p.wait()
    for f in files:
        f.close()
Note: cmd is a list everywhere.
I suppose, I could just use subprocess.call or one of the Popen
commands and pipe the output to a file, which I believe will return
immediately, at least to the caller.
That's not a good way to do it if you want to process the data.
In this case, better do
sp = subprocess.Popen(['ls', '-l'], stdout=subprocess.PIPE)
and then call sp.communicate() or read directly with sp.stdout.read().
If the data shall be processed in the calling program at a later time, there are two ways to go:
You can retrieve the data as soon as possible, maybe via a separate thread, reading it and storing it somewhere the consumer can get it (a sketch follows below).
You can let the producing subprocess block: it produces as much data as fits into the pipe buffer (usually 64 KiB) and then blocks on further writes. As soon as you need the data, you read() from the subprocess object's stdout (maybe stderr as well) and use it; or, again, you use sp.communicate() at that later time.
Way 1 would be the way to go if producing the data takes much time, so that your program would otherwise have to wait.
Way 2 would be preferred if the size of the data is quite large and/or the data is produced so fast that buffering would make no sense.
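A minimal sketch of way 1, assuming 'ls -l' as a placeholder command; the daemon thread keeps draining the pipe so the producer never stalls:

import subprocess
import threading
from Queue import Queue   # 'queue' on Python 3

def drain(pipe, q):
    # keep the pipe empty so the producer never blocks on a full buffer
    for line in iter(pipe.readline, ''):
        q.put(line)
    pipe.close()

sp = subprocess.Popen(['ls', '-l'], stdout=subprocess.PIPE)  # placeholder command
q = Queue()
t = threading.Thread(target=drain, args=(sp.stdout, q))
t.daemon = True
t.start()
# ... do other work; call q.get() whenever you are ready for the data ...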
See an older answer of mine, including code snippets, which does the following:
Uses processes, not threads, for blocking I/O because they can more reliably be terminated with p.terminate()
Implements a retriggerable timeout watchdog that restarts counting whenever some output happens
Implements a long-term timeout watchdog to limit overall runtime
Can feed in stdin (although I only need to feed in one-time short strings)
Can capture stdout/stderr in the usual Popen ways (only stdout is coded, with stderr redirected to stdout, but they can easily be separated)
Is almost realtime because it only checks every 0.2 seconds for output; you could decrease this or remove the waiting interval easily
Has lots of debugging printouts still enabled, to see what's happening when.
For spawning multiple concurrent commands, you would need to alter the RunCmd class to instantiate multiple read-output/write-input queues and to spawn multiple Popen subprocesses.