I have a ROS command, rostopic pub toggle_led std_msgs/Empty, that basically starts once and keeps running until Ctrl+C is pressed.
Now, I would like to automate this command from Python. I checked Calling an external command in Python but it only shows how to start the command.
How would I start and stop running this process as and when I want?
How would I start and stop running this process as and when I want?
Well, you already know how to start it, as you said in the previous sentence.
How do you stop it? If you want to stop it exactly like a Ctrl-C,* you do that by calling send_signal on it, using CTRL_C_EVENT on Windows, or SIGTERM on Unix.** So:
import signal
import subprocess
try:
    sig = signal.CTRL_C_EVENT  # only exists on Windows
except AttributeError:
    sig = signal.SIGTERM
p = subprocess.Popen(['/path/to/prog', '-opt', '42', 'arg'])
# ... later
p.send_signal(sig)
If you only care about Linux (or *nix in general), you can make this even simpler: terminate is guaranteed to do the same thing as send_signal(SIGTERM). So:
import subprocess
p = subprocess.Popen(['/path/to/prog', '-opt', '42', 'arg'])
# ... later
p.terminate()
Since you asked in a comment "Could you please explain the various parameters to subprocess.Popen()": Well, there are a whole lot of them (see Popen Constructor and Frequently Used Arguments in the docs), but I'm only using one, the args parameter.
Normally, you pass a list to args, with the name of the program as the first element in the list and each separate command-line argument as a separate element. But if you want to use the shell, you pass a string for args and add shell=True as another argument.
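For example, here is a sketch of both forms using the rostopic command from the question (untested here, but it shows the shape of each call):

import subprocess

# List form: program name first, each argument as its own element (no shell).
p = subprocess.Popen(['rostopic', 'pub', 'toggle_led', 'std_msgs/Empty'])

# Shell form: the whole command line as one string, plus shell=True.
p = subprocess.Popen('rostopic pub toggle_led std_msgs/Empty', shell=True)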
* Note that "exactly like a Ctrl-C" may not actually be what you want on Windows, unless the program has a console and is a process group owner. This may mean you'll need to add creationflags=subprocess.CREATE_NEW_PROCESS_GROUP to the Popen call. Or it may not, e.g., if you use shell=True.
** In Python, you can usually ignore the platform differences between CTRL_C_EVENT and SIGTERM and always use the latter, but subprocess.send_signal is one of the few places you can't. On Windows, send_signal(SIGTERM) will call terminate instead of sending a Ctrl-C. If you don't actually care exactly how the process gets stopped, just that it gets stopped somehow, then of course you can use SIGTERM… but in that case, you might as well just call terminate.
Update 2: So I piped stderr's output and it looks like when I include shell=True, I just get the help text for omxplayer (it lists all the command-line switches and such). Is it possible that shell=True might not play nicely with omxplayer?
Update: I came across that link before, but it failed on me so I moved on without digging deeper. After Tshepang suggested it again I looked into it further. I have two problems, and I'm hoping the first is caused by the second. The first problem is that when I include shell=True as an arg, the video never plays. If I don't include it, the video plays but is never killed. Updated code below.
So I am trying to write a Python app for my Raspberry Pi that plays a video on a loop (I came across Popen as a good way to accomplish this using OMXplayer) and then, on keyboard interrupt, kills that process and opens another process (playing a different video). My eventual goal is to be able to use vid1 as a sort of "screensaver" and have vid2 play when a user interacts with the system, but for now I'm simply trying to kill vid1 on keyboard input, and I'm having quite a hard time doing it. I'm hoping someone can tell me where my code is falling down.
Forewarning that I'm extremely new to Python, and to Linux-based systems in general, so if I'm doing this terribly wrong, please feel free to redirect me, but this seemed to be the fastest way to get there.
Here is my code as it stands:
import subprocess
import os
import signal

vid1 = ['omxplayer', '--loop', '/home/pi/Vids/2779832.mp4']

while True:
    #vid = subprocess.Popen(['omxplayer', '--loop', '/home/pi/Vids/2779832.mp4'], stdout=subprocess.PIPE, shell=True, preexec_fn=os.setsid)
    vid = subprocess.Popen(vid1, stdout=subprocess.PIPE, preexec_fn=os.setsid)
    print 'SID is: ', os.getsid(vid.pid)  # the session ID created by preexec_fn=os.setsid
    #vid = subprocess.Popen(['omxplayer', '--loop', '/home/pi/Vids/2779832.mp4'])
    id = raw_input()
    if not id:
        break
    os.killpg(vid.pid, signal.SIGTERM)
    print "your input: ", id
print "While loop has exited"
So I am trying to write a python app for my raspberry pi that plays a video on a loop (I came across Popen as a good way to accomplish this using OMXplayer) and then on keyboard interrupt, it kills that process and opens another process (playing a different video).
By default, SIGINT is propagated to all processes in the foreground process group; see "How Ctrl+C works". preexec_fn=os.setsid (or os.setpgrp) actually prevents that: use it only if you do not want omxplayer to receive Ctrl+C, i.e., use it if you manually call os.killpg when you need to kill a process tree (assuming omxplayer's children do not change their process group).
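A minimal sketch of that pattern, using the video path from the question (assuming a POSIX system, and that you call os.killpg yourself whenever you actually want the player gone):

import os
import signal
import subprocess

# Start omxplayer in its own session/process group, so Ctrl+C in the terminal
# does not reach it; instead the whole group is killed explicitly later.
vid = subprocess.Popen(['omxplayer', '--loop', '/home/pi/Vids/2779832.mp4'],
                       preexec_fn=os.setsid)
# ... later ...
os.killpg(os.getpgid(vid.pid), signal.SIGTERM)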
"keyboard interrupt" (sigint signal) is visible as KeyboardInterrupt exception in Python. Your code should catch it:
#!/usr/bin/env python
from subprocess import call, check_call

try:
    rc = call(['omxplayer', 'first file'])
except KeyboardInterrupt:
    check_call(['omxplayer', 'second file'])
else:
    if rc != 0:
        raise RuntimeError('omxplayer failed to play the first file, '
                           'return code: %d' % rc)
The above assumes that omxplayer exits on Ctrl+C.
You could see the help message for several reasons, e.g., omxplayer does not support the --loop option (run it manually to check), or you mistakenly use shell=True while passing the command as a list. Always pass the command as a single string if you need shell=True, and conversely, always (on POSIX) pass the command as a list of arguments if shell=False (the default).
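To make that concrete, here is an untested sketch with the omxplayer command from the question; the broken mix of shell=True and a list is what produces the help text, because the shell receives only 'omxplayer' as the command and the remaining list items become the shell's own positional parameters:

import subprocess

# Correct: shell=False (the default) takes a list of arguments.
subprocess.Popen(['omxplayer', '--loop', '/home/pi/Vids/2779832.mp4'])

# Correct: shell=True takes the whole command line as one string.
subprocess.Popen('omxplayer --loop /home/pi/Vids/2779832.mp4', shell=True)

# Broken: shell=True with a list; omxplayer starts with no arguments at all
# and therefore just prints its usage/help text.
# subprocess.Popen(['omxplayer', '--loop', '/home/pi/Vids/2779832.mp4'], shell=True)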
I asked a question related to this several weeks ago on here:
Python, mpg123 and subprocess not properly using stdin.write or communicate
Thanks to help from there I was able to do what I needed at the time. (I didn't send q, but terminated the subprocess to stop it.)
Now though I seem to be in another bit of a mess.
from subprocess import Popen, PIPE, STDOUT

p = Popen(["mpg123", "-C", "test.mp3"], stdout=PIPE, stdin=PIPE, stderr=STDOUT)

# Wait a few seconds before sending this. A bare "q" with no newline is how the
# player's keyboard controls quit when it is run as "mpg123 -C test.mp3" from
# the command line.
p.communicate(input='q')[0]
Much like before, I need to be able to quit mpg123 the way its standard controls would (press 'q' to quit, '-' to turn the volume down, '+' to turn it up, etc.). I now use the code above, which should theoretically work, and it does work with similar programs. Does anyone know of a way I can use the controls built into mpg123 (the ones accessible by running "mpg123 -C whatever.mp3") through a subprocess? terminate isn't enough anymore, as I will need the controls ^_^
EDIT: Many thanks to abarnert for the amazing answer =)
OK, so the new code is simply a slightly modified version of abarnert's answer; however, mpg123 doesn't seem to be accepting the commands:
import os
import pty
import sys
import time

pid, fd = os.forkpty()
if pid:
    time.sleep(5)
    os.write(fd, 'b')  # this should've restarted the file
    time.sleep(5)
    os.write(fd, 'q')  # unfortunately doesn't quit here =(
    time.sleep(5)      # quits after this is finished executing
else:
    os.spawnl(os.P_WAIT, '/usr/bin/mpg123', '-C', 'TEST file.mp3')
If you really need the controls, you can't just use Popen.
mpg123 only enables terminal control if its stdin is a tty, not if it's a file or pipe. That's why you get this line in the banner:
Terminal control enabled, press 'h' for listing of keys and functions.
And the whole point of Popen (and subprocess, and the POSIX APIs it's built on) is pipes.
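Roughly what the player is checking on its own standard input (an illustrative sketch, not mpg123's actual code):

import os
import sys

# True when stdin is a terminal (or pty); False when it is a pipe or a file.
print os.isatty(sys.stdin.fileno())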
So, what can you do about it?
On Linux, you can use the pty module. It may also work on other *nix platforms, but it may not, even if it gets built and included in your stdlib. As the docs say:
Because pseudo-terminal handling is highly platform dependent, there is code to do it only for Linux. (The Linux code is supposed to work on other platforms, but hasn’t been tested yet.)
It definitely runs on *BSD platforms on 2.7 and 3.3, and the example in the docs seems to work on both Mac OS X and FreeBSD… but that's as far as I've checked.
Meanwhile, most POSIX platforms will at least have os.forkpty, and that's not much harder, so here's a trivial program that plays the first 5 seconds of a song passed as its first arg:
import os
import pty
import sys
import time

pid, fd = os.forkpty()
if pid:
    time.sleep(5)
    os.write(fd, 'q')
else:
    os.spawnl(os.P_WAIT,                                   # mode
              '/usr/local/bin/mpg123',                     # path
              '/usr/local/bin/mpg123', '-C', sys.argv[1])  # args
Note that I used os.spawnl above. This is probably not what you want in a real program; it's for pedagogic purposes, to encourage you to read the docs (and the corresponding manpages) and understand this family of functions.
As the docs explain, this does not use the PATH environment variable, so you need to specify the full path to the program. You can just use spawnlp instead of spawnl to fix this.
Also, spawn may (in fact, always does, although the docs aren't entirely clear) do another fork to execute the child. This really isn't necessary, but spawn does things that you would need to do manually if you just called exec. If you know what you're doing, you may well want to use execl (or execlp) instead of spawnl.
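For example, either of these (untested sketches) could stand in for the spawnl call in the child branch above:

import os
import sys

# spawnlp: like spawnl, but searches PATH, so no absolute path is needed.
os.spawnlp(os.P_WAIT, 'mpg123', 'mpg123', '-C', sys.argv[1])

# execlp: replaces the current (child) process image directly, with no extra
# fork; on success it never returns.
# os.execlp('mpg123', 'mpg123', '-C', sys.argv[1])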
You can even use most of the functionality in subprocess as long as you're careful (do not create any pipes, and remember that you'll end up doing two forks, so make sure to set up the parent/child relationship properly).
Also notice that you need to pass the path to mpg123 twice: once as the path, and then once as the child program's argv[0]. You could also just pass mpg123 the second time. Or, ideally, look at what ps says when you run it from the shell, and pass that. At any rate, you have to pass something as the argv[0]; otherwise, -C ends up being the argv[0], which means mpg123 won't think you gave it a -C flag to enable control keys, but rather that you renamed it to -C and ran it with no flags…
Anyway, you really do need to read the docs to understand what each of these functions does, instead of just treating it like magic code that you don't understand. So, I intentionally used the simplest possible solution to encourage that.
On Windows, there is no such thing as a pty, and no way to do this at all with the facilities built in to Python. You will need to use one of the various third-party libraries for controlling a cmd.exe console (aka DOS prompt) instead.
Based on abarnert's idea, we can open a pseudo-terminal and pass it to subprocess.
import os
import pty
import subprocess
import time

# Hand one end of a pseudo-terminal pair to mpg123 as its stdin so that it
# enables its terminal controls; bytes written to the other end show up there.
master, slave = os.openpty()
p = subprocess.Popen(['mpg123', '-C', 'music.mp3'], stdin=master)

time.sleep(3)
os.write(slave, 's')  # 's' pauses/resumes playback
time.sleep(3)
os.write(slave, 's')
time.sleep(6)
os.write(slave, 'q')  # 'q' quits the player
I am using the multiprocessing module in Python to launch a few processes in parallel. These processes are independent of each other. They generate their own output and write the results to different files. Each process calls an external tool using the subprocess.call method.
It was working fine until I discovered an issue in the external tool: due to some error condition it goes into a 'prompt' mode and waits for user input. In my Python script I use the join method to wait until all the processes finish their tasks, and this is causing the whole thing to wait on the erroneous subprocess call. I could put a timeout on each process, but I do not know in advance how long each one is going to run, so that option is ruled out.
How do I figure out if any child process is waiting for user input, and how do I send an 'exit' command to it? Any pointers or suggestions to relevant Python modules will be really appreciated.
My code here:
import subprocess
import sys
import os
import multiprocessing
def write_script(fname, e):
    f = open(fname, 'w')
    f.write("Some useful command calling external tool")
    f.close()
    subprocess.call(['chmod', '+x', os.path.abspath(fname)])
    return os.path.abspath(fname)

def run_use(mname, script):
    print "ssh " + mname + " " + script
    subprocess.call(['ssh', mname, script])

if __name__ == '__main__':
    dict1 = {}
    dict1['mod1'] = ['pp1', 'ext2', 'les3', 'pw4']
    dict1['mod2'] = ['aaa', 'bbb', 'ccc', 'ddd']
    machines = ['machine1', 'machine2', 'machine3', 'machine4']
    log_file.write(str(dict1.keys()))  # NOTE: log_file is opened elsewhere, not shown in this snippet
    for key in dict1.keys():
        arr = []
        for mod in dict1[key]:
            d = {}
            arr.append(mod)
            if mod == dict1[key][-1] or len(arr) % 4 == 0:
                for i in range(0, len(arr)):
                    e = arr.pop()
                    script = write_script(e + "_temp.sh", e)
                    d[i] = multiprocessing.Process(target=run_use, args=(machines[i], script,))
                    d[i].daemon = True
                for pp in d:
                    d[pp].start()
                for pp in d:
                    d[pp].join()
Since you're writing a shell script to run your subcommands, can you simply tell them to read input from /dev/null?
#!/bin/bash
# ...
my_other_command -a -b arg1 arg2 < /dev/null
# ...
This may stop them blocking on input and is a really simple solution. If this doesn't work for you, read on for some other options.
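The same redirection can also be done from the Python side; here is a sketch based on the run_use() function above (ssh forwards its own stdin to the remote command, so giving it /dev/null starves the remote tool of input as well):

import os
import subprocess

def run_use(mname, script):
    # subprocess.DEVNULL only exists in Python 3.3+, so open os.devnull by hand.
    with open(os.devnull, 'r') as devnull:
        subprocess.call(['ssh', mname, script], stdin=devnull)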
The subprocess.call() function is simply shorthand for constructing a subprocess.Popen instance and then calling the wait() method on it. So, your spawned processes could instead create their own subprocess.Popen instances and check them with the poll() method instead of wait() (in a loop with a suitable delay). This leaves them free to stay in communication with the main process, so you can, for example, let the main process tell a child to terminate the Popen instance with the terminate() or kill() methods and then exit itself.
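A rough sketch of that poll()-based loop, as a variant of run_use() from the question; should_stop here is a hypothetical callable standing in for whatever channel the main process uses to tell its workers to give up:

import time
import subprocess

def run_use(mname, script, should_stop):
    proc = subprocess.Popen(['ssh', mname, script])
    while proc.poll() is None:   # None means the child is still running
        if should_stop():        # hypothetical: has the main process asked us to give up?
            proc.terminate()     # or proc.kill() if it ignores SIGTERM
            break
        time.sleep(1)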
So, the question is how does the child process tell whether the subprocess is awaiting user input, and that's a trickier question. I would say perhaps the easiest approach is to monitor the output of the subprocess and search for the user input prompt, assuming that it always uses some string that you can look for. Alternatively, if the subprocess is expected to generate output continually then you could simply look for any output and if a configured amount of time goes past without any output then you declare that process dead and terminate it as detailed above.
Since you're reading the output, actually you don't need poll() or wait() - the process closing its output file descriptor is good enough to know that it's terminated in this case.
Here's an example of a modified run_use() method which watches the output of the subprocess:
def run_use(mname, script):
    print "ssh " + mname + " " + script
    proc = subprocess.Popen(['ssh', mname, script], stdout=subprocess.PIPE)
    for line in proc.stdout:
        if "UserPrompt>>>" in line:
            proc.terminate()
            break
In this example we assume that the process either hangs at UserPrompt>>> (replace this with the appropriate string) or terminates naturally. If it were to get stuck in an infinite loop, for example, then your script would still not terminate; you can only really address that with an overall timeout, but you didn't seem keen to do that. Hopefully your subprocess won't misbehave in that way, however.
Finally, if you don't know in advance the prompt that will be given by your process then your job is rather harder. Effectively what you're asking to do is monitor an external process and know when it's blocked reading on a file descriptor, and I don't believe there's a particularly clean solution to this. You could consider running the process under strace or similar, but that's quite an awful hack and I really wouldn't recommend it. Things like strace are great for manual diagnostics, but they really shouldn't be part of a production setup.
I am attempting to wrap a program that is routinely used at work. When called with an insufficient number of arguments, or with a misspelled argument, the program issues a prompt to the user, asking for the needed input. As a consequence, when calling the routine with subprocess.Popen, the routine never sends any information to stdout or stderr when wrong parameters are passed. subprocess.Popen.communicate() and subprocess.Popen.stdout.read(1) both wait for a newline character before any information becomes available.
Is there any way to retrieve information from subprocess.Popen.stdout before the newline character is issued? If not, is there any method that can be used to determine whether the subprocess is waiting for input?
First thing to try: use the bufsize argument to Popen, and set it to 0:
subprocess.Popen(args, bufsize=0, ...)
Unfortunately, whether or not this works also depends upon how the subprocess flushes its output, and I presume you don't have much control over that.
On some platforms, when data written to stdout actually gets flushed will change depending on whether the underlying I/O library detects an interactive terminal or a pipe. So while you might think the data is there waiting to be read (because that's how it works in a terminal window), it might actually be line-buffered when you're running the same program as a subprocess from within Python.
Added: I just realised that bufsize=0 is the default anyway. Nuts.
After asking around quite a bit, I was pointed to the solution: use pexpect.spawn and pexpect.expect. For example:
A Bash "script" in a file titled prompt.sh to emulate the problem (read is a shell builtin, so it cannot be called directly from pexpect.spawn):
#!/bin/bash
read -p "This is a prompt: "
This will hang when called by subprocess.Popen. It can be handled by pexpect.spawn, though:
import pexpect

search = 'This is a prompt: '
child = pexpect.spawn('./prompt.sh')
child.expect(search)   # returns 0, the index of the matched pattern
print child.after      # prints the matched text: 'This is a prompt: '
A list, a compiled regex, or a list of compiled regexes can also be used in place of the string in pexpect.expect to deal with differing prompts.
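For instance, a sketch of the list form (the second pattern here is made up just to show the shape):

import re
import pexpect

child = pexpect.spawn('./prompt.sh')
# expect() returns the index of whichever pattern matched first; pexpect.EOF
# and pexpect.TIMEOUT may be included in the list as well.
index = child.expect(['This is a prompt: ',
                      re.compile('Enter .*: '),
                      pexpect.EOF])
if index == 0:
    child.sendline('a response to the first prompt')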
I want to get screenshots of a webpage in Python. For this I am using http://github.com/AdamN/python-webkit2png/ .
newArgs = ["xvfb-run", "--server-args=-screen 0, 640x480x24", sys.argv[0]]
for i in range(1, len(sys.argv)):
    if sys.argv[i] not in ["-x", "--xvfb"]:
        newArgs.append(sys.argv[i])
logging.debug("Executing %s" % " ".join(newArgs))
os.execvp(newArgs[0], newArgs)
Basically calls xvfb-run with the correct args. But man xvfb-run says:
Note that the demo X clients used in the above examples will not exit on their own, so they will have to be killed before xvfb-run will exit.
So that means that this script will hang if this whole thing is in a loop (to get multiple screenshots), unless the X server is killed. How can I do that?
The documentation for os.execvp states:
These functions all execute a new program, replacing the current process; they do not return. [..]
So after calling os.execvp no other statement in the program will be executed. You may want to use subprocess.Popen instead:
The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes. This module intends to replace several other, older modules and functions, such as:
Using subprocess.Popen, the code to run xlogo in the virtual framebuffer X server becomes:
import subprocess
xvfb_args = ['xvfb-run', '--server-args=-screen 0, 640x480x24', 'xlogo']
process = subprocess.Popen(xvfb_args)
Now the problem is that xvfb-run launches Xvfb in a background process. Calling process.kill() will not kill Xvfb (at least not on my machine...). I have been fiddling around with this a bit, and so far the only thing that works for me is:
import os
import signal
import subprocess
SERVER_NUM = 99 # 99 is the default used by xvfb-run; you can leave this out.
xvfb_args = ['xvfb-run', '--server-num=%d' % SERVER_NUM,
'--server-args=-screen 0, 640x480x24', 'xlogo']
subprocess.Popen(xvfb_args)
# ... do whatever you want to do here...
pid = int(open('/tmp/.X%s-lock' % SERVER_NUM).read().strip())
os.kill(pid, signal.SIGINT)
So this code reads the process ID of Xvfb from /tmp/.X99-lock and sends the process an interrupt. It works, but does yield an error message every now and then (I suppose you can ignore it, though). Hopefully somebody else can provide a more elegant solution. Cheers.