I have a .jar file that I'm running with arguments via Popen. This server takes about 4 seconds to start up, then prints "Server Started" to the terminal, and then runs until the user quits the terminal. However, the print and webbrowser.open execute immediately because of Popen, and if I use call they never run at all. Is there a way, other than using wait, to ensure that the print and webbrowser.open don't run until after the server has started? Maybe grep for "Server Started"?
from subprocess import Popen
import glob
import sys
import webbrowser
reasoner = glob.glob("reasoner*.jar")
reasoner = reasoner.pop()
port = raw_input("Enter connection port: ")
space = ""
portArg = ("-p", port)
portArg = space.join(portArg)
print "Navigate to the Reasoner at http://locahost:" + port
reasoner_process = Popen(["java", "-jar", reasoner, "-i", "0.0.0.0", portArg, "--dbconnect", "jdbc:h2:tcp://localhost//tmp/UXDemo;user=sa;password=admin"])
# I want the following to execute after the .jar process above
print "Opening http://locahost:" + port + "..."
webbrowser.open("http://locahost:" + port)
What you're looking to do is a very simple, special version of interacting with a CLI app. So, you have two options.
First, you can use a library like pexpect that's designed to handle driving almost any CLI application. It may be overkill, and there is a bit of a learning curve, but once you get the basics down this will make your problem trivial: you launch the JAR, block expecting "Server Started", then close.
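For instance, a minimal pexpect sketch might look like the following (the jar name, IP, and port arguments are placeholders based on the question's command line; treat this as an illustration, not a definitive implementation):
import pexpect

# Spawn the server and block until it announces itself.
child = pexpect.spawn('java -jar reasoner.jar -i 0.0.0.0 -p 8080')
child.expect('Server Started')
# ... open the browser / do whatever needs the running server here ...
child.close()   # or child.terminate() if closing alone doesn't stop it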
Alternatively, you can do this manually with the Popen pipes. In general this has a lot of problems, but when you know there's going to be exactly one line of output that fits easily into 128 bytes, and you don't want to do anything but block on that output and then close the pipe, none of those problems comes up. So:
from subprocess import Popen, PIPE

reasoner_process = Popen(args, stdout=PIPE)
line = reasoner_process.stdout.readline()
if line.strip() != 'Server Started':
    raise RuntimeError('server did not start')  # error handling
# Any code that you want to do while the server is running goes here
reasoner_process.stdout.close()
reasoner_process.kill()
reasoner_process.wait()
But first make sure you actually have to kill it; often closing the pipe is sufficient. If it is, you can and should leave out the kill(), and then you can also check the exit code and raise if it's not 0.
Also, you probably want a with contextlib.closing(…) or whatever's appropriate, or just a try/finally, to make sure you can raise an exception for error handling without leaking the child. (Python 3.2+ makes this a lot simpler, because it guarantees that both the pipes and the Popen itself are usable as context managers.)
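For example, a try/finally sketch along those lines (reusing the hypothetical args from the snippet above) might be:
from subprocess import Popen, PIPE

reasoner_process = Popen(args, stdout=PIPE)
try:
    line = reasoner_process.stdout.readline()
    if line.strip() != 'Server Started':
        raise RuntimeError('server did not start: %r' % line)
    # Any code that you want to run while the server is up goes here.
finally:
    reasoner_process.stdout.close()
    reasoner_process.kill()   # leave this out if closing the pipe is enough
    reasoner_process.wait()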
Finally, I was assuming that "runs until the user quits the terminal" means you want to wait for it to start, then leave it running while you do other stuff, then kill it. If your workflow is different, you obviously need to change the order in which you do things.
Update 2: So I piped stderr, and it looks like when I include shell=True I just get the help text for omxplayer (it lists all the command-line switches and such). Is it possible that shell=True might not play nicely with omxplayer?
Update: I came across that link before, but it failed on me so I moved on without digging deeper. After Tshepang suggested it again I looked into it further. I have two problems, and I'm hoping the first is caused by the second. The first problem is that when I include shell=True as an arg, the video never plays. If I don't include it, the video plays but is never killed. Updated code below.
So I am trying to write a Python app for my Raspberry Pi that plays a video on a loop (I came across Popen as a good way to accomplish this using omxplayer) and then, on keyboard interrupt, kills that process and opens another process (playing a different video). My eventual goal is to use vid1 as a sort of "screensaver" and have vid2 play when a user interacts with the system, but for now I'm simply trying to kill vid1 on keyboard input, and I'm having quite a hard time doing it. I'm hoping someone can tell me where my code is falling down.
Forewarning that I'm extremely new to Python, and to Linux-based systems in general, so if I'm doing this terribly wrong, please feel free to redirect me, but this seemed to be the fastest way to get there.
Here is my code as it stands:
import subprocess
import os
import signal
vid1 = ['omxplayer', '--loop', '/home/pi/Vids/2779832.mp4']
while True:
    #vid = subprocess.Popen(['omxplayer', '--loop', '/home/pi/Vids/2779832.mp4'], stdout=subprocess.PIPE, shell=True, preexec_fn=os.setsid)
    vid = subprocess.Popen(vid1, stdout=subprocess.PIPE, preexec_fn=os.setsid)
    print 'SID is: ', os.getpgid(vid.pid)
    #vid = subprocess.Popen(['omxplayer', '--loop', '/home/pi/Vids/2779832.mp4'])
    id = raw_input()
    if not id:
        break
    os.killpg(vid.pid, signal.SIGTERM)
    print "your input: ", id
print "While loop has exited"
So I am trying to write a python app for my raspberry pi that plays a video on a loop (I came across Popen as a good way to accomplish this using OMXplayer) and then on keyboard interrupt, it kills that process and opens another process (playing a different video).
By default, SIGINT is propagated to all processes in the foreground process group, see "How Ctrl+C works". preexec_fn=os.setsid (or os.setpgrp) actually prevents it: use it only if you do not want omxplayer to receive Ctrl+C i.e., use it if you manually call os.killpg when you need to kill a process tree (assuming omxplayer children do not change their process group).
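A minimal sketch of that setsid/killpg combination (the omxplayer command is taken from the question; treat it as an illustration, not a drop-in fix):
import os
import signal
import subprocess

# Start omxplayer in its own process group so the whole tree can be killed later.
vid = subprocess.Popen(['omxplayer', '--loop', '/home/pi/Vids/2779832.mp4'],
                       preexec_fn=os.setsid)
# ... later, when the video should stop ...
os.killpg(os.getpgid(vid.pid), signal.SIGTERM)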
"keyboard interrupt" (sigint signal) is visible as KeyboardInterrupt exception in Python. Your code should catch it:
#!/usr/bin/env python
from subprocess import call, check_call
try:
    rc = call(['omxplayer', 'first file'])
except KeyboardInterrupt:
    check_call(['omxplayer', 'second file'])
else:
    if rc != 0:
        raise RuntimeError('omxplayer failed to play the first file, '
                           'return code: %d' % rc)
The above assumes that omxplayer exits on Ctrl+C.
You could see the help message for several reasons, e.g., omxplayer does not support the --loop option (run it manually to check), or you mistakenly use shell=True while passing the command as a list. Always pass the command as a single string if you need shell=True, and vice versa: always (on POSIX) pass the command as a list of arguments if shell=False (the default).
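For example (the player and path here are just placeholders):
import subprocess

# shell=True: pass ONE string; the shell does the word-splitting.
subprocess.call('omxplayer --loop /path/to/video.mp4', shell=True)

# shell=False (the default): pass a LIST of arguments; no shell is involved.
subprocess.call(['omxplayer', '--loop', '/path/to/video.mp4'])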
I want to open a Python script using subprocess in my main python program. I want these two programs to be able to chat with one another as they are both running so I can monitor the activity in the slave script, i.e. I need them to send strings between each other.
The main program will have a function similar to this that will communicate with and monitor the slave script:
Script 1
import subprocess
import pickle
import sys
import time
import os
def communicate(clock_speed, channel_number, frequency):
    p = subprocess.Popen(['C:\\Python27\\pythonw', 'test.py'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    data = pickle.dumps([clock_speed, channel_number, frequency]).replace("\n", "\\()")
    print data
    p.stdin.write("Start\n")
    print p.stdout.read()
    p.stdin.write(data + "\n")
    p.poll()
    print p.stdout.readline()
    print "return:" + p.stdout.readline()
    #p.kill()

if __name__ == '__main__':
    print "GO"
    communicate(clock_speed=400, channel_number=0, frequency=5*1e6)
The test.py script looks similar to this:
Script 2
import ctypes
import pickle
import time
import sys
start = raw_input("")
sys.stdout.write("Ready For Data")
data = raw_input("")
data = pickle.loads(data.replace("\\()", "\n"))
sys.stdout.write(str(data))
###BUNCH OF OTHER STUFF###
What I want these scripts to do is the following:
1. Script 1 to open Script 2 using Popen
2. Script 1 sends the string "Start\n"
3. Script 2 reads this string and sends the string "Ready For Data"
4. Script 1 reads this string and sends the pickled data to Script 2
5. Then whatever...
The main question is how to do parts 2-4. Then the rest of the communication between the two scripts should follow. As of now, I have only been able to read the strings from Script 2 after it has been terminated.
Any help is greatly appreciated.
UPDATE:
Script 1 must be run using 32-bit Python, while Script 2 must be run using 64-bit Python.
The problem with pipes is that if you call a read operation and there is nothing to read, your code is stalled until the other party writes something for you to read. Also if you write too much, your next write operation might block until the other party reads something out of the pipe and frees it.
There are "non-blocking calls" you can make, that will return an error in these cases instead of blocking, but your application will still need to deal with the errors sensibly.
In any case, you need to set up some kind of protocol. Think of HTTP, or any other protocol you know well: there are requests and responses, and while you are reading either of the two the protocol always tells you if there is something else you need to read or not. That way you can always make an informed decision on whether to wait for more data or not.
Here is an example that works. It works because there is the following protocol:
p1 sends a single line, ending with '\n';
p2 does the same;
p1 sends another line;
p2 does the same;
both are happy and exit.
In order to write a line to the pipe (on either side) and make sure it gets onto the pipe, I call write() and then flush().
In order to read a single line from the pipe (on either side) but not a single byte more, thus blocking my code until the line is ready and no longer than that, I use readline().
There are other calls you could make and other protocols, including ready-made ones, but the single-line protocol works well for simple things and for a demo like this.
p1.py:
import subprocess
p = subprocess.Popen(['python', 'p2.py'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
p.stdin.write("Hello\n")
p.stdin.flush()
print 'got', p.stdout.readline().strip()
p.stdin.write("How are you?\n")
p.stdin.flush()
print 'got', p.stdout.readline().strip()
p2.py:
import sys
data = sys.stdin.readline()
sys.stdout.write("Hm.\n")
sys.stdout.flush()
data = sys.stdin.readline()
sys.stdout.write("Whatever.\n")
sys.stdout.flush()
I also had a problem similar to this, where there was no way to send general Python objects between different processes without running into the problem of knowing when the other side hasn't sent an object yet or has closed. Also, trying to use multiprocessing.Queue usually means that the other process needs to have been started by the current process, which is not always the case when two processes want to collaborate.
To combat this I use the picklepipe module, which defines a generic object-serialization pipe interface, as well as a pipe that uses the pickle protocol, called PicklePipe (and one that uses the marshal protocol, called MarshalPipe). It can send more than just strings: it can send any pickleable object to its peer.
The pipes are even selectable, meaning they can be used by the selectors module (or selectors2, selectors34) as file objects when a new object is ready to be received. This makes waiting for many different pipes to be ready very efficient.
It supports Python 2.7+ (and probably 2.6) and all major platforms, and can even send objects between two different versions of Python! Check out the project documentation or view the source on GitHub.
Disclosure: I am the author of picklepipe. I would love to hear your feedback. :)
I'm trying to launch a background process from a CGI scripts. Basically, when a form is submitted the CGI script will indicate to the user that his or her request is being processed, while the background script does the actual processing (because the processing tends to take a long time.) The problem I'm facing is that Apache won't send the output of the parent CGI script to the browser until the child script terminates.
I've been told by a colleague that what I want to do is impossible because there is no way to prevent Apache from waiting for the entire process tree of a CGI script to die. However, I've also seen numerous references around the web to a "double fork" trick which is supposed to do the job. The trick is described succinctly in this Stack Overflow answer, but I've seen similar code elsewhere.
Here's a short script I wrote to test the double-fork trick in Python:
import os
import sys
if os.fork():
    print 'Content-type: text/html\n\n Done'
    sys.exit(0)

if os.fork():
    os.setsid()
    sys.exit(0)

# Second child
os.chdir("/")
sys.stdout.close()
sys.stderr.close()
sys.stdin.close()

f = open('/tmp/lol.txt', 'w')
while 1:
    f.write('test\n')
If I run this from the shell, it does exactly what I'd expect: the original script and first descendant die, and the second descendant keeps running until it's killed manually. But if I access it through CGI, the page won't load until I kill the second descendant or Apache kills it because of the CGI timeout. I've also tried replacing the second sys.exit(0) with os._exit(0), but there is no difference.
What am I doing wrong?
Don't fork - run batch separately
This double-forking approach is some kind of hack, which to me is an indication that it shouldn't be done :). For CGI, anyway. Under the general principle that if something is too hard to accomplish, you are probably approaching it the wrong way.
Luckily you give the background info on what you need: a CGI call to initiate some processing that happens independently, and to return to the caller right away. Well, sure: there are unix commands that do just that, namely scheduling a command to run at a specific time (at) or whenever the CPU is free (batch). So do this instead:
import os
os.system("batch <<< '/home/some_user/do_the_due.py'")
# or if you don't want to wait for system idle,
# os.system("at now <<< '/home/some_user/do_the_due.py'")
print 'Content-type: text/html\n'
print 'Done!'
And there you have it. Keep in mind that if there is any output to stdout/stderr, it will be mailed to the user (which is good for debugging, but otherwise the script should probably keep quiet).
PS. I just remembered that Windows also has a version of at, so with a minor modification of the invocation you can have this work under Apache on Windows too (vs. the fork trick, which won't work on Windows).
PPS. Make sure the user running the CGI process is not listed in /etc/at.deny, or it won't be allowed to schedule batch jobs.
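If the shell used by os.system() rejects the <<< here-string (see the "redirection unexpected" note further down), one workaround is to feed at its command on stdin through subprocess instead; a sketch, reusing the hypothetical script path from above:
import subprocess

# 'at now' reads the commands to schedule from standard input.
p = subprocess.Popen(['at', 'now'], stdin=subprocess.PIPE)
p.communicate('/home/some_user/do_the_due.py\n')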
I think there are two issues: setsid is in the wrong place and doing buffered IO operations in one of the transient children:
if os.fork():
    print "success"
    sys.exit(0)

if os.fork():
    os.setsid()
    sys.exit()
You've got the original process (grandparent, prints "success"), the middle parent, and the grandchild ("lol.txt").
The os.setsid() call is being performed in the middle parent after the grandchild has been spawned. The middle parent can't influence the grandchild's session after the grandchild has been created. Try this:
print "success"
sys.stdout.flush()
if os.fork():
sys.exit(0)
os.setsid()
if os.fork():
sys.exit(0)
This creates a new session before spawning the grandchild. Then the middle parent dies, leaving the session without a process group leader; this ensures that any calls to open a terminal will fail, so there's never any blocking on terminal input or output, and no unexpected signals get sent to the child.
Note that I've also moved the success message to the grandparent; there's no guarantee of which process will run first after calling fork(2), and you run the risk that the child is spawned, and potentially tries to write output to standard out or standard error, before the middle parent has had a chance to write success to the remote client.
In this case, the streams are closed quickly, but still, mixing standard IO streams among multiple processes is bound to give difficulty: keep it all in one process, if you can.
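Putting those two changes into the question's test script gives a sketch like the following (not tested against Apache; the /tmp/lol.txt busy-loop is carried over from the question purely for illustration):
import os
import sys

print 'Content-type: text/html\n\n Done'
sys.stdout.flush()              # push the response out before forking

if os.fork():                   # original process returns to Apache
    sys.exit(0)

os.setsid()                     # new session, created before the grandchild exists

if os.fork():                   # middle parent exits, orphaning the grandchild
    sys.exit(0)

# Grandchild: detach from the CGI environment and do the long-running work.
os.chdir("/")
sys.stdout.close()
sys.stderr.close()
sys.stdin.close()
f = open('/tmp/lol.txt', 'w')
while 1:
    f.write('test\n')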
Edit: I've found a strange behavior I can't explain:
#!/usr/bin/python
import os
import sys
import time
print "Content-type: text/plain\r\n\r\npid: " + str(os.getpid()) + "\nppid: " + str(os.getppid())
sys.stdout.flush()
if os.fork():
print "\nfirst fork pid: " + str(os.getpid()) + "\nppid: " + str(os.getppid())
sys.exit(0)
os.setsid()
print "\nafter setsid pid: " + str(os.getpid()) + "\nppid: " + str(os.getppid())
sys.stdout.flush()
if os.fork():
print "\nsecond fork pid: " + str(os.getpid()) + "\nppid: " + str(os.getppid())
sys.exit(0)
#os.sleep(1) # comment me out, uncomment me, notice following line appear and dissapear
print "\nafter second fork pid: " + str(os.getpid()) + "\nppid: " + str(os.getppid())
The last line, "after second fork pid", only appears when the time.sleep(1) call is commented out. When the call is left in place, the last line never appears in the browser. (But otherwise all of the content is printed to the browser.)
I wouldn't suggest going about the problem this way. If you need to execute some task asynchronously, why not use a work queue like beanstalkd instead of trying to fork off the tasks from the request? There are client libraries for beanstalkd available for Python.
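For instance, with the beanstalkc client library the split might look roughly like this (the queue host, port, and job payload are assumptions, and the worker loop is only sketched):
import beanstalkc

# In the CGI script: enqueue the work and return to the browser immediately.
queue = beanstalkc.Connection(host='localhost', port=11300)
queue.put('process-form-submission')      # hypothetical job payload

# In a separate, long-running worker process:
#     job = queue.reserve()                # blocks until a job is available
#     do_the_work(job.body)                # hypothetical handler
#     job.delete()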
I needed to break the stdout as well as the stderr like this:
sys.stdout.flush()
os.close(sys.stdout.fileno()) # Break web pipe
sys.stderr.flush()
os.close(sys.stderr.fileno()) # Break web pipe
if os.fork(): # Get out parent process
    sys.exit()
#background processing follows here
OK, I'm adding a simpler solution for the case where you don't need to start another script but want to continue in the same one and do the long process in the background. It lets you show a waiting message that the client sees instantly, and continue your server-side processing even if the client kills the browser session:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import sys
import time
import datetime
print "Content-Type: text/html;charset=ISO-8859-1\n\n"
print "<html>Please wait...<html>\n"
sys.stdout.flush()
os.close(sys.stdout.fileno()) # Break web pipe
if os.fork(): # Get out parent process
    sys.exit()
# Continue with new child process
time.sleep(1) # Be sure the parent process reach exit command.
os.setsid() # Become process group leader
# From here I cannot print to Webserver.
# But I can write in other files or do any long process.
f=open('long_process.log', 'a+')
f.write( "Starting {0} ...\n".format(datetime.datetime.now()) )
f.flush()
time.sleep(15)
f.write( "Still working {0} ...\n".format(datetime.datetime.now()) )
f.flush()
time.sleep(300)
f.write( "Still alive - Apache didn't scalped me!\n" )
f.flush()
time.sleep(150)
f.write( "Finishing {0} ...\n".format(datetime.datetime.now()) )
f.flush()
f.close()
I have read half the Internet for a week without success on this one; finally I tested whether there is a difference between sys.stdout.close() and os.close(sys.stdout.fileno()), and there is a huge one: the first didn't do anything, while the second closed the pipe from the web server and completely disconnected from the client. The fork is only necessary because the web server will kill its processes after a while, and your long process probably needs more time to complete.
As other answers have noted, it is tricky to start a persistent process from your CGI script because the process must cleanly dissociate itself from the CGI program. I have found that a great general-purpose program for this is daemon. It takes care of the messy details involving open file handles, process groups, root directory, etc etc for you. So the pattern of such a CGI program is:
#!/bin/sh
foo-service-ping || daemon --restart foo-service
# ... followed below by some CGI handler that uses the "foo" service
The original post describes the case where you want your CGI program to return quickly, while spawning off a background process to finish handling that one request. But there is also the case where your web application depends on a running service which must be kept alive. (Other people have talked about using beanstalkd to handle jobs. But how do you ensure that beanstalkd itself is alive?) One way to do this is to restart the service (if it's down) from within the CGI script. This approach makes sense in an environment where you have limited control over the server and can't rely on things like cron or an init.d mechanism.
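From a Python CGI script, that keep-alive check might be sketched like this (foo-service-ping and foo-service are the same hypothetical names as in the shell snippet above):
import subprocess

# Restart the service if the ping check fails (non-zero exit status).
if subprocess.call(['foo-service-ping']) != 0:
    subprocess.call(['daemon', '--restart', 'foo-service'])
# ... followed by the normal CGI handling that uses the "foo" service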
There are situations where passing work off to a daemon or cron is not appropriate. Sometimes you really DO need to fork, let the parent exit (to keep Apache happy) and let something slow happen in the child.
What worked for me: When done generating web output, and before the fork:
fflush(stdout), close(0), close(1), close(2); // in the process BEFORE YOU FORK
Then fork() and have the parent immediately exit(0);
The child then AGAIN does
close(0), close(1), close(2);
and also a
setsid();
...and then gets on with whatever it needs to do.
Why you need to close them in the child even though they were already closed in the primordial process is confusing to me, but this is what worked; it didn't work without the second set of closes. This was on Linux (on a Raspberry Pi).
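A rough Python translation of that sequence, as a sketch only (not tested against Apache):
import os
import sys

# When done generating web output, and before the fork:
sys.stdout.flush()
for fd in (0, 1, 2):
    os.close(fd)                # close stdin/stdout/stderr in the parent

if os.fork():                   # parent exits immediately to keep Apache happy
    os._exit(0)

# Child: close the standard descriptors AGAIN (ignoring errors if they are
# already gone), then start a new session, as described above.
for fd in (0, 1, 2):
    try:
        os.close(fd)
    except OSError:
        pass
os.setsid()
# ...and then get on with whatever it needs to do.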
I haven't tried using fork but I have accomplished what you're asking by executing a sys.stdout.flush() after the original message, before calling the background process.
i.e.
print "Please wait..."
sys.stdout.flush()
output = some_processing() # put what you want to accomplish here
print output # in my case output was a redirect to a results page
My head is still hurting on that one. I tried all possible ways to use your code with fork and stdout closing, nulling or anything else, but nothing worked. The display of output from an uncompleted process depends on the web server (Apache or other) configuration, and in my case it wasn't an option to change it, so attempts with "Transfer-Encoding: chunked;chunk=CRLF" and sys.stdout.flush() didn't work either. Here is the solution that finally worked.
In short, use something like:
import subprocess
import sys
import time

if len(sys.argv) == 1: # I'm in the parent process
    childProcess = subprocess.Popen('./myScript.py X', bufsize=0, stdin=open("/dev/null", "r"), stdout=open("/dev/null", "w"), stderr=open("/dev/null", "w"), shell=True)
    print "My HTML message that says to wait a long time"
else: # Here comes the child and his long process
    # From here I cannot print to the webserver, but I can write to files that will be refreshed in my web page.
    time.sleep(15) # To verify that the parent completes rapidly.
I use the "X" parameter to make the distinction between parent and child because I call the same script for both, but you could do it simpler by calling another script. If a complete example would be useful, please ask.
For those who get "sh: 1: Syntax error: redirection unexpected" with the at/batch solution, try using something like this:
Make sure that the at command is installed and that the user running the application isn't listed in /etc/at.deny.
os.system("echo sudo /srv/scripts/myapp.py | /usr/bin/at now")
I have a python script, which I daemonise using this code
def daemonise():
    import os
    from os import fork, setsid, umask, dup2
    from sys import stdin, stdout, stderr

    if fork(): exit(0)
    umask(0)
    setsid()
    if fork(): exit(0)

    stdout.flush()
    stderr.flush()
    si = file('/dev/null', 'r')
    so = file('daemon-%s.out' % os.getpid(), 'a+')
    se = file('daemon-%s.err' % os.getpid(), 'a+')
    dup2(si.fileno(), stdin.fileno())
    dup2(so.fileno(), stdout.fileno())
    dup2(se.fileno(), stderr.fileno())
    print 'this file has the output from daemon %s' % os.getpid()
    print >> stderr, 'this file has the errors from daemon %s' % os.getpid()
The script is in
while True: try: funny_code(); sleep(10); except:pass;
loop. It runs fine for a few hours and then dies unexpectedly. How do I go about debugging such demons, err, daemons?
[Edit]
Without starting a process like monit, is there a way to write a watchdog in Python that can watch my other daemons and restart them when they go down? (And who watches the watchdog?)
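(For illustration, a bare-bones watchdog of that sort might look like the sketch below: the script path and back-off interval are placeholders, and it assumes the watched script runs in the foreground rather than daemonising itself. The answers that follow point to more robust, ready-made tools.)
import subprocess
import time

# Restart the worker script whenever it exits; crude, but shows the idea.
while True:
    child = subprocess.Popen(['python', '/path/to/my_worker.py'])
    child.wait()                # blocks until the watched process dies
    time.sleep(10)              # back off a little before restarting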
You really should use python-daemon for this, a library that implements PEP 3143 ("Standard daemon process library"). This way you will ensure that your application does all the right things for whichever flavor of UNIX it is running under. No need to reinvent the wheel.
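A minimal python-daemon sketch (funny_code() here is just a stand-in for the real work from the question):
import time
import daemon

def funny_code():
    pass                        # stand-in for the question's actual work

with daemon.DaemonContext():
    while True:
        funny_code()
        time.sleep(10)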
Why are you silently swallowing all exceptions? Try to see what exceptions are being caught by this:
while True:
    try:
        funny_code()
        sleep(10)
    except BaseException, e:
        print e.__class__, e.message
        pass
Something unexpected might be happening which is causing it to fail, but you'll never know if you blindly ignore all the exceptions.
I recommend using supervisord (written in Python, very easy to use) for daemonizing and monitoring processes. Running under supervisord you would not have to use your daemonise function.
What I've used with my clients is daemontools. It is a proven, well-tested tool for running anything daemonized.
You just write your application without any daemonization, so that it runs in the foreground; then create a daemontools service folder for it, and daemontools will discover it and automatically restart your application from then on, and every time the system restarts.
It can also handle log rotation and stuff. Saves a lot of tedious, repeated work.
I want to get screenshots of a webpage in Python. For this I am using http://github.com/AdamN/python-webkit2png/ .
newArgs = ["xvfb-run", "--server-args=-screen 0, 640x480x24", sys.argv[0]]
for i in range(1, len(sys.argv)):
    if sys.argv[i] not in ["-x", "--xvfb"]:
        newArgs.append(sys.argv[i])
logging.debug("Executing %s" % " ".join(newArgs))
os.execvp(newArgs[0], newArgs)
Basically calls xvfb-run with the correct args. But man xvfb says:
Note that the demo X clients used in the above examples will not exit on their own, so they will have to be killed before xvfb-run will exit.
So that means that this script will hang if this whole thing is in a loop (to get multiple screenshots), unless the X server is killed. How can I do that?
The documentation for os.execvp states:
These functions all execute a new program, replacing the current process; they do not return. [..]
So after calling os.execvp no other statement in the program will be executed. You may want to use subprocess.Popen instead:
The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes. This module intends to replace several other, older modules and functions, such as:
Using subprocess.Popen, the code to run xlogo in the virtual framebuffer X server becomes:
import subprocess
xvfb_args = ['xvfb-run', '--server-args=-screen 0, 640x480x24', 'xlogo']
process = subprocess.Popen(xvfb_args)
Now the problem is that xvfb-run launches Xvfb in a background process. Calling process.kill() will not kill Xvfb (at least not on my machine...). I have been fiddling around with this a bit, and so far the only thing that works for me is:
import os
import signal
import subprocess
SERVER_NUM = 99 # 99 is the default used by xvfb-run; you can leave this out.
xvfb_args = ['xvfb-run', '--server-num=%d' % SERVER_NUM,
'--server-args=-screen 0, 640x480x24', 'xlogo']
subprocess.Popen(xvfb_args)
# ... do whatever you want to do here...
pid = int(open('/tmp/.X%s-lock' % SERVER_NUM).read().strip())
os.kill(pid, signal.SIGINT)
So this code reads the process ID of Xvfb from /tmp/.X99-lock and sends the process an interrupt. It works, but does yield an error message every now and then (I suppose you can ignore it, though). Hopefully somebody else can provide a more elegant solution. Cheers.
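If this runs in a loop to grab multiple screenshots, it may help to wrap the start/kill pair so the lock-file cleanup always happens; a sketch based on the code above (xlogo stands in for the real client):
import os
import signal
import subprocess

SERVER_NUM = 99

def kill_xvfb(server_num):
    # Kill the background Xvfb that xvfb-run started, using its lock file.
    pid = int(open('/tmp/.X%d-lock' % server_num).read().strip())
    os.kill(pid, signal.SIGINT)

proc = subprocess.Popen(['xvfb-run', '--server-num=%d' % SERVER_NUM,
                         '--server-args=-screen 0, 640x480x24', 'xlogo'])
try:
    pass    # ... do whatever you want to do here (e.g. take the screenshot) ...
finally:
    kill_xvfb(SERVER_NUM)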