Running pexpect subprocesses in background - python

I have the code below, which I am running:
try:
    child = pexpect.spawn(
        ('some command --path {0} somethingmore --args {1}').format(
            <iterator-output>, something),
        timeout=300)
    child.logfile = open(file_name, 'w')
    child.expect('x*')
    child.sendline(something)
    child.expect('E*')
    child.sendline(something)
    #child.read()
    child.interact()
    time.sleep(15)
    print child.status
except Exception as e:
    print "Exception in child process"
    print str(e)
Now, the command in pexpect creates a subprocess by taking one of its inputs from a loop. Every time it spins up a subprocess, I try to capture the logs via child.read, but in this case it waits for that subprocess to complete before going round the loop again. How do I make it keep running in the background? (I get the logs of the command input/output that I enter dynamically, but not of the process that runs thereafter, unless I use read or interact.) I used How do I make a command to run in background using pexpect.spawn?, but it uses interact, which again waits for that subprocess to complete. Since the loop will be iterated more than 100 times, I cannot wait on one to complete before moving to the other. As the command in pexpect is an AWS Lambda call, all I need to make sure is that the command is triggered, but I am not able to capture the process output of that call without waiting for it to complete. Please let me know your suggestions.

If you don't actually want to interact with lots of processes in parallel, but instead want to interact with each process briefly, then just ignore it while it runs and move on to interacting with the next one…
# Do everything up to the final `interact`. After that, the child
# won't be writing to us anymore, but it will still be running for
# many seconds. So, return the child object so we can deal with it
# later, after we've started up all the other children.
def start_command(path, arg):
    try:
        child = pexpect.spawn(('some command --path {0} somethingmore --args {1}').format(path, arg), timeout=300)
        child.logfile = open(file_name, 'w')
        child.expect('x*')
        child.sendline(something)
        child.expect('E*')
        child.sendline(something)
        # child.read()
        child.interact()
        return child
    except Exception as e:
        print "Exception in child process"
        print str(e)
# First, start up all the children and do the initial interaction
# with each one.
children = []
for path, args in some_iterable:
    children.append(start_command(path, args))
# Now we just need to wait until they're all done. This will get
# them in as-launched order, rather than as-completed, but that
# seems like it should be fine for your use case.
for child in children:
    try:
        child.wait()
        print child.status
    except Exception as e:
        print "Exception in child process"
        print str(e)
A few things:
Notice from the code comments that I'm assuming the child isn't writing anything to us (and waiting for us to read it) after the initial interaction. If that's not true, things are a bit more complicated.
If you want to not only do this, but also spin up 8 children at a time, or even all of them at once, you can (as shown in my other answer) use an executor or just a mess of threads for the initial start_command calls, and have those tasks/threads return the child object to be waited on later. For example, with the Executor version, each future's result() will be a pexpect child process. However, you definitely need to read the pexpect docs on threads in that case—with some versions of linux, passing child-process objects between threads can break the objects.
Finally, since you will now be seeing things much more out-of-order than the original version, you might want to change your print statements to show which child you're printing for (which also probably means changing children from a list of children to a list of (child, path, arg) tuples or the like).
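For example, a minimal sketch of that bookkeeping (the tuple layout and the printed messages are just illustrations, not part of the original code):
# Keep (child, path, arg) together so the later prints can say which
# command they belong to. start_command returns None if spawning failed.
children = []
for path, arg in some_iterable:
    child = start_command(path, arg)
    if child is not None:
        children.append((child, path, arg))

for child, path, arg in children:
    try:
        child.wait()
        print "path=%s arg=%s status=%s" % (path, arg, child.status)
    except Exception as e:
        print "Exception in child process for path=%s arg=%s" % (path, arg)
        print str(e)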

If you want to run a process in the background, but at the same time interact with it, the simplest solution is to just kick off a thread to interact with the process.*
In your case, it sounds like you're running hundreds of processes, so you want to run some of them in parallel, but maybe not all of them at once? If so, you should use a thread pool or executor. For example, using concurrent.futures from the stdlib (or pip install the futures backport if your Python is too old):
def run_command(path, arg):
    try:
        child = pexpect.spawn(('some command --path {0} somethingmore --args {1}').format(path, arg), timeout=300)
        child.logfile = open(file_name, 'w')
        child.expect('x*')
        child.sendline(something)
        child.expect('E*')
        child.sendline(something)
        # child.read()
        child.interact()
        time.sleep(15)
        print child.status
    except Exception as e:
        print "Exception in child process"
        print str(e)
with concurrent.futures.ThreadPoolExecutor(max_workers=8) as x:
    fs = []
    for path, arg in some_iterable:
        fs.append(x.submit(run_command, path, arg))
    concurrent.futures.wait(fs)
If you need to return a value (or raise an exception) from the threaded code, you'll probably want a loop over as_completed(fs) instead of just plain wait. But here, you just seem to be printing stuff out and then forgetting it.
If the path, arg really do come straight out of an iterable like this, it's usually simpler to use x.map(run_command, some_iterable).
All of this (and other options, too) is explained pretty nicely in the module docs.
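For reference, rough sketches of both variants mentioned above (assuming run_command stays as defined earlier and some_iterable yields (path, arg) pairs):
# Variant 1: submit + as_completed, so results (or exceptions) are handled
# as each task finishes rather than in submission order.
with concurrent.futures.ThreadPoolExecutor(max_workers=8) as x:
    fs = {x.submit(run_command, path, arg): (path, arg)
          for path, arg in some_iterable}
    for f in concurrent.futures.as_completed(fs):
        path, arg = fs[f]
        try:
            f.result()  # re-raises any exception from run_command
        except Exception as e:
            print "run_command failed for %s %s: %s" % (path, arg, e)

# Variant 2: map, when the arguments come straight from iterables.
with concurrent.futures.ThreadPoolExecutor(max_workers=8) as x:
    paths, args = zip(*some_iterable)
    list(x.map(run_command, paths, args))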
Also see the pexpect FAQ and common problems. I don't think there are any issues that will affect you here in current versions (we're always spawning the child and interacting with it entirely in a single thread-pooled task), but I vaguely remember there used to be an additional problem in the past (something to do with signals?).
* I think asyncio would be a better solution, except that as far as I know none of the attempts to fork or reimplement pexpect in a nonblocking way are complete enough to actually use…

Related

Killing a background process launched with python sh

I have a compiled program I launch using python sh as a background process. I want to run it for 20 seconds, then kill it. I always get an exception I can't catch. The code looks like
cmd = sh.Command('./rtlogger')
try:
    p = cmd('config.txt', _bg=True, _out='/dev/null', _err='/dev/null', _timeout=20)
    p.wait()
except sh.TimeoutException:
    print('caught timeout')
I have also tried to use p.kill() and p.terminate() after catching the timeout exception. I see a stack trace that ends in SignalException_SIGKILL. I can't seem to catch that. The stack trace references none of my code. Also, the text comes to the screen even though I'm routing stdout and stderr to /dev/null.
The program seems to run OK. The logger collects the data, but I want to eliminate or catch the exception. Any advice appreciated.
_timeout for the original invocation only applies when the command is run synchronously, in the foreground. When you run a command asynchronously, in the background, with _bg=True, you need to pass timeout to the wait call instead, e.g.:
cmd = sh.Command('./rtlogger')
try:
    p = cmd('config.txt', _bg=True, _out='/dev/null', _err='/dev/null')
    p.wait(timeout=20)
except sh.TimeoutException:
    print('caught timeout')
Of course, in this case, you're not taking advantage of it being in the background (no work is done between launch and wait), so you may as well run it in the foreground and leave the _timeout on the invocation:
cmd = sh.Command('./rtlogger')
try:
    p = cmd('config.txt', _out='/dev/null', _err='/dev/null', _timeout=20)
except sh.TimeoutException:
    print('caught timeout')
You don't need to explicitly kill or terminate the child process; the _timeout_signal argument is used to signal the child on timeout (defaulting to signal.SIGKILL). You can change it to another signal if SIGKILL is not what you desire, but you don't need to call kill/terminate yourself either way; the act of timing out sends the signal for you.
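For example, a sketch of the foreground form with a gentler signal (using SIGTERM here is just an illustration of the option):
import signal
import sh

cmd = sh.Command('./rtlogger')
try:
    # Send SIGTERM instead of the default SIGKILL when the timeout expires.
    p = cmd('config.txt', _out='/dev/null', _err='/dev/null',
            _timeout=20, _timeout_signal=signal.SIGTERM)
except sh.TimeoutException:
    print('caught timeout')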

Popen.kill() failing

Update 2: So I piped the output of stderr, and it looks like when I include shell=True, I just get the help file for omxplayer (it lists all the command line switches and such). Is it possible that shell=True might not play nicely with omxplayer?
Update: I came across that link before but it failed on me so I moved on without digging deeper. After Tshepang suggested it again I looked into it further. I have two problems, and I'm hoping the first is caused by the second. The first problem is that when I include shell=True as an arg, the video never plays. If I don't include it, the video plays, but is not ever killed. Updated code below.
So I am trying to write a Python app for my Raspberry Pi that plays a video on a loop (I came across Popen as a good way to accomplish this using OMXplayer) and then, on keyboard interrupt, kills that process and opens another process (playing a different video). My eventual goal is to be able to use vid1 as a sort of "screensaver" and have vid2 play when a user interacts with the system, but for now I'm simply trying to kill vid1 on keyboard input, and I'm having quite a hard time doing it. I'm hoping someone can tell me where my code is falling down.
Forewarning that I'm extremely new to Python, and Linux-based systems in general, so if I'm doing this terribly wrong, please feel free to redirect me, but this seemed to be the fastest way to get there.
Here is my code as it stands:
import subprocess
import os
import signal
vid1 = ['omxplayer', '--loop', '/home/pi/Vids/2779832.mp4']
while True:
    #vid = subprocess.Popen(['omxplayer', '--loop', '/home/pi/Vids/2779832.mp4'], stdout=subprocess.PIPE, shell=True, preexec_fn=os.setsid)
    vid = subprocess.Popen(vid1, stdout=subprocess.PIPE, preexec_fn=os.setsid)
    print 'SID is: ', vid.pid  # with preexec_fn=os.setsid, the child's pid is its session id
    #vid = subprocess.Popen(['omxplayer', '--loop', '/home/pi/Vids/2779832.mp4'])
    id = raw_input()
    if not id:
        break
    os.killpg(vid.pid, signal.SIGTERM)
    print "your input: ", id
print "While loop has exited"
So I am trying to write a python app for my raspberry pi that plays a video on a loop (I came across Popen as a good way to accomplish this using OMXplayer) and then on keyboard interrupt, it kills that process and opens another process (playing a different video).
By default, SIGINT is propagated to all processes in the foreground process group; see "How Ctrl+C works". preexec_fn=os.setsid (or os.setpgrp) actually prevents this: use it only if you do not want omxplayer to receive Ctrl+C, i.e., use it if you manually call os.killpg when you need to kill a process tree (assuming omxplayer's children do not change their process group).
A "keyboard interrupt" (the SIGINT signal) is visible as a KeyboardInterrupt exception in Python. Your code should catch it:
#!/usr/bin/env python
from subprocess import call, check_call
try:
    rc = call(['omxplayer', 'first file'])
except KeyboardInterrupt:
    check_call(['omxplayer', 'second file'])
else:
    if rc != 0:
        raise RuntimeError('omxplayer failed to play the first file, '
                           'return code: %d' % rc)
The above assumes that omxplayer exits on Ctrl+C.
You could see the help message for several reasons, e.g., omxplayer does not support the --loop option (run it manually to check), or you mistakenly use shell=True while passing the command as a list. The rule is: always pass the command as a single string if you need shell=True, and conversely, always (on POSIX) pass the command as a list of arguments if shell=False (the default).
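As a quick illustration of that rule, using the command from the question (a sketch; only the form of the argument changes):
import subprocess

# shell=False (the default): pass a list of arguments.
subprocess.call(['omxplayer', '--loop', '/home/pi/Vids/2779832.mp4'])

# shell=True: pass one single string, exactly as you would type it in a shell.
subprocess.call('omxplayer --loop /home/pi/Vids/2779832.mp4', shell=True)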

Python: Timeout Exception Handling with Signal.Alarm

I am trying to implement a timeout exception handler if a function call is taking too long.
EDIT: In fact, I am writing a Python script using subprocess, which calls an old C++ program with arguments. I know that the program hangs from time to time, not returning anything. That's why I am trying to put a time limit and to move on to next call with different argument and etc.
I've been searching and trying to implement it, but it doesn't quite work, so I wish to get some help. What I have so far is:
#! /usr/bin/env python
import signal
class TimeOutException(Exception):
    def __init__(self, message, errors):
        super(TimeOutException, self).__init__(message)
        self.errors = errors

def signal_handler(signum, frame):
    raise TimeOutException("Timeout!")

signal.signal(signal.SIGALRM, signal_handler)
signal.alarm(3)

try:
    while True:
        pass
except TimeOutException:
    print "Timed out!"

signal.alarm(0)
EDIT: The error message I currently receive is "TypeError: __init__() takes exactly 3 arguments (2 given)".
Also, I would like to ask a basic question regarding the except block: what is the difference in role between the code right below "except TimeOutException" and the code in the exception handler? It seems both can do the same thing?
Any help would be appreciated.
if a function call is taking too long
I realize that this might not be obvious for inexperienced developers, but the methods applicable for approaching this problem entirely depend on what you are doing in this "busy function", such as:
Is this a heavy computation? If yes, which Python interpreter are you using? CPython or PyPy? If CPython: does this computation only use Python bytecode or does it involve function calls outsourced to compiled machine code (which may hold Python's Global Interpreter Lock for quite an uncontrollable amount of time)?
Is this a lot of I/O work? If yes, can you abort this I/O work in an arbitrary state? Or do you need to properly clean up? Are you using a certain framework such as gevent or Twisted?
Edit:
So, it looks like you are just spawning a subprocess and waiting for it to terminate. Great, that is actually one of the simplest problems to implement timeout control for. Python (3) ships a corresponding feature! :-) Have a look at
https://docs.python.org/3/library/subprocess.html#subprocess.call
The timeout argument is passed to Popen.wait(). If the timeout
expires, the child process will be killed and then waited for again.
The TimeoutExpired exception will be re-raised after the child process
has terminated.
Edit2:
Example code for you, save this to a file and execute it with Python 3.3, at least:
import subprocess
try:
    subprocess.call(['python', '-c', 'print("hello")'], timeout=2)
except subprocess.TimeoutExpired as e:
    print("%s was terminated as of timeout. Its output was:\n%s" % (e.cmd, e.output))

try:
    subprocess.call(['python'], timeout=2)
except subprocess.TimeoutExpired as e:
    print("%s was terminated as of timeout. Its output was:\n%s" % (e.cmd, e.output))
In the first case, the subprocess returns immediately and no timeout exception is raised. In the second case, the timeout expires, and your controlling process (the process running the script above) attempts to terminate the subprocess. This succeeds. After that, subprocess.TimeoutExpired is raised and the exception handler deals with it. For me, the output of the script above is:
['python'] was terminated as of timeout. Its output was:
None

Python Multiprocessing - sending inputs to child processes

I am using the multiprocessing module in Python to launch a few processes in parallel. These processes are independent of each other; they generate their own output and write the results to different files. Each process calls an external tool using the subprocess.call method.
It was working fine until I discovered an issue in the external tool where, due to some error condition, it goes into a 'prompt' mode and waits for user input. In my Python script I use the join method to wait till all the processes finish their tasks, and this causes the whole thing to wait on the erroneous subprocess call. I could put a timeout on each process, but I do not know in advance how long each one is going to run, so that option is ruled out.
How do I figure out if any child process is waiting for user input, and how do I send an 'exit' command to it? Any pointers or suggestions to relevant modules in Python will be really appreciated.
My code here:
import subprocess
import sys
import os
import multiprocessing
def write_script(fname, e):
    f = open(fname, 'w')
    f.write("Some useful command calling external tool")
    f.close()
    subprocess.call(['chmod', '+x', os.path.abspath(fname)])
    return os.path.abspath(fname)

def run_use(mname, script):
    print "ssh "+mname+" "+script
    subprocess.call(['ssh', mname, script])

if __name__ == '__main__':
    dict1 = {}
    dict1['mod1'] = ['pp1', 'ext2', 'les3', 'pw4']
    dict1['mod2'] = ['aaa', 'bbb', 'ccc', 'ddd']
    machines = ['machine1', 'machine2', 'machine3', 'machine4']
    log_file.write(str(dict1.keys()))
    for key in dict1.keys():
        arr = []
        for mod in dict1[key]:
            d = {}
            arr.append(mod)
            if ((mod == dict1[key][-1]) | (len(arr) % 4 == 0)):
                for i in range(0, len(arr)):
                    e = arr.pop()
                    script = write_script(e+"_temp.sh", e)
                    d[i] = multiprocessing.Process(target=run_use, args=(machines[i], script,))
                    d[i].daemon = True
                for pp in d:
                    d[pp].start()
                for pp in d:
                    d[pp].join()
Since you're writing a shell script to run your subcommands, can you simply tell them to read input from /dev/null?
#!/bin/bash
# ...
my_other_command -a -b arg1 arg2 < /dev/null
# ...
This may stop them blocking on input and is a really simple solution. If this doesn't work for you, read on for some other options.
The subprocess.call() function is simply shorthand for constructing a subprocess.Popen instance and then calling the wait() method on it. So, your spare processes could instead create their own subprocess.Popen instances and poll them with poll() method on the object instead of wait() (in a loop with a suitable delay). This leaves them free to remain in communication with the main process so you can, for example, allow the main process to tell the child process to terminate the Popen instance with the terminate() or kill() methods and then itself exit.
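A minimal sketch of that poll()-based variant (the multiprocessing.Event and the 0.5 second delay are assumptions used for illustration):
import time
import subprocess
import multiprocessing

def run_use_polling(mname, script, stop_event):
    proc = subprocess.Popen(['ssh', mname, script])
    while proc.poll() is None:        # None means the subprocess is still running
        if stop_event.is_set():       # the main process asked us to stop
            proc.terminate()          # or proc.kill()
            break
        time.sleep(0.5)

# In the main process:
#   stop_event = multiprocessing.Event()
#   p = multiprocessing.Process(target=run_use_polling,
#                               args=('machine1', script, stop_event))
#   p.start()
#   ... later, stop_event.set() tells the worker to terminate its subprocess.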
So, the question is how does the child process tell whether the subprocess is awaiting user input, and that's a trickier question. I would say perhaps the easiest approach is to monitor the output of the subprocess and search for the user input prompt, assuming that it always uses some string that you can look for. Alternatively, if the subprocess is expected to generate output continually then you could simply look for any output and if a configured amount of time goes past without any output then you declare that process dead and terminate it as detailed above.
Since you're reading the output, actually you don't need poll() or wait() - the process closing its output file descriptor is good enough to know that it's terminated in this case.
Here's an example of a modified run_use() method which watches the output of the subprocess:
def run_use(mname,script):
    print "ssh "+mname+" "+script
    proc = subprocess.Popen(['ssh', mname, script], stdout=subprocess.PIPE)
    for line in proc.stdout:
        if "UserPrompt>>>" in line:
            proc.terminate()
            break
In this example we assume that the process either gets hung up on UserPrompt>>> (replace with the appropriate string) or terminates naturally. If it were to get stuck in an infinite loop, for example, then your script would still not terminate; you can only really address that with an overall timeout, but you didn't seem keen to do that. Hopefully your subprocess won't misbehave in that way, however.
Finally, if you don't know in advance the prompt that will be given by your process, then your job is rather harder. Effectively what you're asking to do is monitor an external process and know when it's blocked reading on a file descriptor, and I don't believe there's a particularly clean solution to this. You could consider running the process under strace or similar, but that's quite an awful hack and I really wouldn't recommend it. Things like strace are great for manual diagnostics, but they really shouldn't be part of a production setup.

Start background process/daemon from CGI script

I'm trying to launch a background process from a CGI scripts. Basically, when a form is submitted the CGI script will indicate to the user that his or her request is being processed, while the background script does the actual processing (because the processing tends to take a long time.) The problem I'm facing is that Apache won't send the output of the parent CGI script to the browser until the child script terminates.
I've been told by a colleague that what I want to do is impossible because there is no way to prevent Apache from waiting for the entire process tree of a CGI script to die. However, I've also seen numerous references around the web to a "double fork" trick which is supposed to do the job. The trick is described succinctly in this Stack Overflow answer, but I've seen similar code elsewhere.
Here's a short script I wrote to test the double-fork trick in Python:
import os
import sys
if os.fork():
    print 'Content-type: text/html\n\n Done'
    sys.exit(0)
if os.fork():
    os.setsid()
    sys.exit(0)
# Second child
os.chdir("/")
sys.stdout.close()
sys.stderr.close()
sys.stdin.close()
f = open('/tmp/lol.txt', 'w')
while 1:
    f.write('test\n')
If I run this from the shell, it does exactly what I'd expect: the original script and first descendant die, and the second descendant keeps running until it's killed manually. But if I access it through CGI, the page won't load until I kill the second descendant or Apache kills it because of the CGI timeout. I've also tried replacing the second sys.exit(0) with os._exit(0), but there is no difference.
What am I doing wrong?
Don't fork - run batch separately
This double-forking approach is some kind of hack, which to me is an indication it shouldn't be done :). For CGI, anyway. Under the general principle that if something is too hard to accomplish, you are probably approaching it the wrong way.
Luckily you give the background info on what you need: a CGI call to initiate some processing that happens independently, and to return back to the caller. Well sure, there are Unix commands that do just that: schedule a command to run at a specific time (at) or whenever the CPU is free (batch). So do this instead:
import os
os.system("batch <<< '/home/some_user/do_the_due.py'")
# or if you don't want to wait for system idle,
# os.system("at now <<< '/home/some_user/do_the_due.py'")
print 'Content-type: text/html\n'
print 'Done!'
And there you have it. Keep in mind that if there is some output to stdout/stderr, it will be mailed to the user (which is good for debugging, but otherwise the script should probably keep quiet).
PS. I just remembered that Windows also has a version of at, so with a minor modification of the invocation you can have that work under Apache on Windows too (vs. the fork trick, which won't work on Windows).
PPS. Make sure the process running CGI is not excluded in /etc/at.deny from scheduling batch jobs.
I think there are two issues: setsid is in the wrong place, and buffered IO operations are being done in one of the transient children:
if os.fork():
    print "success"
    sys.exit(0)

if os.fork():
    os.setsid()
    sys.exit()
You've got the original process (grandparent, prints "success"), the middle parent, and the grandchild ("lol.txt").
The os.setsid() call is being performed in the middle parent after the grandchild has been spawned. The middle parent can't influence the grandchild's session after the grandchild has been created. Try this:
print "success"
sys.stdout.flush()
if os.fork():
    sys.exit(0)
os.setsid()
if os.fork():
    sys.exit(0)
This creates a new session before spawning the grandchild. Then the middle parent dies, leaving the session without a process group leader, ensuring that any calls to open a terminal will fail, making sure there's never any blocking on terminal input or output, or sending unexpected signals to the child.
Note that I've also moved the success to the grandparent; there's no guarantee of which child will run first after calling fork(2), and you run the risk that the child would be spawned, and potentially try to write output to standard out or standard error, before the middle parent could have had a chance to write success to the remote client.
In this case, the streams are closed quickly, but still, mixing standard IO streams among multiple processes is bound to give difficulty: keep it all in one process, if you can.
Edit: I've found a strange behavior I can't explain:
#!/usr/bin/python
import os
import sys
import time
print "Content-type: text/plain\r\n\r\npid: " + str(os.getpid()) + "\nppid: " + str(os.getppid())
sys.stdout.flush()
if os.fork():
    print "\nfirst fork pid: " + str(os.getpid()) + "\nppid: " + str(os.getppid())
    sys.exit(0)
os.setsid()
print "\nafter setsid pid: " + str(os.getpid()) + "\nppid: " + str(os.getppid())
sys.stdout.flush()
if os.fork():
    print "\nsecond fork pid: " + str(os.getpid()) + "\nppid: " + str(os.getppid())
    sys.exit(0)
#os.sleep(1) # comment me out, uncomment me, notice following line appear and dissapear
print "\nafter second fork pid: " + str(os.getpid()) + "\nppid: " + str(os.getppid())
The last line, after second fork pid, only appears when the os.sleep(1) call is commented out. When the call is left in place, the last line never appears in the browser. (But otherwise all the content is printed to the browser.)
I wouldn't suggest going about the problem this way. If you need to execute some task asynchronously, why not use a work queue like beanstalkd instead of trying to fork off the tasks from the request? There are client libraries for beanstalkd available for Python.
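For example, a sketch using the beanstalkc client library (the tube name and job body are made up for illustration):
import beanstalkc

# In the CGI script: enqueue the slow work and return to the browser at once.
queue = beanstalkc.Connection(host='localhost', port=11300)
queue.use('slow-jobs')                           # hypothetical tube name
queue.put('process /home/some_user/input_file')  # hypothetical job body
print 'Content-type: text/html\n'
print 'Done! Your request has been queued.'

# In a separate, long-running worker process:
#   queue.watch('slow-jobs')
#   job = queue.reserve()
#   ... do the work ...
#   job.delete()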
I needed to break the stdout as well as the stderr like this:
sys.stdout.flush()
os.close(sys.stdout.fileno()) # Break web pipe
sys.stderr.flush()
os.close(sys.stderr.fileno()) # Break web pipe
if os.fork(): # Get out parent process
    sys.exit()
# background processing follows here
OK, I'm adding a simpler solution for the case where you don't need to start another script but want to continue in the same one to do the long process in the background. This will let you give a waiting message that is instantly seen by the client, and continue your server processing even if the client kills the browser session:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import sys
import time
import datetime
print "Content-Type: text/html;charset=ISO-8859-1\n\n"
print "<html>Please wait...<html>\n"
sys.stdout.flush()
os.close(sys.stdout.fileno()) # Break web pipe
if os.fork(): # Get out parent process
    sys.exit()
# Continue with new child process
time.sleep(1) # Be sure the parent process reach exit command.
os.setsid() # Become process group leader
# From here I cannot print to Webserver.
# But I can write in other files or do any long process.
f=open('long_process.log', 'a+')
f.write( "Starting {0} ...\n".format(datetime.datetime.now()) )
f.flush()
time.sleep(15)
f.write( "Still working {0} ...\n".format(datetime.datetime.now()) )
f.flush()
time.sleep(300)
f.write( "Still alive - Apache didn't scalped me!\n" )
f.flush()
time.sleep(150)
f.write( "Finishing {0} ...\n".format(datetime.datetime.now()) )
f.flush()
f.close()
I read half the Internet over one week without success on this one. Finally I tested whether there is a difference between sys.stdout.close() and os.close(sys.stdout.fileno()), and there is a huge one: the first didn't do anything, while the second closed the pipe from the web server and completely disconnected from the client. The fork is only necessary because the web server will kill its processes after a while, and your long process probably needs more time to complete.
As other answers have noted, it is tricky to start a persistent process from your CGI script because the process must cleanly dissociate itself from the CGI program. I have found that a great general-purpose program for this is daemon. It takes care of the messy details involving open file handles, process groups, root directory, etc etc for you. So the pattern of such a CGI program is:
#!/bin/sh
foo-service-ping || daemon --restart foo-service
# ... followed below by some CGI handler that uses the "foo" service
The original post describes the case where you want your CGI program to return quickly, while spawning off a background process to finish handling that one request. But there is also the case where your web application depends on a running service which must be kept alive. (Other people have talked about using beanstalkd to handle jobs. But how do you ensure that beanstalkd itself is alive?) One way to do this is to restart the service (if it's down) from within the CGI script. This approach makes sense in an environment where you have limited control over the server and can't rely on things like cron or an init.d mechanism.
There are situations where passing work off to a daemon or cron is not appropriate. Sometimes you really DO need to fork, let the parent exit (to keep Apache happy) and let something slow happen in the child.
What worked for me: When done generating web output, and before the fork:
fflush(stdout), close(0), close(1), close(2); // in the process BEFORE YOU FORK
Then fork() and have the parent immediately exit(0);
The child then AGAIN does
close(0), close(1), close(2);
and also a
setsid();
...and then gets on with whatever it needs to do.
Why you need to close them in the child even though they were closed in the primordial process beforehand is confusing to me, but this is what worked. It didn't work without the second set of closes. This was on Linux (on a Raspberry Pi).
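If it helps, a rough Python transcription of that same sequence (untested; it simply mirrors the steps described above):
import os
import sys

# Before the fork: finish the web output, flush, close the CGI streams.
sys.stdout.flush()
os.close(0)
os.close(1)
os.close(2)

if os.fork():       # parent: exit immediately so Apache gets its response
    os._exit(0)

# Child: close the descriptors again (ignoring "already closed" errors)
# and start a new session, mirroring the C version above.
for fd in (0, 1, 2):
    try:
        os.close(fd)
    except OSError:
        pass
os.setsid()

# ... and then get on with whatever it needs to do.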
I haven't tried using fork but I have accomplished what you're asking by executing a sys.stdout.flush() after the original message, before calling the background process.
i.e.
print "Please wait..."
sys.stdout.flush()
output = some_processing() # put what you want to accomplish here
print output # in my case output was a redirect to a results page
My head is still hurting from that one. I tried all possible ways to use your code with fork and stdout closing, nulling, or anything else, but nothing worked. Whether the output of an uncompleted process is displayed depends on the web server (Apache or other) configuration, and in my case it wasn't an option to change it, so attempts with "Transfer-Encoding: chunked;chunk=CRLF" and sys.stdout.flush() didn't work either. Here is the solution that finally worked.
In short, use something like:
if len(sys.argv) == 1: # I'm in the parent process
    childProcess = subprocess.Popen('./myScript.py X', bufsize=0, stdin=open("/dev/null", "r"), stdout=open("/dev/null", "w"), stderr=open("/dev/null", "w"), shell=True)
    print "My HTML message that says to wait a long time"
else: # Here comes the child and his long process
    # From here I cannot print to Webserver, but I can write in files that will be refreshed in my web page.
    time.sleep(15) # To verify the parent completes rapidly.
I use the "X" parameter to distinguish between parent and child because I call the same script for both, but you could do it more simply by calling another script. If a complete example would be useful, please ask.
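For what it's worth, a slightly more complete sketch along those lines (the log file name and the messages are placeholders):
#!/usr/bin/env python
import os
import subprocess
import sys
import time

if len(sys.argv) == 1:
    # Parent: detach a copy of this script and answer the browser right away.
    devnull = open(os.devnull, 'r+')
    subprocess.Popen('./myScript.py X', bufsize=0, stdin=devnull,
                     stdout=devnull, stderr=devnull, shell=True)
    print "Content-Type: text/html\n"
    print "<html>Please wait, your job is running...</html>"
else:
    # Child: no more output to the web server, only to files.
    f = open('long_process.log', 'a')
    f.write("child started\n")
    time.sleep(15)              # stands in for the real long process
    f.write("child finished\n")
    f.close()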
For those who get "sh: 1: Syntax error: redirection unexpected" with the at/batch solution, try something like this:
Make sure that the at command is installed and that the user running the application isn't in /etc/at.deny.
os.system("echo sudo /srv/scripts/myapp.py | /usr/bin/at now")
