I have a python script and I want to launch an independent daemon process from it. I want to call my python script, have it launch this system tray daemon, do some python magic on a database file and quit, leaving the system tray daemon running.
I have tried os.system, subprocess.call, subprocess.Popen, and os.execl, but each one keeps my script alive until I close the system tray daemon.
This sounds like it should have a simple solution, but I can't get anything to work.
You can use a couple of nifty Popen parameters to get a truly detached process on Windows (thanks to greenhat for his answer here):
import subprocess
DETACHED_PROCESS = 0x00000008
results = subprocess.Popen(['notepad.exe'],
close_fds=True, creationflags=DETACHED_PROCESS)
print(results.pid)
See also this answer for a nifty cross-platform version (make sure to add close_fds though as it is critical for Windows).
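For reference, here is a minimal cross-platform sketch of that idea (an untested sketch assuming Python 3; notepad.exe is kept as the example command, and the non-Windows branch uses start_new_session, which calls setsid() in the child):
import subprocess
import sys

def spawn_detached(cmd):
    # sketch only: detach via creation flags on Windows, via a new session elsewhere
    if sys.platform == 'win32':
        DETACHED_PROCESS = 0x00000008
        CREATE_NEW_PROCESS_GROUP = 0x00000200
        return subprocess.Popen(cmd, close_fds=True,
                                creationflags=DETACHED_PROCESS | CREATE_NEW_PROCESS_GROUP)
    # POSIX: start_new_session=True runs setsid() in the child (Python 3.2+)
    return subprocess.Popen(cmd, close_fds=True, start_new_session=True,
                            stdin=subprocess.DEVNULL,
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)

spawn_detached(['notepad.exe'])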
Solution for Windows: os.startfile()
It works as if you had double-clicked the executable, causing it to launch independently. A very handy one-liner.
http://docs.python.org/library/os.html?highlight=startfile#os.startfile
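A hedged one-liner example (the path is only a placeholder):
import os

# launches the executable as if it had been double-clicked; this script can then exit
os.startfile(r"C:\path\to\tray_daemon.exe")  # placeholder path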
I would recommend using the double-fork method.
Example:
import os
import sys
import time
def main():
    fh = open('log', 'a')
    while True:
        fh.write('Still alive!')
        fh.flush()
        time.sleep(1)

def _fork():
    try:
        pid = os.fork()
        if pid > 0:
            sys.exit(0)
    except OSError, e:
        print >>sys.stderr, 'Unable to fork: %d (%s)' % (e.errno, e.strerror)
        sys.exit(1)

def fork():
    _fork()
    # remove references from the main process
    os.chdir('/')
    os.setsid()
    os.umask(0)
    _fork()

if __name__ == '__main__':
    fork()
    main()
I'm trying to port a shell script to the much more readable python version. The original shell script starts several processes (utilities, monitors, etc.) in the background with "&". How can I achieve the same effect in python? I'd like these processes not to die when the python script completes. I am sure it's related to the concept of a daemon somehow, but I couldn't find how to do this easily.
While jkp's solution works, the newer way of doing things (and the way the documentation recommends) is to use the subprocess module. For simple commands it's equivalent, but it offers more options if you want to do something more complicated.
Example for your case:
import subprocess
subprocess.Popen(["rm","-r","some.file"])
This will run rm -r some.file in the background. Note that calling .communicate() on the object returned from Popen will block until it completes, so don't do that if you want it to run in the background:
import subprocess
proc = subprocess.Popen(["sleep", "30"])
proc.communicate()  # Will block for 30 seconds
See the documentation here.
Also, a point of clarification: "Background" as you use it here is purely a shell concept; technically, what you mean is that you want to spawn a process without blocking while you wait for it to complete. However, I've used "background" here to refer to shell-background-like behavior.
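If you later want to check whether the child has finished without blocking, poll() is the non-blocking counterpart of communicate()/wait(); a small sketch:
import subprocess
import time

p = subprocess.Popen(["sleep", "30"])
while p.poll() is None:   # returns None while the child is still running
    time.sleep(1)         # the parent is free to do other work here
print(p.returncode)       # 0 once the child has exited normally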
Note: This answer is less current than it was when posted in 2009. Using the subprocess module shown in other answers is now recommended in the docs:
(Note that the subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using these functions.)
If you want your process to start in the background you can either use system() and call it in the same way your shell script did, or you can spawn it:
import os
os.spawnl(os.P_DETACH, 'some_long_running_command', 'some_long_running_command')
(Note that os.P_DETACH is Windows-only; on other platforms you can use the more widely available os.P_NOWAIT flag instead, though the child will not then be detached from the console.)
See the documentation here.
You probably want the answer to "How to call an external command in Python".
The simplest approach is to use the os.system function, e.g.:
import os
os.system("some_command &")
Basically, whatever you pass to the system function will be executed the same as if you'd passed it to the shell in a script.
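Note that the trailing & is POSIX-shell syntax; on Windows, os.system goes through cmd.exe, where the rough equivalent (a sketch, with some_command still the placeholder) is the start built-in:
import os

# cmd.exe's "start" launches the command and returns without waiting for it
os.system("start some_command")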
I found this here:
On Windows (Win XP), the parent process will not finish until longtask.py has finished its work. That is not what you want in a CGI script. The problem is not specific to Python; the PHP community has the same issue.
The solution is to pass the DETACHED_PROCESS process creation flag to the underlying CreateProcess call in the Windows API. If you happen to have pywin32 installed, you can import the flag from the win32process module; otherwise you should define it yourself:
DETACHED_PROCESS = 0x00000008
pid = subprocess.Popen([sys.executable, "longtask.py"],
creationflags=DETACHED_PROCESS).pid
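As a side note, on Python 3.7+ the same flag is available on Windows as subprocess.DETACHED_PROCESS, so a variant of the call above (a sketch, keeping the illustrative longtask.py name) is:
import subprocess
import sys

# subprocess.DETACHED_PROCESS exists on Windows since Python 3.7
pid = subprocess.Popen([sys.executable, "longtask.py"],
                       creationflags=subprocess.DETACHED_PROCESS).pid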
Use subprocess.Popen() with the close_fds=True parameter, which will allow the spawned subprocess to be detached from the Python process itself and continue running even after Python exits.
https://gist.github.com/yinjimmy/d6ad0742d03d54518e9f
import os, time, sys, subprocess
if len(sys.argv) == 2:
    # child: the spawned copy of this script
    time.sleep(5)
    print 'track end'
    if sys.platform == 'darwin':
        subprocess.Popen(['say', 'hello'])
else:
    # parent: spawn a detached copy of this script and exit
    print 'main begin'
    subprocess.Popen(['python', os.path.realpath(__file__), '0'], close_fds=True)
    print 'main end'
Capture output and run in the background with threading
As mentioned in this answer, if you capture the output with stdout= and then try to read(), the process blocks.
However, there are cases where you need this. For example, I wanted to launch two processes that talk to each other over a port, and save their output both to log files and to stdout.
The threading module allows us to do that.
First, have a look at how to do the output redirection part alone in this question: Python Popen: Write to stdout AND log file simultaneously
Then:
main.py
#!/usr/bin/env python3
import os
import subprocess
import sys
import threading
def output_reader(proc, file):
    while True:
        byte = proc.stdout.read(1)
        if byte:
            sys.stdout.buffer.write(byte)
            sys.stdout.flush()
            file.buffer.write(byte)
        else:
            break

with subprocess.Popen(['./sleep.py', '0'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc1, \
     subprocess.Popen(['./sleep.py', '10'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc2, \
     open('log1.log', 'w') as file1, \
     open('log2.log', 'w') as file2:
    t1 = threading.Thread(target=output_reader, args=(proc1, file1))
    t2 = threading.Thread(target=output_reader, args=(proc2, file2))
    t1.start()
    t2.start()
    t1.join()
    t2.join()
sleep.py
#!/usr/bin/env python3
import sys
import time
for i in range(4):
    print(i + int(sys.argv[1]))
    sys.stdout.flush()
    time.sleep(0.5)
After running:
./main.py
stdout gets updated every 0.5 seconds, two lines at a time, to contain:
0
10
1
11
2
12
3
13
and each log file contains the respective log for a given process.
Inspired by: https://eli.thegreenplace.net/2017/interacting-with-a-long-running-child-process-in-python/
Tested on Ubuntu 18.04, Python 3.6.7.
You probably want to start investigating the os module for forking separate processes (by opening an interactive session and issuing help(os)). The relevant functions are fork and any of the exec ones. To give you an idea of how to start, put something like this in a function that performs the fork (the function needs to take a list or tuple 'args' as an argument that contains the program's name and its parameters; you may also want to define stdin, stdout and stderr for the new process):
try:
    pid = os.fork()
except OSError, e:
    # some debug output
    sys.exit(1)

if pid == 0:
    # optionally use os.putenv(...) to set environment variables
    # os.execv expects the program path plus the full argument list (args[0] is the program name)
    os.execv(args[0], args)
You can use
import os
pid = os.fork()
if pid == 0:
    # child process: continue with the rest of your code here ...
This will make the forked Python process run in the background.
I haven't tried this yet, but using .pyw files instead of .py files should help. .pyw files don't get a console window, so in theory the script should not appear and should behave like a background process.
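An untested sketch of a closely related approach: launch the script explicitly with pythonw.exe (the console-less interpreter that ships with CPython on Windows); somefile.pyw is just a placeholder name:
import subprocess

# pythonw has no console window, so the child runs "invisibly" in the background
subprocess.Popen(["pythonw", "somefile.pyw"], close_fds=True)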
I'm writing a python script with an infinite while loop that I am running over ssh. I would like the script to terminate when someone kills ssh. For example:
The script (script.py):
while True:
    # do something
Will be run as:
ssh foo ./script.py
When I kill the ssh process, I would like the script on the other end to stop running.
I have tried looking for a closed stdout:
while not sys.stdout.closed:
    # do something
but this didn't work.
How do I achieve this?
Edit:
The remote machine is a Mac which opens the program in a csh:
502 29352 ?? 0:00.01 tcsh -c python test.py
502 29354 ?? 0:00.04 python test.py
I'm opening the ssh process from a python script like so:
p = Popen(['ssh','foo','./script.py'],stdout=PIPE)
while True:
    line = p.stdout.readline()
    # etc.
EDIT
Proposed Solutions:
Run the script with while os.getppid() != 1
This seems to work on Linux systems, but does not work when the remote machine is running OSX. The problem is that the command is launched via a csh (see above), so it is the csh whose parent process id gets set to 1, not the script's.
Periodically log to stderr
This works, but the script is also run locally, and I don't want to print a heartbeat to stderr.
Run the script in a pseudo-tty with ssh -tt.
This does work, but has some weird consequences. Consider the following:
remote_script:
#!/usr/bin/env python
import os
import time
import sys
while True:
    print time.time()
    sys.stdout.flush()
    time.sleep(1)
local_script:
#!/usr/bin/env python
from subprocess import Popen, PIPE
import time
p = Popen(['ssh', '-tt', 'user@foo', 'remote_script'], stdout=PIPE)
while True:
    line = p.stdout.readline().strip()
    if line:
        print line
    else:
        break
    time.sleep(10)
First of all, the output is really weird; it seems to keep adding tabs or something:
[user@local ~]$ local_script
1393608642.7
1393608643.71
1393608644.71
Connection to foo closed.
Second, the program does not quit the first time it receives a SIGINT; I have to hit Ctrl-C twice in order to kill local_script.
Okay, I have a solution for you
When the ssh connection closes, the parent process id will change from the pid of the ssh daemon (the fork that handles your connection) to 1.
Thus the following solves your problem.
#!/usr/local/bin/python
from time import sleep
import os
# os.getppid() returns the parent pid
while os.getppid() != 1:
    sleep(1)
Can you confirm this is working on your end too? :)
edit
I saw your update.
This is not tested, but to get the idea working on OSX, you may be able to detect whether the parent process of the csh changes. The code below only illustrates the idea and has not been tested. That said, I think it would work, but it would not be the most elegant solution. If a cross-platform solution using signals could be found, it would be preferable.
import os
import sys
from time import sleep

import psutil

def do_stuff():
    sleep(1)

if sys.platform == 'darwin':
    tcsh_pid = os.getppid()
    sshfork_pid = psutil.Process(tcsh_pid).ppid()  # parent of the tcsh, i.e. the sshd fork (psutil >= 2.0 API)
    while sshfork_pid == psutil.Process(tcsh_pid).ppid():
        do_stuff()
elif sys.platform == 'linux':
    while os.getppid() != 1:
        sleep(1)
else:
    raise Exception("platform not supported")
Have you tried
ssh -tt foo ./script.py
When the terminal connection is lost, the application is supposed to receive the SIGHUP signal, so all you have to do is register a handler for it using the signal module.
import signal

def MyHandler(signum, stackFrame):
    errorMessage = "I was stopped by %s" % signum
    raise Exception(errorMessage)

# somewhere at the beginning of __main__:
# register the handler
signal.signal(signal.SIGHUP, MyHandler)
Note that most likely you'll have to handle some other signals. You can do it in absolutely the same way.
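For example, SIGTERM (what a plain kill sends) can be registered in the same way, reusing the handler above (a sketch, untested):
import signal

# reuse MyHandler from the snippet above; a plain `kill` sends SIGTERM
signal.signal(signal.SIGTERM, MyHandler)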
I'd suggest periodically logging to stderr.
This will cause an exception to occur when you no longer have a stderr to write to.
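A minimal sketch of that heartbeat idea (the message and interval are arbitrary):
import sys
import time

while True:
    # once the ssh session (and hence stderr) is gone, this write raises an exception
    sys.stderr.write("heartbeat\n")
    sys.stderr.flush()
    # ... do the real work of the loop here ...
    time.sleep(5)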
The running script is a child process of the terminal session. If you close the SSH session properly, it will terminate the process. But another way of going about this is to tie your while loop to some other condition and disconnect it from your SSH session.
You can have your script controlled by cron to execute regularly. You can give the while loop a counter. You can put a sleep command in the loop to control execution. Pretty much anything other than having it tied to your SSH session is valid.
To do this you could use exec ... & to disconnect instances of your loop from the session.
I'm writing a script that needs to open another script, but continue running the main script such that both scripts are running simultaneously.
I've tried execfile() but the file doesn't open. When I use os.system(somefile.py) it successfully opens the .py file via the console but immediately closes it. Are there alternatives so that I can run a python script within a main python script, but have both processes running simultaneously without conflicting with one another?
Here is sample code I've tested:
import os
file_path = 'C:\\Users\\Tyler\\Documents\\Multitask Bot\\somefile.py'
def main():
    os.system(file_path)

if __name__ == '__main__':
    main()
execfile() and os.system() will block the parent process until the child exits. Use subprocess.Popen(), e.g.
import subprocess, time
file_path = 'C:\\Users\\Tyler\\Documents\\Multitask Bot\\somefile.py'
def main():
    child = subprocess.Popen(['python', file_path])
    while child.poll() is None:
        print "parent: child (pid = %d) is still running" % child.pid
        # do parent stuff
        time.sleep(1)
    print "parent: child has terminated, returncode = %d" % child.returncode

if __name__ == '__main__':
    main()
This is just one way to handle it. You may want to collect stdout and/or stderr from the child and possibly send data to the child's stdin. Read up on the subprocess module.
If you want to run another script simultaneously, consider the subprocess module.
Your problem may be that the file is not executed from C:\\Users\\Tyler\\Documents\\Multitask Bot\\
but from somewhere else, so a local import may fail.
Could you try executing os.chdir('C:\\Users\\Tyler\\Documents\\Multitask Bot\\') before os.system?
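i.e., something along these lines (paths taken from the question):
import os

file_path = 'C:\\Users\\Tyler\\Documents\\Multitask Bot\\somefile.py'
os.chdir('C:\\Users\\Tyler\\Documents\\Multitask Bot\\')  # so somefile.py's local imports resolve
os.system(file_path)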
I don't understand why this simple code
# file: mp.py
from multiprocessing import Process
import sys
def func(x):
    print 'works ', x + 2
    sys.stdout.flush()

p = Process(target=func, args=(2,))
p.start()
p.join()
p.terminate()

print 'done'
sys.stdout.flush()
creates "pythonw.exe" processes continuously and it doesn't print anything, even though I run it from the command line:
python mp.py
I am running the latest Python 2.6 on Windows 7, both 32- and 64-bit.
You need to protect the entry point of the program by using if __name__ == '__main__':.
This is a Windows-specific problem. On Windows your module has to be imported into a new Python interpreter in order for it to access your target code. If you don't stop this new interpreter running the start-up code, it will spawn another child, which will then spawn another child, until there are pythonw.exe processes as far as the eye can see.
Other platforms use os.fork() to launch the subprocesses, so they don't have the problem of re-importing the module.
So your code will need to look like this:
from multiprocessing import Process
import sys
def func(x):
    print 'works ', x + 2
    sys.stdout.flush()

if __name__ == '__main__':
    p = Process(target=func, args=(2,))
    p.start()
    p.join()
    p.terminate()
    print 'done'
    sys.stdout.flush()
According to the programming guidelines for multiprocessing, on Windows you need to use an if __name__ == '__main__': guard.
Funny, works on my Linux machine:
$ python mp.py
works 4
done
$
Is the multiprocessing thing supposed to work on Windows? A lot of programs that originated in the Unix world don't handle Windows so well, because Unix uses fork(2) to clone processes quite cheaply, but (it is my understanding that) Windows does not support fork(2) gracefully, if at all.
I'm spawning a script that runs for a long time from a web app, like this:
os.spawnle(os.P_NOWAIT, "../bin/producenotify.py", "producenotify.py", "xx",os.environ)
The script is spawned successfully and it runs, but until it finishes I am not able to free the port that is used by the web app; in other words, I am not able to restart the web app. How do I spawn off a process and make it completely independent of the web app?
This is on Linux.
Since #mark clarified it's a Linux system, the script could easily make itself fully independent, i.e., a daemon, by following this recipe. (You could also do it in the parent: os.fork first, and only then os.exec... the child process.)
Edit: to clarify some details wrt #mark's comment on my answer: super-user privileges are not needed to "daemonize" a process as per the cookbook recipes, nor is there any need to change the current working directory (though the code in the recipe does do that and more, that's not the crucial part -- rather it's the proper logic sequence of fork, _exit and setsid calls). The various os.exec... variants that do not end in e use the parent process's environment, so that part is easy too -- see Python online docs.
To address suggestions made in others' comments and answers: I believe subprocess and multiprocessing per se don't daemonize the child process, which seems to be what #mark needs; the script could do it for itself, but since some code has to be doing forks and setsid, it seems neater to me to keep all of the spawning on that low-level plane rather than mix some high-level and some low-level code in the course of the operation.
Here's a vastly reduced and simplified version of the recipe at the above URL, tailored to be called in the parent to spawn a daemon child -- this way, the code can be used to execute non-Python executables just as well. As given, the code should meet the needs #mark explained, of course it can be tailored in many ways -- I strongly recommend reading the original recipe and its comments and discussions, as well as the books it recommends, for more information.
import os
import sys
def spawnDaemon(path_to_executable, *args):
    """Spawn a completely detached subprocess (i.e., a daemon).

    E.g. for mark:
    spawnDaemon("../bin/producenotify.py", "producenotify.py", "xx")
    """
    # fork the first time (to make a non-session-leader child process)
    try:
        pid = os.fork()
    except OSError, e:
        raise RuntimeError("1st fork failed: %s [%d]" % (e.strerror, e.errno))
    if pid != 0:
        # parent (calling) process is all done
        return

    # detach from the controlling terminal (to make the child a session leader)
    os.setsid()

    # fork a second time, so the session leader can exit and the daemon
    # can never re-acquire a controlling terminal
    try:
        pid = os.fork()
    except OSError, e:
        raise RuntimeError("2nd fork failed: %s [%d]" % (e.strerror, e.errno))
    if pid != 0:
        # child process is all done
        os._exit(0)

    # the grandchild process is now a non-session-leader, detached from the parent;
    # it must now close all open files
    try:
        maxfd = os.sysconf("SC_OPEN_MAX")
    except (AttributeError, ValueError):
        maxfd = 1024

    for fd in range(maxfd):
        try:
            os.close(fd)
        except OSError:  # fd wasn't open to begin with (ignored)
            pass

    # redirect stdin, stdout and stderr to /dev/null
    os.open(os.devnull, os.O_RDWR)  # standard input (0)
    os.dup2(0, 1)                   # standard output (1)
    os.dup2(0, 2)                   # standard error (2)

    # and finally let's execute the executable for the daemon!
    try:
        os.execv(path_to_executable, args)
    except Exception:
        # oops, we're cut off from the world, let's just give up
        os._exit(255)
You can use the multiprocessing library to spawn processes. A basic example is shown here:
from multiprocessing import Process
def f(name):
    print 'hello', name

if __name__ == '__main__':
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()