Launching a Daemon from Python, then detaching the Parent from the Child - python

I'm running a daemon on a remote machine: mydaemon. This daemon should be persistently running at all times.
When I'm running a job on my remote machine, it also launches a lightweight python server process, my_remote_server.py.
One of the commands I can send to my_remote_server.py is to restart mydaemon, which I'm trying to do like this:
os.system("killall mydaemon")
subprocess.Popen(["mydaemon"], stdin=None, stdout=None, stderr=None, close_fds=True)
When my job ends, my_remote_server.py is supposed to terminate, but mydaemon should keep running. However, I see my_remote_server.py stuck as a zombie process (this is causing the system not to see my job as terminated):
820 root Z [my_remote_serve]
834 root 552 S /usr/sbin/telnetd -l /bin/sh
835 root 836 S /bin/sh
844 root 672 S mydaemon
I want to detach Parent (my_remote_server.py) from the child (mydaemon), but I can't figure out how.
--
My python version is 2.5.4
edit:
I think I understand daemonization a bit better now, but I'm still having some trouble getting the daemon to separate.
I'm leaving out the error handling here for brevity:
os.system("killall mydaemon")
if(os.fork() > 0):
return True # my_remote_server.py returns to handle additional commands
os.setsid()
if(os.fork() > 0):
exit(0) # first child exits after becoming session leader
os.execlp("mydaemon") # have the second child run as the daemon
This is my ps list before I call the restart_mydaemon function
252 root 672 S mydaemon
286 root 4552 S /usr/bin/python my_remote_server.py
This is after restart_mydaemon, first child is zombied (shouldn't it be gone?)
286 root 4552 S /usr/bin/python my_remote_server.py
300 root Z [my_remote_serve]
304 root 672 S [mydaemon]
This is when the job terminates (my_remote_server.py should have exited, but it's a zombie; the first child, however, has exited by this point)
286 root Z [my_remote_serve]
304 root 1012 S [mydaemon]

Turning a process into a daemon is a multi-step process, and part of your problem is that your parent process is not waiting for child termination correctly (read about wait/waitpid); see the reaping sketch after the code below. The steps:
Your process must close any open file descriptors (stdin, stdout, stderr).
Your process should change its working directory, set its umask, and whatever else you want/need.
You must fork a process.
That process must become a session (and process group) leader.
You must fork again, to fully detach from the (grand)parent.
try:
    # free the parent, detach from its process group
    pid = os.fork()
    if pid > 0:
        exit(0)  # parent exits
except OSError, e:
    raise Exception("%s [%d]" % (e.strerror, e.errno))

# become session leader and process group leader, detached from any controlling terminal
os.setsid()

try:
    # fork again so init cleans us up, preventing a zombie
    pid = os.fork()
    if pid > 0:
        exit(0)  # first child exits
except OSError, e:
    raise Exception("%s [%d]" % (e.strerror, e.errno))

# change directories, set umask, etc.
os.chdir(MYDIR)
os.umask(MYMASK)

# close stdin/stdout/stderr and reopen them on /dev/null (or on a log file, if desired)
devnull = os.open(os.devnull, os.O_RDWR)
os.dup2(devnull, 0)
os.dup2(devnull, 1)
os.dup2(devnull, 2)
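For the zombies specifically: after the first os.fork() in your restart function, my_remote_server.py must reap the child it created, otherwise the exited first child lingers as a zombie until the server itself dies. Here is a minimal sketch built from the code in your question; the os.waitpid call is the main addition (and execlp needs the program name passed again as argv[0]):

import os

def restart_mydaemon():
    os.system("killall mydaemon")
    pid = os.fork()
    if pid > 0:
        os.waitpid(pid, 0)   # reap the first child; it exits almost immediately
        return True          # back to handling commands, no zombie left behind
    os.setsid()
    if os.fork() > 0:
        os._exit(0)          # first child exits; init now owns the daemon
    os.execlp("mydaemon", "mydaemon")  # second child becomes the daemon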

Take a look here for a reference and a good explanation of how daemonization works:
http://code.activestate.com/recipes/278731/
Overall this is pretty non-trivial, so I would try the python-daemon library or a similar library:
https://pypi.python.org/pypi/python-daemon/

I wouldn't call subprocess to do this. In my opinion, subprocess should only be used to run a command that you can't import. So in this case your my_remote_server.py could import mydaemon and run it.
You have two ways of doing this: make my_remote_server the parent and mydaemon the child, or have a new script that is the parent and spawns two children (mydaemon, my_remote_server).
For example, below I create a child process at global scope, since I need to access it in the HTTP handler of tornado. I set up the web server, start the child process, and then start the web server. When the endpoint is hit, the handler runs and the child process is pulled from the global scope; we terminate the child process and print. You could add more endpoints to tornado to start the child again, or to stop and then restart it.
remote.py:
import multiprocessing
import deamon
import tornado.ioloop
import tornado.web

child_pro = multiprocessing.Process(target=deamon.run_deamon)
child_pro.daemon = True

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        global child_pro
        child_pro = child_pro  # type: multiprocessing.Process
        child_pro.terminate()
        print("Child killed")

def make_app():
    return tornado.web.Application([
        (r"/", MainHandler),
    ])

if __name__ == "__main__":
    app = make_app()
    app.listen(8888)
    child_pro.start()
    tornado.ioloop.IOLoop.current().start()
deamon.py:
import time

def run_deamon():
    while True:
        time.sleep(2)
        print('child alive')
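To exercise the endpoint (a usage sketch; the port is the 8888 configured in remote.py, and any HTTP client would do):

import urllib.request
# one GET against the handler terminates the child and prints "Child killed"
urllib.request.urlopen("http://localhost:8888/")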

Related

subprocess.Popen.send_signal(CTRL_C_EVENT) does not work

I have a piece of python code that should spawn an interruptible task in a child process:
import os
import signal
import subprocess
import sys

class Script:
    '''
    Class instantiated by the parent process
    '''
    def __init__(self, *args, **kwargs):
        self.process = None
        self.start()

    def __del__(self):
        if self.process:
            if self.process.poll() is None:
                self.stop()

    def start(self):
        popen_kwargs = {
            'executable': sys.executable,
            # the 0 * ... term keeps CREATE_DEFAULT_ERROR_MODE toggled off
            'creationflags': 0 * subprocess.CREATE_DEFAULT_ERROR_MODE
                             | subprocess.CREATE_NEW_PROCESS_GROUP,
        }
        self.process = subprocess.Popen(['python', os.path.realpath(__file__)],
                                        **popen_kwargs)

    def stop(self):
        if not self.process:
            return
        try:
            self.process.send_signal(signal.CTRL_C_EVENT)
            self.process.wait()
            self.process = None
        except KeyboardInterrupt:
            pass

class ScriptSubprocess:
    def __init__(self):
        self.stop = False

    def run(self):
        try:
            while not self.stop:
                pass  # ... (the actual task)
        except KeyboardInterrupt:
            # interrupted!
            pass
        finally:
            # make a *clean* exit if interrupted
            self.stop = True

if __name__ == '__main__':
    p = ScriptSubprocess()
    p.run()
    del p
and it works fine in a standalone python interpreter.
The problem arises when I move this code into the real application, which has an embedded Python interpreter.
In this case it hangs when trying to stop the child process, at the line self.process.wait(). This indicates that the previous line, self.process.send_signal(signal.CTRL_C_EVENT), did not work: the child process is in fact still running, and if I manually terminate it via Task Manager, the call to self.process.wait() returns as if it had succeeded in stopping the child.
I am looking for possible causes (e.g. some process flag of the parent process) that disable CTRL_C_EVENT.
The documentation of subprocess says:
Popen.send_signal(signal)
Sends the signal signal to the child. Do nothing if the process completed.
Note: On Windows, SIGTERM is an alias for terminate(). CTRL_C_EVENT and CTRL_BREAK_EVENT can be sent to processes started with a creationflags parameter which includes CREATE_NEW_PROCESS_GROUP.
and also:
subprocess.CREATE_NEW_PROCESS_GROUP
A Popen creationflags parameter to specify that a new process group will be created. This flag is necessary for using os.kill() on the subprocess. This flag is ignored if CREATE_NEW_CONSOLE is specified.
So I am using creationflags with subprocess.CREATE_NEW_PROCESS_GROUP, but it is still unable to kill the subprocess with CTRL_C_EVENT in the real application (the same as without this flag).
Since the real application (i.e. the parent process) also uses SetConsoleCtrlHandler to handle certain signals, I also tried passing creationflags with subprocess.CREATE_DEFAULT_ERROR_MODE to override that error mode in the child process, but I am still unable to kill the child process with CTRL_C_EVENT.
Note: CTRL_BREAK_EVENT works but does not give a clean exit (i.e. the finally: clause is not executed).
My guess is that SetConsoleCtrlHandler is the culprit, but I have no means of avoiding it being called in the parent process, or of undoing its effect...
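One workaround to consider, given that CTRL_BREAK_EVENT does get delivered: have the child translate SIGBREAK into the same clean-exit path that KeyboardInterrupt takes. This is a hedged sketch, not a confirmed fix for the embedded-interpreter case:

# in the child (ScriptSubprocess side)
import signal

def _on_break(signum, frame):
    # route Ctrl-Break into the existing except/finally cleanup
    raise KeyboardInterrupt

signal.signal(signal.SIGBREAK, _on_break)  # SIGBREAK exists on Windows only

The parent would then call self.process.send_signal(signal.CTRL_BREAK_EVENT) instead of CTRL_C_EVENT.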

python daemon - why does this function kill parent twice?

import os
import signal
import sys
import time

def daemon_start(pid_file, log_file):

    def handle_exit(signum, _):
        if signum == signal.SIGTERM:
            sys.exit(0)
        sys.exit(1)

    signal.signal(signal.SIGINT, handle_exit)
    signal.signal(signal.SIGTERM, handle_exit)

    # fork only once because we are sure parent will exit
    pid = os.fork()
    assert pid != -1

    if pid > 0:
        # parent waits for its child
        time.sleep(5)
        sys.exit(0)

    # child signals its parent to exit
    ppid = os.getppid()
    pid = os.getpid()
    # write_pid_file, freopen and shell are helpers from the surrounding codebase
    if write_pid_file(pid_file, pid) != 0:
        os.kill(ppid, signal.SIGINT)
        sys.exit(1)

    os.setsid()
    signal.signal(signal.SIGHUP, signal.SIG_IGN)

    print('started')
    os.kill(ppid, signal.SIGTERM)

    sys.stdin.close()
    try:
        freopen(log_file, 'a', sys.stdout)
        freopen(log_file, 'a', sys.stderr)
    except IOError as e:
        shell.print_exception(e)
        sys.exit(1)
This daemon does not use a double fork. It says "fork only once because we are sure parent will exit". The parent calls sys.exit(0) to exit; however, the child also calls os.kill(ppid, signal.SIGTERM) to make the parent exit.
What is the point of doing this?
The phrase "double fork" is a standard technique to ensure a daemon is reparented to the init (pid 1) process so that the shell which launched it does not kill it. This is actually using that technique because the first fork is done by the process that launched the python program. When a program calls daemon_start it forks. The original (now parent) process exits a few seconds later or sooner when the child it forked signals it. That will cause the kernel to reparent the child process to pid 1. "Double fork" does not mean the daemon calls fork() twice.
Also, your subject line asks "why does this function kill parent twice?" But the code in question does no such thing. I have no idea how you got that idea.
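If you want to see the reparenting described above in action, here is a minimal sketch (note: on systems where a process has registered itself as a subreaper, the new parent may be a pid other than 1):

import os
import time

if os.fork() == 0:
    time.sleep(1)        # give the parent time to exit
    print(os.getppid())  # typically prints 1 once the child is reparented to init
else:
    os._exit(0)          # parent exits immediately, orphaning the child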

How to kill a python child process created with subprocess.check_output() when the parent dies?

I am running, on a Linux machine, a Python script which creates a child process using subprocess.check_output(), as follows:
subprocess.check_output(["ls", "-l"], stderr=subprocess.STDOUT)
The problem is that even if the parent process dies, the child is still running.
Is there any way I can kill the child process as well when the parent dies?
Yes, you can achieve this in two ways. Both of them require you to use Popen instead of check_output. The first is the simpler method, using try..finally, as follows:
import subprocess
from contextlib import contextmanager

@contextmanager
def run_and_terminate_process(*args, **kwargs):
    try:
        p = subprocess.Popen(*args, **kwargs)
        yield p
    finally:
        p.terminate()  # send sigterm, or ...
        p.kill()       # send sigkill

def main():
    with run_and_terminate_process(args) as running_proc:
        # Your code here, such as running_proc.stdout.readline()
        pass
This will catch SIGINT (keyboard interrupt) and SIGTERM, but not SIGKILL (if you kill your script with -9).
The other method is a bit more complex and uses ctypes' prctl PR_SET_PDEATHSIG: the system will send a signal to the child once the parent exits for any reason (even SIGKILL).
import signal
import subprocess
import ctypes

libc = ctypes.CDLL("libc.so.6")

def set_pdeathsig(sig=signal.SIGTERM):
    def callable():
        return libc.prctl(1, sig)  # 1 == PR_SET_PDEATHSIG
    return callable

p = subprocess.Popen(args, preexec_fn=set_pdeathsig(signal.SIGTERM))
Your problem is with using subprocess.check_output: you are correct, you can't get the child PID using that interface. Use Popen instead:
import subprocess
from subprocess import PIPE

proc = subprocess.Popen(["ls", "-l"], stdout=PIPE, stderr=PIPE)

# Here you can get the PID
global child_pid
child_pid = proc.pid

# Now we can wait for the child to complete
(output, error) = proc.communicate()

if error:
    print "error:", error

print "output:", output
To make sure you kill the child on exit:
import os
import signal
import atexit

def kill_child():
    if child_pid is None:
        pass
    else:
        os.kill(child_pid, signal.SIGTERM)

atexit.register(kill_child)
I don't know the specifics, but the best way is still to catch errors (and perhaps even all errors) with signal and terminate any remaining processes there.
import signal
import sys
import subprocess
import os

def signal_handler(signal, frame):
    sys.exit(0)

signal.signal(signal.SIGINT, signal_handler)
a = subprocess.check_output(["ls", "-l"], stderr=subprocess.STDOUT)
while 1:
    pass  # press Ctrl-C; it breaks the application and is caught by signal_handler()
This is just a mockup; you'd need to catch more than just SIGINT, but the idea might get you started, and you'd still need to check for spawned processes somehow.
http://docs.python.org/2/library/os.html#os.kill
http://docs.python.org/2/library/subprocess.html#subprocess.Popen.pid
http://docs.python.org/2/library/subprocess.html#subprocess.Popen.kill
I'd recommend writing your own personalized version of check_output, because, as I just realized, check_output is really just for simple debugging and the like, since you can't interact with it much during execution.
Rewriting check_output:
from subprocess import Popen, PIPE, STDOUT
from time import sleep, time

def checkOutput(cmd):
    a = Popen(cmd, shell=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT)
    print(a.pid)
    start = time()
    while a.poll() is None and time() - start <= 30:  # 30 sec grace period
        sleep(0.25)

    if a.poll() is None:
        print('Still running, killing')
        a.kill()
    else:
        print('exit code:', a.poll())

    output = a.stdout.read()
    a.stdout.close()
    a.stdin.close()
    return output
And do whatever you'd like with it; perhaps store the active executions in a temporary variable and kill them upon exit with signal or other means of intercepting errors/shutdowns of the main loop.
In the end, you still need to catch terminations in the main application in order to safely kill any children; the best way to approach this is with try & except or signal.
As of Python 3.2 there is a ridiculously simple way to do this:
from subprocess import Popen

with Popen(["sleep", "60"]) as process:
    print(f"Just launched server with PID {process.pid}")
I think this will be best for most use cases because it's simple and portable, and it avoids any dependence on global state.
If this solution isn't powerful enough, then I would recommend checking out the other answers and discussion on this question or on Python: how to kill child process(es) when parent dies?, as there are a lot of neat ways to approach the problem that provide different trade-offs around portability, resilience, and simplicity. 😊
Manually you could do this:
ps aux | grep <process name>
get the PID (the second column), and then
kill -9 <PID>
where -9 forces the kill.

Starting a separate process

I want a script to start a new process, such that the new process continues running after the initial script exits. I expected that I could use multiprocessing.Process to start a new process, and set daemon=True so that the main script may exit while the created process continues running.
But it seems that the second process is silently terminated when the main script exits. Is this expected behavior, or am I doing something wrong?
From the Python docs:
When a process exits, it attempts to terminate all of its daemonic child processes.
This is the expected behavior.
If you are on a unix system, you could use os.fork:
import os
import time

pid = os.fork()

if pid:
    # parent
    while True:
        print("I'm the parent")
        time.sleep(0.5)
else:
    # child
    while True:
        print("I'm just a child")
        time.sleep(0.5)
Running this creates two processes. You can kill the parent without killing the child.
For example, when you run script you'll see something like:
% script.py
I'm the parent
I'm just a child
I'm the parent
I'm just a child
...
Stop the script with ctrl-Z:
^Z
[1]+ Stopped script.py
Find the process ID number for the parent. It will be the smaller of the two process ID numbers since the parent came first:
% ps axuw | grep script.py
unutbu 6826 0.1 0.1 33792 6388 pts/24 T 15:09 0:00 python /home/unutbu/pybin/script.py
unutbu 6827 0.0 0.1 33792 4352 pts/24 T 15:09 0:00 python /home/unutbu/pybin/script.py
unutbu 6832 0.0 0.0 17472 952 pts/24 S+ 15:09 0:00 grep --color=auto script.py
Kill the parent process:
% kill 6826
Restore script.py to the foreground:
% fg
script.py
Terminated
You'll see the child process is still running:
% I'm just a child
I'm just a child
I'm just a child
...
Kill the child (in a new terminal) with
% kill 6827
Simply use the subprocess module:
import subprocess
subprocess.Popen(["sleep", "60"])
Here is a related question on SO, where one of the answers gives a nice solution to this problem:
"spawning process from python"
If you are on a unix system (following the docs):
#!/usr/bin/env python3
import os
import sys
import subprocess
import multiprocessing
from multiprocessing import Process

def to_use_in_separate_process(*args):
    print(args)
    # check args before using them:
    if len(args) > 1:
        subprocess.call((args[0], args[1]))
    print('subprocess called')

def main(apathtofile):
    print('checking os')
    if os.name == 'posix':
        print('os is posix')
        multiprocessing.get_context('fork')
        p = Process(target=to_use_in_separate_process,
                    args=('xdg-open', apathtofile))
        # start() runs the target in a new process; run() would execute it in this one
        p.start()
    print('exiting def main')

if __name__ == '__main__':
    # parameter [1] must be some file that can be opened by xdg-open,
    # which this program uses.
    if len(sys.argv) > 1:
        main(sys.argv[1])
        print('we can exit now.')
    else:
        print('no parameters...')
    print('mother program will end now!')
    sys.exit(0)
In Ubuntu the following commands keep working even though the Python app exits.
url = "https://www.youtube.com/watch?v=t3kcqTE6x4A"
cmd = f"mpv '{url}' && zenity --info --text 'you have watched {url}' &"
os.system(cmd)

How to make a Python script run like a service or daemon in Linux

I have written a Python script that checks a certain e-mail address and passes new e-mails to an external program. How can I get this script to execute 24/7, for example by turning it into a daemon or service in Linux? Would I also need a loop that never ends in the program, or can it be done by just having the code re-executed multiple times?
You have two options here.
Make a proper cron job that calls your script. Cron is a common name for a GNU/Linux daemon that periodically launches scripts according to a schedule you set. You add your script into a crontab or place a symlink to it into a special directory and the daemon handles the job of launching it in the background. You can read more at Wikipedia. There is a variety of different cron daemons, but your GNU/Linux system should have it already installed.
Use some kind of Python approach (a library, for example) for your script to be able to daemonize itself. Yes, it will require a simple event loop (where your events are timers triggering, possibly provided by a sleep function).
I wouldn't recommend choosing option 2, because you would in fact be repeating cron functionality. The Linux system paradigm is to let multiple simple tools interact and solve your problems. Unless there are additional reasons why you should make a daemon (in addition to triggering periodically), choose the other approach.
Also, if you use daemonize with a loop and a crash happens, no one will check the mail after that (as pointed out by Ivan Nevostruev in comments to this answer), whereas if the script is added as a cron job, it will just trigger again.
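For concreteness, here is what option 1 looks like as a crontab entry (added via crontab -e); the schedule and script path are hypothetical, running the mail check every five minutes:

*/5 * * * * /usr/bin/python /home/user/check_mail.py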
Here's a nice class that is taken from here:
#!/usr/bin/env python

import sys, os, time, atexit
from signal import SIGTERM

class Daemon:
    """
    A generic daemon class.

    Usage: subclass the Daemon class and override the run() method
    """
    def __init__(self, pidfile, stdin='/dev/null', stdout='/dev/null', stderr='/dev/null'):
        self.stdin = stdin
        self.stdout = stdout
        self.stderr = stderr
        self.pidfile = pidfile

    def daemonize(self):
        """
        do the UNIX double-fork magic, see Stevens' "Advanced
        Programming in the UNIX Environment" for details (ISBN 0201563177)
        http://www.erlenstar.demon.co.uk/unix/faq_2.html#SEC16
        """
        try:
            pid = os.fork()
            if pid > 0:
                # exit first parent
                sys.exit(0)
        except OSError, e:
            sys.stderr.write("fork #1 failed: %d (%s)\n" % (e.errno, e.strerror))
            sys.exit(1)

        # decouple from parent environment
        os.chdir("/")
        os.setsid()
        os.umask(0)

        # do second fork
        try:
            pid = os.fork()
            if pid > 0:
                # exit from second parent
                sys.exit(0)
        except OSError, e:
            sys.stderr.write("fork #2 failed: %d (%s)\n" % (e.errno, e.strerror))
            sys.exit(1)

        # redirect standard file descriptors
        sys.stdout.flush()
        sys.stderr.flush()
        si = file(self.stdin, 'r')
        so = file(self.stdout, 'a+')
        se = file(self.stderr, 'a+', 0)
        os.dup2(si.fileno(), sys.stdin.fileno())
        os.dup2(so.fileno(), sys.stdout.fileno())
        os.dup2(se.fileno(), sys.stderr.fileno())

        # write pidfile
        atexit.register(self.delpid)
        pid = str(os.getpid())
        file(self.pidfile, 'w+').write("%s\n" % pid)

    def delpid(self):
        os.remove(self.pidfile)

    def start(self):
        """
        Start the daemon
        """
        # Check for a pidfile to see if the daemon already runs
        try:
            pf = file(self.pidfile, 'r')
            pid = int(pf.read().strip())
            pf.close()
        except IOError:
            pid = None

        if pid:
            message = "pidfile %s already exist. Daemon already running?\n"
            sys.stderr.write(message % self.pidfile)
            sys.exit(1)

        # Start the daemon
        self.daemonize()
        self.run()

    def stop(self):
        """
        Stop the daemon
        """
        # Get the pid from the pidfile
        try:
            pf = file(self.pidfile, 'r')
            pid = int(pf.read().strip())
            pf.close()
        except IOError:
            pid = None

        if not pid:
            message = "pidfile %s does not exist. Daemon not running?\n"
            sys.stderr.write(message % self.pidfile)
            return  # not an error in a restart

        # Try killing the daemon process
        try:
            while 1:
                os.kill(pid, SIGTERM)
                time.sleep(0.1)
        except OSError, err:
            err = str(err)
            if err.find("No such process") > 0:
                if os.path.exists(self.pidfile):
                    os.remove(self.pidfile)
            else:
                print str(err)
                sys.exit(1)

    def restart(self):
        """
        Restart the daemon
        """
        self.stop()
        self.start()

    def run(self):
        """
        You should override this method when you subclass Daemon. It will be called after the process has been
        daemonized by start() or restart().
        """
You should use the python-daemon library, it takes care of everything.
From PyPI: Library to implement a well-behaved Unix daemon process.
Assuming that you really want your loop to run 24/7 as a background service, and for a solution that doesn't involve injecting libraries into your code, you can simply create a service template, since you are using Linux:
[Unit]
Description = <Your service description here>
# Assuming you want to start after network interfaces are made available
After = network.target

[Service]
Type = simple
ExecStart = python <Path of the script you want to run>
# User and group to run the script as
User = <user>
Group = <group>
# Restart when there are errors
Restart = on-failure
SyslogIdentifier = <Name of logs for the service>
RestartSec = 5
TimeoutStartSec = infinity

[Install]
# Start in the multi-user runlevel, making it accessible to other users
WantedBy = multi-user.target
Place that file in your systemd service folder (usually /etc/systemd/system/) as a *.service file, and install it using the following systemctl commands (these will likely require sudo privileges):
systemctl enable <service file name without .service extension>
systemctl daemon-reload
systemctl start <service file name without .service extension>
You can then check that your service is running by using the command:
systemctl | grep running
You can use fork() to detach your script from the tty and have it continue to run, like so:
import os, sys

fpid = os.fork()
if fpid != 0:
    # parent: the child (pid fpid) is now running detached as the daemon
    sys.exit(0)
Of course you also need to implement an endless loop, like:
while 1:
    do_your_check()
    sleep(5)
Hope this gets you started.
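Putting the fork and the loop together, a minimal self-contained sketch (do_your_check is a placeholder for whatever polling you need):

import os
import sys
import time

def do_your_check():
    pass  # your polling code here

if os.fork() != 0:
    sys.exit(0)  # parent returns to the shell; the child lives on

while True:      # the child keeps polling in the background
    do_your_check()
    time.sleep(5)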
You can also make the Python script run as a service using a shell script. First create a shell script to run the Python script, like this (scriptname is an arbitrary name):
#!/bin/sh
script='/home/.. full path to script'
/usr/bin/python $script &
Now make a file in /etc/init.d/scriptname:
#! /bin/sh

PATH=/bin:/usr/bin:/sbin:/usr/sbin
DAEMON=/home/.. path to shell script scriptname created to run python script
PIDFILE=/var/run/scriptname.pid

test -x $DAEMON || exit 0

. /lib/lsb/init-functions

case "$1" in
  start)
     log_daemon_msg "Starting feedparser"
     start_daemon -p $PIDFILE $DAEMON
     log_end_msg $?
     ;;
  stop)
     log_daemon_msg "Stopping feedparser"
     killproc -p $PIDFILE $DAEMON
     PID=`ps x | grep feed | head -1 | awk '{print $1}'`
     kill -9 $PID
     log_end_msg $?
     ;;
  force-reload|restart)
     $0 stop
     $0 start
     ;;
  status)
     status_of_proc -p $PIDFILE $DAEMON atd && exit 0 || exit $?
     ;;
  *)
     echo "Usage: /etc/init.d/scriptname {start|stop|restart|force-reload|status}"
     exit 1
     ;;
esac

exit 0
Now you can start and stop your Python script using the commands /etc/init.d/scriptname start and /etc/init.d/scriptname stop.
A simple and supported version is Daemonize.
Install it from the Python Package Index (PyPI):
$ pip install daemonize
and then use it like:
...
import os, sys
from daemonize import Daemonize
...

def main():
    # your code here
    pass

if __name__ == '__main__':
    myname = os.path.basename(sys.argv[0])
    pidfile = '/tmp/%s' % myname  # any name
    daemon = Daemonize(app=myname, pid=pidfile, action=main)
    daemon.start()
Ubuntu has a very simple way to manage a service.
For Python, the difference is that ALL the dependencies (packages) have to be in the same directory where the main file is run from.
I just managed to create such a service to provide weather info to my clients.
Steps:
Create your python application project as you normally do.
Install all dependencies locally like:
sudo pip3 install package_name -t .
Create your command line variables and handle them in code (if you need any)
Create the service file. Something (minimalist) like:
[Unit]
Description=1Droid Weather meddleware provider
[Service]
Restart=always
User=root
WorkingDirectory=/home/ubuntu/weather
ExecStart=/usr/bin/python3 /home/ubuntu/weather/main.py httpport=9570 provider=OWMap
[Install]
WantedBy=multi-user.target
Save the file as myweather.service (for example).
Make sure that your app runs if started in the current directory:
python3 main.py httpport=9570 provider=OWMap
The service file produced above, named myweather.service (it is important to have the .service extension), will be treated by the system as the name of your service. That is the name you will use to interact with your service.
Copy the service file:
sudo cp myweather.service /lib/systemd/system/myweather.service
Reload the systemd daemon registry:
sudo systemctl daemon-reload
Stop the service (if it was running)
sudo service myweather stop
Start the service:
sudo service myweather start
Check the status (log file with where your print statements go):
tail -f /var/log/syslog
Or check the status with:
sudo service myweather status
Back to the start with another iteration if needed
This service is now running, and even if you log out it will not be affected.
And YES, if the host is shut down and restarted, this service will be restarted...
cron is clearly a great choice for many purposes. However, it doesn't create a service or daemon as you requested in the OP; cron just runs jobs periodically (meaning the job starts and stops), and no more often than once a minute. There are issues with cron; for example, if a prior instance of your script is still running the next time the cron schedule comes around and launches a new instance, is that OK? cron doesn't handle dependencies; it just tries to start a job when the schedule says to.
If you find a situation where you truly need a daemon (a process that never stops running), take a look at supervisord. It provides a simple way to wrap a normal, non-daemonized script or program and make it operate like a daemon. This is a much better way than creating a native Python daemon.
How about using the nohup command on Linux?
I use it for running my commands on my Bluehost server.
Please advise if I am wrong.
If you are using a terminal (ssh or something) and you want to keep a long-running script working after you log out, you can try screen:
apt-get install screen
Create a virtual terminal inside (named abc): screen -dmS abc
Now we connect to abc: screen -r abc
Now we can run the Python script: python keep_sending_mails.py
From now on, you can directly close your terminal; the Python script will keep running rather than being shut down, since keep_sending_mails.py's PID is a child process of the virtual screen rather than of the terminal (ssh).
If you want to go back and check your script's running status, you can use screen -r abc again.
First, read up on mail aliases. A mail alias will do this inside the mail system without you having to fool around with daemons or services or anything of the sort.
You can write a simple script that will be executed by sendmail each time a mail message is sent to a specific mailbox.
See http://www.feep.net/sendmail/tutorial/intro/aliases.html
If you really want to write a needlessly complex server, you can do this.
nohup python myscript.py &
That's all it takes. Your script simply loops and sleeps.
import time

def do_the_work():
    # one round of polling -- checking email, whatever
    pass

while True:
    time.sleep(600)  # 10 min.
    try:
        do_the_work()
    except:
        pass
I would recommend this solution. You need to inherit from the class below and override the run method.
import sys
import os
from signal import SIGTERM
from abc import ABCMeta, abstractmethod

class Daemon(object):
    __metaclass__ = ABCMeta

    def __init__(self, pidfile):
        self._pidfile = pidfile

    @abstractmethod
    def run(self):
        pass

    def _daemonize(self):
        # fork, then let the parent exit so the child is decoupled
        pid = os.fork()

        # stop the parent process
        if pid > 0:
            sys.exit(0)

        # write pid into a pidfile
        with open(self._pidfile, 'w') as f:
            print >> f, os.getpid()

    def start(self):
        # if the daemon is already started, throw an error
        if os.path.exists(self._pidfile):
            raise Exception("Daemon is already started")

        # create and switch to the daemon process
        self._daemonize()

        # run the body of the daemon
        self.run()

    def stop(self):
        # check that the pidfile exists
        if os.path.exists(self._pidfile):
            # read pid from the file
            with open(self._pidfile, 'r') as f:
                pid = int(f.read().strip())

            # remove the pidfile
            os.remove(self._pidfile)

            # kill daemon
            os.kill(pid, SIGTERM)
        else:
            raise Exception("Daemon is not started")

    def restart(self):
        self.stop()
        self.start()
To create something that runs like a service you can use Cement. The first thing you must do is install the Cement framework: Cement is a CLI framework on which you can deploy your application.
Command line interface of the app:
interface.py:
from cement.core.foundation import CementApp
from cement.core.controller import CementBaseController, expose
from YourApp import yourApp

class MyBaseController(CementBaseController):
    class Meta:
        label = 'base'
        description = "your application description"
        arguments = [
            (['-r', '--run'],
             dict(action='store_true', help='Run your application')),
            (['-v', '--version'],
             dict(action='version', version="Your app version")),
            (['-s', '--stop'],
             dict(action='store_true', help="Stop your application")),
        ]

    @expose(hide=True)
    def default(self):
        if self.app.pargs.run:
            # start running your app from here!
            yourApp()
        if self.app.pargs.stop:
            # stop your application
            yourApp.stop()

class App(CementApp):
    class Meta:
        label = 'Uptime'
        base_controller = 'base'
        handlers = [MyBaseController]

with App() as app:
    app.run()
YourApp.py class:
import threading

class yourApp:
    def __init__(self):
        # self.loger = log_exception.exception_loger()  # author's logging helper (not shown)
        thread = threading.Thread(target=self.start, args=())
        thread.daemon = True
        thread.start()

    def start(self):
        # do everything you want
        pass

    def stop(self):
        # do some things to stop your application
        pass
Keep in mind that your app must run in a thread to be a daemon.
To run the app, just do this on the command line:
python interface.py --help
Use whatever service manager your system offers - for example under Ubuntu use upstart. This will handle all the details for you such as start on boot, restart on crash, etc.
You can run a process as a subprocess inside a script, or from another script, like this:
subprocess.Popen(arguments, close_fds=True, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)
Or use a ready-made utility
https://github.com/megashchik/d-handler
