Celery auto reload on ANY changes - python

I can make Celery reload itself automatically when there are changes to the modules listed in CELERY_IMPORTS in settings.py.
I tried listing only the parent modules, hoping that changes in their child modules would also be detected, but they were not. That makes me think Celery does not detect changes recursively. I searched the documentation but did not find an answer to my problem.
It is really tedious to add every Celery-related part of my project to CELERY_IMPORTS just to detect changes.
Is there a way to tell Celery to "auto reload yourself when there is any change anywhere in the project"?
Thank you!

Celery's --autoreload doesn't work, and it is deprecated.
Since you are using Django, you can write a management command for this.
Django has an autoreload utility which runserver uses to restart the WSGI server when code changes.
The same functionality can be used to reload Celery workers. Create a separate management command called celery. Write a function that kills the existing worker and starts a new one, then hook this function into autoreload as follows.
import shlex
import subprocess

from django.core.management.base import BaseCommand
from django.utils import autoreload


def restart_celery():
    cmd = 'pkill celery'
    subprocess.call(shlex.split(cmd))
    cmd = 'celery worker -l info -A foo'
    subprocess.call(shlex.split(cmd))


class Command(BaseCommand):

    def handle(self, *args, **options):
        print('Starting celery worker with autoreload...')

        # For Django >= 2.2
        autoreload.run_with_reloader(restart_celery)

        # For Django < 2.2
        # autoreload.main(restart_celery)
Now you can run the Celery worker with python manage.py celery, and it will autoreload when the codebase changes.
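For Django to discover the command, the file has to live inside an app's management/commands/ package; a typical layout (the app name is illustrative) looks like this:

yourapp/
    management/
        __init__.py
        commands/
            __init__.py
            celery.py    # the management command shown above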
This is only for development purposes; do not use it in production. Code taken from my other answer here.

You can manually include additional modules with -I|--include. Combine this with GNU tools like find and awk and you'll be able to find all .py files and include them.
$ celery -A app worker --autoreload --include=$(find . -name "*.py" -type f | awk '{sub("\./",""); gsub("/", "."); sub(".py",""); print}' ORS=',' | sed 's/.$//')
Let's break it down:
find . -name "*.py" -type f
find searches recursively for all files ending in .py. The output looks something like this:
./app.py
./some_package/foo.py
./some_package/bar.py
Then:
awk '{sub("\./",""); gsub("/", "."); sub(".py",""); print}' ORS=','
This takes the output of find as input and removes all occurrences of ./. Then it replaces every / with a .. The last sub() replaces .py with an empty string. ORS replaces every newline with a ,. This outputs:
app,some_package.foo,some_package.bar,
The last command, sed, removes the trailing ,.
So the command that is being executed looks like:
$ celery -A app worker --autoreload --include=app,some_package.foo,some_package.bar
If you have a virtualenv inside your source you can exclude it by adding -path .path_to_your_env -prune -o:
$ celery -A app worker --autoreload --include=$(find . -path .path_to_your_env -prune -o -name "*.py" -type f | awk '{sub("\./",""); gsub("/", "."); sub(".py",""); print}' ORS=',' | sed 's/.$//')

You can use watchmedo
pip install watchdog
Start celery worker indirectly via watchmedo
watchmedo auto-restart --directory=./ --pattern=*.py --recursive -- celery worker --app=worker.app --concurrency=1 --loglevel=INFO
More details can be found in the watchdog documentation.

I used watchdog's watchmedo utility; it works great, but for some reason the PyCharm debugger was not able to attach to the subprocess spawned by watchmedo.
So if your project has werkzeug as a dependency, you can use the werkzeug._reloader.run_with_reloader function to autoreload the Celery worker on code changes. Plus, it works with the PyCharm debugger.
"""
Filename: celery_dev.py
"""
import sys
from werkzeug._reloader import run_with_reloader
# this is the celery app path in my application, change it according to your project
from web.app import celery_app
def run():
# create copy of "argv" and remove script name
argv = sys.argv.copy()
argv.pop(0)
# start the celery worker
celery_app.worker_main(argv)
if __name__ == '__main__':
run_with_reloader(run)
Sample PyCharm debug configuration.
NOTE:
This is a private werkzeug API and works as of Werkzeug==2.0.3. It may stop working in future versions. Use at your own risk.

OrangeTux's solution didn't work out for me, so I wrote a little Python script to achieve more or less the same. It monitors file changes using inotify, and triggers a Celery restart if it detects an IN_MODIFY, IN_ATTRIB, or IN_DELETE event.
#!/usr/bin/env python
"""Runs a celery worker, and reloads on a file change. Run as ./run_celery [directory]. If
directory is not given, default to cwd."""
import os
import sys
import signal
import time
import multiprocessing
import subprocess
import threading

import inotify.adapters

CELERY_CMD = tuple("celery -A amcat.amcatcelery worker -l info -Q amcat".split())
CHANGE_EVENTS = ("IN_MODIFY", "IN_ATTRIB", "IN_DELETE")
WATCH_EXTENSIONS = (".py",)


def watch_tree(stop, path, event):
    """
    @type stop: multiprocessing.Event
    @type event: multiprocessing.Event
    """
    path = os.path.abspath(path)

    for e in inotify.adapters.InotifyTree(path).event_gen():
        if stop.is_set():
            break

        if e is not None:
            _, attrs, path, filename = e

            if filename is None:
                continue

            # only react to files with a watched extension
            if not any(filename.endswith(ename) for ename in WATCH_EXTENSIONS):
                continue

            if any(ename in attrs for ename in CHANGE_EVENTS):
                event.set()


class Watcher(threading.Thread):
    def __init__(self, path):
        super(Watcher, self).__init__()
        self.celery = subprocess.Popen(CELERY_CMD)
        self.stop_event_wtree = multiprocessing.Event()
        self.event_triggered_wtree = multiprocessing.Event()
        self.wtree = multiprocessing.Process(target=watch_tree, args=(self.stop_event_wtree, path, self.event_triggered_wtree))
        self.wtree.start()
        self.running = True

    def run(self):
        while self.running:
            if self.event_triggered_wtree.is_set():
                self.event_triggered_wtree.clear()
                self.restart_celery()
            time.sleep(1)

    def join(self, timeout=None):
        self.running = False
        self.stop_event_wtree.set()
        self.celery.terminate()
        self.wtree.join()
        self.celery.wait()
        super(Watcher, self).join(timeout=timeout)

    def restart_celery(self):
        self.celery.terminate()
        self.celery.wait()
        self.celery = subprocess.Popen(CELERY_CMD)


if __name__ == '__main__':
    watcher = Watcher(sys.argv[1] if len(sys.argv) > 1 else ".")
    watcher.start()
    signal.signal(signal.SIGINT, lambda signal, frame: watcher.join())
    signal.pause()
You should probably change CELERY_CMD and the other global variables to match your project.

There was an issue in @AlexTT's answer; I wasn't sure whether to comment on his answer or to post this as a separate one.
You can use watchmedo
pip install watchdog
Start celery worker indirectly via watchmedo
watchmedo auto-restart --directory=./ --pattern=*.py --recursive -- celery -A <app> worker --concurrency=1 --loglevel=INFO

This is the way I made it work in Django:
# worker_dev.py (put it next to manage.py)
from django.utils import autoreload


def run_celery():
    from projectname import celery_app
    celery_app.worker_main(["-Aprojectname", "-linfo", "-Psolo"])


print("Starting celery worker with autoreload...")
autoreload.run_with_reloader(run_celery)
Then run python worker_dev.py. This has the advantage of working inside a Docker container.
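For example, a minimal docker-compose service for this could look like the following (the service name, build context, and mount path are illustrative):

# docker-compose.yml (illustrative)
services:
  worker:
    build: .
    command: python worker_dev.py
    volumes:
      - .:/app   # mount the source so the reloader sees code changes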

This is a huge adaptation from Suor's code.
I made a custom Django command which can be called like this:
python manage.py runcelery
So, every time the code changes, Celery's main process is gracefully killed and then started again.
Change the CELERY_COMMAND variable as you wish.
# File: runcelery.py
import os
import signal
import subprocess
import time

import psutil
from django.core.management.base import BaseCommand
from django.utils import autoreload

DELAY_UNTIL_START = 5.0
CELERY_COMMAND = 'celery --config my_project.celeryconfig worker --loglevel=INFO'


class Command(BaseCommand):
    help = ''

    def kill_celery(self, parent_pid):
        os.kill(parent_pid, signal.SIGTERM)

    def run_celery(self):
        time.sleep(DELAY_UNTIL_START)
        subprocess.run(CELERY_COMMAND.split(' '))

    def get_main_process(self):
        for process in psutil.process_iter():
            if process.ppid() == 0:  # PID 0 has no parent
                continue
            parent = psutil.Process(process.ppid())
            if process.name() == 'celery' and parent.name() == 'celery':
                return parent
        return

    def reload_celery(self):
        parent = self.get_main_process()
        if parent is not None:
            self.stdout.write('[*] Killing Celery process gracefully..')
            self.kill_celery(parent.pid)

        self.stdout.write('[*] Starting Celery...')
        self.run_celery()

    def handle(self, *args, **options):
        autoreload.run_with_reloader(self.reload_celery)

Related

Python - Celery autoreload

How do you develop when using Celery?
It seems to require a reload for every change.
I'm using this command:
watchmedo auto-restart --directory=proj/ -p '*.py' --recursive -- celery -A proj worker --concurrency=1 --loglevel=INFO
celery.py
import os

from celery import Celery
from decouple import AutoConfig

cwd = os.getcwd()
DOTENV_FILE = cwd + '/proj/config/.env'
config = AutoConfig(search_path=DOTENV_FILE)

app = Celery('proj',
             broker=config('CELERY_BROKER_URL'),
             backend=config('CELERY_RESULT_BACKEND'),
             include=['proj.tasks'])

app.conf.update(
    result_expires=3600,
)

if __name__ == '__main__':
    app.start()
tasks.py
from .celery import app
@app.task
def add(x, y):
    return x + y
Even if there is a technical solution for this kind of reloading, I would suggest not involving Celery at all while you develop your task function because, well, it's just a function! My approach is to get the function working first, and only then add the Celery parts to check that it integrates well with the rest (task chains, Django, etc.). The same technique applies to unit testing.
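As a minimal illustration of that workflow (the module, app, and task names are made up), a Celery task can be exercised as a plain function while you develop it, with no broker or worker involved:

# tasks.py (illustrative)
from celery import Celery

app = Celery('proj', broker='memory://')  # broker value is only a placeholder here

@app.task
def add(x, y):
    return x + y

# During development or in unit tests, call it like any other function:
assert add(2, 3) == 5

# Only once the logic is right, exercise the Celery machinery, e.g.:
# result = add.delay(2, 3)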

Celery not discovering tasks inside Docker

I have a very simple implementation.
/lib/queue/__init__.py
from os import environ

from celery import Celery

REDIS_URI = environ.get('REDIS_URI')

app = Celery('tasks',
             broker=f'redis://{REDIS_URI}')

app.autodiscover_tasks([
    'lib.queue.cache',
], force=True)
/lib/queue/cache/tasks.py
from lib.queue import app
@app.task
def some_task():
    pass
Dockerfile
RUN git clone <my_repo> /usr/src/lib
WORKDIR /usr/src/lib
RUN python3 setup.py install
CMD ["celery", "-A", "worker:app", "worker", "--loglevel=info", "--concurrency=4"]
/worker.py
from lib.queue import app
This works just fine if I run the worker from the command line, outside Docker:
celery -A worker:app worker --loglevel=info
> [tasks]
> . lib.queue.cache.tasks.some_task
However, when I run it inside Docker, the tasks remain blank:
> [tasks]
Question:
Any thoughts as to why celery would not be able to find the library and tasks inside Docker? I am using another Dockerfile with an almost identical setup to push the tasks, and it is able to access lib.queue.cache.tasks no problem.
Because I have been asked to provide my solution a couple of times, here it is. However, it may not be very helpful, since what I am doing now is slightly different.
Inside my worker file, where app is defined, I have just a single task.
import asyncio
from importlib import import_module

from celery import Celery

# REDIS_URI, REDIS_PORT and REDIS_DB come from my configuration/environment
app = Celery("tasks", broker=f"redis://{REDIS_URI}:{REDIS_PORT}/{REDIS_DB}")


@app.task
def run_task(task_name, *args, **kwargs):
    print(f"Running {task_name}. Received...")
    print(f"- args: {args}")
    print(f"- kwargs: {kwargs}")
    module_name, method_name = task_name.split(".")
    module = import_module(f".{module_name}", package="common.tasks")
    task = getattr(module, method_name)
    loop = asyncio.get_event_loop()
    retval = loop.run_until_complete(task(*args, **kwargs))
This may not be relevant to most people, since it takes a string argument, imports a coroutine, and executes it. I do it this way because my tasks share some functions that also need to run in an async context.
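For context, a call site for this generic task might look like the following; the dotted name and arguments are made up:

# Queue the generic task; "cache.refresh_entries" would resolve to
# common.tasks.cache.refresh_entries inside the worker.
run_task.delay("cache.refresh_entries", "some-key", ttl=300)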

Using custom signal handlers with gunicorn

I have a Flask application with custom signal handlers to take care of clean up tasks before exiting. When running the application with gunicorn, gunicorn kills the application before it can complete all clean up tasks.
You didn't explain what you mean by custom signal handlers, but I'm not sure that you should be using Flask's signals to capture process-level events, like shutdown. Instead, you can use the signal module from the standard library to hook onto the SIGTERM signal, like so:
# app.py - CREATE THIS FILE
from flask import Flask
from time import sleep, time
import signal
import sys


def create_app():
    signal.signal(signal.SIGTERM, my_teardown_handler)
    app = Flask(__name__)

    @app.route('/')
    def home():
        return 'hi'

    return app


def my_teardown_handler(signal, frame):
    """Sleeps for 3 seconds, then creates/updates a file named app-log.txt with the timestamp."""
    sleep(3)
    with open('app-log.txt', 'w') as f:
        msg = ''.join(['The time is: ', str(time())])
        f.write(msg)
    sys.exit(0)


if __name__ == '__main__':
    app = create_app()
    app.run(port=8888)
# wsgi.py - CREATE THIS FILE, in same folder as app.py
import os
import sys
from werkzeug.wsgi import DispatcherMiddleware
from werkzeug.exceptions import NotFound
from app import create_app
app = DispatcherMiddleware(create_app())
Assuming you have a virtual environment with Flask and Gunicorn installed, you should then be able to launch the app with Gunicorn:
$ gunicorn --bind 127.0.0.1:8888 --log-level debug wsgi:app
Next, in a separate terminal, you can send the TERM signal to your app, like so:
$ kill -s TERM [PROCESS ID OF GUNICORN PROCESS / $(ps ax | grep gunicorn | head -n 1 | awk '{print $1}')]
And to observe the results, you should notice that the contents of the app-log.txt file get updated when you run that kill command, after the three-second delay. You could even spawn a third terminal window in this directory and run watch -n 1 "cat app-log.txt" to observe this file being updated in real time, while you cycle between starting the app and sending the TERM signal.
As for tying that into production, I know that Supervisor has a configuration option to specify the stopsignal, like so:
[program:my-app]
command = /path/to/gunicorn [RUNTIME FLAGS]
stopsignal = TERM
...
But that's a separate topic from the original issue of ensuring that your app's clean up tasks are completely executed.

Can Python Celery be started in-process in a thread?

I want to write a test case for my Celery code.
But Celery usually needs to start in a new process, like $ celery -A CELERY_MODULE worker. Does that mean I can't run my test case code directly?
I configured Celery with an in-memory store to avoid the extra I/O in the test case. That configuration can't simply share the task queue between different processes.
Here is my naive implementation.
The Celery entry point is celery.bin.celeryd.WorkerCommand; it parses the args and runs the worker.
Use the solo pool to avoid multiprocessing in the test case. Of course, you need to install that library first.
You can use this before your Celery test case starts.
#!/usr/bin/env python
# vim: encoding=utf-8
import time
import unittest
from threading import Thread

from celery import Celery, states
from celery.bin.celeryd import WorkerCommand


class CELERY_CONFIG(object):
    BROKER_URL = "memory://"
    CELERY_CACHE_BACKEND = "memory"
    CELERY_RESULT_BACKEND = "cache"
    CELERYD_POOL = "solo"


class CeleryTestCase(unittest.TestCase):
    def test_inprocess(self):
        app = Celery(__name__)
        app.config_from_object(CELERY_CONFIG)

        @app.task
        def dumpy_task(dct):
            return 321

        worker = WorkerCommand(app)
        # worker.execute_from_commandline(["-P solo"])
        t = Thread(target=worker.execute_from_commandline, args=(["-c 1"],))
        t.daemon = True
        t.start()

        ar = dumpy_task.apply_async(({"a": 123},))
        while ar.status != states.SUCCESS:
            time.sleep(.01)
        self.assertEqual(states.SUCCESS, ar.status)
        self.assertEqual(ar.result, 321)
        t.join(0)

How to make a Python script run like a service or daemon in Linux

I have written a Python script that checks a certain e-mail address and passes new e-mails to an external program. How can I get this script to execute 24/7, for example by turning it into a daemon or service in Linux? Would I also need a loop that never ends in the program, or can it be done by just having the code re-executed multiple times?
You have two options here.
Make a proper cron job that calls your script. Cron is a common name for a GNU/Linux daemon that periodically launches scripts according to a schedule you set. You add your script into a crontab or place a symlink to it into a special directory and the daemon handles the job of launching it in the background. You can read more at Wikipedia. There is a variety of different cron daemons, but your GNU/Linux system should have it already installed.
Use some kind of Python approach (a library, for example) so that your script can daemonize itself. Yes, it will require a simple event loop (where your events are timer ticks, possibly provided by the sleep function).
I wouldn't recommend option 2, because you would, in fact, be duplicating cron functionality. The Linux system paradigm is to let multiple simple tools interact and solve your problems. Unless there are additional reasons why you should make a daemon (beyond triggering periodically), choose the other approach.
Also, if you daemonize with a loop and a crash happens, no one will check the mail after that (as pointed out by Ivan Nevostruev in comments to this answer), whereas if the script is added as a cron job, it will simply trigger again.
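If you go the cron route, the crontab entry is a single line. For example (the interpreter and script paths are hypothetical), to run the check every five minutes:

*/5 * * * * /usr/bin/python3 /home/user/check_mail.py >> /var/log/check_mail.log 2>&1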
Here's a nice class that is taken from here (note that it is written for Python 2):
#!/usr/bin/env python

import sys, os, time, atexit
from signal import SIGTERM


class Daemon:
    """
    A generic daemon class.

    Usage: subclass the Daemon class and override the run() method
    """
    def __init__(self, pidfile, stdin='/dev/null', stdout='/dev/null', stderr='/dev/null'):
        self.stdin = stdin
        self.stdout = stdout
        self.stderr = stderr
        self.pidfile = pidfile

    def daemonize(self):
        """
        do the UNIX double-fork magic, see Stevens' "Advanced
        Programming in the UNIX Environment" for details (ISBN 0201563177)
        http://www.erlenstar.demon.co.uk/unix/faq_2.html#SEC16
        """
        try:
            pid = os.fork()
            if pid > 0:
                # exit first parent
                sys.exit(0)
        except OSError, e:
            sys.stderr.write("fork #1 failed: %d (%s)\n" % (e.errno, e.strerror))
            sys.exit(1)

        # decouple from parent environment
        os.chdir("/")
        os.setsid()
        os.umask(0)

        # do second fork
        try:
            pid = os.fork()
            if pid > 0:
                # exit from second parent
                sys.exit(0)
        except OSError, e:
            sys.stderr.write("fork #2 failed: %d (%s)\n" % (e.errno, e.strerror))
            sys.exit(1)

        # redirect standard file descriptors
        sys.stdout.flush()
        sys.stderr.flush()
        si = file(self.stdin, 'r')
        so = file(self.stdout, 'a+')
        se = file(self.stderr, 'a+', 0)
        os.dup2(si.fileno(), sys.stdin.fileno())
        os.dup2(so.fileno(), sys.stdout.fileno())
        os.dup2(se.fileno(), sys.stderr.fileno())

        # write pidfile
        atexit.register(self.delpid)
        pid = str(os.getpid())
        file(self.pidfile, 'w+').write("%s\n" % pid)

    def delpid(self):
        os.remove(self.pidfile)

    def start(self):
        """
        Start the daemon
        """
        # Check for a pidfile to see if the daemon already runs
        try:
            pf = file(self.pidfile, 'r')
            pid = int(pf.read().strip())
            pf.close()
        except IOError:
            pid = None

        if pid:
            message = "pidfile %s already exist. Daemon already running?\n"
            sys.stderr.write(message % self.pidfile)
            sys.exit(1)

        # Start the daemon
        self.daemonize()
        self.run()

    def stop(self):
        """
        Stop the daemon
        """
        # Get the pid from the pidfile
        try:
            pf = file(self.pidfile, 'r')
            pid = int(pf.read().strip())
            pf.close()
        except IOError:
            pid = None

        if not pid:
            message = "pidfile %s does not exist. Daemon not running?\n"
            sys.stderr.write(message % self.pidfile)
            return  # not an error in a restart

        # Try killing the daemon process
        try:
            while 1:
                os.kill(pid, SIGTERM)
                time.sleep(0.1)
        except OSError, err:
            err = str(err)
            if err.find("No such process") > 0:
                if os.path.exists(self.pidfile):
                    os.remove(self.pidfile)
            else:
                print str(err)
                sys.exit(1)

    def restart(self):
        """
        Restart the daemon
        """
        self.stop()
        self.start()

    def run(self):
        """
        You should override this method when you subclass Daemon. It will be called after the process has been
        daemonized by start() or restart().
        """
You should use the python-daemon library; it takes care of everything.
From PyPI: Library to implement a well-behaved Unix daemon process.
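A minimal sketch of how it is typically used (the polling function and the ten-minute interval are placeholders, not part of the library):

import time

import daemon  # pip install python-daemon


def main_loop():
    while True:
        check_mail()      # placeholder for your own polling logic
        time.sleep(600)


if __name__ == '__main__':
    # DaemonContext forks the process, detaches it from the terminal and
    # redirects the standard streams, so main_loop() runs in the background.
    with daemon.DaemonContext():
        main_loop()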
Assuming that you would really want your loop to run 24/7 as a background service
For a solution that doesn't require pulling extra libraries into your code, you can simply create a service unit, since you are using Linux:
[Unit]
Description = <Your service description here>
# Assuming you want to start after network interfaces are made available
After = network.target

[Service]
Type = simple
ExecStart = python <Path of the script you want to run>
User = <User to run the script as>
Group = <Group to run the script as>
# Restart when there are errors
Restart = on-failure
SyslogIdentifier = <Name of logs for the service>
RestartSec = 5
TimeoutStartSec = infinity

[Install]
# Start the service as part of the normal multi-user boot target
WantedBy = multi-user.target
Place that file in your daemon service folder (usually /etc/systemd/system/), in a *.service file, and install it using the following systemctl commands (will likely require sudo privileges):
systemctl enable <service file name without .service extension>
systemctl daemon-reload
systemctl start <service file name without .service extension>
You can then check that your service is running by using the command:
systemctl | grep running
You can use fork() to detach your script from the tty and have it continue to run, like so:
import os, sys

fpid = os.fork()
if fpid != 0:
    # we are in the parent; the child (PID fpid) keeps running as the daemon
    sys.exit(0)
Of course you also need to implement an endless loop, like
while 1:
    do_your_check()
    sleep(5)
Hope this gets you started.
You can also make the Python script run as a service using a shell script. First create a shell script to run the Python script, like this (scriptname is an arbitrary name):
#!/bin/sh
script='/home/.. full path to script'
/usr/bin/python $script &
Now create a file at /etc/init.d/scriptname:
#! /bin/sh

PATH=/bin:/usr/bin:/sbin:/usr/sbin
DAEMON=/home/.. path to shell script scriptname created to run python script
PIDFILE=/var/run/scriptname.pid

test -x $DAEMON || exit 0

. /lib/lsb/init-functions

case "$1" in
  start)
     log_daemon_msg "Starting feedparser"
     start_daemon -p $PIDFILE $DAEMON
     log_end_msg $?
     ;;
  stop)
     log_daemon_msg "Stopping feedparser"
     killproc -p $PIDFILE $DAEMON
     PID=`ps x |grep feed | head -1 | awk '{print $1}'`
     kill -9 $PID
     log_end_msg $?
     ;;
  force-reload|restart)
     $0 stop
     $0 start
     ;;
  status)
     status_of_proc -p $PIDFILE $DAEMON atd && exit 0 || exit $?
     ;;
  *)
     echo "Usage: /etc/init.d/scriptname {start|stop|restart|force-reload|status}"
     exit 1
     ;;
esac

exit 0
Now you can start and stop your python script using the command /etc/init.d/scriptname start or stop.
A simple and supported version is Daemonize.
Install it from Python Package Index (PyPI):
$ pip install daemonize
and then use it like this:
...
import os, sys
from daemonize import Daemonize
...

def main():
    # your code here
    pass

if __name__ == '__main__':
    myname = os.path.basename(sys.argv[0])
    pidfile = '/tmp/%s' % myname  # any name
    daemon = Daemonize(app=myname, pid=pidfile, action=main)
    daemon.start()
Ubuntu has a very simple way to manage a service.
For Python, the difference is that ALL the dependencies (packages) have to be in the same directory where the main file is run from.
I just managed to create such a service to provide weather info to my clients.
Steps:
Create your python application project as you normally do.
Install all dependencies locally like:
sudo pip3 install package_name -t .
Create your command line variables and handle them in code (if you need any)
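As a hedged sketch (not the author's code) of handling key=value style arguments like the httpport=9570 provider=OWMap pair used further below:

import sys

# parse "key=value" style command-line arguments, e.g. "httpport=9570 provider=OWMap"
args = dict(arg.split('=', 1) for arg in sys.argv[1:] if '=' in arg)
httpport = int(args.get('httpport', 8080))
provider = args.get('provider', 'OWMap')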
Create the service file. Something (minimalist) like:
[Unit]
Description=1Droid Weather meddleware provider
[Service]
Restart=always
User=root
WorkingDirectory=/home/ubuntu/weather
ExecStart=/usr/bin/python3 /home/ubuntu/weather/main.py httpport=9570 provider=OWMap
[Install]
WantedBy=multi-user.target
Save the file as myweather.service (for example)
Make sure that your app runs if started in the current directory
python3 main.py httpport=9570 provider=OWMap
The service file produced above and named myweather.service (important to have the extension .service) will be treated by the system as the name of your service. That is the name that you will use to interact with your service.
Copy the service file:
sudo cp myweather.service /lib/systemd/system/myweather.service
Refresh the systemd daemon registry:
sudo systemctl daemon-reload
Stop the service (if it was running)
sudo service myweather stop
Start the service:
sudo service myweather start
Check the log file (this is where your print statements go):
tail -f /var/log/syslog
Or check the status with:
sudo service myweather status
Back to the start with another iteration if needed
This service is now running and even if you log out it will not be affected.
And yes, if the host is shut down and restarted, this service will be restarted...
cron is clearly a great choice for many purposes. However it doesn't create a service or daemon as you requested in the OP. cron just runs jobs periodically (meaning the job starts and stops), and no more often than once / minute. There are issues with cron -- for example, if a prior instance of your script is still running the next time the cron schedule comes around and launches a new instance, is that OK? cron doesn't handle dependencies; it just tries to start a job when the schedule says to.
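If overlapping runs are a concern, one common guard is a non-blocking file lock at the top of the script; this is a sketch, and the lock path is made up:

import fcntl
import sys

# take an exclusive, non-blocking lock; if another instance holds it, exit quietly
lock_file = open('/tmp/check_mail.lock', 'w')
try:
    fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError:
    sys.exit(0)  # a previous cron run is still working; let it finish

# ... the actual mail-checking work goes here ...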
If you find a situation where you truly need a daemon (a process that never stops running), take a look at supervisord. It provides a simple way to wrap a normal, non-daemonized script or program and make it operate like a daemon. This is a much better approach than creating a native Python daemon.
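For illustration, a supervisord program section for such a script might look like this (the program name and paths are hypothetical):

[program:check_mail]
command=/usr/bin/python3 /home/user/check_mail.py
autostart=true
autorestart=true
stdout_logfile=/var/log/check_mail.out.log
stderr_logfile=/var/log/check_mail.err.log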
How about using the nohup command on Linux?
I use it for running my commands on my Bluehost server.
Please advise if I am wrong.
If you are using a terminal (ssh or similar) and you want to keep a long-running script working after you log out, you can try screen:
apt-get install screen
Create a virtual terminal inside it (named abc, for example): screen -dmS abc
Now connect to abc: screen -r abc
Now you can run your Python script: python keep_sending_mails.py
From now on you can close your terminal directly, and the Python script will keep running rather than being shut down, since the keep_sending_mails.py process is a child of the virtual screen session rather than of the (ssh) terminal.
If you want to go back and check your script's status, use screen -r abc again.
First, read up on mail aliases. A mail alias will do this inside the mail system without you having to fool around with daemons or services or anything of the sort.
You can write a simple script that will be executed by sendmail each time a mail message is sent to a specific mailbox.
See http://www.feep.net/sendmail/tutorial/intro/aliases.html
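As an illustration (the alias name and script path are made up), an /etc/aliases entry that pipes mail for a mailbox to a script looks roughly like this; run newaliases afterwards to rebuild the alias database:

checker: "|/home/user/handle_mail.py"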
If you really want to write a needlessly complex server, you can do this.
nohup python myscript.py &
That's all it takes. Your script simply loops and sleeps.
import time

def do_the_work():
    # one round of polling -- checking email, whatever.
    pass

while True:
    time.sleep(600)  # 10 min.
    try:
        do_the_work()
    except:
        pass
I would recommend this solution. You need to inherit from it and override the run method.
import sys
import os
from signal import SIGTERM
from abc import ABCMeta, abstractmethod


class Daemon(object):
    __metaclass__ = ABCMeta

    def __init__(self, pidfile):
        self._pidfile = pidfile

    @abstractmethod
    def run(self):
        pass

    def _daemonize(self):
        # decouple from the parent process
        pid = os.fork()

        # stop the first (parent) process
        if pid > 0:
            sys.exit(0)

        # write pid into a pidfile
        with open(self._pidfile, 'w') as f:
            print >> f, os.getpid()

    def start(self):
        # if the daemon is already started, throw an error
        if os.path.exists(self._pidfile):
            raise Exception("Daemon is already started")

        # create and switch to the daemon process
        self._daemonize()

        # run the body of the daemon
        self.run()

    def stop(self):
        # check that the pidfile exists
        if os.path.exists(self._pidfile):
            # read pid from the file
            with open(self._pidfile, 'r') as f:
                pid = int(f.read().strip())

            # remove the pidfile
            os.remove(self._pidfile)

            # kill daemon
            os.kill(pid, SIGTERM)
        else:
            raise Exception("Daemon is not started")

    def restart(self):
        self.stop()
        self.start()
To create something that runs like a service, you can use Cement.
The first thing you must do is install the Cement framework (pip install cement).
Cement is a CLI framework that you can deploy your application on.
Command-line interface of the app:
interface.py
from cement.core.foundation import CementApp
from cement.core.controller import CementBaseController, expose
from YourApp import yourApp


class MyBaseController(CementBaseController):
    class Meta:
        label = 'base'
        description = "your application description"
        arguments = [
            (['-r', '--run'],
             dict(action='store_true', help='Run your application')),
            (['-v', '--version'],
             dict(action='version', version="Your app version")),
            (['-s', '--stop'],
             dict(action='store_true', help="Stop your application")),
        ]

    @expose(hide=True)
    def default(self):
        if self.app.pargs.run:
            # start running your app from here!
            yourApp()

        if self.app.pargs.stop:
            # stop your application
            yourApp.stop()


class App(CementApp):
    class Meta:
        label = 'Uptime'
        base_controller = 'base'
        handlers = [MyBaseController]


with App() as app:
    app.run()
YourApp.py:
import threading


class yourApp:

    def __init__(self):
        self.loger = log_exception.exception_loger()
        thread = threading.Thread(target=self.start, args=())
        thread.daemon = True
        thread.start()

    def start(self):
        # Do everything you want here
        pass

    def stop(self):
        # Do whatever is needed to stop your application
        pass
Keep in mind that your app must run in a thread to work as a daemon.
To run the app, just do this on the command line:
python interface.py --help
Use whatever service manager your system offers - for example under Ubuntu use upstart. This will handle all the details for you such as start on boot, restart on crash, etc.
You can run a process as a subprocess from within a script (or from another script) like this:
subprocess.Popen(arguments, close_fds=True, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)
Or use a ready-made utility
https://github.com/megashchik/d-handler
