Django how to run external module as Daemon - python

Is there a correct way to start an infinite task from the Django framework?
I need to run an MQTT client (based on Paho) and a Python PID implementation.
I want to use Django as the "orchestrator" because I want to start the daemons only if Django is running.
I use Django because of its simplicity for creating a REST API and an ORM layer.
The only way I've found (here on GitHub) is to modify __init__.py to import my external module there --> How to use paho mqtt client in django?.
This isn't suitable for me because it starts the daemons on every Django manage task.
Has anyone already solved this problem?
Thank you in advance.

For my part, I use supervisor to daemonize my Django management commands.
As my Django projects all run in a virtualenv, I created a script that initializes the virtualenv before running the management command:
/home/cocoonr/run_standalone.sh
#!/bin/bash
export WORKON_HOME=/usr/share/virtualenvs
source /usr/share/virtualenvwrapper/virtualenvwrapper.sh
workon cocoonr  # name of my virtualenv
django-admin "$@"  # forward the management command and its arguments
And here is an example of the supervisor configuration for a command
/etc/supervisor/conf.d/cocoonr.conf
[program:send_queued_mails_worker]
command=/bin/bash /home/cocoonr/run_standalone.sh send_queued_mails_worker
user=cocoonr
group=cocoonr
stopasgroup=true
environment=LANG=fr_FR.UTF-8,LC_ALL=fr_FR.UTF-8
stderr_logfile=/var/log/cocoonr/send_queued_mails_worker.err
stdout_logfile=/var/log/cocoonr/send_queued_mails_worker.log
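The worker itself is just a custom management command. For the MQTT client in the question, a minimal sketch could look like this (the app name, command name, and broker address are placeholders, assuming the paho-mqtt client):
yourapp/management/commands/run_mqtt_client.py
import paho.mqtt.client as mqtt
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Run the MQTT client as a long-lived worker (kept alive by supervisor)"

    def handle(self, *args, **options):
        client = mqtt.Client()
        client.on_message = self.on_message
        client.connect("broker.example.com", 1883)  # placeholder broker address
        client.loop_forever()  # blocks forever; supervisor restarts it if it exits

    def on_message(self, client, userdata, msg):
        self.stdout.write("%s: %r" % (msg.topic, msg.payload))
Supervisor then runs it the same way as the worker above, e.g. command=/bin/bash /home/cocoonr/run_standalone.sh run_mqtt_client, so the daemon lives alongside the deployed site instead of being started by every manage.py invocation.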

Related

Cron not doing my task without the command python manage.py runcrons

I managed to write a function that sends lots of emails to every user in my Django application; for that I used the django-cron package.
I need to send the emails in a particular hour of the day, so I added in my function the following:
RUN_AT_TIMES = ['14:00']
schedule = Schedule(run_at_times=RUN_AT_TIMES)
The problem is that this function is only called if I run the command:
python manage.py runcrons
What can I do to make the application work after one single call of the command python manage.py runcrons?
P.S.: I need this application to work in Heroku as well.
As described in the docs' installation guide at point 6, you need to set up a cron job to execute the command. The package takes away the annoyance of setting up separate cron jobs for all your commands, but it does not eliminate cron entirely.
EDIT: after seeing your update: as I understand it, cron support on Heroku depends on your plan (I'm really not sure about that), but there are add-ons that help with this, Heroku Scheduler for example.
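For reference, the two lines from the question belong inside a CronJobBase subclass; a minimal sketch, with a made-up class name, code string, and helper function:
my_app/cron.py
from django_cron import CronJobBase, Schedule

from .emails import send_emails_to_all_users  # hypothetical helper that does the actual sending

class SendEmailsCronJob(CronJobBase):
    RUN_AT_TIMES = ['14:00']
    schedule = Schedule(run_at_times=RUN_AT_TIMES)
    code = 'my_app.send_emails_cron_job'  # unique identifier required by django-cron

    def do(self):
        send_emails_to_all_users()
Even with this in place, the job only runs when something executes python manage.py runcrons, which is why a real cron entry (or a Heroku Scheduler task) calling that command every few minutes is still needed.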

Correct way to update live django web application

Before the actual problem, let me explain our architecture. We use git over SSH on our servers and have post-receive hooks enabled to set up the code. The code is all maintained in a separate folder. What we need is that whenever someone pushes code to the server, it runs the tests and migrations and updates the live site. Currently, whenever a model is updated, the application crashes.
What we need is a way for the hook script to detect whether the code is proper (by proper I mean no syntax errors, etc.), then run the migrations and update the running application with the new code, without downtime. We use nginx to proxy to the Django application, a virtualenv to install packages from a requirements.txt file, and gunicorn for deployment.
The bottom line is that if there is a failure at any point, the pushed commit should be rejected, and if all tests are successful, the migrations should be applied to the databases and the new app started.
A thought I had was to use two ports: one running the main application and another running the pushed code. If the pushed code passes the tests, change the port in the nginx config to the new application and reload nginx. Please discuss the drawbacks of this approach, if any, and show a sample post-receive script demonstrating how to reject a push in case of failure.
Consider using Fabric. Fabric lets you write Pythonic deployment scripts: you can run the deployment against a remote server, create a fresh database, and check whether the migrations apply safely. If everything is good, the Fabric script can go on to deploy to production; if something fails, it can send an email instead.
This makes your life simpler.
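As a rough illustration of what such a script could look like (Fabric 1.x API; the host, paths, and restart command are placeholders, not taken from the question):
fabfile.py
from fabric.api import abort, cd, env, run, settings

env.hosts = ['deploy@example.com']  # placeholder host

def deploy():
    with cd('/srv/myproject'):          # placeholder project path
        run('git pull origin master')
        with settings(warn_only=True):  # don't abort immediately if the tests fail
            result = run('venv/bin/python manage.py test')
        if result.failed:
            abort('Tests failed, keeping the current release.')
        run('venv/bin/python manage.py migrate')
        run('sudo systemctl restart gunicorn')  # or however gunicorn is supervised
You would run this with fab deploy, either by hand or from the git hook. Note that by the time a post-receive hook runs the push has already been accepted, so the reject-on-failure behaviour the question asks for belongs in a pre-receive or update hook.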

How to run various workers on OpenShift?

I have a Python/Flask project (an API) that contains a few workers that must run continuously. They connect to Redis through an outside provider (https://redislabs.com/). I haven't found how to configure OpenShift to run my workers. With Heroku, it was as simple as:
web: gunicorn wsgi --log-file -
postsearch: python manage.py worker --queue post-search
statuses: python manage.py worker --queue statuses
message: python manage.py worker --queue message
invoice: python manage.py worker --queue invoice
But for OpenShift, despite googling many things, I was not able to find anything to help me. Ideally, I would avoid deploying my application to each gear separately. How can I run multiple workers with OpenShift?
Taken from Getting Started with OpenShift by Katie J. Miller and Steven Pousty
Cartridge
To get a gear to do anything, you need to add a cartridge. Cartridges are the plugins that house the framework or components that can be used to create and run an application. One or more cartridges run on each gear, and the same cartridge can run on many gears for clustering or scaling. There are two kinds of cartridges:
Standalone
These are the languages or application servers that are set up to serve your web content, such as JBoss, Tomcat, Python, or Node.js. Having one of these cartridges is sufficient to run an application.
Embedded
An embedded cartridge provides functionality to enhance your application, such as a database or cron, but cannot be used on its own to create an application.
TL;DR: you must use cartridges to run a worker process. The documentation can be found here and here, the community-maintained examples are here, and a series of blog posts begins here.
A cartridge is a bunch of files plus a manifest that lets OpenShift know how to run the cartridge and how to resolve its dependencies.
But let's build something. Create a Django/Python app.
Now install your (custom) cartridge from the link at the bottom or with the command-line tool; you can use the link to the cartridge repository.
OpenShift's integration with external services is done by configuring the relevant environment variables as explained at: https://developers.openshift.com/external-services/index.html#setting-environment-variables
Heroku's apps rely on a REDISCLOUD_URL env var that is automatically provisioned - you'll need to set up something similar in your OpenShift deployment with the applicable information about your database from the service's dashboard.
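On the application side the difference is small; a minimal redis-py sketch, assuming the connection string is exposed under a REDISCLOUD_URL-style environment variable:
import os
import redis

# The variable name is whatever you configure in OpenShift; REDISCLOUD_URL is
# simply the convention the Redis Cloud add-on uses on Heroku.
redis_url = os.environ['REDISCLOUD_URL']
conn = redis.from_url(redis_url)
conn.ping()  # fails fast if the URL or credentials are wrong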

Can celery celerybeat use a Database Scheduler without Django?

I have a small infrastructure plan that does not include Django. But, because of my experience with Django, I really like Celery. All I really need is Redis + Celery to make my project. Instead of using the local filesystem, I'd like to keep everything in Redis. My current architecture uses Redis for everything until it is ready to dump the results to AWS S3. Admittedly I don't have a great reason for using Redis instead of the filesystem. I've just invested so much into architecting this with Docker and scalability in mind, it feels wrong not to.
I was searching for a non-Django database scheduler too a while back, but it looked like there was nothing else. So I took the Django scheduler code and modified it to use SQLAlchemy. It should be even easier to make it use Redis instead.
It turns out that you can!
First I created this little project from the tutorial on celeryproject.org.
That went great so I built a Dockerized demo as a proof of concept.
Things I learned from this project
Docker
  using --link to create network connections between containers
  running commands inside containers
Dockerfile
  using FROM to build images iteratively
  using official images
  using CMD for images that "just work"
Celery
  using Celery without Django
  using Celerybeat without Django
  using Redis as a queue broker
  project layout
  task naming requirements
Python
  proper project layout for setuptools/setup.py
  installation of project via pip
  using entry_points to make console_scripts accessible
  using setuid and setgid to de-escalate privileges for the celery daemon
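To make that concrete, here is a minimal sketch of a non-Django Celery app with Redis as the broker, using Celery 4-style configuration (module and task names are made up; the beat_schedule shown here is the default config-based scheduler, i.e. exactly the part a database scheduler such as the SQLAlchemy one mentioned above would replace):
tasks.py
from celery import Celery
from celery.schedules import crontab

app = Celery('demo',
             broker='redis://localhost:6379/0',
             backend='redis://localhost:6379/1')

@app.task
def dump_results_to_s3():
    pass  # placeholder for the real work

# Static beat schedule kept in the config; runs every hour on the hour.
app.conf.beat_schedule = {
    'dump-every-hour': {
        'task': 'tasks.dump_results_to_s3',
        'schedule': crontab(minute=0),
    },
}
Start the worker with celery -A tasks worker and the scheduler with celery -A tasks beat; a custom database-backed scheduler is plugged in through beat's --scheduler option.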

Django application deployment help

I'm using Capistrano to deploy a Django application (it uses Nginx as the web server), using instructions I found at http://akashxav.com/2009/07/11/getting-django-running-on-nginx-and-fastcgi-on-prgmr/ (I had to look at a cached version earlier today) and was wondering about the last command in there, which is
python manage.py runfcgi host=127.0.0.1 port=8081 --settings=settings
I understand at a high level that this is telling the application that we want to run a few instances of the FastCGI binary to serve up this application.
What I was wondering is how best to handle "resetting" this, for lack of a better word. For those who don't know, Capistrano deploys by creating "releases" directories and then pointing a symlink at the latest release.
Since I can run post-deployment tasks (I've done this with CakePHP applications to do things like set proper permissions on the application's cache directory), I was wondering how to stop the existing processes created by the command above and start new ones.
I hope I am making sense.
There is a section in the Django docs about this.
Basically, use the pidfile option of manage.py runfcgi and then write a small shell script that uses that pid to kill the existing FastCGI process, if there is one, before starting the new one.
Something like this:
#!/bin/bash
# If a previous FastCGI process left a pidfile behind, kill it first.
if [ -f "pidfile" ]; then
    kill `cat -- pidfile`
    rm -f -- pidfile
fi
# Start a fresh FastCGI process, writing its pid to "pidfile".
exec python manage.py runfcgi host=127.0.0.1 port=8081 pidfile=pidfile --settings=settings
NB FastCGI support is deprecated and will be removed in Django 1.9
