Django application deployment help - python

I'm using Capistrano to deploy a Django application (it uses Nginx as the web server), following instructions I found at http://akashxav.com/2009/07/11/getting-django-running-on-nginx-and-fastcgi-on-prgmr/ (I had to look at a cached version earlier today), and I was wondering about the last command in there, which is
python manage.py runfcgi host=127.0.0.1 port=8081 --settings=settings
I understand at a high level that this is telling the application that we want to run a few instances of the FastCGI binary to serve up this application.
What I was wondering is: what is the best way to handle "resetting" this, for lack of a better word? For those who don't know, Capistrano deploys things by creating "releases" directories and then providing a symlink to the latest release.
Since I can do post-deployment tasks (I've done this with CakePHP applications to do things like properly setting directory permissions for a caching directory in the application), I was wondering how to shut down the existing processes created by the command above and start up new ones.
I hope I am making sense.

There is a section in the Django docs about this.
Basically, use the pidfile option to manage.py and then write a small shell script that uses that pid to kill the existing FastCGI process, if one exists, before starting the new one.
Something like this:
#!/bin/bash
# If a previous FastCGI process left a pidfile behind, stop it first.
if [ -f "pidfile" ]; then
    kill "$(cat -- pidfile)"
    rm -f -- pidfile
fi
# Start a fresh FastCGI process, writing its pid to "pidfile".
exec python manage.py runfcgi host=127.0.0.1 port=8081 pidfile=pidfile --settings=settings
NB: FastCGI support is deprecated and will be removed in Django 1.9.

Related

right way to deploy a django-rq worker

Maybe it's a silly question, but I didn't find much while googling around.
So I'm on my way to turning my development environment into a deployment environment. I connected Django and nginx using uWSGI and put both in Docker containers... so far, no problem.
But I'm using django-rq, so I need a worker process. In all these nice examples about deploying Django I didn't find much about deploying django-rq. All I found was "create a Docker container and use manage.py", like this:
CMD python manage.py rqworker [queue1] [queue2]
Really? Should I just start the worker like this? I thought manage.py was just for testing!?
You can create a systemd service in Ubuntu, then enable and start the service.
FYR: https://github.com/rq/django-rq#deploying-on-ubuntu
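For reference, a unit file along the lines of the one in that README might look like this (the file name, user, project path, and queue name below are all placeholders you would replace with your own):

```
# /etc/systemd/system/rqworker.service  (hypothetical paths throughout)
[Unit]
Description=django-rq worker
After=network.target

[Service]
User=deploy
WorkingDirectory=/srv/myproject
ExecStart=/srv/myproject/venv/bin/python manage.py rqworker default
Restart=always

[Install]
WantedBy=multi-user.target
```

Then enable and start it with `sudo systemctl enable rqworker` and `sudo systemctl start rqworker`. As for the original worry: running the worker through manage.py in production is normal; what makes it production-ready is the supervision around it (systemd, Docker restart policies, etc.), not the command itself.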

Django how to run external module as Daemon

Is there a correct way to start an infinite task from the Django framework?
I need to run an MQTT client (based on Paho) and a Python PID implementation.
I want to use Django as the "orchestrator" because I want to start the daemons only if Django is running.
I use Django because of its simplicity for creating a REST API and an ORM layer.
The only way I've found (here on GitHub) is to modify __init__.py to include my external module --> How to use paho mqtt client in django?.
This is not suitable for me because it starts the daemons on every Django manage task.
Has anyone already solved this problem?
Thank you in advance.
As far as I am concerned, I use supervisor to daemonize my django management commands.
As my Django projects all run in a virtualenv, I created a script to initialize the virtualenv before running the management command:
/home/cocoonr/run_standalone.sh
#!/bin/bash
export WORKON_HOME=/usr/share/virtualenvs
source /usr/share/virtualenvwrapper/virtualenvwrapper.sh
workon cocoonr # name of my virtualenv
django-admin "$@" # forward all script arguments to django-admin
And here is an example of a supervisor configuration for a command:
/etc/supervisor/conf.d/cocoonr.conf
[program:send_queued_mails_worker]
command=/bin/bash /home/cocoonr/run_standalone.sh send_queued_mails_worker
user=cocoonr
group=cocoonr
stopasgroup=true
environment=LANG=fr_FR.UTF-8,LC_ALL=fr_FR.UTF-8
stderr_logfile=/var/log/cocoonr/send_queued_mails_worker.err
stdout_logfile=/var/log/cocoonr/send_queued_mails_worker.log
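After dropping a file like that into /etc/supervisor/conf.d/, you would typically load and start it with the standard supervisorctl commands (the program name here matches the example config above):

```
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl status send_queued_mails_worker
```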

Possible to use Python program on node js Heroku Dyno?

I have a node js Dyno on Heroku and I've changed the script so it uses the npm python-shell module to communicate with a Python program I've written. I'd rather not translate the Python code to js as it is performance-sensitive and works best with certain python modules.
Is it possible to place the Python file in my deployment folder, push it to the Dyno and have it work somehow? I couldn't find much online for this.
Thanks a lot!
You can set up a Node.js and Python environment on the same dyno by following Heroku's advice on using multiple buildpacks.
You can run this with the Heroku CLI:
heroku buildpacks:add --index 1 heroku/python
You then need to add the relevant Python start command into your Procfile so that both your Python and Node applications run side by side.
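For example, assuming your Node app is started from index.js and the Python side is a standalone program in a hypothetical worker.py, the Procfile could look like:

```
web: node index.js
worker: python worker.py
```

With python-shell, though, the Python script is spawned by the Node process itself, so you may only need the Python buildpack (to provide a Python runtime and install requirements.txt) rather than a separate process type.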

Is it possible to run mod_python Python Server Pages outside of the Apache server context?

I have experience coding PHP and Python but have until recently not used Python for much web development.
I have started a project using mod_python with Python Server Pages which does a very good job of what I need it to do. Amongst the advantages of this is performance; the web server does not spawn a new interpreter for each request.
The system will finally be deployed on a server where I am able to setup /etc/apache/conf.d correctly, etc.
However, during development and for automated regression testing I would like the ability to run the .psp scripts without having to serve using an Apache instance. For example, it is possible to run PHP scripts using the PHP cli, and I would like to run my .psp scripts from the shell and see the resulting HTTP/HTML on stdout.
I suppose this mode would need to support simulation of POST requests, etc.
Update
OK, after typing all that question I discovered the mod_python command line in the manual:
http://modpython.org/live/current/doc-html/commandline.html
I guess that would get me most of the way there, which is to be able to exercise my application as a local user without deploying to an Apache server.
I am leaving this question here in case anyone does a web search like I did.
The mod_python command line tool is one way to do this.
See http://modpython.org/live/current/doc-html/commandline.html
Essentially,
mod_python create /path/to/new/server_root \
--listen 8888 \
--pythonpath=/path/to/my/app \
--pythonhandler=mod_python.wsgi \
--pythonoption="mod_python.wsgi.application myapp.wsgi::application"
sets up a skeleton app.
Then you run it:
mod_python start /path/to/new/server_root/conf/httpd.conf
wget http://localhost:8888/path/to/my/app
I am still testing this...
Caveat: this subcommand only seems to be available from mod_python 3.4.0 onwards.

How to autorun collectstatic on local django instance?

While developing locally on a side Django project, my workflow is currently this:
Make changes.
Go to Terminal session with virtual environment. Stop foreman with ctrl+C.
Type python manage.py collectstatic to move all my static css/js/img files.
Restart foreman with foreman start.
In an attempt to be more efficient and do a better job of learning, I'm wondering how I can optimize the workflow so it's more like this:
Make changes.
Run a single command that moves static files and restarts foreman.
Would someone be able to point me in the right direction? Thanks.
You could create a bash script that does this bunch of commands for you.
While I have no experience with foreman, you could create a script with content something like:
#!/bin/bash
sudo killall foreman
python manage.py collectstatic --noinput # --noinput skips the confirmation prompt
foreman start
Then add execution rights to it:
chmod +x script.sh
And execute everything in one command:
./script.sh
I assume you absolutely can't get around using foreman for local development, because otherwise you would not even need to do a collectstatic or a manual restart.
Maybe writing a custom management command based on runserver is the way to go for you, as it would already have the check-for-change-and-restart logic in it.
https://docs.djangoproject.com/en/dev/howto/custom-management-commands/
https://github.com/django/django/blob/master/django/core/management/commands/runserver.py?source=cc
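If you would rather not touch Django internals at all, the check-for-change-and-rerun idea can also live in a small standalone script. Here is a minimal stdlib-only sketch that polls a static directory and reruns the commands when something changes; the directory name, poll interval, and the foreman restart steps are assumptions you would adapt to your project:

```python
"""Sketch: rerun collectstatic (and restart foreman) when static files change."""
import os
import subprocess
import time

def snapshot(root):
    """Map every file under root to its (mtime, size) pair."""
    state = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            state[path] = (os.path.getmtime(path), os.path.getsize(path))
    return state

def changed(old, new):
    """True if any file was added, removed, or modified between snapshots."""
    return old != new

def watch(root="static", poll=1.0):
    """Poll root; on any change, collect static files and restart foreman."""
    state = snapshot(root)
    while True:
        time.sleep(poll)
        current = snapshot(root)
        if changed(state, current):
            state = current
            subprocess.call(["python", "manage.py", "collectstatic", "--noinput"])
            subprocess.call(["killall", "foreman"])  # ignore failure if not running
            subprocess.Popen(["foreman", "start"])   # start without blocking the loop
```

You would run watch() from the project root and leave it going in a spare terminal. It is a plain polling loop, so a file-watching library such as watchdog would be more efficient, but this keeps it dependency-free.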
