I followed this tutorial to set up Gunicorn to run Django on a VPS; it is working perfectly fine, and the site is served through Nginx.
I created a separate manage.py command that I want to run asynchronously as a worker, but I am unsure how to integrate it through Gunicorn.
This is a follow-up to Run code on first Django start, where the recommendation was to create a separate manage.py command and then run it as a separate worker process through Gunicorn.
Gunicorn's purpose here is to serve the Django project over WSGI; it doesn't use manage.py at all. Anything related to manage.py should be called directly:
$ cd <projectdir>
$ source myprojectenv/bin/activate
$ python manage.py <your command here>
For setting it up as a worker, you can either create a cron job that points to the Python binary in the virtualenv (see the sketch below), or consider a Celery setup with the process management tool of your choice (supervisord, Docker, etc.).
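A minimal sketch of such a crontab entry; the schedule, paths, and command name are placeholders, not values from the tutorial:
# run the custom management command every 5 minutes using the virtualenv's python
*/5 * * * * cd /path/to/projectdir && /path/to/myprojectenv/bin/python manage.py your_command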
Related
I have a Django project hosted on an Amazon EC2 Linux instance.
To keep my app running after the session is closed I use Gunicorn, but I experience some errors and degraded performance.
When I run the command:
python manage.py runserver
from the terminal, all works great, but when the session is closed the app stops working.
How can I run "python manage.py runserver" so that it keeps working in the background (until I kill it), even when the session is closed?
I know there is uWSGI, but if possible I'd prefer to use the native Django command directly.
Thanks in advance
What happens here is that the script is interrupted by a SIGHUP signal when your session is closed. To overcome this problem, there is a tool called nohup which doesn't pass SIGHUP down to the program/script it executes. Use it as follows:
nohup python manage.py runserver &
(note the & at the end; it is needed so that manage.py runs in the background rather than in the foreground).
By default nohup redirects the output to the file nohup.out, so you can use tail -f nohup.out to watch the output/logs of your Django app.
Note, however, that manage.py runserver is not supposed to be used in production. For production you really should use a proper WSGI server, such as uWSGI or Gunicorn.
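If you do move to Gunicorn, a rough sketch of the equivalent background invocation, where myproject is a placeholder for your own project's WSGI module:
nohup gunicorn myproject.wsgi:application --bind 0.0.0.0:8000 &
Gunicorn also accepts a --daemon flag that backgrounds the server by itself, making nohup and the trailing & unnecessary.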
You can install and use tmux if you want your scripts to keep running in the background even after closing SSH and mosh connections:
$ sudo apt-get install tmux
then run it using the command $ tmux. A new shell will open; just execute your command:
$ python manage.py runserver 0.0.0.0:8000
0.0.0.0:8000 binds the server to all network interfaces on port 8000 (note that the host you browse to must still be listed in ALLOWED_HOSTS). Now you can detach your tmux session so it keeps running in the background: press CTRL + B and then D.
Now you can exit your terminal, but your command keeps on running in tmux (a few basic commands are sketched below); you can learn more about tmux from here
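For reference, a few of the basic tmux commands (the session name django is arbitrary):
$ tmux new -s django     # start a new named session
$ tmux ls                # list running sessions
$ tmux attach -t django  # reattach to a detached session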
For that, you can also use screen: just start a new screen session and run
python manage.py runserver
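A rough sketch of the full screen workflow, with an arbitrary session name:
$ screen -S django          # start a named screen session
$ python manage.py runserver
Press CTRL + A and then D to detach; reattach later with:
$ screen -r django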
My cron jobs work fine on localhost, but when I deploy they are not getting added.
This is the relevant part of settings.py:
CRONJOBS = [
    ('*/1 * * * *', 'push.cron.my_scheduled_job')
]
In development, cron works perfectly by doing this:
python manage.py crontab add
python manage.py crontab run 2e847f370afca8feeddaa55d5094d128
But when I deploy it to the server using.., the cron jobs don't get added automatically. How do I add the cron jobs to the server?
I just managed to get this working.
First I wrote the script as a "Django Custom Management Command".
Then I established an SSH connection, which starts at the directory "/home/ec2-user", and entered "crontab -e" to edit the crontab.
In the crontab file, add a line like the following (replace MY_CUSTOM_MANAGEMENT_COMMAND with your own command; note that a crontab line needs a schedule in front, and that you have to cd into the directory that contains manage.py, which on Elastic Beanstalk is typically /opt/python/current/app):
0 * * * * source /opt/python/run/venv/bin/activate && cd /opt/python/current/app && python manage.py MY_CUSTOM_MANAGEMENT_COMMAND
Then you're done.
You didn't mention it in your question, but there's something I would like to point out, because I've seen it in some well-known blogs: you don't need a worker tier for this; the crontab works just fine in the web server tier. Use the worker tier if you have some heavy background processing.
Your cron jobs running on localhost are not related to your server. You will need to register them on the server separately, in much the same manner as you do locally:
## Assuming you have already activated your virtual env
python manage.py crontab add            # prints a hash for each job
python manage.py crontab run <hash>     # put the hash here, without quotes
You could automate this by writing and running a small bash script, as sketched below.
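A minimal sketch of such a script, to be run after each deploy; the paths are assumptions to adjust for your own setup:
#!/bin/bash
# re-register django-crontab jobs after a deploy (paths are placeholders)
source /path/to/venv/bin/activate
cd /path/to/project
python manage.py crontab remove   # clear any stale entries
python manage.py crontab add      # register everything listed in CRONJOBS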
I would recommend using celery-beat instead of crontab if you want more automation.
How do I run a Python script that is in my main directory with the Heroku Scheduler?
Normally I run this through the command line with heroku run python script.py, but this syntax is clearly not correct for the Heroku Scheduler. Where it says "rake do_something", what should the correct syntax be to run a Python script? I've tried "python script.py" and this does not work either.
Thanks!
The Heroku Scheduler will try to run any command you give it. For Python, if you added a mytask.py to your app repo, you could have the Scheduler run:
python mytask.py
Instead of waiting for the Scheduler to run to see if the command works as expected, you can also test run it like this:
heroku run python mytask.py # or heroku run:detached ...
heroku logs --tail
Another way to use the Scheduler would be to extend your app with a cli tool or a script runner that shares the app context. A popular one for Flask is Flask-Script.
Note: the "rake" references in the Heroku Scheduler docs example is for running tasks with ruby.
I have been researching Docker and understand almost everything I have read so far. I have built a few images, linked containers together, mounted volumes, and even got a sample Django app running.
The one thing I can not wrap my head around is setting up a development environment. The whole point of Docker is to be able to take your environment anywhere so that everything you do is portable and consistent. If I am running a Django app in production served by Gunicorn, for example, I need to restart the server in order for my code changes to take effect; this is not ideal when you are working on your project in your local laptop environment. If I make a change to my models or views, I don't want to have to attach to the container, stop Gunicorn, and then restart it every time I make a code change.
I am also not sure how I would run management commands. python manage.py syncdb would require me to get inside the container and run commands. I also use South to manage data and schema migrations (python manage.py migrate). How are others dealing with this issue?
Debugging is another issue. Would I have to somehow get all my logs to save somewhere so I can look at things? I usually just look at the Django development server's output to see errors and prints.
It seems that I would have to make a special dev-environment container that had a bunch of workarounds; that seems like it completely defeats the purpose of the tool in the first place though. Any suggestions?
Update after doing more research:
Thanks for the responses. They set me on the right path.
I ended up discovering fig (http://www.fig.sh/). It lets you orchestrate the linking of containers and mounting of volumes, and you can run commands through it, e.g. fig run container_name python manage.py syncdb. It seems pretty nice, and I have been able to set up my dev environment using it.
I made a diagram of how I set it up using Vagrant (https://www.vagrantup.com/).
I just run
fig up
in the same directory as my fig.yml file, and it does everything needed to link the containers and start the server. When working on my Mac I just run the development server, so that it restarts whenever I change Python code.
At my current gig we set up a bash script called django_admin. You run it like so:
django_admin <management command>
Example:
django_admin syncdb
The script looks something like this:
# forward the management command and its arguments into the container
docker run -it --rm \
  -e PYTHONPATH=/var/local \
  -e DJANGO_ENVIRON=LOCAL \
  -e LC_ALL=en_US.UTF-8 \
  -e LANG=en_US.UTF-8 \
  -v /src/www/run:/var/log \
  -v /src/www:/var/local \
  --link mysql:db \
  localhost:5000/www:dev /var/local/config/local/django-admin "$@"
Note the "$@" at the end; it passes all of the script's arguments through to django-admin inside the container.
I'm guessing you could also hook something like this up to manage.py; a sketch follows.
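A rough sketch of what such a manage.py wrapper could look like, reusing the image name and mounts from the script above (all of them assumptions to adapt):
#!/bin/bash
# hypothetical wrapper: run manage.py inside the dev container
docker run -it --rm \
  -v /src/www:/var/local \
  --link mysql:db \
  localhost:5000/www:dev python /var/local/manage.py "$@"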
I normally wrap my actual CMD in a script that launches a bash shell. Take a look at the Docker-Jetty container as an example. The final two lines in the script are:
/opt/jetty/bin/jetty.sh restart
bash
This will start Jetty and then open a shell.
Now I can use the following command to enter a shell inside the container and run any commands or look at logs. Once I am done, I can use Ctrl-p + Ctrl-q to detach from the container.
docker attach CONTAINER_NAME
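If your Docker version includes docker exec (added in Docker 1.3), you can also open a separate, throwaway shell in the running container without attaching to its main process:
docker exec -it CONTAINER_NAME bash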
I'm developing a Python project with Django. When we run the Python/Django application, we need to open the command prompt and type in python manage.py runserver. That's OK for the development server, but for production it looks funny. Is there any way to run the Python/Django project without opening the command prompt?
The deployment section of the documentation details the steps to configure servers to run Django in production.
runserver is to be used strictly for development and should never be used in production.
You run the runserver command only while you develop. After you deploy, the client does not need to run the python manage.py runserver command; requesting the URL will execute the required view. So it need not be a concern.
If you are using Linux, I wrote a pretty basic script which I always use when I don't want to type this command.
Note: you really should use runserver only for development :)
#!/bin/bash
# Replace IP-Address with your current IP address (use ifconfig to find it)
# and Port with the port you want to serve on.
python manage.py runserver IP-Address:Port
Just name it runserver.sh, make it executable (chmod +x runserver.sh), and run it like this in your terminal:
./runserver.sh