I am very new to Fabric. In my fabfile I want to restart Gunicorn. To do that, I kill the Gunicorn process first and then start it again.
It looks like:
def restart_gunicorn():
    run('ps ax|grep gunicorn')
    run('pkill gunicorn')
    run('gunicorn -b 0.0.0.0:8080 %(path)s/application/wsgi &' % env)
When I run this it gives me an error at pkill gunicorn, because at the start there will not be any gunicorn process running. So I want a check like: only kill gunicorn if gunicorn processes are running. If no gunicorn processes are running, I just want to start the gunicorn process.
How can I do this?
Need help. Thank you.
You can just wrap the call in settings(warn_only=True); then pkill will only give you a warning and the execution won't fail:
def restart_gunicorn():
    run('ps ax|grep gunicorn')
    with settings(warn_only=True):
        run('pkill gunicorn')
    run('gunicorn -b 0.0.0.0:8080 %(path)s/application/wsgi &' % env)
More info on the settings context manager here: http://docs.fabfile.org/en/1.10/api/core/context_managers.html#fabric.context_managers.settings
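If you specifically want the conditional behaviour you asked about (only kill when something is actually running), here is a minimal sketch along the same lines for Fabric 1.x; it reuses your start command and assumes pgrep is available on the remote host:

from fabric.api import env, run, settings

def restart_gunicorn():
    # pgrep exits non-zero when no gunicorn process exists;
    # warn_only keeps Fabric from aborting in that case.
    with settings(warn_only=True):
        result = run('pgrep gunicorn')
    if result.succeeded:
        run('pkill gunicorn')
    run('gunicorn -b 0.0.0.0:8080 %(path)s/application/wsgi &' % env)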
I have a Docker container running Supervisor with 2 processes:
Celery
Django
I want Supervisor to exit when one of these processes returns an error.
This is my configuration:
[supervisord]
nodaemon=true
loglevel=debug
logfile=/app/supervisord.log
pidfile=/var/run/supervisord.pid
childlogdir=/app
[program:django]
command=python manage.py runserver 0.0.0.0:8000
redirect_stderr=true
stdout_logfile=/dev/fd/1
stdout_logfile_maxbytes=0
[program:celery]
command=celery -A myapp worker --beat --scheduler django --loglevel=debug
redirect_stderr=true
stdout_logfile=/dev/fd/1
stdout_logfile_maxbytes=0
[eventlistener:processes]
command=bash -c "printf 'SUPERVISORD READY' && while read line; do kill -SIGQUIT $PPID; done < /dev/stdin"
events=PROCESS_STATE_STOPPED,PROCESS_STATE_EXITED,PROCESS_STATE_FATAL
When I have a fatal error that should normally make the Docker container exit, Supervisor tries to launch Django again and again, while the goal is to exit.
What's missing here?
I have tried several other configurations, but it's not working.
As documented under autorestart:
Default: unexpected
If unexpected, the process will be restarted when the program exits
with an exit code that is not one of the exit codes associated with
this process’ configuration (see exitcodes)
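In other words, a crashing runserver exits with a code that is not in that program's exitcodes, so supervisord treats the exit as unexpected and keeps restarting it before your event listener can do anything. One way to make the exit stick (a minimal sketch of the idea, not verified against your exact setup) is to disable automatic restarts for the programs and let your processes listener shut supervisord down:

[program:django]
command=python manage.py runserver 0.0.0.0:8000
; stay down after a crash so the PROCESS_STATE_EXITED event can stop supervisord
autorestart=false
redirect_stderr=true
stdout_logfile=/dev/fd/1
stdout_logfile_maxbytes=0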
I have recently started with Django and began a small project. I've been using Celery with a Redis worker, and every time I want to use Celery and Redis I have to start the Celery and Redis servers and then the Django server, which is a bit of a lengthy process.
I have two questions.
1. Am I doing the right thing by running the servers every time, or is there a better way to handle this process?
2. If I'm on the right track, is there a method to do this?
I tried circus.ini, but it did not work.
If you use a UNIX system:
For this purpose you can get by with just bash. Run celery and redis in the background using &.
redis-server & celery -A app_name worker -l info & python manage.py runserver
The disadvantage of this approach is that redis and celery will keep running in the background even after you shut down the Django dev server, so you need to terminate these processes yourself. See this Unix SE answer for examples of how to do that.
So you can create two bash scripts, start.sh (containing the commands with &) and cleanup.sh (terminating the processes), and run them as needed.
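A minimal sketch of what those two scripts could look like (the file names and pkill patterns are just examples, reusing the app_name from the command above):

start.sh:
#!/bin/bash
# Start redis and celery in the background, then the Django dev server in the foreground.
redis-server &
celery -A app_name worker -l info &
python manage.py runserver

cleanup.sh:
#!/bin/bash
# Terminate the background processes left over from start.sh.
pkill -f "celery -A app_name worker"
pkill redis-server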
For production, see point #2:
Use systemd or supervisor. You need to create conf files for your daemons and then run them.
Building upon Yevhenii M.'s answer, you can start a subshell command with a trap to kill all running processes in that subshell when you hit Ctrl+C:
(trap "kill 0" SIGINT; redis-server & celery -A app_name worker -l info & python manage.py runserver)
or as a more readable multiline command:
(
  trap "kill 0" SIGINT
  redis-server &
  celery -A app_name worker -l info &
  python manage.py runserver
)
Another option is to use a Procfile manager, but that requires installing additional dependencies/programs. Something like foreman or one of its ports in other languages:
forego - Go
node-foreman - Node.js
gaffer - Java/JVM
goreman - Go
honcho - python
proclet - Perl
shoreman - shell
crank - Crystal
houseman - Haskell
(Source: foreman's README)
For this you create a Procfile (file in your project root) where you specify which commands to run:
redis: redis-server
worker: celery -A app_name worker
web: python manage.py runserver
Then run foreman start
Error explanation:
I have a django-celery project and use Supervisor to keep the Celery process running.
After a lot of activity, it produced an error and now I can't start a worker. It says:
stale pidfile exists. Removing it.
But I did not specify a pidfile path when I set up Supervisor.
Question:
Where does Supervisor keep the process's pidfile by default?
Could someone tell me the right way to run the commands so that I can see the tasks and workers in the Django admin site? I try it like this when I develop the project:
python manage.py runserver 0.0.0.0:8090
python manage.py celery events --camera=djcelery.snapshot.Camera
python manage.py celerybeat -l INFO
python manage.py celeryd -n worker_1 -l INFO
But when I run them like this under Supervisor, with nginx + uWSGI, I see nothing in the Django admin site.
I have configured Supervisor on the server like this:
[program:myproject]
command = /home/mydir/myproj/venv/bin/python /home/mydir/myproj/venv/bin/gunicorn manage:app -b <ip_address>:8000
directory = /home/mydir
I have installed gevent in my virtual environment, but I don't know how to use it in the Supervisor command variable. I can run it manually through the terminal like this:
gunicorn manage:app -b <ip_address>:8000 --worker-class gevent
I tried to include a path when calling gevent in the Supervisor command, just like for python and gunicorn, but it's not working. Honestly, I don't know what the correct directory/file to execute gevent is, and I am also not sure if this is the correct way to run a worker class under Supervisor. I am running Ubuntu 14.04.
Anyone? Thanks.
I already made a solution for this, though I am not 100% sure it is correct. After searching a hundred times, I finally came up with a working solution :)
Got this from here: I've created a gunicorn.conf.py file in my project directory containing:
worker_class = 'gevent'
And integrated this file into the Supervisor config:
[program:myproject]
command = /home/mydir/myproj/venv/bin/python /home/mydir/myproj/venv/bin/gunicorn -c /home/mydir/myproj/gunicorn.conf.py manage:app -b <ip_address>:8000
directory = /home/mydir
And started it via supervisorctl:
sudo supervisorctl start <my_project>
And poof! It's already working!
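For what it's worth, an alternative sketch (I have not verified it on your setup): Gunicorn also accepts the worker class directly on the command line via --worker-class (or -k), exactly as you ran it manually, so the Supervisor entry could skip the separate config file:

[program:myproject]
command = /home/mydir/myproj/venv/bin/python /home/mydir/myproj/venv/bin/gunicorn manage:app -b <ip_address>:8000 --worker-class gevent
directory = /home/mydir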
I am in a hurry; I could find out how to do this myself, but I need some help to achieve it without losing too much time.
Currently, what I do to run a uWSGI instance with my ini file is just:
uwsgi --ini /home/myonlinesite/uwsgi.ini --pidfile /var/run/uwsgi_serv.pid
and then to stop:
uwsgi --stop /var/run/uwsgi_serv.pid
By the way, I have this code inside a uWSGI init script at /etc/init.d/uwsgi,
so when I run /etc/init.d/uwsgi start it launches uWSGI with the ini config file, and when I run /etc/init.d/uwsgi stop it stops the uWSGI process.
The problem is that when I start the uWSGI service it runs normally and logs every HTTP request, debug prints and so on, but when I close PuTTY (which is how I connect to my VPS) all uWSGI processes are killed and the site stops being served.
I do not know whether I only need to touch the pid file, or what I need to do so that the uWSGI process keeps running after I close PuTTY.
Thanks in advance.
If you are setting the parameters in the command line, add the flag -d file.log to your command (-d stands for daemonize):
uwsgi --ini /home/myonlinesite/uwsgi.ini --pidfile /var/run/uwsgi_serv.pid -d file.log
If you are setting the parameters in a config file, add the following line in your config:
daemonize = /absolute/path/to/file.log
In both cases, uWSGI will run in the background and log everything to file.log. Given these options, there is no need to use nohup et al.
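For reference, a minimal sketch of how the ini file itself could look with this option; the log path is just an example, the pidfile matches the one you already pass on the command line, and the rest of your existing options stay unchanged:

[uwsgi]
# run in the background and send all output to this log file
daemonize = /var/log/uwsgi/myonlinesite.log
pidfile = /var/run/uwsgi_serv.pid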
Using nohup to start the uWSGI process should solve your problem of the process stopping when you log out.
A tutorial
Be sure to add
daemonize = logfile
to your config