I'm deploying a Django application and it works fine when I run it manually. I'm trying to use supervisor, but when I run sudo supervisorctl status botApp the log file says:
Starting botApp as ubuntu
/home/ubuntu/gunicorn_start.bash: line 28: exec: gunicorn: not found
My gunicorn_start.bash is the following one:
#!/bin/bash
NAME="botApp" # Name of the application
DJANGODIR=/home/ubuntu/chimpy # Django project directory
SOCKFILE=/home/ubuntu/django_env/run/gunicorn.sock # we will communicate using this unix socket
USER=ubuntu # the user to run as
GROUP=ubuntu # the group to run as
NUM_WORKERS=3 # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=botApp.settings # which settings file should Django use
DJANGO_WSGI_MODULE=botApp.wsgi # WSGI module name
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
source /home/ubuntu/django_env/bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--bind=unix:$SOCKFILE \
--log-level=debug \
--log-file=-
And my configuration file in /etc/supervisor/conf.d/botApp.conf is:
[program:botApp]
command = /home/ubuntu/gunicorn_start.bash;
user = ubuntu;
stdout_logfile = /home/ubuntu/logs/gunicorn_supervisor.log;
redirect_stderr = true;
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8;
Is something wrong in my gunicorn bash script? Many thanks.
To check whether the virtual environment is active or not, just write a simple script:
#!/bin/bash
source /var/www/html/project_env/bin/activate
Then run the command:
sudo bash file_name.sh
If you don't get any error, that means the venv is activated.
Note: you will not see any venv activation marker in the terminal or shell prompt.
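Once the venv activates cleanly, a minimal follow-up sketch (assuming the paths from the question above) is to confirm that gunicorn actually resolves inside that environment:
#!/bin/bash
# Sketch: activate the env from the question, then check whether gunicorn resolves inside it.
source /home/ubuntu/django_env/bin/activate
command -v gunicorn   # expected: /home/ubuntu/django_env/bin/gunicorn
pip show gunicorn     # lists the package if it is installed in this environment
If command -v prints nothing, gunicorn simply isn't installed in that virtualenv, which would explain the "exec: gunicorn: not found" message from supervisor.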
I want to start a Django application in gunicorn at reboot.
All commands below are run as user simernes
I have installed gunicorn with pip3:
pip3 install gunicorn
crontab:
crontab -e
@reboot /home/simernes/run_gunicorn.sh > /home/simernes/logfile 2>&1 &
run_gunicorn.sh
#!/bin/bash
source /home/simernes/.bashrc
cd /home/simernes/djangoapp
gunicorn --bind localhost:8000 config.wsgi
However, when I go and reboot and check the log file it says:
line 4: gunicorn: command not found
Running the script on its own from an SSH-logged-in terminal works fine.
Do I need to source the Python environment for cron to be able to see the apps installed through pip, or something of the like?
cron runs your script in a shell with minimal environment variables and path, usually the following:
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <PATH=/usr/bin:/bin>
X-Cron-Env: <LOGNAME=username>
X-Cron-Env: <USER=username>
X-Cron-Env: <HOME=/Users/username>
Which means gunicorn, or anything else not in /usr/bin:/bin, won't be available to your script.
What you can do is export the path to gunicorn as an environment variable by adding something like this to your crontab:
@reboot export GUNICORN=/path/to/gunicorn && /home/simernes/run_gunicorn.sh > /home/simernes/logfile 2>&1 &
And in your script you execute gunicorn like this:
#!/bin/bash
source /home/simernes/.bashrc
cd /home/simernes/djangoapp
$GUNICORN --bind localhost:8000 config.wsgi
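A related sketch (untested here) is to put gunicorn's directory on PATH at the top of the crontab instead; the /home/simernes/.local/bin location is only an assumption, so check it first with which gunicorn:
# Assumed location of a pip3 --user install of gunicorn; verify with: which gunicorn
PATH=/home/simernes/.local/bin:/usr/local/bin:/usr/bin:/bin
@reboot /home/simernes/run_gunicorn.sh > /home/simernes/logfile 2>&1 &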
Maybe give the full path to gunicorn in the script.
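A minimal sketch of that idea, assuming gunicorn ended up in the user's ~/.local/bin (verify the real location with which gunicorn or pip3 show -f gunicorn):
#!/bin/bash
# Hypothetical full-path version of run_gunicorn.sh; the gunicorn path below is an assumption.
cd /home/simernes/djangoapp
/home/simernes/.local/bin/gunicorn --bind localhost:8000 config.wsgi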
I am not able to access the server when starting gunicorn via a .bash file I made. It works when I do it manually with this command:
$ gunicorn project.wsgi:application --bind 192.168.1.130:8000
I created a gunicorn.bash file from tutorials. It looks like this and runs without fault:
#!/bin/bash
NAME="project" # Name of the application
DJANGODIR=/home/username/projects/project # Django project directory
SOCKFILE=/home/username/.venvs/project/run/gunicorn.sock # We will communicate using this unix socket
USER=username # the user to run as
GROUP=username # the group to run as
NUM_WORKERS=1 # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=project.settings.production # which settings file should Django use
DJANGO_WSGI_MODULE=project.wsgi # WSGI module name
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
source /home/username/.venvs/project/bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use daemon)
exec gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--bind=unix:$SOCKFILE \
--log-level=debug \
--log-file=-
I don't know how to troubleshoot this. Is there some command to see how the running settings differ between starting gunicorn manually and starting it from the .bash file?
$ gunicorn project.wsgi:application --bind 192.168.1.130:8000
Above you use --bind with host:port but below:
exec gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--bind=unix:$SOCKFILE \
--log-level=debug \
--log-file=-
you specify unix:file, which makes gunicorn listen on a unix socket file instead of on your network interface:port. Just replace unix:$SOCKFILE with 192.168.1.130:8000 and it should be accessible as expected.
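In other words, the tail of the script would become (same variables as above, only the bind target changed):
exec gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--bind=192.168.1.130:8000 \
--log-level=debug \
--log-file=-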
Additionally, you can try to connect to the current config with curl (curl --unix-socket /path/to/socket http://localhost/some/resource) or another tool of your choice to verify that it actually runs.
I have 2 files that depend on each other when Docker starts up. One is a Flask file and one is a file with a few functions. When Docker starts, only the functions file is executed, but it imports Flask variables from the Flask file. Example:
Flaskfile
import flask
from flask import Flask, request
import json
_flask = Flask(__name__)
@_flask.route('/', methods = ['POST'])
def flask_main():
s = str(request.form['abc'])
ind = global_fn_main(param1,param2,param3)
return ind
def run(fn_main):
global global_fn_main
global_fn_main = fn_main
_flask.run(debug = False, port = 8080, host = '0.0.0.0', threaded = True)
Main File
import flaskfile
#a few functions then
if __name__ == '__main__':
flaskfile.run(main_fn)
The script runs fine without needing gunicorn.
Dockerfile
FROM python-flask
ADD *.py *.pyc /code/
ADD requirements.txt /code/
WORKDIR /code
EXPOSE 8080
CMD ["python","main_file.py"]
On the command line I usually do docker run -it -p 8080:8080 my_image_name and then Docker will start and listen.
Now to use gunicorn:
I tried to modify my CMD parameter in the Dockerfile to
["gunicorn", "-w", "20", "-b", "127.0.0.1:8083", "main_file:flaskfile"]
but it just keeps exiting. Am I not writing the Docker gunicorn command right?
I just went through this problem this week and stumbled on your question along the way. Fair to say you either resolved this or changed approaches by now, but for future's sake:
The command in my Dockerfile is:
CMD ["gunicorn" , "-b", "0.0.0.0:8000", "app:app"]
Here the first "app" is the module and the second "app" is the name of the WSGI callable. In your case it should be _flask from your code, although you've got some other stuff going on that makes me less certain.
Gunicorn takes the place of all the run statements in your code; if Flask's development web server and Gunicorn try to take the same port they can conflict and crash Gunicorn.
Note that when run by Gunicorn, __name__ is not "__main__". In my example it is equal to "app".
At my admittedly junior level with Python, Docker, and Gunicorn, the fastest way to debug is to comment out the CMD in the Dockerfile and get the container up and running:
docker run -it -d -p 8080:8080 my_image_name
Hop onto the running container:
docker exec -it container_name /bin/bash
And start Gunicorn from the command line until you've got it working, then test with curl. I keep a basic route in my app.py file that just returns "Hi" and has no dependencies, for validating the server is up before worrying about the port binding to the host machine.
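For example, from inside the container (a rough sketch; app:app and port 8080 are assumptions matching the docker run mapping above, so adjust to your own module and WSGI callable):
# Start gunicorn by hand on the port published by docker run above
gunicorn -b 0.0.0.0:8080 app:app &
# Hit the basic route to confirm the server is up before worrying about host-side bindings
curl http://localhost:8080/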
After struggling with this issue over the last 3 days, I found that all you need to do is to bind to the non-routable meta-address 0.0.0.0 rather than the loopback IP 127.0.0.1:
CMD ["gunicorn" , "--bind", "0.0.0.0:8000", "app:app"]
And don't forget to expose the port; one option to do that is to use EXPOSE in your Dockerfile:
EXPOSE 8000
Now:
docker build -t test .
Finally you can run:
docker run -d -p 8000:8000 test
This is the last part of my Dockerfile for a Django app:
EXPOSE 8002
COPY entrypoint.sh /code/
WORKDIR /code
ENTRYPOINT ["sh", "entrypoint.sh"]
Then in entrypoint.sh:
#!/bin/bash
# Prepare log files and start outputting logs to stdout
mkdir -p /code/logs
touch /code/logs/gunicorn.log
touch /code/logs/gunicorn-access.log
tail -n 0 -f /code/logs/gunicorn*.log &
export DJANGO_SETTINGS_MODULE=django_docker_azure.settings
exec gunicorn django_docker_azure.wsgi:application \
--name django_docker_azure \
--bind 0.0.0.0:8002 \
--workers 5 \
--log-level=info \
--log-file=/code/logs/gunicorn.log \
--access-logfile=/code/logs/gunicorn-access.log \
"$#"
Hope this could be useful
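For completeness, a hedged sketch of building and running an image like this (the image name is a placeholder; port 8002 matches the EXPOSE and --bind lines above):
docker build -t django_docker_azure .
docker run -d -p 8002:8002 django_docker_azure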
This works for me:
FROM docker.io/python:3.7
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
ENV GUNICORN_CMD_ARGS="--bind=0.0.0.0 --chdir=./src/"
COPY . .
EXPOSE 8000
CMD [ "gunicorn", "app:app" ]
I was trying to run a Flask app as well. I found out that you can just use:
ENTRYPOINT ["gunicorn", "-b", ":8080", "app:APP"]
This will take the file you have specified and run it on the Docker instance. Also, don't forget the shebang at the top, #!/usr/bin/env python, if you are running the debug log level.
gunicorn main:app --workers 4 --bind :3000 --access-logfile '-'
For some reason, supervisor refuses to start the command as user - it always runs it as root - and this is an issue for me since I am activating a virtualenv and running commands specific to that particular virtualenv.
So, my conf looks like so:
[program:site]
command = /home/some/virtual/env/dir/run/start.sh
user = some
stdout_logfile = /home/some/etc/supervisor/logs/logging.log
redirect_stderr = true
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8
stopsignal=KILL
killasgroup=true
autostart=true
start.sh looks like so:
#!/bin/bash
echo $USER >> /home/some/user.txt
cd
source /home/foo/some/virtual/env/bin/activate
cd /home/foo/some/virtual/env
SOCKFILE01=/home/some/etc/supervisor/site.sock
exec /home/some/virtual/env/bin/gunicorn -b unix:$SOCKFILE01 site.wsgi:application -w 2 -k gevent --worker-connections=2000
exit 0
when I inspect the log, I see:
start.sh: line 2: cd: /root: Permission denied
which means this is still running as root.
I am totally baffled by this. I start supervisor as root. The even weirder part is that the above code works totally fine on my local machine, but shows me the above log on a server.
I have run out of ideas... :((
EDIT:
I added the echo to the .sh script and user.txt spits out:
root
..totally puzzled!
You need to set the environment variables as below and update the command:
[program:site]
command=bash -c "/home/some/virtual/env/dir/run/start.sh"
user=some
stdout_logfile=/home/some/etc/supervisor/logs/logging.log
redirect_stderr=true
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8,HOME="/home/some",USER="some"
stopsignal=KILL
killasgroup=true
autostart=true
This is described in http://supervisord.org/subprocess.html#subprocess-environment and solved this issue for me when trying to run npm scripts.
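After changing the conf, a quick sketch of applying it and re-checking which user the script sees (paths as in the question; the supervisorctl subcommands are standard):
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl restart site
cat /home/some/user.txt   # the newest line should now read "some" instead of "root"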
I am trying to run Celery inside a Docker container and it's never updating for some reason. Whenever I add a new function in tasks.py or update an existing function, it never registers with Celery even after I restart the container.
Here is my Dockerfile:
# start with a base image
FROM python:3.4-slim
ENV REDIS_IP 1.1.1.111
ENV REDIS_PORT 6379
ENV REDIS_DB 0
# install dependencies
RUN apt-get update && apt-get install -y \
apt-utils \
nginx \
supervisor \
python3-pip \
&& rm -rf /var/lib/apt/lists/*
RUN echo "America/New_York" > /etc/timezone; dpkg-reconfigure -f noninteractive tzdata
# update working directories
ADD ./app /app
ADD ./config /config
ADD requirements.txt /
# install dependencies
RUN pip install --upgrade pip
RUN pip3 install -r requirements.txt
# setup config
RUN echo "\ndaemon off;" >> /etc/nginx/nginx.conf
RUN rm /etc/nginx/sites-enabled/default
RUN ln -s /config/nginx.conf /etc/nginx/sites-enabled/
RUN ln -s /config/supervisor.conf /etc/supervisor/conf.d/
EXPOSE 80
CMD ["supervisord", "-n"]
Then my supervisor.conf:
[program:app]
command = uwsgi --ini /config/app.ini
autostart=true
autorestart=true
[program:nginx]
command = service nginx restart
autostart=true
autorestart=true
[program:celery]
directory = /app
command = celery -A tasks.celery worker -P eventlet -c 1000
autostart=true
autorestart=true
My tasks.py:
import os
from celery import Celery
from app import app as flask_app
def make_celery(app):
celery = Celery(app.import_name, backend='redis://{0}:{1}/{2}'.format(os.environ['REDIS_IP'],os.environ['REDIS_PORT'],os.environ['REDIS_DB']),
broker='redis://{0}:{1}/{2}'.format(os.environ['REDIS_IP'],os.environ['REDIS_PORT'],os.environ['REDIS_DB']))
celery.conf.update(
CELERY_ENABLE_UTC=True,
CELERY_TIMEZONE='America/New_York'
)
TaskBase = celery.Task
class ContextTask(TaskBase):
abstract = True
def __call__(self, *args, **kwargs):
with app.app_context():
return TaskBase.__call__(self, *args, **kwargs)
celery.Task = ContextTask
return celery
celery = make_celery(flask_app)
@celery.task()
def add_together(a, b):
return a+ b
@celery.task()
def multiply(a,b)
return a*b
And for some reason:
I have 21 workers registered and multiply never gets registered,
and when I make changes to add_together, those changes never register either, even when I restart the container.
I am starting my container with:
docker build --rm -t myapp .
docker run -d -p 88:80 -v $(pwd)/app:/app --name=myapp myapp
and restart with:
docker restart myapp
I have also tried
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
and then rebuilding the app all over again. Nothing helps. Any ideas would be very much so appreciated.
I think this may be the problem:
@celery.task() # <-- you shouldn't call this decorator directly
def add_together(a, b):
return a+ b
Try changing it to this:
@celery.task
def add_together(a, b):
return a+ b
The reason: just check the source code of the task decorator:
def task(self, *args, **opts):
"""Creates new task class from any callable."""
# ... handling named options
if len(args) == 1:
if callable(args[0]):
return inner_create_task_cls(**opts)(*args)
raise TypeError('argument 1 to @task() must be a callable')
if args:
raise TypeError(
'@task() takes exactly 1 argument ({0} given)'.format(
sum([len(args), len(opts)])))
return inner_create_task_cls(**opts)
The only unnamed argument it accepts is the function to be decorated. Otherwise it raises a TypeError, which gets swallowed by supervisord since you didn't configure the log level to debug.
I could not reproduce your problem on my setup. I created a simple Flask app as in the Celery documentation.
Can you try a few commands to double check your setup?
Open a shell into myapp container (it must be already running):
docker exec -t -i myapp /bin/bash
And then:
cd /app
celery -A tasks.celery status
celery -A tasks.celery inspect registered
Does the new task show up?
I think you may have other celery instances connected to the same redis server; that's why you have 21 instances. But I'm guessing.
You can also try with an independent redis container.
docker run --name myredis -d redis
And execute celery in debug mode, with:
docker run --rm -t -i -v $(pwd)/app:/app -e REDIS_IP=myredis -u nobody -w /app --link myredis myapp celery -A tasks.celery worker -P eventlet -c 1000 -l debug
Is the task there now? It should be listed just below the Celery startup banner message.
I don't think you have a problem with your image, but you can double-check that by looking at:
docker exec myapp /bin/bash -c "cat /app/tasks.py"
I don't think this is the problem, because you copy /app into the image and, when you run the container, you map /app again using the local directory. Are you running the container from the same directory you built it in?
The -v $(pwd)/app:/app will override /app in the container with the current directory's ./app. Do you really need this? Without the -v part, do you have the same results?
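To check that quickly, a sketch of rebuilding and running without the bind mount (same image and container names as above):
# Remove the old container first if the name is already taken: docker rm -f myapp
docker build --rm -t myapp .
docker run -d -p 88:80 --name=myapp myapp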
I hope it helps to figure out what's wrong.
I made a working repo of your code. It lives here.
Things I changed:
Colon typo (look at your multiply def)
Not calling decorators
General code cleanup
Using a single redis URI
Some directory navigation in my test
I did not focus on the supervisord parts.