Google Cloud Run does not load .env file - python

I spent the last couple of days trying to find what I have done wrong, but I still cannot figure it out, because I am able to run the app locally with flask run and also in Docker with docker-compose up --build. Source code is here
My issue is that my Cloud Run deployment succeeds, but I get Service Unavailable when I click on the URL. I checked the logs, and it seems my environment variables are not loaded correctly:
line 7, in <module>
    from web_messaging.blueprints.user import user
  File "/web_messaging/web_messaging/blueprints/user/__init__.py", line 1, in <module>
    from web_messaging.blueprints.user.views import user
  File "/web_messaging/web_messaging/blueprints/user/views.py", line 3, in <module>
    from web_messaging.extensions import mongo, login_manager, c, bc
  File "/web_messaging/web_messaging/extensions.py", line 18, in <module>
    twilio_client = Client(TWILIO_SID, TWILIO_TOKEN)
  File "/usr/local/lib/python3.9/site-packages/twilio/rest/__init__.py", line 54, in __init__
    raise TwilioException("Credentials are required to create a TwilioClient")
twilio.base.exceptions.TwilioException: Credentials are required to create a TwilioClient
I have a config/.env file and a config/settings.py. I load the env variables from .env with load_dotenv() in config/settings.py. I decided to add some print and try/except statements in config/settings.py to see the values of the variables.
settings.py
import os
from dotenv import load_dotenv

BASEDIR = os.path.abspath(os.path.dirname(__file__))

try:
    load_dotenv(os.path.join(BASEDIR, '.env'))
    print("OK")
    print(BASEDIR)
except Exception as e:
    print(str(e))

# Mongo Database
MONGO_URI = os.getenv('MONGO_URI')
TWILIO_SID = os.getenv('TWILIO_SID')
TWILIO_TOKEN = os.getenv('TWILIO_TOKEN')

print(MONGO_URI)
print(TWILIO_SID)
When I run with flask run, docker-compose, or on Cloud Run:
The BASEDIR value is /web_messaging/config
There are no exceptions during the load_dotenv() call
However, there is one major difference: the values of env variables such as MONGO_URI and TWILIO_SID. Those variables have the correct values with flask run and docker-compose, but not in the Cloud Run logs. On Cloud Run, those variables are equal to None.
When I don't use a .env file and put the values of my variables directly inside config/settings.py, there are no issues and my Cloud Run link works correctly. I also tried moving .env outside of the config folder and to a few other locations, but I still get the same issue.
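One way to make this fail loudly instead of silently handing None to the Twilio client is to validate the variables right after loading them. A minimal sketch for config/settings.py (same variable names as above; the check itself is my addition):

import os
from dotenv import load_dotenv

BASEDIR = os.path.abspath(os.path.dirname(__file__))
load_dotenv(os.path.join(BASEDIR, '.env'))

# Fail fast at import time if anything required is missing.
REQUIRED = ('MONGO_URI', 'TWILIO_SID', 'TWILIO_TOKEN')
missing = [name for name in REQUIRED if os.getenv(name) is None]
if missing:
    raise RuntimeError('Missing environment variables: ' + ', '.join(missing))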
.
├── requirements.txt
├── Dockerfile
├── Docker-compose.yml
├── config
│ ├── .env
│ ├── settings.py
│ ├── gunicorn.py
│ └── __init__.py
├── web_messaging
│ ├── app.py # where I am calling create_app() - factory pattern
│ ├── blueprints
│ ├── static
│ └── ...
└── ...
Dockerfile
FROM python:3.9-slim
ENV INSTALL_PATH /web_messaging
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD gunicorn -b 0.0.0.0:8080 --access-logfile - "web_messaging.app:create_app()"
docker-compose.yml
version: '2'
services:
  website:
    build: .
    command: >
      gunicorn -b 0.0.0.0:8080
        --access-logfile -
        --reload
        "web_messaging.app:create_app()"
    environment:
      PYTHONUNBUFFERED: 'true'
    volumes:
      - '.:/web_messaging'
    ports:
      - '8080:8080'
config/.env
COMPOSE_PROJECT_NAME=web_messaging
FLASK_SECRET=xxx
MONGO_URI=mongodb+srv://xxx
MONGO_DB=xxx
TWILIO_SID=xxx
TWILIO_TOKEN=xxx
config/settings.py
import os
from dotenv import load_dotenv
BASEDIR = os.path.abspath(os.path.dirname(__file__))
load_dotenv(os.path.join(BASEDIR, '.env'))
DEBUG = True
PYTHONDONTWRITEBYTECODE=1
#SERVER_NAME = '127.0.0.1:5000'
# Mongo Database
MONGO_DBNAME = os.getenv('MONGO_DB')
MONGO_URI = os.getenv('MONGO_URI')
# Twilio API
FLASK_SECRET = os.getenv('FLASK_SECRET')
TWILIO_SID = os.getenv('TWILIO_SID')
TWILIO_TOKEN = os.getenv('TWILIO_TOKEN')
config/gunicorn.py
bind = '0.0.0.0:8080'
accesslog = '-'
access_log_format = '%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s" in %(D)sµs'

Fixed. I found exactly what went wrong, but I do not know why.
It worked when I built my own image and pushed it to the GCP Container Registry myself, following these steps:
docker-compose up --build
docker tag 52e6159b6b13 gcr.io/mousset005/zoro
gcloud auth configure-docker
docker push gcr.io/mousset005/zoro
However, what I was doing before was building my image with the GCP API (which is what they recommend in the Cloud Run Python quickstart) and deploying with this command:
gcloud run deploy --image gcr.io/mousset005/zoro --platform managed
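My best guess at the why: when the image is built remotely (the quickstart builds with Cloud Build), gcloud honors a .gcloudignore file, which it can generate from .gitignore, and .env files are commonly git-ignored, so the file may never have reached that image. Independently of the build path, Cloud Run can also inject the variables itself, so the container does not need a .env file at all. A sketch, with placeholder values (the service name is my guess):

gcloud run deploy zoro \
    --image gcr.io/mousset005/zoro \
    --platform managed \
    --set-env-vars MONGO_URI=xxx,TWILIO_SID=xxx,TWILIO_TOKEN=xxx

Since these arrive as real environment variables, the os.getenv() calls in config/settings.py pick them up even when load_dotenv() finds nothing.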

Related

Docker run error - Failed to find Flask application or factory in module "app". Use "FLASK_APP=app:name to specify one

Trying to dockerize this Flask app... running the following
docker build --tag flask-website .
works; the output says successfully built, successfully tagged.
edit: the next command works
$ docker run --publish 5000:5000 flask-website
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
ok, so then I run curl localhost:5000
which gives this error
curl: (7) Failed to connect to localhost port 5000: Connection refused
ok, straightforward enough, so then I try this
docker-compose up
and this results
Creating network "app_default" with the default driver
Creating app_web_1 ... done
Attaching to app_web_1
web_1 | * Environment: production
web_1 | WARNING: This is a development server. Do not use it in a production deployment.
web_1 | Use a production WSGI server instead.
web_1 | * Debug mode: off
however trying to navigate to localhost:5000 I get
This site can’t be reached. The webpage at http://localhost:5000/
might be temporarily down or it may have moved permanently
to a new web address.
ERR_SOCKET_NOT_CONNECTED
directory structure looks like this
app_folder/
└── app/
    ├── static/
    │   ├── css/
    │   │   └── app.css
    │   └── js/
    │       └── app.js
    ├── templates/
    │   ├── app.html
    │   ├── subapp.html
    │   ├── subapp1.html
    │   ├── subapp2.html
    │   └── subapp3.html
    ├── app.py
    ├── util.py
    ├── pickle_file.pickle
    ├── requirements.txt
    ├── Dockerfile
    ├── Makefile
    └── docker-compose.yml
dockerfile looks like this
FROM python:3.8
ENV PYTHONUNBUFFERED=1
WORKDIR /
COPY requirements.txt requirements.txt
COPY . .
RUN pip install -r requirements.txt
# EXPOSE 5000
CMD [ "python", "-m" , "flask", "run", "--host=0.0.0.0"]
I tried with EXPOSE 5000 commented and uncommented; it made no difference.
I also updated the directory structure and Dockerfile, which got rid of the command-line error I was seeing.
docker-compose looks like this
version: "3.7"
services:
web:
image: flask-website:latest
ports:
- 5000:5000
I tried with the dockerfile, docker-compose, makefile, and requirements outside of the app directory and a slightly modified dockerfile on the WORKDIR line, that resulted in this error
Error: Failed to find Flask application or factory in module "app". Use "FLASK_APP=app:name to specify one.
Not sure what else to try. I can run it locally with python -m flask run, but I cannot seem to dockerize it; it seems like this should not be this difficult.
For completeness' sake, app.py looks like this
from flask import Flask, request, jsonify
from flask import render_template, redirect
import json

import util

app = Flask(__name__, template_folder="templates", static_folder="static")

@app.route("/", methods=["GET", "POST"])
def index():
    return render_template("app.html")

@app.route("/predict_home_price", methods=["GET", "POST"])
def make_prediction():
    x = request.form["x"]
    y = float(request.form["y"])
    response = jsonify(
        {
            "prediction": util.prediction(x, y)
        }
    )
    response.headers.add("Access-Control-Allow-Origin", "*")
    return response

if __name__ == "__main__":
    from waitress import serve
    serve(app, host="0.0.0.0", port=5000)
util.py looks like this
import pickle
import pandas as pd
from scipy.special import inv_boxcox

# to run locally uncomment the following
# with open("/path/to/pickle/app/pickle_mod.pickle", "rb") as f:
# to run in docker use the following
with open("app/pickle_mod.pickle", "rb") as f:
    __model = pickle.load(f)

def prediction(x, y):
    lambs = 0.205
    a = [[x, y]]
    cols = ["x", "y"]
    my_data = pd.DataFrame(data=a, columns=cols)
    pred = inv_boxcox(__model.predict(my_data)[0], lambs)
    return f"${round(pred)}"

if __name__ == "__main__":
    print(prediction(5, 4))
I have also tried both paths for the pickle file in util.py, with the same result; I thought that because I was building it in a Docker container, the second path was the correct one.
I have also tried this block for app.run as well with the same result
if __name__ == "__main__":
    app.run()
I have run a simple Flask app in Docker, and here is the result. In your Docker Compose file, you do not need to add command: python app/app.py, since that line is already covered by the Dockerfile.
The only things you need in your Docker Compose file are the ports and the image name.
OK, the following changes were required to get this image to build and the container to run
dockerfile:
FROM python:3.8
WORKDIR /
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
EXPOSE 5000
ENTRYPOINT ["python", "./app.py"]
then the following change to the main application app.py (binding to 0.0.0.0 matters here: inside a container, a server listening only on 127.0.0.1 is unreachable from the host)
if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0')
then to util.py, which had an error I did not see until running docker run [TAG]
with open("pickle_mod.pickle", "rb") as f:
    __model = pickle.load(f)
then run docker build -t [TAG] .
then run docker-compose up
then navigate to localhost on port 5000, and there is the running container
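If the container is up but the page still does not load, two quick checks from the host can narrow things down (generic Docker commands, not specific to this app):

$ docker ps --format '{{.Names}} {{.Ports}}'   # is port 5000 actually published?
$ curl -v http://localhost:5000/               # can the host reach the container?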

Celery + Flask + Docker, consumer: Cannot connect to amqp://admin:**@rabbit:5672/myhost: failed to resolve broker hostname

Background
I am building a web application that uses Flask for the backend framework. The application uses Celery to handle all the time-consuming work as background tasks, so as not to block the backend thread. I use RabbitMQ as the message broker for the Celery workers. I bundled each service using docker-compose.
Problem
The app had been working well until the past few days, when all of a sudden the Celery workers started failing to connect to the message broker with the error message [ERROR/MainProcess] consumer: Cannot connect to amqp://admin:**@rabbit:5672/myhost: failed to resolve broker hostname.
Directory structure and code
I put together the files and directories for a minimally reproducible example.
debug/
├── code
│   ├── dev.Dockerfile
│   ├── my_app
│   │   ├── celery_app.py
│   │   ├── config.py
│   │   ├── extensions.py
│   │   ├── __init__.py
│   │   ├── my_tasks.py
│   │   └── test_app.py
│   └── requirements.txt
└── docker-compose_dev.yml
docker-compose_dev.yml
version: "3.7"
services:
rabbit:
image: rabbitmq:3.8.5-management
ports:
- '15673:15672' # in case user has rabbitMQ installed on host
expose:
- "5672"
environment:
- RABBITMQ_DEFAULT_USER=admin
- RABBITMQ_DEFAULT_PASS=mypass
- RABBITMQ_DEFAULT_VHOST=myhost
non_working_worker:
build:
context: ./code
dockerfile: dev.Dockerfile
command: "celery worker -A my_app.celery_app:app -l info"
volumes:
- ./code:/code
links:
- rabbit
working_worker:
build:
context: ./code
dockerfile: dev.Dockerfile
command: "celery worker -A my_app.my_tasks:app -l info"
volumes:
- ./code:/code
links:
- rabbit
dev.Dockerfile
FROM continuumio/miniconda3
# Make /backend working directory; flask code lives here
WORKDIR /code
# Install from requirements.txt using pip
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
RUN rm requirements.txt
requirements.txt
luigi==2.8.11
plotnine==0.7.0
celery==4.4.6
flask==1.1.2
flask-cors
flask-socketio
Flask-Mail
eventlet
test_app.py
import eventlet
eventlet.monkey_patch()

from flask import Flask
from my_app.extensions import celery

def create_app():
    """
    Application factory. Create application here.
    """
    app = Flask(__name__)
    app.config.from_object("my_app.config")
    return app

def init_celery(app=None):
    """
    Initialize Celery App
    """
    app = app or create_app()
    app.config.from_object("my_app.config")
    # Set celery worker configuration
    # Use this to load config information from flask config file
    celery.conf.broker_url = app.config["CELERY_BROKER_URL"]
    celery.conf.result_backend = app.config["CELERY_RESULT_BACKEND"]

    class ContextTask(celery.Task):
        """Make celery tasks work with Flask app context"""
        def __call__(self, *args, **kwargs):
            with app.app_context():
                return self.run(*args, **kwargs)

    celery.Task = ContextTask
    return celery
config.py
# RabbitMQ
CELERY_BROKER_URL = 'pyamqp://admin:mypass@rabbit/myhost'
CELERY_RESULT_BACKEND = 'rpc://'
extensions.py
from celery import Celery
celery = Celery()
celery_app.py
from my_app.test_app import init_celery
app = init_celery()
my_tasks.py
from celery import Celery

app = Celery()
app.conf.broker_url = 'pyamqp://admin:mypass@rabbit/myhost'
app.conf.result_backend = 'rpc://'
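Since the error is literally "failed to resolve broker hostname", one sanity check is to test name resolution from inside a worker container (a debugging sketch; rabbit is the compose service name):

$ docker-compose run --rm non_working_worker \
      python -c "import socket; print(socket.gethostbyname('rabbit'))"

If that prints an IP address, Docker's DNS and the service alias are fine, and the failure must come from inside the worker process itself.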
What I've tried
The following are the things I've tried that didn't work.
RabbitMQ isn't launching properly?
a. It launches properly with the given username, password, and vhost (I can check using the management plugin at localhost:15673).
RabbitMQ launches after the Celery workers start, so the workers can't find the broker?
a. Celery has a retry feature, so it will keep on retrying until the message broker is up and running.
Network issue?
a. I've tried with/without links to specify a service name alias, but it still didn't work.
b. Note I've already specified the broker host as rabbit (as shown in config.py) instead of localhost.
c. I've tried using both the default network docker-compose creates and a custom network, but both failed.
Interestingly, the Celery app instance in my_tasks.py works (it's named working_worker in the docker-compose file), but the Celery app instance from the Flask factory pattern does not (it's named non_working_worker in the compose file).
a. Again, this shows that RabbitMQ is working fine, but something funky is going on with the factory-pattern-style Celery app instantiation.
I spent the past few days trying to fix this issue and searching for similar problems on the internet, but had no luck doing so.
I know it's a fairly long post, but any help/suggestions would greatly be appreciated.
docker-compose version
docker-compose version 1.25.3, build d4d1b42b
docker-py version: 4.1.0
CPython version: 3.7.5
OpenSSL version: OpenSSL 1.1.0l 10 Sep 2019
docker version
Client: Docker Engine - Community
Version: 19.03.12
API version: 1.40
Go version: go1.13.10
Git commit: 48a66213fe
Built: Mon Jun 22 15:45:36 2020
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.12
API version: 1.40 (minimum version 1.12)
Go version: go1.13.10
Git commit: 48a66213fe
Built: Mon Jun 22 15:44:07 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
I had a similar issue that I was able to resolve by pinning the version of dnspython, one of eventlet's dependencies, to 1.16.0 in my requirements.txt, above eventlet. It looks like eventlet is not compatible with the latest version of dnspython; more info here: https://github.com/eventlet/eventlet/issues/619
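For reference, the relevant part of requirements.txt would then look something like this (a sketch of the pin described above):

celery==4.4.6
flask==1.1.2
dnspython==1.16.0   # pinned: newer dnspython breaks eventlet's DNS resolution
eventlet

This would also explain why only the factory-pattern worker failed: celery_app.py imports test_app.py, which calls eventlet.monkey_patch() at import time, while my_tasks.py never touches eventlet.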

ImportError - attempted relative import with no known parent package

It looks like a known problem, and I am not the only one to encounter this issue. But none of the StackOverflow topics I've read helped me.
So here is the tree of my folder:
.
├── Dockerfile
├── app
│   ├── __init__.py
│   ├── app.py
│   ├── config.py
│   ├── controllers
│   │   └── home.py
│   ├── models.py
│   └── views
│       └── home.py
├── database.conf
├── docker-compose.yml
├── jarvis.conf
└── requirements.txt
As you can see, I've tried to dockerize my app.
Let's have a look at my Dockerfile and docker-compose.yml.
Dockerfile:
FROM python:3.6.8-alpine
LABEL maintainer="Jordane * <*>"
LABEL version="1.0.0"
RUN apk add build-base postgresql-dev
RUN pip install --upgrade pip
COPY requirements.txt /
RUN pip install -r requirements.txt
COPY app/ /app
WORKDIR /app
CMD ["gunicorn", "-w 1", "app:app", "-b", "0.0.0.0:3000"]
docker-compose.yml:
version: '3.5'
services:
  db:
    container_name: postgres
    image: postgres:11.2-alpine
    env_file: database.conf
    ports:
      - 5432:5432
    volumes:
      - dbdata:/var/lib/postgresql/data
  web:
    build: .
    container_name: flask
    restart: always
    env_file:
      - jarvis.conf
      - database.conf
    environment:
      - PYTHONDONTWRITEBYTECODE=1
    ports:
      - 6876:3000
    volumes:
      - ./app/:/app
    depends_on:
      - db
volumes:
  dbdata:
Here is the beginning of my trouble, I think.
I wrote this __init__.py:
from flask import Flask
import flask_sqlalchemy
from .models import db
from . import config
def create_app():
    flask_app = Flask(__name__)
    flask_app.config['SQLALCHEMY_DATABASE_URI'] = config.DB_CONN_URI
    flask_app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
    flask_app.app_context().push()
    db.init_app(flask_app)
    db.create_all()
    return flask_app
and as you saw above in my Dockerfile, I run the app with gunicorn.
app.py:
""" Jarvis slackBot v1.0 (api) """
__author__ = "titus"
from flask import request, jsonify
from . import create_app
from .models import User, db
from views.home import home_bp
from loguru import logger
app = create_app()
# logger.add("app.log", rotation="500 MB")
app.register_blueprint(home_bp, url_prefix='/home')
And here is the error:
flask | from . import create_app
flask | ImportError: attempted relative import with no known parent package
I've followed this tutorial to help me: https://medium.com/@hmajid2301/implementing-sqlalchemy-with-docker-cb223a8296de
So it's supposed to work...
If I replace:
from . import create_app with from __init__ import create_app
from .models import User, db with from models import User, db
from .models import db with from models import db
from . import config with import config
it works better, but I really feel like I am doing something wrong.
This error is due to the current version of gunicorn (19.9.0) using __import__(your_app) to load your app which apparently doesn't import parent packages. This means that your __init__.py is never called.
(See https://github.com/benoitc/gunicorn/blob/19.x/gunicorn/util.py#L350)
This seems to be fixed in the current repo version of gunicorn, which I think will be released with 20.0.0.
(See https://github.com/benoitc/gunicorn/blob/master/gunicorn/util.py#L331)
The easiest workaround is to use:
CMD ["gunicorn", "-w 1", "app", "-b", "0.0.0.0:3000"]
and adding this to the bottom of your __init__.py:
from .app import app
Or even better, putting create_app in a separate file, and having only imports in your __init__.py. Just make sure create_app is imported before app.
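A sketch of that last suggestion, reusing the create_app from the question (the file name factory.py is my choice, not part of the original layout):

# app/factory.py
from flask import Flask
from .models import db
from . import config

def create_app():
    flask_app = Flask(__name__)
    flask_app.config['SQLALCHEMY_DATABASE_URI'] = config.DB_CONN_URI
    flask_app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
    flask_app.app_context().push()
    db.init_app(flask_app)
    db.create_all()
    return flask_app

# app/__init__.py -- imports only; create_app before app
from .factory import create_app
from .app import app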
I'm relatively new to python, but ran into this issue today with cloudscraper.
The code originally was:
from . import __version__ as cloudscraper_version
I installed cloudscraper using pip3, so it installed directly to C:\Python39\Lib\site-packages\cloudscraper
I received the same error message when trying to run one of the py files.
I couldn't really find a fix that wouldn't be more headache than it was worth (moving or renaming files, etc.), as I just wanted to run the help.py in cloudscraper. I have a ton of modules and packages and finally got them all interacting with my interpreter the way I want, so I didn't want to move anything. I fixed it by doing this:
from cloudscraper import __version__ as cloudscraper_version
*Note: if you used 'run as administrator' to install the package via cmd but use the files through a user profile on your PC, you'll need to change permissions to give your PC user profile access to the particular .py file you want to edit. (Right click file > Properties > Security > 'edit permissions' > Full Control)
Just wanted to share in case this could help someone else that might run into this issue.

Non existing path when setting up Flask to have separated configurations for each environment

I have separate configs for each environment and one single app; the directory tree looks like:
myapp
├── __init__.py # empty
├── config
│   ├── __init__.py # empty
│   ├── development.py
│   ├── default.py
│   └── production.py
├── instance
│   └── config.py
└── myapp
    ├── __init__.py
    └── myapp.py
Code
The relevant code, myapp/__init__.py:
from flask import Flask
app = Flask(__name__, instance_relative_config=True)
app.config.from_object('config.default')
app.config.from_pyfile('config.py')
app.config.from_envvar('APP_CONFIG_FILE')
myapp/myapp.py:
from myapp import app
# ...
Commands
Then I set the variables:
$ export FLASK_APP=myapp.py
And try to run the development server from the project root:
$ flask run
Usage: flask run [OPTIONS]
Error: The file/path provided (myapp.py) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py
And from the project myapp folder:
$ cd myapp
$ flask run
Usage: flask run [OPTIONS]
Error: The file/path provided (myapp.myapp.myapp) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py
With another FLASK_APP variable:
$ export FLASK_APP=myapp/myapp.py
# in project root
$ flask run
Usage: flask run [OPTIONS]
Error: The file/path provided (myapp.myapp.myapp) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py
# moving to project/myapp
$ cd myapp
$ flask run
Usage: flask run [OPTIONS]
Error: The file/path provided (myapp/myapp.py) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py
Other tests, without success
$ python -c 'import myapp; print(myapp)'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/user/myapp/myapp/__init__.py", line 6, in <module>
    app.config.from_envvar('APP_CONFIG_FILE')
  File "/home/user/.virtualenvs/myapp/lib/python3.5/site-packages/flask/config.py", line 108, in from_envvar
    variable_name)
RuntimeError: The environment variable 'APP_CONFIG_FILE' is not set and as such configuration could not be loaded. Set this variable and make it point to a configuration file
$ export APP_CONFIG_FILE="/home/user/myapp/config/development.py"
$ python -c 'import myapp; print(myapp)'
<module 'myapp' from '/home/user/myapp/myapp/__init__.py'>
$ flask run
Usage: flask run [OPTIONS]
Error: The file/path provided (myapp.myapp) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py
Notes:
I am not using the PYTHONPATH variable; it is empty
I have already seen other related questions (Flask: How to manage different environment databases?), but my problem is with the (relatively new) flask command
Using Python 3.5.2+
It took me a while but I finally found it:
Flask doesn't like projects with an __init__.py at root level; delete myapp/__init__.py. This is the one located at the root folder:
myapp
├── __init__.py <--- DELETE
...
└── myapp
    ├── __init__.py <--- keep
    └── myapp.py
Use $ export FLASK_APP=myapp/myapp.py
The environment variable specifying the configuration should be the absolute path to it: export APP_CONFIG_FILE="/home/user/myapp/config/development.py"
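Putting the three steps together (paths exactly as in the question):

$ rm myapp/__init__.py                 # the root-level one only
$ export FLASK_APP=myapp/myapp.py
$ export APP_CONFIG_FILE="/home/user/myapp/config/development.py"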
Now everything works \o/
$ flask run
* Serving Flask app "myapp.myapp"
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
$ flask shell
Python 3.5.2+ (default, Sep 22 2016, 12:18:14)
[GCC 6.2.0 20160927] on linux
App: myapp
Instance: /home/user/myapp/instance
>>>

Running supervisord from the host, celery from a virtualenv (Django app)

I'm trying to use Celery and a Redis queue to perform a task for my Django app. Supervisord is installed on the host via apt-get, whereas Celery resides in a specific virtualenv on my system, installed via pip.
As a result, I can't seem to get the celery command to run via supervisord. If I run it from inside the virtualenv, it works fine; outside of it, it doesn't. How do I get it to run under my current setup? Is the solution simply to install Celery via apt-get instead of inside the virtualenv? Please advise.
My celery.conf inside /etc/supervisor/conf.d is:
[program:celery]
command=/home/mhb11/.virtualenvs/myenv/local/lib/python2.7/site-packages/celery/bin/celery -A /etc/supervisor/conf.d/celery.conf -l info
directory = /home/mhb11/somefolder/myproject
environment=PATH="/home/mhb11/.virtualenvs/myenv/bin",VIRTUAL_ENV="/home/mhb11/.virtualenvs/myenv",PYTHONPATH="/home/mhb11/.virtualenvs/myenv/lib/python2.7:/home/mhb11/.virtualenvs/myenv/lib/python2.7/site-packages"
user=mhb11
numprocs=1
stdout_logfile = /etc/supervisor/logs/celery-worker.log
stderr_logfile = /etc/supervisor/logs/celery-worker.log
autostart = true
autorestart = true
startsecs=10
stopwaitsecs = 600
killasgroup = true
priority = 998
And the folder structure for my Django project is:
/home/mhb11/somefolder/myproject
├── myproject
│ ├── celery.py # The Celery app file
│ ├── __init__.py # The project module file (modified)
│ ├── settings.py # Including Celery settings
│ ├── urls.py
│ └── wsgi.py
├── manage.py
├── celerybeat-schedule
└── myapp
├── __init__.py
├── models.py
├── tasks.py # File containing tasks for this app
├── tests.py
└── views.py
If I do a status check via supervisorctl, I get a FATAL error on the command I'm trying to run in celery.conf. Help!
P.S. Note that user mhb11 does not have root privileges, in case it matters. Moreover, /etc/supervisor/logs/celery-worker.log is empty. And inside supervisord.log, the relevant error I see is INFO spawnerr: can't find command '/home/mhb11/.virtualenvs/redditpk/local/lib/python2.7/site-packages/celery/bin/celery'.
The path to the celery binary is myenv/bin/celery, whereas you are using myenv/local/lib/python2.7/site-packages/celery/bin/celery.
So if you run the command you are passing to supervisor (command=xxx) in your terminal, you should get the same error.
You need to replace your command=xxx in your celery.conf with
command=/home/mhb11/.virtualenvs/myenv/bin/celery -A myproject.celery -l info
Note that I have also replaced the -A parameter with the celery app, instead of the supervisor configuration file. This celery app is resolved relative to your project directory, set in celery.conf with
directory = /home/mhb11/somefolder/myproject
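For clarity, the corrected celery.conf would then start like this (the remaining directives can stay as they were):

[program:celery]
command=/home/mhb11/.virtualenvs/myenv/bin/celery -A myproject.celery -l info
directory=/home/mhb11/somefolder/myproject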
On a side note, if you are using Celery with Django (via the django-celery integration), you can manage Celery with Django's manage.py; there is no need to invoke celery directly. Like:
python manage.py celery worker
python manage.py celery beat
For details, please read the intro to Django Celery here.
