This seems to be a known problem and I am not the only one to encounter this issue. But none of the StackOverflow topics I've read have helped me.
So here is the tree of my folder:
.
├── Dockerfile
├── app
│ ├── __init__.py
│ ├── app.py
│ ├── config.py
│ ├── controllers
│ │ └── home.py
│ ├── models.py
│ └── views
│ └── home.py
├── database.conf
├── docker-compose.yml
├── jarvis.conf
└── requirements.txt
As you can see, I've tried to dockerize my app.
Let's have a look at my Dockerfile and docker-compose.yml.
Dockerfile:
FROM python:3.6.8-alpine
LABEL maintainer="Jordane * <*>"
LABEL version="1.0.0"
RUN apk add build-base postgresql-dev
RUN pip install --upgrade pip
COPY requirements.txt /
RUN pip install -r requirements.txt
COPY app/ /app
WORKDIR /app
CMD ["gunicorn", "-w 1", "app:app", "-b", "0.0.0.0:3000"]
docker-compose.yml:
version: '3.5'

services:
  db:
    container_name: postgres
    image: postgres:11.2-alpine
    env_file: database.conf
    ports:
      - 5432:5432
    volumes:
      - dbdata:/var/lib/postgresql/data

  web:
    build: .
    container_name: flask
    restart: always
    env_file:
      - jarvis.conf
      - database.conf
    environment:
      - PYTHONDONTWRITEBYTECODE=1
    ports:
      - 6876:3000
    volumes:
      - ./app/:/app
    depends_on:
      - db

volumes:
  dbdata:
Here is where my trouble begins, I think.
I've written this __init__.py:
from flask import Flask
import flask_sqlalchemy
from .models import db
from . import config
def create_app():
    flask_app = Flask(__name__)
    flask_app.config['SQLALCHEMY_DATABASE_URI'] = config.DB_CONN_URI
    flask_app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
    flask_app.app_context().push()
    db.init_app(flask_app)
    db.create_all()
    return flask_app
and, as you saw above in my Dockerfile, I am running the app with gunicorn.
app.py:
""" Jarvis slackBot v1.0 (api) """
__author__ = "titus"
from flask import request, jsonify
from . import create_app
from .models import User, db
from views.home import home_bp
from loguru import logger
app = create_app()
# logger.add("app.log", rotation="500 MB")
app.register_blueprint(home_bp, url_prefix='/home')
And here is the error:
flask | from . import create_app
flask | ImportError: attempted relative import with no known parent package
I've followed this tutorial to help me: https://medium.com/@hmajid2301/implementing-sqlalchemy-with-docker-cb223a8296de
So it's supposed to work ...
If I replace:
from . import create_app with from __init__ import create_app,
from .models import User, db with from models import User, db,
from .models import db with from models import db,
and from . import config with import config,
then it works better, but I really feel like I am doing something wrong.
This error is due to the current version of gunicorn (19.9.0) using __import__(your_app) to load your app which apparently doesn't import parent packages. This means that your __init__.py is never called.
(See https://github.com/benoitc/gunicorn/blob/19.x/gunicorn/util.py#L350)
This seems to be fixed in the current repo version of gunicorn, which I think will be released with 20.0.0.
(See https://github.com/benoitc/gunicorn/blob/master/gunicorn/util.py#L331)
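In other words, here is a rough, simplified sketch of what gunicorn 19.x effectively does when loading app:app from WORKDIR /app (this is an illustration, not the actual gunicorn code):
# app.py is found as a top-level module because the worker starts in /app
mod = __import__("app")            # imports /app/app.py with no parent package;
                                   # its "from . import create_app" therefore raises
                                   # ImportError: attempted relative import with no known parent package
application = getattr(mod, "app")  # never reached, the import above already failed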
The easiest workaround is to use:
CMD ["gunicorn", "-w 1", "app", "-b", "0.0.0.0:3000"]
and adding this to the bottom of your __init__.py:
from .app import app
Or even better, putting create_app in a separate file, and having only imports in your __init__.py. Just make sure create_app is imported before app.
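As a rough sketch of that last suggestion (the file name factory.py is my choice, not from the question), reusing the question's own create_app:
# app/factory.py  (hypothetical new file holding the factory)
from flask import Flask
from .models import db
from . import config

def create_app():
    flask_app = Flask(__name__)
    flask_app.config['SQLALCHEMY_DATABASE_URI'] = config.DB_CONN_URI
    flask_app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
    flask_app.app_context().push()
    db.init_app(flask_app)
    db.create_all()
    return flask_app

# app/__init__.py  (imports only; create_app is bound before app is imported)
from .factory import create_app
from .app import app
This keeps the relative imports in app.py intact, since by the time app.py runs, the package's __init__.py has already bound create_app.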
I'm relatively new to Python, but I ran into this issue today with cloudscraper.
The code originally was:
from . import __version__ as cloudscraper_version
I installed cloudscraper using pip3, so it installed directly to C:\Python39\Lib\site-packages\cloudscraper
I received the same error message when trying to run one of the py files.
I couldn't really find anything that wouldn't be more headache than it was worth (moving or renaming files, etc.), as I just wanted to run help.py in cloudscraper. I have a ton of modules and packages and finally got them all interacting with my interpreter the way I want, so I didn't want to move anything. I fixed it by doing this:
from cloudscraper import __version__ as cloudscraper_version
*Note: if you used 'run as administrator' to install the package via cmd, but use the files through a user profile on your PC, you'll need to change permissions to give your user profile access to the particular .py file you want to edit. (Right click the file > Properties > Security > Edit permissions > Full Control)
Just wanted to share in case this could help someone else that might run into this issue.
I spent the last couple of days trying to find what I have done wrong, but I am still not able to figure it out, because I am able to run the app locally using flask run and also in Docker using docker-compose up --build. Source code is here
My issue is that my Cloud Run deployment is successful, but I get "Service Unavailable" when I click on the URL. I checked the logs and it seems my environment variables are not loaded correctly:
line 7, in <module>
    from web_messaging.blueprints.user import user
  File "/web_messaging/web_messaging/blueprints/user/__init__.py", line 1, in <module>
    from web_messaging.blueprints.user.views import user
  File "/web_messaging/web_messaging/blueprints/user/views.py", line 3, in <module>
    from web_messaging.extensions import mongo, login_manager, c, bc
  File "/web_messaging/web_messaging/extensions.py", line 18, in <module>
    twilio_client = Client(TWILIO_SID, TWILIO_TOKEN)
  File "/usr/local/lib/python3.9/site-packages/twilio/rest/__init__.py", line 54, in __init__
    raise TwilioException("Credentials are required to create a TwilioClient")
twilio.base.exceptions.TwilioException: Credentials are required to create a TwilioClient
I have a config/.env file and a config/settings.py. I am loading the env variables from .env using load_dotenv() in my config/settings.py. I decided to add some print and try/except statements in my config/settings.py to see the values of the variables.
settings.py
import os
from dotenv import load_dotenv
BASEDIR = os.path.abspath(os.path.dirname(__file__))
try:
    load_dotenv(os.path.join(BASEDIR, '.env'))
    print("OK")
    print(BASEDIR)
except Exception as e:
    print(str(e))
# Mongo Database
MONGO_URI = os.getenv('MONGO_URI')
TWILIO_SID = os.getenv('TWILIO_SID')
TWILIO_TOKEN = os.getenv('TWILIO_TOKEN')
print(MONGO_URI)
print(TWILIO_SID)
When I run with flask run, docker-compose, or on Cloud Run:
The BASEDIR value is /web_messaging/config
There are no exceptions during the load_dotenv() call
However, there is one major difference: the values of my env variables such as MONGO_URI and TWILIO_SID. Those variables have correct values when using flask run and docker-compose, but not in the Cloud Run logs. On Cloud Run, those variables are equal to None.
When I don't use a .env file and directly put the values of my variables inside config/settings.py, there are no issues and my Cloud Run link works correctly. I also tried moving .env outside of the config folder and to a few other locations, but I still got the same issue.
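A more direct way to surface this (just a sketch, not in my original code) would be a fail-fast check in config/settings.py, using the same variable names as above, so a missing variable shows up clearly in the deploy logs instead of surfacing later as a TwilioException:
import os

REQUIRED = ("MONGO_URI", "TWILIO_SID", "TWILIO_TOKEN")
missing = [name for name in REQUIRED if not os.getenv(name)]
if missing:
    raise RuntimeError("Missing environment variables: " + ", ".join(missing))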
.
├── requirements.txt
├── Dockerfile
├── Docker-compose.yml
├── config
│ ├── .env
│ ├── settings.py
│ ├── gunicorn.py
│ └── __init__.py
├── web_messaging
│ ├── app.py # where I am calling create_app() - factory pattern
│ ├── blueprints
│ ├── static
│ └── ...
└── ...
Dockerfile
FROM python:3.9-slim
ENV INSTALL_PATH /web_messaging
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD gunicorn -b 0.0.0.0:8080 --access-logfile - "web_messaging.app:create_app()"
docker-compose.yml
version: '2'

services:
  website:
    build: .
    command: >
      gunicorn -b 0.0.0.0:8080
      --access-logfile -
      --reload
      "web_messaging.app:create_app()"
    environment:
      PYTHONUNBUFFERED: 'true'
    volumes:
      - '.:/web_messaging'
    ports:
      - '8080:8080'
config/.env
COMPOSE_PROJECT_NAME=web_messaging
FLASK_SECRET=xxx
MONGO_URI=mongodb+srv://xxx
MONGO_DB=xxx
TWILIO_SID=xxx
TWILIO_TOKEN=xxx
config/settings.py
import os
from dotenv import load_dotenv
BASEDIR = os.path.abspath(os.path.dirname(__file__))
load_dotenv(os.path.join(BASEDIR, '.env'))
DEBUG = True
PYTHONDONTWRITEBYTECODE=1
#SERVER_NAME = '127.0.0.1:5000'
# Mongo Database
MONGO_DBNAME = os.getenv('MONGO_DB')
MONGO_URI = os.getenv('MONGO_URI')
# Twilio API
FLASK_SECRET = os.getenv('FLASK_SECRET')
TWILIO_SID = os.getenv('TWILIO_SID')
TWILIO_TOKEN = os.getenv('TWILIO_TOKEN')
config/gunicorn.py
bind = '0.0.0.0:8080'
accesslog = '-'
access_log_format = '%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s" in %(D)sµs'
Fixed. I found exactly what went wrong, but I do not know why.
It worked when I built my own image and pushed it to the GCP container registry myself, following these steps:
docker-compose up --build
docker tag 52e6159b6b13 gcr.io/mousset005/zoro
gcloud auth configure-docker
docker push gcr.io/mousset005/zoro
However, what I was doing before was building my image using the GCP API (which is what they recommend in the Cloud Run Python quickstart), using this command:
gcloud run deploy --image gcr.io/mousset005/zoro --platform managed
Background
I am building a web application that uses Flask for the backend framework. The application uses Celery to handle all the time-consuming tasks as background tasks as to not block the backend thread. I use RabbitMQ as the message broker for Celery workers. I bundled each service using docker-compose.
Problem
The app had been working well until the past few days; all of a sudden, Celery workers keep failing to connect to the message broker with the error message [ERROR/MainProcess] consumer: Cannot connect to amqp://admin:**@rabbit:5672/myhost: failed to resolve broker hostname.
Directory structure and code
I put together the files and directories for a minimal reproducible example.
debug/
├── code
│ ├── dev.Dockerfile
│ ├── my_app
│ │ ├── celery_app.py
│ │ ├── config.py
│ │ ├── extensions.py
│ │ ├── __init__.py
│ │ ├── my_tasks.py
│ │ └── test_app.py
│ └── requirements.txt
└── docker-compose_dev.yml
docker-compose_dev.yml
version: "3.7"
services:
rabbit:
image: rabbitmq:3.8.5-management
ports:
- '15673:15672' # in case user has rabbitMQ installed on host
expose:
- "5672"
environment:
- RABBITMQ_DEFAULT_USER=admin
- RABBITMQ_DEFAULT_PASS=mypass
- RABBITMQ_DEFAULT_VHOST=myhost
non_working_worker:
build:
context: ./code
dockerfile: dev.Dockerfile
command: "celery worker -A my_app.celery_app:app -l info"
volumes:
- ./code:/code
links:
- rabbit
working_worker:
build:
context: ./code
dockerfile: dev.Dockerfile
command: "celery worker -A my_app.my_tasks:app -l info"
volumes:
- ./code:/code
links:
- rabbit
dev.Dockerfile
FROM continuumio/miniconda3
# Make /backend working directory; flask code lives here
WORKDIR /code
# Install from requirements.txt using pip
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
RUN rm requirements.txt
requirements.txt
luigi==2.8.11
plotnine==0.7.0
celery==4.4.6
flask==1.1.2
flask-cors
flask-socketio
Flask-Mail
eventlet
test_app.py
import eventlet
eventlet.monkey_patch()
from flask import Flask
from my_app.extensions import celery
def create_app():
    """
    Application factory. Create application here.
    """
    app = Flask(__name__)
    app.config.from_object("my_app.config")
    return app

def init_celery(app=None):
    """
    Initialize Celery App
    """
    app = app or create_app()
    app.config.from_object("my_app.config")

    # Set celery worker configuration
    # Use this to load config information from flask config file
    celery.conf.broker_url = app.config["CELERY_BROKER_URL"]
    celery.conf.result_backend = app.config["CELERY_RESULT_BACKEND"]

    class ContextTask(celery.Task):
        """Make celery tasks work with Flask app context"""
        def __call__(self, *args, **kwargs):
            with app.app_context():
                return self.run(*args, **kwargs)

    celery.Task = ContextTask
    return celery
config.py
# RabbitMQ
CELERY_BROKER_URL='pyamqp://admin:mypass@rabbit/myhost'
CELERY_RESULT_BACKEND='rpc://'
extensions.py
from celery import Celery
celery = Celery()
celery_app.py
from my_app.test_app import init_celery
app = init_celery()
my_tasks.py
from celery import Celery
app = Celery()
app.conf.broker_url = 'pyamqp://admin:mypass@rabbit/myhost'
app.conf.result_backend = 'rpc://'
What I've tried
The following are the things I've tried that didn't work.
1. RabbitMQ isn't launching properly?
a. It launches properly with the given username, password, and vhost (I can check using the management plugin at localhost:15673).
2. RabbitMQ launches after the Celery workers start, so the workers can't find the broker?
a. Celery has a retry feature, so it will keep retrying until the message broker is up and running.
3. Network issue?
a. I've tried with/without links to specify the service name alias, but it still didn't work.
b. Note I've already specified the broker host as rabbit in the config.py file instead of localhost.
c. I've tried using both the default network docker-compose creates and a custom network, but both failed.
4. Interestingly, the Celery app instance in my_tasks.py works (it's named working_worker in the docker-compose file), but the Celery app instance created with the Flask factory pattern does not (it's named non_working_worker in the compose file).
a. Again, this shows that RabbitMQ is working fine, but something funky is going on with the Flask-factory-pattern-style Celery app instantiation.
I spent the past few days trying to fix this issue and searching for similar problems on the internet, but had no luck.
I know it's a fairly long post, but any help/suggestions would be greatly appreciated.
docker-compose version
docker-compose version 1.25.3, build d4d1b42b
docker-py version: 4.1.0
CPython version: 3.7.5
OpenSSL version: OpenSSL 1.1.0l 10 Sep 2019
docker version
Client: Docker Engine - Community
Version: 19.03.12
API version: 1.40
Go version: go1.13.10
Git commit: 48a66213fe
Built: Mon Jun 22 15:45:36 2020
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.12
API version: 1.40 (minimum version 1.12)
Go version: go1.13.10
Git commit: 48a66213fe
Built: Mon Jun 22 15:44:07 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
I had a similar issue that I was able to resolve by pinning the version of dnspython, one of eventlet's dependencies, to 1.16.0 in my requirements.txt, above eventlet. It looks like eventlet is not compatible with the latest version of dnspython; more info here: https://github.com/eventlet/eventlet/issues/619
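For reference, a sketch of what that change might look like in the question's requirements.txt (the dnspython pin is the only change; I'm assuming the rest stays as posted):
dnspython==1.16.0  # pinned above eventlet, which (per the linked issue) breaks with newer dnspython
luigi==2.8.11
plotnine==0.7.0
celery==4.4.6
flask==1.1.2
flask-cors
flask-socketio
Flask-Mail
eventlet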
I'm trying to build a simple application with flask and I've decided to also use gunicorn and docker.
At the moment I have this configuration:
> app
    > myapp
        __init__.py
        index.html
    docker-compose.yml
    Dockerfile
My docker-compose.yml:
version: '2'
services:
  web:
    build: .
    volumes:
      - .:/app
    command: /usr/local/bin/gunicorn -b :8000 myapp:app
    working_dir: /app
My __init__.py:
import os
from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def home():
    return render_template('index.html')

if __name__ == '__main__':
    app.run()
This minimal configuration works and I'm able to access my application and get my index page.
What I don't like is having my app created inside the __init__.py so I would like to move the app creation inside an app.py file.
The new structure will be:
> app
    > myapp
        __init__.py
        app.py
        index.html
    docker-compose.yml
    Dockerfile
app.py will have the content of the old __init__.py file, and the new __init__.py file will be empty.
This doesn't work. I get an error
Failed to find application: 'myapp'
and I don't understand why.
Any idea about this?
In the first configuration, your Flask app was located directly in the package myapp; after you moved it, it is in the module myapp.app.
Gunicorn expects the app to be specified as module_name:variable_name, somewhat like from module_name import variable_name.
Option one: specify the correct module path:
/usr/local/bin/gunicorn -b :8000 myapp.app:app
Option two: add the app back to myapp. In myapp/__init__.py, add
from .app import app
Note that if the variable and the module share the name, the module will be overshadowed (not a good thing, although not a critical one either).
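A small sketch of that shadowing with option two:
# myapp/__init__.py  (option two)
from .app import app    # binds the name "app" inside the package to the Flask instance

# consequence, seen from other code:
import myapp
print(myapp.app)         # the Flask object, no longer the submodule myapp.app
                         # ("gunicorn -b :8000 myapp:app" still resolves to the Flask object, which is the goal)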
I have separate configs for each environment and a single app; the directory tree looks like this:
myapp
├── __init__.py # empty
├── config
│ ├── __init__.py # empty
│ ├── development.py
│ ├── default.py
│ └── production.py
├── instance
│ └── config.py
└── myapp
├── __init__.py
└── myapp.py
Code
The relevant code, myapp/__init__.py:
from flask import Flask
app = Flask(__name__, instance_relative_config=True)
app.config.from_object('config.default')
app.config.from_pyfile('config.py')
app.config.from_envvar('APP_CONFIG_FILE')
myapp/myapp.py:
from myapp import app
# ...
Commands
Then I set the variable:
$ export FLASK_APP=myapp.py
And try to run the development server from the project root:
$ flask run
Usage: flask run [OPTIONS]
Error: The file/path provided (myapp.py) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py
And from the project myapp folder:
$ cd myapp
$ flask run
Usage: flask run [OPTIONS]
Error: The file/path provided (myapp.myapp.myapp) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py
With another FLASK_APP variable:
$ export FLASK_APP=myapp/myapp.py
# in project root
$ flask run
Usage: flask run [OPTIONS]
Error: The file/path provided (myapp.myapp.myapp) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py
# moving to project/myapp
$ cd myapp
$ flask run
Usage: flask run [OPTIONS]
Error: The file/path provided (myapp/myapp.py) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py
Other tests without success
$ python -c 'import myapp; print(myapp)'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/user/myapp/myapp/__init__.py", line 6, in <module>
app.config.from_envvar('APP_CONFIG_FILE')
File "/home/user/.virtualenvs/myapp/lib/python3.5/site-packages/flask/config.py", line 108, in from_envvar
variable_name)
RuntimeError: The environment variable 'APP_CONFIG_FILE' is not set and as such configuration could not be loaded. Set this variable and make it point to a configuration file
$ export APP_CONFIG_FILE="/home/user/myapp/config/development.py"
$ python -c 'import myapp; print(myapp)'
<module 'myapp' from '/home/user/myapp/myapp/__init__.py'>
$ flask run
Usage: flask run [OPTIONS]
Error: The file/path provided (myapp.myapp) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py
Notes:
I am not using the PYTHONPATH variable; it is empty
I have already seen other related questions (Flask: How to manage different environment databases?), but my problem is the (relatively new) flask command
Using Python 3.5.2+
It took me a while, but I finally found it:
Flask doesn't like projects with an __init__.py at the root level; delete myapp/__init__.py. This is the one located in the root folder:
myapp
├── __init__.py <--- DELETE
...
└── myapp
├── __init__.py <--- keep
└── myapp.py
Use $ export FLASK_APP=myapp/myapp.py
The environment variable specifying the configuration should be the absolute path to it: export APP_CONFIG_FILE="/home/user/myapp/config/development.py"
Now everything works \o/
$ flask run
* Serving Flask app "myapp.myapp"
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
$ flask shell
Python 3.5.2+ (default, Sep 22 2016, 12:18:14)
[GCC 6.2.0 20160927] on linux
App: myapp
Instance: /home/user/myapp/instance
>>>
I posted a question earlier today here Heroku deploy problem.
I've had a lot of good suggestions, but could not get my app to deploy on Heroku.
I have stripped the app to 15 lines of code. The app still refuses to deploy.
This is the error:
ImportError: No module named 'main'
File "/app/.heroku/python/bin/gunicorn", line 11, in <module>
sys.exit(run())
WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run()
This is my app's directory:
This is the content of the Procfile:
web: gunicorn main:app --log-file=-
This is the content of the main.py file:
import os
from flask import Flask

app = Flask(__name__, instance_relative_config=True)
app.config.from_object('config')
app.config.from_pyfile('config.py')

@app.route('/')
def hello():
    return 'Hello World!'

if __name__ == '__main__':
    # REMEMBER: Never have this set to True on Production
    # manager.run()
    app.run()
I have followed all the tutorials, read up on modules and packages, seen suggestions on this site, and read Explore Flask and the official Flask documentation. They ALL have some variation of how to establish an app, and it's very difficult to understand what the right way is or where the files are supposed to be.
There are several problems in your example code:
You need a package.
No module named 'main'
In the Procfile, you said: web: gunicorn main:app --log-file=-. The right way is to add an __init__.py beside main.py so Python knows it is a package. Edit your Procfile to this:
web: gunicorn blackduckflock.main:app --log-file=-
The instance folder.
Since you specify instance_relative_config=True, I think the proper way to organize your project is like this:
blackduckflock
├── blackduckflock
│ ├── __init__.py
│ └── main.py
├── config.py
├── instance
│ └── config.py
└── Procfile
And you can run gunicorn blackduckflock.main:app to see if it works.
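For completeness, here is a rough sketch (my assumption, based on the question's main.py) of how the two config files layer in that layout:
# blackduckflock/main.py
from flask import Flask

app = Flask(__name__, instance_relative_config=True)
app.config.from_object('config')      # the config.py at the project root, found via the import path
app.config.from_pyfile('config.py')   # instance/config.py, because instance_relative_config=True
                                      # makes relative from_pyfile paths resolve against instance/

@app.route('/')
def hello():
    return 'Hello World!'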