Configure app to deploy to Heroku - python

I posted a question earlier today here: Heroku deploy problem.
I've had a lot of good suggestions, but could not get my app to deploy on Heroku.
I have stripped the app to 15 lines of code. The app still refuses to deploy.
This is the error:
File "/app/.heroku/python/bin/gunicorn", line 11, in <module>
    sys.exit(run())
    ...
    WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run()
ImportError: No module named 'main'
This is my app's directory:
This is the content of the Procfile:
web: gunicorn main:app --log-file=-
This is the content of the main.py file:
import os
from flask import Flask

app = Flask(__name__, instance_relative_config=True)
app.config.from_object('config')
app.config.from_pyfile('config.py')

@app.route('/')
def hello():
    return 'Hello World!'

if __name__ == '__main__':
    # REMEMBER: Never have this set to True on Production
    # manager.run()
    app.run()
I have followed all the tutorials, read up on modules and packages, seen suggestions on this site, and read Explore Flask and the official Flask documentation. They ALL have some variation of establishing an app, and it's very difficult to understand what the right way is or where files are supposed to be.

There are several problems in your example code:
You need a package.
The error No module named 'main' occurs because, in the Procfile, you said: web: gunicorn main:app --log-file=-. The right way is to add an __init__.py beside main.py, so Python knows that directory is a package. Edit your Procfile to this:
web: gunicorn blackduckflock.main:app --log-file=-
The instance folder.
Since you specify instance_relative_config=True, I think the proper way to organize your project is like this:
blackduckflock
├── blackduckflock
│   ├── __init__.py
│   └── main.py
├── config.py
├── instance
│   └── config.py
└── Procfile
And you can run gunicorn blackduckflock.main:app to see if it works.
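To see why the package path matters, here is a minimal, runnable sketch of how gunicorn resolves an APP_MODULE string such as blackduckflock.main:app: it imports the module part, then looks up the attribute after the colon. The throwaway on-disk package below is purely illustrative, not gunicorn's actual code:

```python
# Sketch of how an APP_MODULE like "blackduckflock.main:app" is
# resolved: import the dotted module path, then fetch the attribute
# named after the colon. We build a throwaway package on disk so the
# example runs anywhere (the placeholder string stands in for Flask).
import importlib
import os
import sys
import tempfile

tmp = tempfile.mkdtemp()
pkg_dir = os.path.join(tmp, "blackduckflock")
os.makedirs(pkg_dir)
# __init__.py is what makes the directory importable as a package
open(os.path.join(pkg_dir, "__init__.py"), "w").close()
with open(os.path.join(pkg_dir, "main.py"), "w") as f:
    f.write("app = 'would-be-Flask-instance'\n")

sys.path.insert(0, tmp)
module_name, var_name = "blackduckflock.main:app".split(":")
module = importlib.import_module(module_name)
print(getattr(module, var_name))  # would-be-Flask-instance
```

Without the __init__.py, importlib.import_module("blackduckflock.main") raises exactly the kind of ImportError gunicorn reported.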


ImportError - attempted relative import with no known parent package

It looks like a known problem, and I am not the only one who has encountered this issue. But none of the StackOverflow topics I've read helped me.
So here is the tree of my folder:
.
├── Dockerfile
├── app
│   ├── __init__.py
│   ├── app.py
│   ├── config.py
│   ├── controllers
│   │   └── home.py
│   ├── models.py
│   └── views
│       └── home.py
├── database.conf
├── docker-compose.yml
├── jarvis.conf
└── requirements.txt
As you can see, I've tried to dockerize my app.
Let's have a look at my Dockerfile and docker-compose.yml.
Dockerfile:
FROM python:3.6.8-alpine
LABEL maintainer="Jordane * <*>"
LABEL version="1.0.0"
RUN apk add build-base postgresql-dev
RUN pip install --upgrade pip
COPY requirements.txt /
RUN pip install -r requirements.txt
COPY app/ /app
WORKDIR /app
CMD ["gunicorn", "-w 1", "app:app", "-b", "0.0.0.0:3000"]
docker-compose.yml:
version: '3.5'
services:
  db:
    container_name: postgres
    image: postgres:11.2-alpine
    env_file: database.conf
    ports:
      - 5432:5432
    volumes:
      - dbdata:/var/lib/postgresql/data
  web:
    build: .
    container_name: flask
    restart: always
    env_file:
      - jarvis.conf
      - database.conf
    environment:
      - PYTHONDONTWRITEBYTECODE=1
    ports:
      - 6876:3000
    volumes:
      - ./app/:/app
    depends_on:
      - db
volumes:
  dbdata:
Here is where my trouble begins, I think.
I've written this __init__.py:
from flask import Flask
import flask_sqlalchemy
from .models import db
from . import config

def create_app():
    flask_app = Flask(__name__)
    flask_app.config['SQLALCHEMY_DATABASE_URI'] = config.DB_CONN_URI
    flask_app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
    flask_app.app_context().push()
    db.init_app(flask_app)
    db.create_all()
    return flask_app
and as you saw above in my Dockerfile, I am running the app with gunicorn.
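The create_app function above follows Flask's application-factory pattern: each call builds and configures a fresh application instead of sharing one module-level instance. A plain-Python stand-in (the App class and the "jarvis" name are illustrative, not Flask's API) shows the shape of the pattern:

```python
# Plain-Python stand-in for the application-factory pattern used by
# create_app above. The App class and "jarvis" are illustrative only.
class App:
    def __init__(self, import_name):
        self.import_name = import_name
        self.config = {}

def create_app():
    app = App("jarvis")
    app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
    return app

# Each call builds an independent, fully configured instance:
a = create_app()
b = create_app()
print(a is b)  # False
```

Because the factory owns all configuration, tests and deployment scripts can each build their own instance without import-time side effects.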
app.py:
""" Jarvis slackBot v1.0 (api) """
__author__ = "titus"
from flask import request, jsonify
from . import create_app
from .models import User, db
from views.home import home_bp
from loguru import logger
app = create_app()
# logger.add("app.log", rotation="500 MB")
app.register_blueprint(home_bp, url_prefix='/home')
And here is the error :
flask | from . import create_app
flask | ImportError: attempted relative import with no known parent package
I've followed this tutorial to help me: https://medium.com/@hmajid2301/implementing-sqlalchemy-with-docker-cb223a8296de
So it's supposed to work ...
If I replace:
from . import create_app with from __init__ import create_app
from .models import User, db with from models import User, db
from .models import db with from models import db
from . import config with import config
It works better, but I really feel like I am doing something wrong.
This error is due to the current version of gunicorn (19.9.0) using __import__(your_app) to load your app which apparently doesn't import parent packages. This means that your __init__.py is never called.
(See https://github.com/benoitc/gunicorn/blob/19.x/gunicorn/util.py#L350)
This seems to be fixed in the current repo version of gunicorn, which I think will be released with 20.0.0.
(See https://github.com/benoitc/gunicorn/blob/master/gunicorn/util.py#L331)
The easiest workaround is to use:
CMD ["gunicorn", "-w 1", "app", "-b", "0.0.0.0:3000"]
and adding this to (to the bottom of) your __init__.py:
from .app import app
Or even better, putting create_app in a separate file, and having only imports in your __init__.py. Just make sure create_app is imported before app.
I'm relatively new to python, but ran into this issue today with cloudscraper.
The code originally was:
from . import __version__ as cloudscraper_version
I installed cloudscraper using pip3, so it installed directly to C:\Python39\Lib\site-packages\cloudscraper
I received the same error message when trying to run one of the py files.
I couldn't really find anything that wouldn't be more headache than it was worth (moving, renaming files, etc.), as I just wanted to run the help.py in cloudscraper. I have a ton of modules and packages and finally got them all to interact with my interpreter the way I want, so I didn't want to move anything. I fixed it by doing this:
from cloudscraper import __version__ as cloudscraper_version
*Note, if you used 'run as administrator' to install the package via cmd, but utilize the files through a user profile on your pc, you'll need to change permissions giving access to your pc user profile on the particular py file you're wanting to edit. (Right click file>Properties>Security>'edit permissions'>Full Control)
Just wanted to share in case this could help someone else that might run into this issue.

Docker+Gunicorn+Flask, I don't understand why my setup is not working

I'm trying to build a simple application with flask and I've decided to also use gunicorn and docker.
At the moment I have this configuration:
> app
    > myapp
        __init__.py
        index.html
    docker-compose.yml
    Dockerfile
My docker-compose.yml:
version: '2'
services:
  web:
    build: .
    volumes:
      - .:/app
    command: /usr/local/bin/gunicorn -b :8000 myapp:app
    working_dir: /app
My __init__.py:
import os
from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def home():
    return render_template('index.html')

if __name__ == '__main__':
    app.run()
This minimal configuration works and I'm able to access my application and get my index page.
What I don't like is having my app created inside the __init__.py so I would like to move the app creation inside an app.py file.
The new structure will be:
> app
    > myapp
        __init__.py
        app.py
        index.html
    docker-compose.yml
    Dockerfile
app.py will have the content of the old __init__.py file and the new __init__.py file would be empty.
This doesn't work. I get an error
Failed to find application: 'myapp'
and I don't understand why.
Any idea about this?
In the first configuration, your Flask app was located directly in the package myapp; after you moved it, it is in the module myapp.app.
Gunicorn expects the app to be specified as module_name:variable_name, somewhat like from module_name import variable_name.
Option one: specify the correct module path:
/usr/local/bin/gunicorn -b :8000 myapp.app:app
Option two: add the app back to myapp. In myapp/__init__.py, add
from .app import app
Note that if the variable and the module share the same name, the module will be shadowed (not a good thing, although not a critical one either).
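That shadowing is easy to demonstrate with plain module objects from the standard library. This sketch mimics what `from .app import app` does to the package namespace (the names mypkg and app are illustrative):

```python
# After `from .app import app` runs in mypkg/__init__.py, the package
# attribute `app` no longer refers to the submodule mypkg.app but to
# the variable defined inside it. Stdlib demonstration:
import types

pkg = types.ModuleType("mypkg")            # stands in for the package
pkg.app = types.ModuleType("mypkg.app")    # the submodule app.py
pkg.app.app = "flask-instance"             # the variable app inside it

pkg.app = pkg.app.app  # what `from .app import app` effectively does
print(pkg.app)  # flask-instance: the submodule is now shadowed
```

Afterwards, any code that expected mypkg.app to be a module would instead find the Flask instance, which is why reusing the name is discouraged.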

Non existing path when setting up Flask to have separated configurations for each environment

I have separated configs for each environment and one single app, the
directory tree looks like:
myapp
├── __init__.py # empty
├── config
│   ├── __init__.py # empty
│   ├── development.py
│   ├── default.py
│   └── production.py
├── instance
│   └── config.py
└── myapp
    ├── __init__.py
    └── myapp.py
Code
The relevant code, myapp/__init__.py:
from flask import Flask
app = Flask(__name__, instance_relative_config=True)
app.config.from_object('config.default')
app.config.from_pyfile('config.py')
app.config.from_envvar('APP_CONFIG_FILE')
myapp/myapp.py:
from myapp import app
# ...
Commands
Then I set the variables:
$ export FLASK_APP=myapp.py
And try to run the development server from the project root:
$ flask run
Usage: flask run [OPTIONS]
Error: The file/path provided (myapp.py) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py
And from the project myapp folder:
$ cd myapp
$ flask run
Usage: flask run [OPTIONS]
Error: The file/path provided (myapp.myapp.myapp) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py
With another FLASK_APP variable:
$ export FLASK_APP=myapp/myapp.py
# in project root
$ flask run
Usage: flask run [OPTIONS]
Error: The file/path provided (myapp.myapp.myapp) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py
# moving to project/myapp
$ cd myapp
$ flask run
Usage: flask run [OPTIONS]
Error: The file/path provided (myapp/myapp.py) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py
Other test without success
$ python -c 'import myapp; print(myapp)'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/user/myapp/myapp/__init__.py", line 6, in <module>
app.config.from_envvar('APP_CONFIG_FILE')
File "/home/user/.virtualenvs/myapp/lib/python3.5/site-packages/flask/config.py", line 108, in from_envvar
variable_name)
RuntimeError: The environment variable 'APP_CONFIG_FILE' is not set and as such configuration could not be loaded. Set this variable and make it point to a configuration file
$ export APP_CONFIG_FILE="/home/user/myapp/config/development.py"
$ python -c 'import myapp; print(myapp)'
<module 'myapp' from '/home/user/myapp/myapp/__init__.py'>
$ flask run
Usage: flask run [OPTIONS]
Error: The file/path provided (myapp.myapp) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py
Notes:
I am not using the PYTHONPATH variable; it is empty
I have already seen other related questions (Flask: How to manage different environment databases?) but my problem is with the (relatively new) flask command
Using Python 3.5.2+
It took me a while but I finally found it:
Flask doesn't like projects with an __init__.py at root level; delete myapp/__init__.py. This is the one located at the root folder:
myapp
├── __init__.py <--- DELETE
...
└── myapp
    ├── __init__.py <--- keep
    └── myapp.py
Use $ export FLASK_APP=myapp/myapp.py
The environment variable specifying the configuration should be the absolute path to it: export APP_CONFIG_FILE="/home/user/myapp/config/development.py"
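For intuition, from_envvar roughly means: read the path from the environment variable, exec the Python file it points to, and keep only the UPPERCASE names as config keys. The following is a simplified stdlib sketch of that behaviour, not Flask's actual implementation:

```python
# Simplified sketch of app.config.from_envvar('APP_CONFIG_FILE'):
# resolve the env var to a path, exec that file, keep UPPERCASE names.
import os
import tempfile

# Stand-in for a config file like config/development.py
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("DEBUG = True\nsecret = 'ignored'\n")
    config_path = f.name

os.environ["APP_CONFIG_FILE"] = config_path

namespace = {}
with open(os.environ["APP_CONFIG_FILE"]) as f:
    exec(f.read(), namespace)
config = {k: v for k, v in namespace.items() if k.isupper()}
print(config)  # {'DEBUG': True}
```

This also shows why a relative path in APP_CONFIG_FILE is fragile: the file is opened relative to wherever the process was started, so an absolute path is the safe choice.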
Now everything works \o/
$ flask run
* Serving Flask app "myapp.myapp"
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
$ flask shell
Python 3.5.2+ (default, Sep 22 2016, 12:18:14)
[GCC 6.2.0 20160927] on linux
App: myapp
Instance: /home/user/myapp/instance
>>>

Why can't Celery daemon see tasks?

I have a Django 1.62 application running on Debian 7.8 with Nginx 1.2.1 as my proxy server and Gunicorn 19.1.1 as my application server. I've installed Celery 3.1.7 and RabbitMQ 2.8.4 to handle asynchronous tasks. I'm able to start a Celery worker as a daemon but whenever I try to run the test "add" task as shown in the Celery docs, I get the following error:
Received unregistered task of type u'apps.photos.tasks.add'.
The message has been ignored and discarded.
Traceback (most recent call last):
File "/home/swing/venv/swing/local/lib/python2.7/site-packages/celery/worker/consumer.py", line 455, in on_task_received
strategies[name](message, body,
KeyError: u'apps.photos.tasks.add'
All of my configuration files are kept in a "conf" directory that sits just below my "myproj" project directory. The "add" task is in apps/photos/tasks.py.
myproj
│
├── apps
│   └── photos
│       ├── __init__.py
│       └── tasks.py
└── conf
    ├── celeryconfig.py
    ├── celeryconfig.pyc
    ├── celery.py
    ├── __init__.py
    ├── middleware.py
    ├── settings
    │   ├── base.py
    │   ├── dev.py
    │   ├── __init__.py
    │   └── prod.py
    ├── urls.py
    └── wsgi.py
Here is the tasks file:
# apps/photos/tasks.py
from __future__ import absolute_import
from conf.celery import app

@app.task
def add(x, y):
    return x + y
Here are my Celery application and configuration files:
# conf/celery.py
from __future__ import absolute_import
import os
from celery import Celery
from django.conf import settings
from conf import celeryconfig

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'conf.settings')
app = Celery('conf')
app.config_from_object(celeryconfig)
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))

# conf/celeryconfig.py
BROKER_URL = 'amqp://guest@localhost:5672//'
CELERY_RESULT_BACKEND = 'amqp'
CELERY_ACCEPT_CONTENT = ['json', ]
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
This is my Celery daemon config file. I commented out CELERY_APP because I've found that the Celery daemon won't even start if I uncomment it. I also found that I need to add the "--config" argument to CELERYD_OPTS in order for the daemon to start. I created a non-privileged "celery" user who can write to the log and pid files.
# /etc/default/celeryd
CELERYD_NODES="worker1"
CELERYD_LOG_LEVEL="DEBUG"
CELERY_BIN="/home/myproj/venv/myproj/bin/celery"
#CELERY_APP="conf"
CELERYD_CHDIR="/www/myproj/"
CELERYD_OPTS="--time-limit=300 --concurrency=8 --config=celeryconfig"
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery/%N.pid"
CELERYD_USER="celery"
CELERYD_GROUP="celery"
CELERY_CREATE_DIRS=1
I can see from the log file that when I run the command, "sudo service celeryd start", Celery starts without any errors. However, if I open the Python shell and run the following commands, I'll see the error I described at the beginning.
$ python shell
In [] from apps.photos.tasks import add
In [] result = add.delay(2, 2)
What's interesting is that if I examine Celery's registered tasks object, the task is listed:
In [] import celery
In [] celery.registry.tasks
Out [] {'celery.chain': ..., 'apps.photos.tasks.add': <@task: apps.photos.tasks.add of conf:0x16454d0> ...}
Other similar questions here have discussed having a PYTHONPATH environment variable and I don't have such a variable. I've never understood how to set PYTHONPATH and this project has been running just fine for over a year without it.
I should also add that my production settings file is conf/settings/prod.py. It imports all of my base (tier-independent) settings from base.py and adds some extra production-dependent settings.
Can anyone tell me what I'm doing wrong? I've been struggling with this problem for three days now.
Thanks!
Looks like it is happening due to a relative import problem.
>>> from project.myapp.tasks import mytask
>>> mytask.name
'project.myapp.tasks.mytask'
>>> from myapp.tasks import mytask
>>> mytask.name
'myapp.tasks.mytask'
If you’re using relative imports you should set the name explicitly.
@task(name='proj.tasks.add')
def add(x, y):
    return x + y
Check out: http://celery.readthedocs.org/en/latest/userguide/tasks.html#automatic-naming-and-relative-imports
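The automatic name is built from the function's __module__ and __qualname__, which is why importing the same tasks module under two different paths yields two different task names. A stdlib analogue of the naming scheme (no Celery required; the exact derivation in Celery has more cases, this shows only the core idea):

```python
# Celery's default task name is derived from where the function was
# imported from, roughly f"{func.__module__}.{func.__qualname__}".
# Import a module as project.myapp.tasks and the task is registered as
# "project.myapp.tasks.add"; import it as myapp.tasks and it becomes
# "myapp.tasks.add" -- two distinct registry keys for the same code.
def add(x, y):
    return x + y

task_name = f"{add.__module__}.{add.__qualname__}"
print(task_name)
```

Setting name= explicitly, as above, pins the registry key regardless of how the module was imported.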
I'm using celery 4.0.2 and django, and I created a celery user and group for use with celeryd and had this same problem. The command-line version worked fine, but celeryd was not registering the tasks. It was NOT a relative naming problem.
The solution was to add the celery user to the group that can access the django project. In my case, this group is www-data with read, execute, and no write.

Hosting Django app with Waitress

I'm trying to host a Django app on my Ubuntu VPS. I've got python, django, and waitress installed and the directories moved over.
I went to the Waitress site ( http://docs.pylonsproject.org/projects/waitress/en/latest/ ) and they said to use it like this:
from waitress import serve
serve(wsgiapp, host='5.5.5.5', port=8080)
Do I put my app name in place of 'wsgiapp'? Do I need to run this in the top-level Django project directory?
Tested with Django 1.9 and Waitress 0.9.0
You can use waitress with your django application by creating a script (e.g., server.py) in your django project root and importing the application variable from wsgi.py module:
yourdjangoproject project root structure
├── manage.py
├── server.py
├── yourdjangoproject
│   ├── __init__.py
│   ├── settings.py
│   ├── urls.py
│   ├── wsgi.py
wsgi.py (Updated January 2021 w/ static serving)
This is the default django code for wsgi.py:
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "yourdjangoproject.settings")
application = get_wsgi_application()
If you need static file serving, you can edit wsgi.py to use something like whitenoise or dj-static for static assets:
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "yourdjangoproject.settings")
"""
YOU ONLY NEED ONE OF THESE.
Choose middleware to serve static files.
WhiteNoise seems to be the go-to but I've used dj-static
successfully in many production applications.
"""
# If using WhiteNoise:
from whitenoise import WhiteNoise
application = WhiteNoise(get_wsgi_application())
# If using dj-static:
from dj_static import Cling
application = Cling(get_wsgi_application())
server.py
from waitress import serve
from yourdjangoproject.wsgi import application

if __name__ == '__main__':
    serve(application, port='8000')
Usage
Now you can run $ python server.py
I managed to get it working by using a bash script instead of a python call. I made a script called 'startserver.sh' containing the following (replace yourprojectname with your project name obviously):
#!/bin/bash
waitress-serve --port=80 yourprojectname.wsgi:application
I put it in the top-level Django project directory.
Changed the permissions to execute by owner:
chmod 700 startserver.sh
Then I just execute the script on the server:
sudo ./startserver.sh
And that seemed to work just fine.
