So I'm following the Getting Started guide from Heroku with Django. However, when I run this command:
heroku run python manage.py syncdb
I get this error
psycopg2.OperationalError: could not connect to server: Connection refused
Is the server running on host "localhost" and accepting
TCP/IP connections on port 5432?
I assumed this meant that the db wasn't set up yet... so I manually added the shared_db option as well:
heroku addons:add shared-database:5mb
But.. I still get the same error. What gives?
EDITED:
As @mipadi has pointed out here (http://stackoverflow.com/questions/13001031/django-heroku-settings-injection/13092534), it can actually be as simple as this:
import dj_database_url
DATABASES = {'default' : dj_database_url.config() }
This works if you have a DATABASE_URL environment variable set. heroku pg:promote gets you there. Details below.
Make sure you have Postgres on your Heroku
heroku addons:add heroku-postgresql:dev
Step 1: figure out your database url
heroku config | grep POSTGRESQL
The output will look something like this:
HEROKU_POSTGRESQL_<COLOR>_URL:
postgres://user:password@host:5432/blabla
Step 2: Grab the setting name from the previous step (e.g. HEROKU_POSTGRESQL_ROSE_URL) and put it in your settings file like so
import os
import dj_database_url
DATABASES = {'default': dj_database_url.config(default=os.environ["HEROKU_POSTGRESQL_ROSE_URL"])}
[UPDATE] As Ted has pointed out, there's a way to promote the color URL to the DATABASE_URL variable:
heroku pg:promote HEROKU_POSTGRESQL_ROSE_URL
Your database settings can then use DATABASE_URL as opposed to the more exotic colored URLs:
DATABASES = {'default': dj_database_url.config(default=os.environ["DATABASE_URL"])}
Bob is your uncle
I got it working by adding the following code to settings.py myself; for some reason Heroku didn't add it for me...
Normally Heroku always injected this code dynamically, but I guess after Django 1.4 it no longer does, or I was just doing something wrong.
Anyway, here is the code. Just append it to your settings.py and it should work as before.
import sys
import urlparse
import os

# Register database schemes in URLs.
urlparse.uses_netloc.append('postgres')
urlparse.uses_netloc.append('mysql')

try:
    # Check to make sure DATABASES is set in settings.py file.
    # If not default to {}
    if 'DATABASES' not in locals():
        DATABASES = {}

    if 'DATABASE_URL' in os.environ:
        url = urlparse.urlparse(os.environ['DATABASE_URL'])

        # Ensure default database exists.
        DATABASES['default'] = DATABASES.get('default', {})

        # Update with environment configuration.
        DATABASES['default'].update({
            'NAME': url.path[1:],
            'USER': url.username,
            'PASSWORD': url.password,
            'HOST': url.hostname,
            'PORT': url.port,
        })

        if url.scheme == 'postgres':
            DATABASES['default']['ENGINE'] = 'django.db.backends.postgresql_psycopg2'

        if url.scheme == 'mysql':
            DATABASES['default']['ENGINE'] = 'django.db.backends.mysql'
except Exception:
    print 'Unexpected error:', sys.exc_info()
My app structure was off... Heroku wants the structure to look like this:
toplevel
requirements.txt
myapp
manage.py
all other django stuff
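For reference, with that layout the Procfile at the top level might contain something like the following (a sketch; the point is simply that the path has to reach manage.py inside the subdirectory):

web: python myapp/manage.py runserver 0.0.0.0:$PORT --noreload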
I had the same problem, this is how I solved it
Step 1: Follow Phillip's Step 1 to get the database name (color)
Step2:
$ heroku pg:promote HEROKU_POSTGRESQL_<COLOR>
leads to the output
Promoting HEROKU_POSTGRESQL_<COLOR> to DATABASE_URL... done
You need to add this to your requirements.txt:
psycopg2
By default Heroku configures a Postgres database and injects code into your settings.py (https://devcenter.heroku.com/articles/django#postgres_database_config). This reads from the environment variable DATABASE_URL, but it does require that psycopg2 is installed.
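For completeness, a typical requirements.txt for this kind of setup might look something like the list below (package names only; pin versions as appropriate, and note that this list is an assumption rather than something from the question):

Django
psycopg2
dj-database-url
gunicorn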
I am serving Dash content inside a Flask app which uses a blueprint for registering the routes.
App setup:
Dash is initialised with routes_pathname_prefix='/dashapp/'
dash_app = dash.Dash(
server=server,
routes_pathname_prefix='/dashapp/',
)
dash_app.css.config.serve_locally = True
dash_app.scripts.config.serve_locally = True
Registered dash app with Flask server
Used UWSGI to serve Flask app inside a docker container
[uwsgi]
wsgi-file = </path/to/app.py>
callable = app
http = 0.0.0.0:8080
processes = 4
threads = 2
master = true
chmod-socket = 660
vacuum = true
die-on-term = true
Up to this point everything was working fine locally. Once I added the Nginx proxy, I ran into the issue below.
Issue:
URLs for _dash-layout and _dash-dependencies are missing the reverse proxy URI. For example, I am serving my Flask app at www.example.com/app/. But in the browser I saw that requests for _dash-layout and _dash-dependencies are going to www.example.com/dashapp/_dash-layout instead of www.example.com/app/dashapp/_dash-layout.
I read the following forum discussion and tried applying the solution, and got this error:
requests_pathname_prefix needs to start with '/'
This is my Nginx config,
location /app/ {
proxy_pass http://localhost:<port>;
proxy_redirect http://localhost:<port> http://example.com/app/;
proxy_set_header Accept-Encoding "";
sub_filter_types *;
sub_filter 'href="/' 'href="/app/';
sub_filter 'src="/' 'src="/app/';
sub_filter_once off;
}
Does anyone have pointers to what is missing? I am new to Dash, so if I missed adding any information please let me know; I will be happy to give additional details.
Thanks
PS: I posted the same question on the Dash forum. Posting it here for better reach.
Edit:
To add additional context, I found out that the URL for _dash-component-suites is generated as expected: www.example.com/app/dashapp/_dash-component-suites. I went through the Dash source code to understand how the URLs are generated. Both _dash-component-suites and _dash-layout are prefixed with routes_pathname_prefix. Lines 428 to 448 of dash.py in version 1.14.0 have the code for building the URLs.
This is confusing!
I was able to fix this by removing the sub_filter directive from the Nginx conf and updating the URL prefixes in the Flask app. The steps I took are posted on the Dash forum.
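For reference, "updating the URL prefixes" roughly means telling Dash that the browser reaches the app behind /app/ while Flask still serves it at /dashapp/. A minimal sketch, assuming the /app/ proxy location from the question (the exact values are assumptions):

import dash
import flask

server = flask.Flask(__name__)

dash_app = dash.Dash(
    server=server,
    # path Flask serves Dash under (unchanged from the question)
    routes_pathname_prefix='/dashapp/',
    # path the browser should use when going through the Nginx proxy;
    # it must start with '/' and end with routes_pathname_prefix
    requests_pathname_prefix='/app/dashapp/',
)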
I've read the official docs yet I'm not quite sure I understand how to apply what they tell. Also I've seen this QA, I also use a factory pattern. Just can't see the whole picture.
The connection pool, as well as other redis/huey settings, may differ depending on the given environment (development, production). How do we wire huey up so we can configure it similarly to the Flask application?
As far as I understand, to fire a task from a view we need to import the tasks module and call the specific task (call a function passing the sensitive params). Where should we instantiate and keep the huey instance?
Should tasks know about the application dependencies? Should we consider another stripped-down Flask app for this matter?
Can you help a little bit?
Here's how I wired it all up.
First off, here's the contents of my project folder:
Get a stripped-down Flask application to be used by your tasks. As was suggested in the post, I created a secondary application factory:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

# `config` is assumed to be your existing mapping of config names to config objects
from config import config

# global dependencies
db = SQLAlchemy()

def create_app_huey(config_name):
    app = Flask(__name__)

    # apply configuration
    app.config.from_object(config[config_name])

    # init extensions
    db.init_app(app)

    return app
Create the tasking package. Two important files here are config.py and tasks.py. This post helped a lot. Let's start with configuration. Note, this is a very simple approach.
# config.py (app.tasking.config)
import os
from huey import RedisHuey
settings__development = {
'host': 'localhost'
}
settings__testing = {
'host': 'localhost'
}
settings__production = {
'host': 'production_server'
}
settings = {
'development': settings__development,
'testing': settings__testing,
'production': settings__production,
'default': settings__development
}
huey = RedisHuey(**settings[os.getenv('FLASK_ENV') or 'default'])
Then the tasks.py module will look like this:
import os
from app.tasking.config import huey
from app import create_app_huey
app = create_app_huey(config_name=os.getenv('FLASK_ENV') or 'default')
@huey.task()
def create_thumbnails(document):
pass
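The task above is left as a stub. If tasks need the application's dependencies (the db extension, app config, and so on), one option, sketched here and not taken from the original post, is to push an application context inside the task body:

@huey.task()
def create_thumbnails(document):
    # push the Flask app context so extensions such as SQLAlchemy
    # can be used inside the task; `app` is the instance created above
    with app.app_context():
        pass  # do the actual thumbnail work here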
Run the consumer. Activate your virtual environment. Then run from cmd (I'm on Windows):
huey_consumer.py app.tasking.config.huey
Where app.tasking.config is a package.package.module path (in my case!) and huey is the name of the huey instance available in the config module. Check your huey instance name.
Reading this helped.
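To fire a task from a Flask view (the original question), you just import the decorated function and call it; calling a huey task enqueues it rather than running it inline. A minimal sketch (the route and argument names are assumptions):

from flask import Blueprint

from app.tasking.tasks import create_thumbnails

bp = Blueprint('documents', __name__)

@bp.route('/documents/<int:document_id>/thumbnails', methods=['POST'])
def request_thumbnails(document_id):
    # enqueue the task; the huey consumer process picks it up
    create_thumbnails(document_id)
    return '', 202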
I have a few services, such as a Redis server and a Centrifuge server, that run alongside Django. I would like to start the services and the servers using a single command.
Something like:
python setup.py run
By default, this will run all the services on 127.0.0.1, and the variables below from settings.py make sense.
#Sample settings.py
#Django rq configs
RQ_SHOW_ADMIN_LINK = True
RQ_QUEUES = {
'default': {
'HOST': 'localhost',
'PORT': 6379,
'DB': 0,
'DEFAULT_TIMEOUT': 360,
},
}
# centrifuge configuration
CENTRIFUGE_WEBSOCKET = 'ws://127.0.0.1:8080/connection/websocket'
CENTRIFUGE_ADDRESS = 'http://127.0.0.1:8080'
I would like to run
python setup.py run 172.19.78.179
Here, the services will be running on 172.19.78.179, and I would like to change the settings to this IP.
# centrifuge configuration
CENTRIFUGE_WEBSOCKET = 'ws://172.19.78.179:8080/connection/websocket'
CENTRIFUGE_ADDRESS = 'http://172.19.78.179:8080'
I don't want to set it in live settings or through the admin interface. How do I go about solving this problem?
If the IP is going to be dynamic, you can try using os.environ, like this:
import os
IP = os.environ['IP']
...
So you can run it like: IP=172.19.78.179 python setup.py run
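The settings values can then be built from that variable; a sketch of how the Centrifuge settings from the question might use it (keeping the same port 8080 as an assumption):

import os

IP = os.environ.get('IP', '127.0.0.1')

# centrifuge configuration built from the IP environment variable
CENTRIFUGE_WEBSOCKET = 'ws://{0}:8080/connection/websocket'.format(IP)
CENTRIFUGE_ADDRESS = 'http://{0}:8080'.format(IP)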
Or if you prefer, you can use a separate settings file and call it with the --settings flag:
django-admin runserver --settings=mysite.settings
You can read more about Django environment settings here.
Hope this helps.
I am attempting to deploy a Flask app to Heroku. I'm using Peewee as an ORM for a Postgres database. When I follow the standard Heroku steps to deploying Flask, the web process crashes after I enter heroku ps:scale web=1. Here's what the logs say:
Starting process with command `python app.py`
/app/.heroku/venv/lib/python2.7/site-packages/peewee.py:2434: UserWarning: Table for <class 'flask_peewee.auth.User'> ("user") is reserved, please override using Meta.db_table
cls, _meta.db_table,
Traceback (most recent call last):
File "app.py", line 167, in <module>
auth.User.create_table(fail_silently=True)
File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 2518, in create_table if fail_silently and cls.table_exists():
File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 2514, in table_exists return cls._meta.db_table in cls._meta.database.get_tables()
File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 507, in get_tables ORDER BY c.relname""")
File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 313, in execute cursor = self.get_cursor()
File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 310, in get_cursor return self.get_conn().cursor()
File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 306, in get_conn self.connect()
File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 296, in connect self.__local.conn = self.adapter.connect(self.
database, **self.connect_kwargs)
File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 199, in connect return psycopg2.connect(database=database, **kwargs)
File "/app/.heroku/venv/lib/python2.7/site-packages/psycopg2/__init__.py", line 179, in connect connection_factory=connection_factory, async=async)
psycopg2.OperationalError: could not connect to server: No such file or directory
Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Process exited with status 1
State changed from starting to crashed
I've tried a bunch of different things to get Heroku to allow my app to talk to a Postgres db, but haven't had any luck. Is there an easy way to do this? What do I need to do to configure Flask/Peewee so that I can use a db on Heroku?
According to the Peewee docs, you don't want to use Proxy() unless your local database driver is different from your remote one (e.g. locally you're using SQLite and remotely you're using Postgres). If, however, you are using Postgres both locally and remotely, it's a much simpler change. In this case, you only need to change the connection values (database name, username, password, host, port, etc.) at runtime and do not need to use Proxy().
Peewee has a built-in URL parser for database connections. Here's how to use it:
import os
from peewee import *
from playhouse.db_url import connect
db = connect(os.environ.get('DATABASE_URL'))
class BaseModel(Model):
class Meta:
database = db
In this example, peewee's db_url module reads the environment variable DATABASE_URL and parses it to extract the relevant connection variables. It then creates a PostgresqlDatabase object with those values.
Locally, you'll want to set DATABASE_URL as an environment variable. You can do this according to the instructions of whatever shell you're using. Or, if you want to use the Heroku toolchain (launch your local server using heroku local) you can add it to a file called .env at the top level of your project. For the remote setup, you'll want to add your database URL as a remote Heroku environment variable. You can do this with the following command:
heroku config:set DATABASE_URL=postgresql://myurl
You can find that URL by going into Heroku, navigating to your database, and clicking on "database credentials". It's listed under URI.
Are you parsing the DATABASE_URL environment variable? It will look something like this:
postgres://username:password@host:port/database_name
So you will want to pull that in and parse it before you open a connection to your database. Depending on how you've declared your database (in your config or next to your wsgi app) it might look like this:
import os
import urlparse
urlparse.uses_netloc.append('postgres')
url = urlparse.urlparse(os.environ['DATABASE_URL'])
# for your config
DATABASE = {
    'engine': 'peewee.PostgresqlDatabase',
    'name': url.path[1:],
    'user': url.username,
    'password': url.password,
    'host': url.hostname,
    'port': url.port,
}
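If instead you're declaring the database next to your WSGI app rather than through flask-peewee's config, the same parsed values can be passed straight to peewee; a sketch using the url parsed above:

from peewee import PostgresqlDatabase

db = PostgresqlDatabase(
    url.path[1:],           # database name
    user=url.username,
    password=url.password,
    host=url.hostname,
    port=url.port,
)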
See the notes here: https://devcenter.heroku.com/articles/django
First set a config var on Heroku so the app can detect which environment it is running in:
heroku config:set HEROKU=1
import os
import urlparse
import psycopg2
from flask import Flask
from flask_peewee.db import Database

if 'HEROKU' in os.environ:
    DEBUG = False
    urlparse.uses_netloc.append('postgres')
    url = urlparse.urlparse(os.environ['DATABASE_URL'])
    DATABASE = {
        'engine': 'peewee.PostgresqlDatabase',
        'name': url.path[1:],
        'user': url.username,
        'password': url.password,
        'host': url.hostname,
        'port': url.port,
    }
else:
    DEBUG = True
    DATABASE = {
        'engine': 'peewee.PostgresqlDatabase',
        'name': 'framingappdb',
        'user': 'postgres',
        'password': 'postgres',
        'host': 'localhost',
        'port': 5432,
        'threadlocals': True
    }

app = Flask(__name__)
app.config.from_object(__name__)
db = Database(app)
Modified coleifer's answer to answer hasenj's comment.
Please mark one of these as the accepted answer.
I have managed to get my Flask app which uses Peewee working on Heroku using the below code:
# persons.py
import os
from peewee import *

db_proxy = Proxy()

# Define your models here
class Person(Model):
    name = CharField(max_length=20, unique=True)
    age = IntegerField()

    class Meta:
        database = db_proxy

# Import modules based on the environment.
# The HEROKU value first needs to be set on Heroku
# either through the web front-end or through the command
# line (if you have Heroku Toolbelt installed, type the following:
# heroku config:set HEROKU=1).
if 'HEROKU' in os.environ:
    import urlparse, psycopg2
    urlparse.uses_netloc.append('postgres')
    url = urlparse.urlparse(os.environ["DATABASE_URL"])
    db = PostgresqlDatabase(database=url.path[1:], user=url.username, password=url.password, host=url.hostname, port=url.port)
    db_proxy.initialize(db)
else:
    db = SqliteDatabase('persons.db')
    db_proxy.initialize(db)

if __name__ == '__main__':
    db_proxy.connect()
    db_proxy.create_tables([Person], safe=True)
You should already have a Postgres database add-on attached to your app. You can do this via the command line or through the web front-end. Assuming that the database is already attached to your app and you have already deployed your app with the above changes, log in to Heroku and create the table(s):
$ heroku login
$ heroku run bash
$ python persons.py
Check that the table was created:
$ heroku pg:psql
your_app_name::DATABASE=> \dt
You then import this file (persons.py in this example) in another Python script, e.g. a request handler. You need to manage the database connection explicitly:
# server.py
from flask import g
from persons import db_proxy

@app.before_request
def before_request():
    g.db = db_proxy
    g.db.connect()

@app.after_request
def after_request(response):
    g.db.close()
    return response
…
References:
https://devcenter.heroku.com/articles/heroku-postgresql
http://peewee.readthedocs.org/en/latest/peewee/database.html#dynamically-defining-a-database
http://peewee.readthedocs.org/en/latest/peewee/example.html#establishing-a-database-connection
https://stackoverflow.com/a/20131277/3104465
https://gist.github.com/fprieur/9561148
I've been working on a new dev platform using nginx/gunicorn and Flask for my application.
Ops-wise, everything works fine - the issue I'm having is with debugging the Flask layer. When there's an error in my code, I just get a straight 500 error returned to the browser and nothing shows up on the console or in my logs.
I've tried many different configs/options.. I guess I must be missing something obvious.
My gunicorn.conf:
import os
bind = '127.0.0.1:8002'
workers = 3
backlog = 2048
worker_class = "sync"
debug = True
proc_name = 'gunicorn.proc'
pidfile = '/tmp/gunicorn.pid'
logfile = '/var/log/gunicorn/debug.log'
loglevel = 'debug'
An example of some Flask code that borks, testserver.py:
from flask import Flask
from flask import render_template_string
from werkzeug.contrib.fixers import ProxyFix
app = Flask(__name__)
@app.route('/')
def index():
    n = 1 / 0
    return "DIV/0 worked!"
And finally, the command to run the flask app in gunicorn:
gunicorn -c gunicorn.conf.py testserver:app
Thanks y'all
The accepted solution doesn't work for me.
Gunicorn is a pre-forking environment and apparently the Flask debugger doesn't work in a forking environment.
Attention
Even though the interactive debugger does not work in
forking environments (which makes it nearly impossible to use on
production servers) [...]
Even if you set app.debug = True, you will still only get an empty page with the message Internal Server Error if you run with gunicorn testserver:app. The best you can do with gunicorn is to run it with gunicorn --debug testserver:app. That gives you the trace in addition to the Internal Server Error message. However, this is just the same text trace that you see in the terminal and not the Flask debugger.
Adding the if __name__ ... section to the testserver.py and running python testserver.py to start the server in development gets you the Flask debugger. In other words, don't use gunicorn in development if you want the Flask debugger.
app = Flask(__name__)
app.config['DEBUG'] = True
if __name__ == '__main__':
app.run()
## Tip for Heroku users:
Personally I still like to use `foreman start`, instead of `python testserver.py` since [it sets up all the env variables for me](https://devcenter.heroku.com/articles/config-vars#using-foreman). To get this to work:
Contents of Procfile
web: bin/web
Contents of bin/web, file is relative to project root
#!/bin/sh
if [ "$FLASK_ENV" == "development" ]; then
python app.py
else
gunicorn app:app -w 3
fi
In development, create a .env file relative to project root with the following contents (docs here)
FLASK_ENV=development
DEBUG=True
Also, don't forget to change the app.config['DEBUG']... line in testserver.py to something that won't run Flask in debug mode in production.
app.config['DEBUG'] = os.environ.get('DEBUG', False)
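Note that environment variables are strings, so os.environ.get('DEBUG', False) returns the string 'True' (which is truthy) whenever the variable is set at all. If you want a stricter parse, a small sketch:

import os

# treat only explicit truthy strings as debug mode
app.config['DEBUG'] = os.environ.get('DEBUG', 'False').lower() in ('1', 'true', 'yes')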
The Flask config is entirely separate from gunicorn's. Following the Flask documentation on config files, a good solution would be to change my source to this:
app = Flask(__name__)
app.config.from_pyfile('config.py')
And in config.py:
DEBUG = True
For Heroku users, there is a simpler solution than creating a bin/web script as suggested by Nick.
Instead of foreman start, just use foreman run python app.py if you want to debug your application in development.
I had a similar problem when running Flask under gunicorn: I didn't see stack traces in the browser (I had to look at the logs every time). Setting DEBUG, FLASK_DEBUG, or anything else mentioned on this page didn't work. Finally I did this:
app = Flask(__name__)
app.config.from_object(settings_map[environment])

if environment == 'development':
    from werkzeug.debug import DebuggedApplication
    app_runtime = DebuggedApplication(app, evalex=False)
else:
    app_runtime = app
Note evalex is disabled because interactive debugging won't work with forking (gunicorn).
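With this wrapper, gunicorn has to be pointed at the wrapped object rather than the raw app. Assuming the module above is saved as wsgi.py (the filename is an assumption), the command would look something like:

gunicorn wsgi:app_runtime -w 4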
I used this:
gunicorn "swagger_server.__main__:app" -w 4 -b 0.0.0.0:8080
You cannot really run it with gunicorn and, for example, use the Flask reload option upon code changes.
I've used the following snippets in my API entry point:
app = Flask(__name__)

try:
    if os.environ["yourapp_environment"] == "local":
        run_as_local = True
        # some other local configs e.g. paths
        app.logger.info('Running server in local development mode!')
except KeyError as err:
    if "yourapp_environment" in err.args:
        run_as_local = False
        # some other production configs e.g. paths
        app.logger.info('No "yourapp_environment" env given, so running server in production mode!')
    else:
        raise

...
...
...

if __name__ == '__main__':
    if run_as_local:
        app.run(host='127.0.0.1', port=8058, debug=True)
    else:
        app.run(host='0.0.0.0')
For the above solution you need to run export yourapp_environment="local" in the console.
Now I can run locally with python api.py and in production with gunicorn --bind 0.0.0.0:8058 api:app.
The app.run() in the else branch is not actually needed, but I keep it as a reminder about the host, port, etc.
Try setting the debug flag on the run command like so
gunicorn -c gunicorn.conf.py --debug testserver:app
and keep DEBUG = True in your Flask application. There must be a reason why your debug option is not being applied from the config file, but for now the above should get you going.