Heroku Postgresql local connection - python

Everything worked great until today, and I haven't changed anything in the code.
I can't connect to the database, either from the application or from the IDE. Did something go wrong on the Heroku side? I haven't seen any news about global updates on the Heroku website over the past couple of days. Can anyone advise how to solve this problem?
I use the free dyno and the free Postgres plan, and I'm definitely well under the plan's limits (fewer than a thousand rows). It looks as though access to the database is being blocked locally, not on the service side.

What I would try in your situation:
Go to https://data.heroku.com/, select your Datastore and check everything there: Health, number of connections, number of rows, data size.
If everything is fine: Go to Settings -> Database Credentials and try setting up a connection from any desktop tool such as Navicat or pgAdmin. What error message do you get?
Set up another database on Heroku and try the same thing. If the second DB works, there is an issue with the first one. If it does not, the problem is more likely in your setup/settings.
Hope that helps
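If you would rather test from Python than from a desktop tool, a minimal psycopg2 check along these lines can tell you whether the credentials work at all (all connection values below are placeholders for the ones shown on the Heroku credentials page; note that Heroku Postgres requires SSL):

import psycopg2

# Placeholder values -- copy the real ones from
# data.heroku.com -> Settings -> Database Credentials.
conn = psycopg2.connect(
    host='ec2-xx-xx-xx-xx.compute-1.amazonaws.com',
    dbname='your_database',
    user='your_user',
    password='your_password',
    port=5432,
    sslmode='require',   # Heroku Postgres only accepts SSL connections
)
cur = conn.cursor()
cur.execute('SELECT 1')
print(cur.fetchone())
conn.close()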

Related

Python loses connection to MySQL database after about a day

I am developing a web-based application using Python, Flask, MySQL, and uWSGI. However, I am not using SQLAlchemy or any other ORM. I am working with a preexisting database from an old PHP application that wouldn't play well with an ORM anyway, so I'm just using mysql-connector and writing queries by hand.
The application works correctly when I first start it up, but when I come back the next morning I find that it has become broken. I'll get errors like mysql.connector.errors.InterfaceError: 2013: Lost connection to MySQL server during query or the similar mysql.connector.errors.OperationalError: 2055: Lost connection to MySQL server at '10.0.0.25:3306', system error: 32 Broken pipe.
I've been researching it and I think I know what the problem is. I just haven't been able to find a good solution. As best as I can figure, the problem is the fact that I am keeping a global reference to the database connection, and since the Flask application is always running on the server, eventually that connection expires and becomes invalid.
I imagine it would be simple enough to just create a new connection for every query, but that seems like a far from ideal solution. I suppose I could also build some sort of connection caching mechanism that would close the old connection after an hour or so and then reopen it. That's the best option I've been able to come up with, but I still feel like there ought to be a better one.
I've looked around, and most people who have been getting these errors have huge or corrupted tables, or something to that effect. That is not the case here. The old PHP application still runs fine, the tables all have fewer than about 50,000 rows and fewer than 30 columns, and the Python application runs fine until it has sat idle for about a day.
So, here's to hoping someone has a good solution for keeping a continually open connection to a MySQL database. Or maybe I'm barking up the wrong tree entirely; if so, hopefully someone can point me in the right direction.
I have it working now. Using pooled connections seemed to fix the issue for me.
import mysql.connector

# The first connect() call creates the connection pool named 'batman';
# later calls that pass only pool_name borrow connections from that pool.
mysql.connector.connect(
    host='10.0.0.25',
    user='xxxxxxx',
    passwd='xxxxxxx',
    database='xxxxxxx',
    pool_name='batman',
    pool_size=3,
)

def connection():
    """Get a connection and a cursor from the pool."""
    db = mysql.connector.connect(pool_name='batman')
    return (db, db.cursor())
I call connection() before each query function and then close the cursor and connection before returning. Seems to work. Still open to a better solution though.
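For illustration, a query function using that helper might look roughly like this (the table and column names are made up):

def get_user(user_id):
    """Hypothetical query function built on the pooled connection() helper above."""
    db, cursor = connection()
    try:
        cursor.execute('SELECT name, email FROM users WHERE id = %s', (user_id,))
        return cursor.fetchone()
    finally:
        cursor.close()
        db.close()   # for a pooled connection this returns it to the pool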
Edit
I have since found a better solution. (I was still occasionally running into issues with the pooled connections). There is actually a dedicated library for Flask to handle mysql connections, which is almost a drop-in replacement.
From bash: pip install Flask-MySQL
Add MYSQL_DATABASE_HOST, MYSQL_DATABASE_USER, MYSQL_DATABASE_PASSWORD, MYSQL_DATABASE_DB to your Flask config. Then in the main Python file containing your Flask App object:
from flaskext.mysql import MySQL
mysql = MySQL()
mysql.init_app(app)  # app is your existing Flask application object
And to get a connection: mysql.get_db().cursor()
All other syntax is the same, and I have not had any issues since. Been using this solution for a long time now.
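As a usage sketch (the route and query are invented for illustration, using the mysql.get_db() call from the note above):

@app.route('/users/<int:user_id>')
def show_user(user_id):
    # Hypothetical route; Flask-MySQL manages the underlying connection.
    cursor = mysql.get_db().cursor()
    cursor.execute('SELECT name FROM users WHERE id = %s', (user_id,))
    row = cursor.fetchone()
    return row[0] if row else ('User not found', 404)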

Python ejabberd Auth Script not responding to changes in Database

I have an authentication script in ejabberd (an XMPP server) that is based on THIS LINK
I have slightly modified the script so that instead of setting the variable out, it just returns true or false.
I'm using Ubuntu, MySQL, ejabberd, and Python.
I can authenticate all the records that are already in the database. But when I add or remove records (I do this through phpMyAdmin), the script doesn't seem to know that the database has changed (I remove a user in phpMyAdmin and it still authenticates the user). The only time the script recognizes the new records is when I restart or force-reload the ejabberd server. I've already been told it's not a MySQL caching problem, and I made sure I turned off external authentication caching for ejabberd.
That's all I can think of right now. I'll add more information if I can think of it. Any help is appreciated. I have no idea what is going on.
Addition: I turned on the MySQL logs, and all the queries are there, so it is not skipping queries.
I managed to fix this problem by changing the database engine back to MyISAM rather than InnoDB. But I would like to know if this can be fixed for InnoDB.
Edit: to fix it with InnoDB, set autocommit to true.
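The likely reason: InnoDB's default REPEATABLE READ isolation gives a long-lived connection a consistent snapshot, so rows added or removed through phpMyAdmin after the script's transaction began stay invisible until the script commits, while MyISAM has no transactions at all. Assuming the script connects with mysql-connector-python (the question does not name the driver; with MySQLdb the equivalent is conn.autocommit(True)), enabling autocommit might look like this:

import mysql.connector

# Placeholder credentials; autocommit=True ends each implicit transaction
# immediately, so every SELECT sees the latest committed data.
conn = mysql.connector.connect(
    host='localhost',
    user='ejabberd_auth',
    passwd='secret',
    database='chat_users',
    autocommit=True,
)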

Server Upgrade Script

Does anyone have or know of a good template / plan for doing automated server upgrades? In this case I am upgrading a Python/Django server, but I am going to have to apply this update to many machines, and I want to be sure that the operation is fully testable and recoverable should anything go wrong.
I'm picturing something along the lines of:
remotely fetch new code
verify code download (e.g. hash of files)
take down server, display a "you are upgrading" dialog
backup database(s)
backup code directory
apply new code updates
verify code update (e.g. hash of files)
apply database update (if necessary)
run tests
if success
    startup server
    verify server update
else
    restore old database
    restore old code
    report error
    startup server
    verify server restore
I'm sure that this isn't exhaustive, and there are many other error conditions to consider, but I am wondering whether something like this already exists as a formalized process or best-practices checklist to follow. Ideally the whole thing should of course be done by a single script call.
Once you have a plan (and yours looks pretty good), the Fabric site should be your next stop.
I think you're pretty much covering everything. Identify what's important to you and your business practices: that's what counts.
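For what it's worth, a rough sketch of the checklist above using the Fabric 1.x API mentioned in the first answer might start out like this (hosts, paths, and every shell command are placeholders, and the rollback branch is only hinted at):

from fabric.api import env, run, cd, settings, task

env.hosts = ['web1.example.com', 'web2.example.com']   # placeholder hosts
APP_DIR = '/srv/myproject'                             # placeholder path

@task
def upgrade(version):
    """Rough automation of the checklist above; every command is a stand-in."""
    with cd(APP_DIR):
        run('git fetch origin')                                   # fetch new code
        run('git verify-tag %s' % version)                        # verify download
        run('touch maintenance.on')                               # "upgrading" page
        run('pg_dump mydb > /srv/backups/db-%s.sql' % version)    # back up database
        run('tar czf /srv/backups/code-%s.tgz .' % version)       # back up code
        run('git checkout %s' % version)                          # apply new code
        run('python manage.py syncdb')                            # apply db updates
        with settings(warn_only=True):
            result = run('python manage.py test')                 # run tests
        if result.failed:
            run('git checkout -')     # restore old code (restore the db dump too)
        run('rm -f maintenance.on')
        run('touch django.wsgi')      # placeholder: however you reload the server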

Why does my remote MongoDB connection require authentication on every query?

After fighting with different things here and there, I was finally able to get BottlePY running on Apache and run a MongoDB powered site. I am used to running Django apps, so I will be relating to that a bit in my question.
The Problem
Every time a page is loaded via BottlePY, the connection to the MongoDB database located on MongoHQ.com needs to be re-authenticated (meaning it probably had to reconnect).
What I Found
I attached a db.keep_alive() function to the top of each model function, so that before any MongoDB query is run, it tries to run a simple query. If that fails, it catches the OperationFailure or AutoReconnect error and then calls the db.authenticate() function. After it reauthenticates, I have it add a log entry to a logs db so I can monitor how often it needs to reauthenticate. Currently, it needs to reauthenticate on every page load that runs a query. This isn't right.
Difference from Django
I use this same concept in Django, and have found that the db connection only needs to be re-authenticated after 10-15 minutes of no queries being run.
I don't understand why creating a pymongo connection in Django would be different from creating one in Bottle, since I am using the same driver, functions, and methods. I am not using any ORMs or anything like that either.
Versions
Bottle: 0.9.dev
Django: 1.2.1 final
PyMongo: 1.8
I appreciate the help!
Update: A friend was able to take a quick look and noticed the following, which may help with answering my question: it appears that each request is launching a new Python process, as opposed to Django, in which a single process remains running for a long period of time.
This just ended up being a weird quirk between Bottle and MongoHQ. No real solution was found, and I couldn't recreate it with other frameworks. Any other ideas are appreciated.
Does your Apache xxx.conf contain something like:
WSGIDaemonProcess project user=mysite group=www-data processes=5 threads=1
WSGIProcessGroup project
I think the most important part is threads=1.
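If each request really is spawning a new process, then the config above (long-lived daemon processes) plus creating and authenticating the PyMongo connection once at module level, rather than per request, should mean each process authenticates only once. A minimal sketch against the PyMongo 1.x API from the question (host, database, and credentials are placeholders):

from pymongo import Connection
from bottle import route

# Created once when the daemon process imports this module, then reused
# by every request that process handles.  All values below are placeholders.
connection = Connection('flame.mongohq.com', 27017)
db = connection['my_database']
db.authenticate('my_user', 'my_password')

@route('/count')
def count():
    # No re-authentication needed as long as this process stays alive.
    return {'documents': db.my_collection.count()}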

Django ORM and PostgreSQL connection limits

I'm running a Django project on PostgreSQL 8.1.21 (using Django 1.1.1, Python 2.5, psycopg2, and Apache2 with mod_wsgi 3.2). We've recently encountered this lovely error:
OperationalError: FATAL: connection limit exceeded for non-superusers
I'm not the first person to run up against this. There's a lot of discussion about this error, specifically with psycopg, but much of it centers on older versions of Django and/or offers solutions involving edits to Django's own code. I've yet to find a succinct explanation of how to stop the Django ORM (or psycopg, whichever is really responsible in this case) from leaving Postgres connections open.
Will simply adding connection.close() at the end of every view solve this problem? Better yet, has anyone conclusively solved this problem and kicked this error's ass?
Edit: we later upped PostgreSQL's limit to 500 connections; this prevented the error from cropping up, but replaced it with excessive memory usage.
This could also be caused by other things, for example configuring Apache/mod_wsgi so that it can theoretically accept more concurrent requests than the database itself can accept at the same time. Have you reviewed your Apache/mod_wsgi configuration and compared its maximum-clients limit to PostgreSQL's connection limit to make sure something like that hasn't happened? Obviously this presumes that you have somehow managed to reach that limit in Apache, and it also depends on how any database connection pooling is set up.
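On the original question of closing connections per request: rather than adding connection.close() to every view, it could be centralized in a small middleware. A sketch against the Django 1.1-era middleware API (the class and module names are made up; note that Django already closes the connection at the end of each request via its request_finished signal, so this mainly guards against code paths where that isn't happening):

# myproject/middleware.py  (hypothetical module)
from django.db import connection

class CloseDBConnectionMiddleware(object):
    """Close the ORM connection after every response instead of editing views."""

    def process_response(self, request, response):
        connection.close()
        return response

Then add 'myproject.middleware.CloseDBConnectionMiddleware' to MIDDLEWARE_CLASSES in settings.py.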