SQLAlchemy and (2006, 'MySQL server has gone away') - python

So, after reading every page I could find over the last hour, I'm still not able to find a solution to this problem.
This is my connection.py file:
from sqlalchemy import create_engine, Table, Column, String, Integer, MetaData
from sqlalchemy.sql import select

class DatabaseConnectionManager:
    def __init__(self):
        self.host = 'localhost:3306'
        self.db = 'xxx'
        self.user = 'xxx'
        self.pwd = 'xxx'
        self.connect_string = 'mysql://{u}:{p}@{s}/{d}'.format(u=self.user,
                                                               p=self.pwd,
                                                               s=self.host,
                                                               d=self.db)
        self.metadata = MetaData()
        self.engine = create_engine(self.connect_string, echo=False,
                                    pool_size=100, pool_recycle=3600)
        self.conn = self.engine.connect()

    def insert_table(self, inputs):
        self.conn.execute(self.tbl_auctions().insert(), inputs)
        # Update 1: conn.close() removed.
        #self.conn.close()

    def select_servers(self):
        try:
            s = select([self.tbl_servers()])
            result = self.conn.execute(s)
        except:
            raise
        else:
            return result
And this is my bulk_inserter.py file:
import sys
import time
import json

from connector import DatabaseConnectionManager

def build_auctions_list():
    server_list = []
    db = DatabaseConnectionManager()
    # Loop over regions
    for server, env in db.select_servers_table():
        request_auction_data = json.loads(dump_auction_url(region, realm))
        for auction in request_auction_data:
            auction_list.append(auction)
    db.insert_table(server_list)

if __name__ == '__main__':
    start = time.time()
    build_auctions_list()
    print time.time() - start
So, the problem happens when I try to insert all the bulk data using db.insert_table(server_list) for two or more servers returned by the loop for server, env in db.select_servers_table():
But if that loop returns only one server, everything flows normally without problems.
So, to summarize:
this program retrieves a list of servers from a db table and dumps the JSON data into the db.
Bulk insert performs well if only one server is retrieved.
if two or more servers are retrieved, the following error happens:
sqlalchemy.exc.OperationalError: (OperationalError) (2006, 'MySQL server has gone away')
Anyone have any idea what could be happening? I already increased the timeout and buffer size in the MySQL config file, so I'm not sure what the problem could be...
Update #1: It seems I can't bulk insert an array with more than 50k values. I'm still trying to figure out how to do it.
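For what it's worth, a single multi-row INSERT that large can exceed MySQL's max_allowed_packet, which is one of the classic causes of (2006, 'MySQL server has gone away'). A common workaround is to split the bulk insert into fixed-size chunks before handing it to insert_table. A minimal sketch (the chunk size of 10000 is an arbitrary assumption, and the commented call mirrors the code above):

```python
def chunked(rows, size=10000):
    """Yield successive fixed-size slices of a list of rows."""
    for start in range(0, len(rows), size):
        yield rows[start:start + size]

# instead of db.insert_table(server_list) in one shot:
# for batch in chunked(server_list, size=10000):
#     db.insert_table(batch)

# the helper itself is easy to check:
batches = list(chunked(list(range(25)), size=10))
print([len(b) for b in batches])  # [10, 10, 5]
```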

Related

SQLAlchemy keeps the DB connection open

I have an issue in my Flask app concerning SQLAlchemy and MySQL.
In one of my files, connection.py, I have a function that creates a DB connection and sets it as a global variable:
db = None

def db_connect(force=False):
    global db
    db = pymysql.connect(.....)

def makecursor():
    cursor = db.cursor(pymysql.cursors.DictCursor)
    return db, cursor
And then I have a User model created with SQLAlchemy in models.py:
class User(Model):
    id = column........
It inherits from Model, which is a class that I create in another file, orm.py:
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import scoped_session, sessionmaker

import connection

Engine = create_engine(url, creator=lambda x: connection.makecursor()[0], pool_pre_ping=True)
session_factory = sessionmaker(bind=Engine, autoflush=autoflush)
Session = scoped_session(session_factory)

class _Model:
    query = Session.query_property()

Model = declarative_base(cls=_Model, constructor=model_constructor)
In my application I can have long-running scripts, so the DB connection times out. So I have a function that "reconnects" the DB (it actually just creates a new connection and replaces the global db variable).
My goal is to be able to catch the closing of my DB connection and reconnect instantly. I tried with SQLAlchemy events but it never worked. (here)
Here is some line that reproduces the error:
res = User.query.filter_by(username="myuser@gmail.com").first()
connection.db.close()
# connection.reconnect() # --> SOLUTION
res = User.query.filter_by(username="myuser@gmail.com").first()
If you guys have any ideas of how to achieve that, let me know 🙏🏻
Oh, and I forgot: this application is still running on Python 2.7.
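The "reconnect when the global connection has gone away" idea from the question can be sketched without a database at all. FakeConnection below is a made-up stand-in for pymysql.connect, just to show the pattern of replacing the module-level connection on failure, as connection.py's reconnect function does:

```python
class FakeConnection:
    """Stand-in for a pymysql connection, just to demonstrate the pattern."""
    def __init__(self):
        self.closed = False

    def query(self, sql):
        if self.closed:
            raise RuntimeError("connection is closed")
        return "ok"

    def close(self):
        self.closed = True

db = FakeConnection()

def reconnect():
    # replace the global connection, as connection.py does
    global db
    db = FakeConnection()

def run_query(sql):
    """Run a query, reconnecting once if the connection has gone away."""
    try:
        return db.query(sql)
    except RuntimeError:
        reconnect()
        return db.query(sql)

run_query("SELECT 1")         # "ok"
db.close()
print(run_query("SELECT 1"))  # reconnects and prints "ok"
```

With a real driver the except clause would catch the driver's OperationalError instead of RuntimeError.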

TypeError: issubclass() arg 2 must be a class or tuple of classes

I've been using the same class for months for connecting to a SQL Server database, running queries, inserting data into staging tables, etc. Just since yesterday, whenever my code tries to insert into a staging table, I get the error:
TypeError: issubclass() arg 2 must be a class or tuple of classes
Debugging, I learned that this is happening in the method _relationships_for_fks in the automap.py module (SQLAlchemy library). Specifically, this block fails because referred_cls is None, which is not supported by issubclass():
if local_cls is not referred_cls and issubclass(
        local_cls, referred_cls):
I'm on Python 3.6 and SQLAlchemy 1.2.15 (and haven't upgraded anything lately). I have changed no code, and this error has just started. Below is the class I'm using for all SQL operations in my code. Any ideas are MUCH appreciated, as I cannot figure out why I keep getting this error (and it's not always consistent: every third time or so, the code runs just fine). The method that fails is get_table_class, called from save_dataframe_to_table, which is called in various other places throughout my code (whenever I have to save data to a table in SQL Server). The specific line of code that errors in this class is Base.prepare(engine, reflect=True).
#!/usr/bin/python
""" Connect and interact with a SQL Server database
Contains a class used for connecting and interacting with a SQL Server database.
"""
from common.Util.Logging import Logging
from common.Util.OSHelpers import get_log_filepath
import pandas
import urllib
import pyodbc
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm import sessionmaker
Base = automap_base()
class SqlDatabase:
    """Connect and interact with a SQL Server database"""
    def __init__(self, server, database, driver, port, username, password, logging_obj=None):
        """ Create a common.DataAccess.SqlDatabase.SqlDatabase object
        Args:
            server: (str) name of the SQL Server
            database: (str) name of the database on the SQL Server
            driver: (str) name of the driver for use in the connection string, e.g. '{ODBC Driver 13 for SQL Server}'
            port: (str) SQL Server port number (typically this is 1433)
            username: (str) SQL Server username (leave blank to use Windows Authentication)
            password: (str) SQL Server password (leave blank to use Windows Authentication)
            logging_obj: (common.Util.Logging.Logging) initialized logging object
        """
        # Set class variables
        if logging_obj is None:
            log_filename = get_log_filepath('Python App')
            logging_obj = Logging(name=__name__, log_filename=log_filename, log_level_str='INFO')
        self.logging_obj = logging_obj
        self.server = server
        self.database = database
        self.driver = driver
        self.port = port
        self.username = username
        self.password = password
        self.connection_string = 'Driver=' + self.driver \
                                 + ';SERVER=' + self.server \
                                 + ',' + self.port \
                                 + ';DATABASE=' + self.database \
                                 + ';UID=' + self.username \
                                 + ';PWD=' + self.password
        # Test connection
        self.logging_obj.log(self.logging_obj.DEBUG, "method='common.DataAccess.SqlDatabase.__init__' message='Testing connection'")
        conn = self.open_connection()
        conn.close()
        # Log initialization success
        log_msg = """
        method='common.DataAccess.SqlDatabase.__init__'
        message='Initialized a SqlDatabase object'
        server='{server}'
        database='{database}'
        driver='{driver}'
        port='{port}'
        username='{username}'
        password='{password}'
        connection_string='{connection_string}'
        """.format(server=self.server,
                   database=self.database,
                   driver=self.driver,
                   port=self.port,
                   username=self.username,
                   password='*'*len(self.password),
                   connection_string=self.connection_string)
        self.logging_obj.log(self.logging_obj.INFO, log_msg)
    def open_connection(self):
        """ Open connection
        Opens a connection to a SQL Server database.
        Returns:
            conn: (pyodbc.Connection) connection to a SQL Server database
        """
        self.logging_obj.log(self.logging_obj.DEBUG, "method='common.DataAccess.SqlDatabase.open_connection' message='Opening SQL Server connection'")
        try:
            conn = pyodbc.connect(self.connection_string)
        except Exception as ex:
            self.logging_obj.log(self.logging_obj.ERROR,
                                 """
                                 method='common.DataAccess.SqlDatabase.open_connection'
                                 message='Error trying to open SQL Server connection'
                                 exception_message='{ex_msg}'
                                 connection_string='{cxn_str}'
                                 server='{server}'
                                 port='{port}'
                                 username='{username}'
                                 password='{password}'
                                 database='{database}'""".format(ex_msg=str(ex),
                                                                 cxn_str=self.connection_string,
                                                                 server=self.server,
                                                                 port=self.port,
                                                                 username=self.username,
                                                                 password='*'*len(self.password),
                                                                 database=self.database))
            raise ex
        else:
            self.logging_obj.log(self.logging_obj.DEBUG,
                                 """
                                 method='common.DataAccess.SqlDatabase.open_connection'
                                 message='Successfully opened SQL Server connection'
                                 connection_string='{cxn_str}'
                                 server='{server}'
                                 username='{username}'
                                 password='{password}'
                                 database='{database}'""".format(cxn_str=self.connection_string,
                                                                 server=self.server,
                                                                 username=self.username,
                                                                 password='*' * len(self.password),
                                                                 database=self.database))
            return conn
    def get_engine(self):
        """ Create a sqlalchemy engine
        Returns:
            engine: ()
        """
        self.logging_obj.log(self.logging_obj.DEBUG, "message='Creating a sqlalchemy engine'")
        params = urllib.parse.quote_plus(self.connection_string)
        try:
            engine = sqlalchemy.create_engine("mssql+pyodbc:///?odbc_connect=%s" % params)
        except Exception as ex:
            self.logging_obj.log(self.logging_obj.ERROR,
                                 """
                                 method='common.DataAccess.SqlDatabase.get_engine'
                                 message='Error trying to create a sqlalchemy engine'
                                 exception_message='{ex_msg}'
                                 connection_string='{conn_str}'""".format(ex_msg=str(ex),
                                                                          conn_str=self.connection_string))
            raise ex
        else:
            self.logging_obj.log(self.logging_obj.DEBUG,
                                 """
                                 method='common.DataAccess.SqlDatabase.get_engine'
                                 message='Successfully created a sqlalchemy engine'
                                 connection_string='{conn_str}'
                                 """.format(conn_str=self.connection_string))
            return engine
    def get_result_set(self, query_str):
        """ Get a result set as a Pandas dataframe
        Gets a result set using the pandas.read_sql method.
        Args:
            query_str: (str) query string
        Returns:
            df: (pandas.DataFrame) result set
        """
        log_msg = """
        method='common.DataAccess.SqlDatabase.get_result_set'
        message='Getting a result set'
        query_str='{query_str}'
        """.format(query_str=query_str)
        self.logging_obj.log(self.logging_obj.INFO, log_msg)
        conn = self.open_connection()
        df = pandas.read_sql(query_str, conn)
        conn.close()
        log_msg = """
        method='common.DataAccess.SqlDatabase.get_result_set'
        message='Successfully got a result set'
        query_str='{query_str}'
        """.format(query_str=query_str)
        self.logging_obj.log(self.logging_obj.INFO, log_msg)
        return df
    def execute_nonquery(self, query_str):
        """ Execute a non-query
        Executes a non-query such as a CREATE TABLE or UPDATE statement.
        Args:
            query_str: (str) non-query statement
        Returns:
        """
        log_msg = """
        method='common.DataAccess.SqlDatabase.execute_nonquery'
        message='Executing a non-query'
        query_str='{query_str}'
        """.format(query_str=query_str)
        self.logging_obj.log(self.logging_obj.INFO, log_msg)
        conn = self.open_connection()
        curs = conn.execute(query_str)
        curs.commit()
        curs.close()
        conn.close()
        log_msg = """
        method='common.DataAccess.SqlDatabase.execute_nonquery'
        message='Successfully executed a non-query'
        query_str='{query_str}'
        """.format(query_str=query_str)
        self.logging_obj.log(self.logging_obj.INFO, log_msg)
        return None
    def to_staging_table(self,
                         dataframe,
                         staging_table_name,
                         insert_index=True,
                         index_label=None,
                         if_table_exists='replace',
                         bulkcopy_chunksize=1000):
        """ Puts a pandas.DataFrame into a staging table
        This uses a bulk copy method to put data from a pandas.DataFrame into a SQL staging table.
        Args:
            dataframe: (pandas.DataFrame) dataframe with data to copy into a SQL Server staging table
            staging_table_name: (str) name of the staging table to copy data into
            insert_index: (logical) indicates whether or not to insert an index
            index_label: (str) indicates the column name of the index - if None, an auto-generated index will be used
            if_table_exists: (str) indicates what pandas.DataFrame.to_sql method to use if the table already exists
            bulkcopy_chunksize: (int) number of rows to bulk copy at once
        Returns:
        """
        log_msg = """
        method='common.DataAccess.SqlDatabase.to_staging_table'
        message='Copying data into a staging table'
        staging_table_name='{staging_table_name}'
        """.format(staging_table_name=staging_table_name)
        self.logging_obj.log(self.logging_obj.INFO, log_msg)
        engine = self.get_engine()
        try:
            pandas.DataFrame.to_sql(
                self=dataframe,
                name=staging_table_name,
                con=engine,
                if_exists=if_table_exists,
                index=insert_index,
                index_label=index_label,
                chunksize=bulkcopy_chunksize)
        except Exception as ex:
            self.logging_obj.log(self.logging_obj.ERROR,
                                 """
                                 method='common.DataAccess.SqlDatabase.to_staging_table'
                                 message='Error trying to copy data into a staging table'
                                 exception_message='{ex_msg}'
                                 staging_table_name='{staging_table_name}'""".format(ex_msg=str(ex),
                                                                                     staging_table_name=staging_table_name))
            raise ex
        else:
            self.logging_obj.log(self.logging_obj.DEBUG,
                                 """
                                 method='common.DataAccess.SqlDatabase.to_staging_table'
                                 message='Successfully copied data into a staging table'
                                 staging_table_name='{staging_table_name}'
                                 """.format(staging_table_name=staging_table_name))
        return None
    def truncate_table(self, table_name, schema_name='dbo'):
        """ Truncate a table in the SQL database
        Usually used to truncate staging tables prior to populating them.
        Args:
            table_name: (str) name of the table to truncate
            schema_name: (str) name of the schema of the table to truncate
        Returns:
        """
        query_str = "TRUNCATE TABLE {schema_name}.{table_name}".format(schema_name=schema_name, table_name=table_name)
        self.execute_nonquery(query_str)
    def get_table_class(self, table_name, engine=None):
        """ Get a table's class
        Args:
            engine:
            table_name:
        Returns:
            table_class:
        """
        if engine is None:
            engine = self.get_engine()
        Base.prepare(engine, reflect=True)
        base_classes = Base.classes
        for index, value in enumerate(base_classes):
            class_name = value.__name__
            if class_name == table_name:
                class_index = index
        table_class = list(base_classes)[class_index]
        return table_class
    def save_dataframe_to_table(self,
                                dataframe,
                                table_name,
                                remove_id_column_before_insert=True):
        """ Save a pandas DataFrame to a table in SQL Server
        Args:
            dataframe: (pandas.DataFrame)
            table_name: (str)
        Returns:
        """
        engine = self.get_engine()
        Session = sessionmaker(bind=engine)
        session = Session()
        table = self.get_table_class(table_name, engine)
        if remove_id_column_before_insert:
            delattr(table, table_name + "Id")  # Id columns should always be <table_name>Id (USANA standard)
            dataframe.columns = table.__table__.columns.keys()[1:]  # Id columns should always be the first column in table (for simplicity people!)
        else:
            dataframe.columns = table.__table__.columns.keys()
        dataframe = dataframe.where((pandas.notnull(dataframe)), None)  # replace NaN with None for the bulk insert
        try:
            session.bulk_insert_mappings(table, dataframe.to_dict(orient="records"), render_nulls=True)
        except IntegrityError as e:
            session.rollback()
            self.logging_obj.log(self.logging_obj.ERROR, """method='common.DataAccess.SqlDatabase.save_dataframe_to_table'
            exception_message='{ex}'""".format(ex=str(e)))
        finally:
            session.commit()
            session.close()
The only other hint/clue about this issue is that I also just started getting the following warnings (for a whole set of tables in our database). I hadn't seen these warnings until yesterday.
SAWarning: This declarative base already contains a class with the same class name and module name as sqlalchemy.ext.automap.WeeklySocialSellingProductMetricsReport, and will be replaced in the string-lookup table.
I had a similar problem with an Oracle database, and it turned out that the reason was a difference in the letter case of the schema name. Automap converts Oracle schema names and table names to lowercase, but in metadata.reflect(engine, schema='MYSCHEMA') I provided my schema name in uppercase.
As a result, some tables were discovered twice:
as MYSCHEMA.mytable, probably generated by plain table discovery
as myschema.mytable, probably generated by a relationship discovered from another table
and caused warnings:
sqlalchemy\ext\declarative\clsregistry.py:129: SAWarning: This declarative base already contains a class with the same class name and module name as sqlalchemy.ext.automap.my_table_name, and will be replaced in the string-lookup table.
followed by the TypeError.
The solution was as simple as changing the schema name to lowercase.
This script helped me to spot table duplicates:
from pprint import pprint

from sqlalchemy import create_engine, MetaData
from sqlalchemy.ext.automap import automap_base

engine = create_engine(my_connection_string)
metadata = MetaData()
metadata.reflect(engine, schema='MYSCHEMA')  # I'm using the WRONG letter case here.
Base = automap_base(metadata=metadata)

# prepend table name with schema name
def classname_for_table(base, tablename, table):
    return str(table.fullname.replace(".", "__"))

Base.prepare(classname_for_table=classname_for_table)

# and look what's going on
pprint(Base.classes.keys())

Flask-MySQL gives "closing a closed connection" error the second time a view runs

I am using Flask-MySQL to connect to my database in a view. The view works the first time I go to it, but when I go to it the second time it always crashes with the error:
ProgrammingError: closing a closed connection
Why am I getting this error? How do I connect successfully the second time?
from flask import Flask, render_template, request
from flaskext.mysql import MySQL

app = Flask(__name__)

@app.route('/hello/', methods=['POST'])
def hello():
    app.config['MYSQL_DATABASE_USER'] = 'root'
    app.config['MYSQL_DATABASE_PASSWORD'] = 'xxx'
    app.config['MYSQL_DATABASE_DB'] = 'pies'
    app.config['MYSQL_DATABASE_HOST'] = 'localhost'
    query = request.form['yourname']
    mysql = MySQL(app)
    conn = mysql.connect()
    with conn as cursor:
        try:
            cursor.execute(query)
            name = str(cursor.fetchone())
        except:
            name = "SQL is wrong"
    conn.close()
    return render_template('form_action.html', name=name)

if __name__ == '__main__':
    app.run(debug=True)
You should not be initializing the extension during every request. Create it during app setup, then use it during the requests. Setting configuration should also be done during app setup.
The extension adds a teardown function that executes after every request and closes the connection, which is stored as a threadlocal. Since on the second request you've registered the extension multiple times, it is trying to close the connection multiple times.
There's no need to call connect, the extension adds a before request function that does that. Use get_db to get the connection for the request. Since the extension closes this connection for you, don't call close either.
from flask import Flask
from flaskext.mysql import MySQL

MYSQL_DATABASE_USER = 'root'
MYSQL_DATABASE_PASSWORD = 'xxx'
MYSQL_DATABASE_DB = 'pies'
MYSQL_DATABASE_HOST = 'localhost'

app = Flask(__name__)
app.config.from_object(__name__)
mysql = MySQL(app)

@app.route('/hello/')
def hello():
    conn = mysql.get_db()
    # ...
Note that Flask-MySQL is using a very old extension pattern that is no longer supported by Flask. It also depends on an unsupported mysql package that does not support Python 3. Consider using Flask-MySQLdb instead.

Flask app keeps returning error 500

I have been learning how to use the Flask framework using a tutorial, but the code in my app.py keeps returning a 500 error, and I can't figure out why (my code is identical to the tutorial's).
Here's the app.py:
from flask import Flask, render_template, json, request
from flask.ext.mysql import MySQL
from werkzeug import generate_password_hash, check_password_hash

mysql = MySQL()
app = Flask(__name__)

# MySQL configurations
app.config['MYSQL_DATABASE_USER'] = 'root'
app.config['MYSQL_DATABASE_PASSWORD'] = 'root'
app.config['MYSQL_DATABASE_DB'] = 'BucketList'
app.config['MYSQL_DATABASE_HOST'] = 'localhost'
mysql.init_app(app)

@app.route('/')
def main():
    return render_template('index.html')

@app.route('/showSignUp')
def showSignUp():
    return render_template('signup.html')

@app.route('/signUp', methods=['POST','GET'])
def signUp():
    try:
        _name = request.form['inputName']
        _email = request.form['inputEmail']
        _password = request.form['inputPassword']
        # validate the received values
        if _name and _email and _password:
            # All Good, let's call MySQL
            conn = mysql.connect()
            cursor = conn.cursor()
            _hashed_password = generate_password_hash(_password)
            cursor.callproc('sp_createUser', (_name, _email, _hashed_password))
            data = cursor.fetchall()
            if len(data) is 0:
                conn.commit()
                return json.dumps({'message':'User created successfully !'})
            else:
                return json.dumps({'error':str(data[0])})
        else:
            return json.dumps({'html':'<span>Enter the required fields</span>'})
    except Exception as e:
        return json.dumps({'error':str(e)})
        return traceback.format_exc()
    finally:
        cursor.close()
        conn.close()

if __name__ == "__main__":
    app.run(port=5002)
It's for a signup system.
A 500 error usually means there is an error in your Python. Try running it with app.run(port=5002, debug=True) instead. This won't solve your problem, but it should tell you what's going on.
I know you are following this tutorial, because I am having the same problem - http://code.tutsplus.com/tutorials/creating-a-web-app-from-scratch-using-python-flask-and-mysql--cms-22972
The issue is that inside the stored procedure they have you declare columns of size 20:
CREATE DEFINER=`root`@`localhost` PROCEDURE `sp_createUser`(
    IN p_name VARCHAR(20),
    IN p_username VARCHAR(20),
    IN p_password VARCHAR(20)
)
But when they tell you to salt and hash the password in your Python code, like you do:
_hashed_password = generate_password_hash(_password)
you are creating a string much longer than 20 characters, so if you ran this in debug mode you'd see that the error says the column length for the password column is invalid. I fixed this by just changing the size of the column to 100. :)
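To see why VARCHAR(20) is far too small: a salted hash string is always much longer than the raw password. Werkzeug's actual output format differs, but this stdlib-only stand-in (the PBKDF2 parameters and format are my own arbitrary assumptions) shows the scale of the problem:

```python
import binascii
import hashlib
import os

def make_password_hash(password):
    """Rough stand-in for werkzeug's generate_password_hash: salt + PBKDF2 digest."""
    salt = binascii.hexlify(os.urandom(8)).decode()
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt.encode(), 100000)
    return "pbkdf2:sha256$" + salt + "$" + binascii.hexlify(digest).decode()

h = make_password_hash("secret")
print(len(h))  # 95 characters here, nowhere near fitting in VARCHAR(20)
```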
I know this tutorial and I was getting the same error a couple of minutes back.
I changed -
_hashed_password = generate_password_hash(_password)
to
_hashed_password = _password
and it worked! :).
I am assuming the reason is that the size we declared for the password field is smaller than what is actually needed once it is hashed. But for now, you can do the same and get the app running.
Happy Coding!

CherryPy and MySQL can't connect to the database

I have a CherryPy "site" set up under Apache with modwsgi. It works fine and I can return hello world messages no problem. The problem is when I try to connect to my MySQL database. Here is the code I'm using.
import sys
sys.stdout = sys.stderr

import atexit
import threading
import cherrypy
import MySQLdb

cherrypy.config.update({'environment': 'embedded'})

if cherrypy.__version__.startswith('3.0') and cherrypy.engine.state == 0:
    cherrypy.engine.start(blocking=False)
    atexit.register(cherrypy.engine.stop)

def initServer():
    global db
    db = MySQLdb.connect(host="localhost", user="root", passwd="pass", db="Penguin")

class Login(object):
    def index(self):
        return 'Login Page'
    index.exposed = True

class Root(object):
    login = Login()
    def index(self):
        # Sample page that displays the number of records in "table"
        # Open a cursor, using the DB connection for the current thread
        c = db.cursor()
        c.execute('SELECT count(*) FROM Users')
        result = cursor.fetchall()
        cursor.close()
        return 'Help' + result
    index.exposed = True

application = cherrypy.Application(Root(), script_name=None, config=None)
Most of this was copied from the CherryPy site on setting up modwsgi, I just added the database stuff which I pieced together from various internet sources.
When I try to view the root page I get a 500 Internal Server Error. I can still get to the login page fine, so I'm pretty sure I'm messing up the database connection somehow.
You have a bunch of errors, not related to CherryPy really.
def initServer():
    global db
db is not defined in the global scope. Try:
db = None

def initServer():
    global db
In addition, initServer() is never called to create the DB connection.
Another:
c = db.cursor()
c.execute('SELECT count(*) FROM Users')
result = cursor.fetchall()
cursor.close()
cursor is not defined. I think you mean c:
c = db.cursor()
c.execute('SELECT count(*) FROM Users')
result = c.fetchall()
c.close()
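Putting both fixes together, the shape of the corrected code looks roughly like this. Here sqlite3 stands in for MySQLdb so the sketch is self-contained, and the Users table is a simplified stand-in:

```python
import sqlite3

db = None  # define the name at module scope so `global db` has something to rebind

def initServer():
    global db
    db = sqlite3.connect(":memory:")  # MySQLdb.connect(...) in the real app
    db.execute("CREATE TABLE Users (id INTEGER)")

initServer()  # the original code defined this but never actually called it

def count_users():
    # use one consistent name for the cursor
    c = db.cursor()
    c.execute("SELECT count(*) FROM Users")
    result = c.fetchall()
    c.close()
    return result[0][0]

print(count_users())  # 0
```

Note the original handler also returns 'Help' + result, which would raise a TypeError since result is a list of tuples, not a string; converting it explicitly avoids a further 500.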
