In my Python script, I've subscribed to a WebSocket. Whenever data is received, I insert it into a MySQL database; there are about 100-200 queries per second. The problem is that it works for a while, and then it fails with the error "error 2006: MySQL server has gone away".
I've increased max_allowed_packet to 512M, but it didn't help.
Here's my code.
def db_entry(threadName, _data):
    _time = time.strftime('%Y-%m-%d %H:%M:%S')
    #print ("starting new thread...")
    for data in _data:
        #print (data)
        sql = "INSERT INTO %s (Script_Name, Lot_Size, Date, Time, Last_Price, Price_Change, Open, High, Low, Close, Volume, Buy_Quantity, Sell_Quantity) VALUES('%s','%s','%s','%s','%s','%s','%s','%s','%s','%s','%s','%s','%s')" % ("_" + str(data['instrument_token']), data['instrument_token'], 1, datetime.datetime.today().strftime("%Y-%m-%d"), _time, data['last_price'], data['change'], data['ohlc']['open'], data['ohlc']['high'], data['ohlc']['low'], data['ohlc']['close'], data['volume'], data['buy_quantity'], data['sell_quantity'])
        cursor.execute(sql)
        # Commit your changes in the database
        db.commit()

def on_tick(tick, ws):
    thread_name = "Thread" + str(thread_count + 1)
    try:
        _thread.start_new_thread(db_entry, (thread_name, tick, ))
    except Exception as e:
        print(e)
        raise

def on_connect(ws):
    # Subscribe to a list of instrument_tokens (RELIANCE and ACC here).
    ws.subscribe(instrument_token)
    # Set RELIANCE to tick in `full` mode.
    ws.set_mode(ws.MODE_FULL, instrument_token)

# Assign the callbacks.
kws.on_tick = on_tick
kws.on_connect = on_connect
kws.enable_reconnect(reconnect_interval=5, reconnect_tries=50)

# Infinite loop on the main thread. Nothing after this will run.
# You have to use the pre-defined callbacks to manage subscriptions.
kws.connect()
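As a side note, formatting the tick values straight into the SQL string is fragile and injection-prone. A sketch of a parameterized version of the same insert, assuming the field names above (the per-instrument table name still has to be formatted in separately, since placeholders cannot name tables):

```python
# Sketch: build a parameterized INSERT for one tick. Assumes the same
# field names as the snippet above; int() guards the table name, since
# table names cannot be passed as placeholders.
def build_insert(data, date_str, time_str):
    table = "_" + str(int(data["instrument_token"]))
    sql = (
        "INSERT INTO {} (Script_Name, Lot_Size, Date, Time, Last_Price, "
        "Price_Change, Open, High, Low, Close, Volume, Buy_Quantity, "
        "Sell_Quantity) VALUES (%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)"
    ).format(table)
    params = (
        data["instrument_token"], 1, date_str, time_str,
        data["last_price"], data["change"],
        data["ohlc"]["open"], data["ohlc"]["high"],
        data["ohlc"]["low"], data["ohlc"]["close"],
        data["volume"], data["buy_quantity"], data["sell_quantity"],
    )
    return sql, params

# cursor.execute(sql, params)  # the driver escapes every value
```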
Thanks in advance. :)
The MySQL developer documentation is very clear on this point. Odds are, some of those queries run slower than others because they're waiting for their turn to insert data; if a connection sits too long, MySQL simply closes it. By default, MySQL's wait_timeout is eight hours (28800 s). Has the MySQL configuration been tweaked? How much hardware is allocated to MySQL?
In general, look at all the timeout settings, read them, and understand them. Do not simply copy and paste every performance tweak bloggers like blogging about.
Finally, it's solved.
I was keeping the db connection open, which was causing the problem.
Now I close the db connection once the query has run, and open it again when I want to insert something.
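The open-write-close pattern can be tidied up with contextlib.closing, so the connection is released even when an insert raises. A minimal sketch, where connect_db is a placeholder for your own factory (e.g. lambda: pymysql.connect(...)):

```python
from contextlib import closing

def insert_rows(connect_db, sql, rows):
    """Open a connection, run the inserts, and always close it.

    `connect_db` is a hypothetical factory returning a DB-API
    connection; swap in your pymysql/MySQLdb connect call.
    """
    with closing(connect_db()) as conn:
        with closing(conn.cursor()) as cur:
            cur.executemany(sql, rows)
        conn.commit()
```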
You need to create an object with its own connection-handling methods. I use this and it works well.

class DB():
    def __init__(self, **kwargs):
        self.conn = MySQLdb.connect('host', 'user', 'pass', 'db')
        try:
            if self.conn:
                status = "DB init success"
            else:
                status = "DB init failed"
            self.conn.autocommit(True)
            # self.conn.select_db(DB_NAME)
            self.cursor = self.conn.cursor()
        except Exception as e:
            status = "DB init fail %s " % str(e)

    def execute(self, query):
        try:
            if self.conn is None:
                self.__init__()
            else:
                self.conn.ping(True)  # reconnect if the connection dropped
            self.cursor.execute(query)
            return self.cursor.fetchall()
        except Exception as e:
            import traceback
            traceback.print_exc()
            # an error occurred: roll back
            self.conn.rollback()
            return False
Usage
data = DB().execute("SELECT * FROM Users")
print(data)
Below is my way of handling the database connection, but it is clumsier than I'd like. So the question is whether there are more proper ways to close the database connection while returning an error message to the client if a DB operation fails.
@app.route('/get-data/', methods=['GET'])
def get_data():
    db_error = False
    try:
        conn = pymysql.connect(db_url, db_username, db_password, db_name)
        cursor = conn.cursor()
        cursor.execute('SELECT a_column FROM a_table WHERE a_condition = 0')
        results = cursor.fetchall()
    except Exception as ex:
        logging.error(f'{ex}')
        db_error = True  # Cannot simply return here; otherwise DB connection is left open
    finally:
        cursor.close()
        conn.close()
    if db_error:
        return Response('Database error', 500)
    return jsonify(results)  # Let's assume the jsonify() function will not throw an error...
Suppose I use a context manager: does it mean that both conn and cursor will definitely be closed even when an exception is thrown? Or is it implementation-dependent, i.e., some packages, say, pymysql, will make sure all cursors and connections are closed regardless of whether errors are thrown, while other packages, say, pyodbc, will NOT ensure this? (Here pymysql and pyodbc are just two examples, of course.)
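As far as I know this is driver-dependent: with pymysql, for example, `with conn:` historically delimited a transaction rather than closing the connection. `contextlib.closing` sidesteps the question, because it relies only on the object having a `.close()` method, so it behaves the same for any DB-API driver. A minimal sketch with a stand-in connection object:

```python
from contextlib import closing

# contextlib.closing only requires a .close() method, so close() is
# guaranteed to run for pymysql, pyodbc, or any DB-API object --
# even when the body raises.
class FakeConn:
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

conn = FakeConn()
try:
    with closing(conn):
        raise RuntimeError("query failed")
except RuntimeError:
    pass
print(conn.closed)  # True
```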
I have a Linux server and I would like to run a Python script every day to run MySQL procedures, but I do not know how to run multiple procedures, nor how to send myself an email with the description of the error if one fails. Here is my script with only one procedure:
#!/usr/bin/python
import MySQLdb
# Open database connection
db = MySQLdb.connect("localhost","user","password","bddname" )
# prepare a cursor object using cursor() method
cursor = db.cursor()
# execute SQL query using execute() method.
cursor.execute("CALL proc_commande_clts_detail7();")
# Fetch a single row using fetchone() method.
data = cursor.fetchone()
print "Database version : %s " % data
# disconnect from server
db.close()
Thank you for your help.
You can use the callproc method to execute MySQL procedures:

for proc_name in proc_name_list:
    try:
        result_args = cursor.callproc(proc_name, args=())
    except Exception as e:
        send_mail(str(e))

If you want to call multiple procedures, put callproc in a loop and use try...except for error handling.
Wrapping them in a try/except block and triggering the email in the except block? Scheduling can be done through a cron job.
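Putting the loop, the try/except, and the email together, a sketch might look like this. send_mail is a placeholder for your own mail routine, and failures are batched into one message:

```python
def run_procs(cursor, proc_names, send_mail):
    """Call each stored procedure; collect failures and mail them once.

    `send_mail` is a hypothetical stand-in for your mail routine.
    Returns the list of error strings (empty when everything ran).
    """
    errors = []
    for name in proc_names:
        try:
            cursor.callproc(name, args=())
        except Exception as e:
            errors.append("%s: %s" % (name, e))
    if errors:
        send_mail("\n".join(errors))
    return errors
```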
import traceback

try:
    cursor.execute("CALL proc_commande_clts_detail7();")
except Exception as e:
    email_msg = traceback.format_exc()
    # send email logic
I am getting the error InterfaceError (0, ''). Is there a way in the PyMySQL library to check whether the connection or cursor is closed? For the cursor I am already using a context manager, like this:
with db_connection.cursor() as cursor:
....
You can use the Connection.open attribute.
The Connection.open field will be 1 if the connection is open and 0 otherwise, so you can say:

if conn.open:
    # do something
The conn.open attribute will tell you whether the connection has been explicitly closed or whether a remote close has been detected. However, it's always possible that you will try to issue a query and suddenly the connection is found to have given out; there is no way to detect this ahead of time (indeed, it might happen during the process of issuing the query), so the only truly safe thing is to wrap your calls in a try/except block.
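That advice can be sketched as a small retry wrapper: attempt the query, and on any failure reconnect once and try again. conn_factory is a placeholder for whatever returns a fresh connection; the function hands back the (possibly new) connection so the caller can keep using it:

```python
def safe_execute(conn_factory, conn, query):
    """Run `query`, reconnecting once if the connection has died.

    `conn_factory` is a hypothetical callable returning a fresh
    DB-API connection. Returns (connection, rows).
    """
    for attempt in (1, 2):
        try:
            cur = conn.cursor()
            cur.execute(query)
            return conn, cur.fetchall()
        except Exception:
            if attempt == 2:
                raise          # second failure: give up
            conn = conn_factory()  # reconnect and retry once
```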
Use conn.connection in an if statement.

import pymysql

def conn():
    mydb = pymysql.Connect('localhost', 'root', 'password', 'demo_db', autocommit=True)
    return mydb.cursor()

def db_exe(query, c):
    try:
        if c.connection:
            print("connection exists")
            c.execute(query)
            return c.fetchall()
        else:
            print("trying to reconnect")
            c = conn()
    except Exception as e:
        return str(e)

dbc = conn()
print(db_exe("select * from users", dbc))
This is how I did it, because I want the query to still run even if the connection goes down:

def reconnect():
    mydb = pymysql.Connect(host='localhost', user='root', password='password', database='demo_db', ssl={"fake_flag_to_enable_tls": True}, autocommit=True)
    return mydb.cursor()

def db_exe(query, c):
    try:
        if not c.connection.open:
            c = reconnect()  # reconnect
        if c.connection.open:
            c.execute(query)
            return c.fetchall()
    except Exception as e:
        return str(e)
I think try/except might do the trick here, instead of only checking the cursor.

try:
    c = db_connection.cursor()
except OperationalError:  # from pymysql.err import OperationalError
    connected = False
else:
    connected = True
    # code here
I initially went with the solution from AKHIL MATHEW to call conn.open, but later during testing found that conn.open sometimes returned a positive result even though the connection was lost. To be certain, I found I could call conn.ping(), which actually tests the connection. The method also accepts an optional parameter (reconnect=True) that makes it reconnect automatically if the ping fails.
Of course there is a cost to this: as the name implies, ping actually goes out to the server and tests the connection. You don't want to do this before every query, but in my case I have an AWS Lambda spinning up on a warm start and trying to reuse the connection, so I think I can justify testing the connection once on each warm start and reconnecting if it's been lost.
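That warm-start pattern can be sketched generically: cache a module-level handle, ping it once per warm start, and fall back to a fresh connect when the ping fails. Here connect and ping are injected stand-ins for pymysql.connect and conn.ping (which raises when the server is unreachable), so the reconnect logic itself can be exercised without a server:

```python
conn = None  # module-level handle, reused across warm starts

def get_connection(connect, ping):
    """Return a live connection, reusing the cached one when ping succeeds.

    `connect` and `ping` are hypothetical stand-ins for
    pymysql.connect and conn.ping(reconnect=True).
    """
    global conn
    if conn is not None:
        try:
            ping(conn)       # real code: conn.ping(reconnect=True)
            return conn
        except Exception:
            pass             # cached handle is dead; fall through
    conn = connect()
    return conn
```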
I have a project deployed on Google App Engine with a Google API (Python). Every request to any of the APIs opens a database connection, executes a procedure, returns data, and closes the connection. I was not able to access any of the APIs; they kept showing
"Process terminated because the request deadline was exceeded. (Error code 123)" and "This request caused a new process to be started for your application, and thus caused your application code to be loaded for the first time. This request may thus take longer and use more CPU than a typical request for your application." errors.
The database is also on cloud (Google Cloud SQL). When I checked, there were 900 connections and more than 150 instances up, but no API request was being handled. This happens frequently, so I restart the database server and redeploy the API code to work around it. What is the issue, and how can I solve it permanently? Here is my Python code for database connectivity:
import logging
import traceback
import os
import MySQLdb
from warnings import filterwarnings
filterwarnings('ignore', category=MySQLdb.Warning)

class TalkWithDB:
    def callQueries(self, query, req_args):
        try:
            if (os.getenv('SERVER_SOFTWARE') and os.getenv('SERVER_SOFTWARE').startswith('Google App Engine/')):
                db = MySQLdb.connect(unix_socket=UNIX_SOCKET + INSTANCE_NAME, host=HOST, db=DB, user=USER, charset='utf8', use_unicode=True)
            else:
                db = MySQLdb.connect(host=HOST, port=PORT, db=DB, user=USER, passwd=PASSWORD, charset='utf8', use_unicode=True)
            cursor = db.cursor()
            cursor.connection.autocommit(True)
            try:
                sql = query + str(req_args)
                logging.info("QUERY = " + sql)
                cursor.execute(sql)
                procedureResult = cursor.fetchall()
                if str(procedureResult) == '()':
                    logging.info("Procedure Returned 0 Record")
                    procedureResult = []
                    procedureResult.append({0: "NoRecord", 1: "Error"})
                    #procedureResult = (("NoRecord","Error",),)
                elif procedureResult[0][0] == 'Session Expired'.encode(encoding='unicode-escape', errors='strict'):
                    procedureResult = []
                    procedureResult.append({0: "SessionExpired", 1: "Error"})
            except Exception, err:
                logging.info("ConnectDB.py : - Error in Procedure Calling : " + traceback.format_exc())
                #procedureResult = (('ProcedureCallError','Error',),)
                procedureResult = []
                procedureResult.append({0: "ProcedureCallError", 1: "Error"})
        except Exception, err:
            logging.info("Error In DataBase Connection : " + traceback.format_exc())
            #procedureResult = (('DataBaseConnectionError','Error',),)
            procedureResult = []
            procedureResult.append({0: "DataBaseConnectionError", 1: "Error"})
        # disconnect from server
        finally:
            try:
                cursor.close()
                db.close()
            except Exception, err:
                logging.info("Error In Closing Connection : " + traceback.format_exc())
        return procedureResult
Two possible improvements:
Your startup code for instances may take too long. Check the startup time and, if possible, use warmup requests to reduce it. Since increasing your idle instances seems to help, startup time is the likely culprit.
A better approach would be to call external services (e.g. talk to Google Calendar) in a Task Queue outside of the user request scope. This gives you a 10-minute deadline instead of the 60-second deadline for user requests.
I'm creating a RESTful API which needs to access the database. I'm using Restish, Oracle, and SQLAlchemy. However, I'll try to frame my question as generically as possible, without taking Restish or other web APIs into account.
I would like to be able to set a timeout for a connection executing a query. This is to ensure that long running queries are abandoned, and the connection discarded (or recycled). This query timeout can be a global value, meaning, I don't need to change it per query or connection creation.
Given the following code:
import cx_Oracle
import sqlalchemy.pool as pool

conn_pool = pool.manage(cx_Oracle)
conn = conn_pool.connect("username/p4ss@dbname")
conn.ping()
try:
    cursor = conn.cursor()
    cursor.execute("SELECT * FROM really_slow_query")
    print cursor.fetchone()
finally:
    cursor.close()
How can I modify the above code to set a query timeout on it?
Will this timeout also apply to connection creation?
This is similar to what java.sql.Statement's setQueryTimeout(int seconds) method does in Java.
Thanks
For the query, you can look at a timer plus a conn.cancel() call. Something along these lines:

t = threading.Timer(timeout, conn.cancel)
t.start()
cursor = conn.cursor()
cursor.execute(query)
res = cursor.fetchall()
t.cancel()
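The timer-and-cancel idea above generalizes to any driver that exposes a cancel call. A sketch of a reusable wrapper, where cancel stands in for conn.cancel (hypothetical name outside cx_Oracle):

```python
import threading

def run_with_timeout(fn, timeout, cancel):
    """Run fn(); if it is still going after `timeout` seconds, call
    `cancel` (e.g. conn.cancel for cx_Oracle) to abort the work."""
    t = threading.Timer(timeout, cancel)
    t.start()
    try:
        return fn()
    finally:
        t.cancel()  # stop the timer once fn() has returned or raised
```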
On Linux, see /etc/oracle/sqlnet.ora:
sqlnet.outbound_connect_timeout = value
Other options are tcp.connect_timeout and sqlnet.expire_time. Good luck!
You could look at setting up PROFILEs in Oracle to terminate queries after a certain number of logical_reads_per_call and/or cpu_per_call.
Timing Out with the System Alarm
Here's how to use the operating system timeout to do this. It's generic, and works for things other than Oracle.

import signal

class TimeoutExc(Exception):
    """this exception is raised when there's a timeout"""
    def __init__(self):
        Exception.__init__(self)

def alarmhandler(signame, frame):
    """sigalarm handler. raises a Timeout exception"""
    raise TimeoutExc()

nsecs = 5
signal.signal(signal.SIGALRM, alarmhandler)  # set the signal handler function
signal.alarm(nsecs)                          # in 5s, the process receives a SIGALRM
try:
    cx_Oracle.connect(blah blah)  # do your thing, connect, query, etc
    signal.alarm(0)               # if successful, turn off the alarm
except TimeoutExc:
    print "timed out!"  # timed out!!