CALL multiple procedures with Python

I have a Linux server and I would like to run a Python script every day that runs MySQL stored procedures, but I do not know how to run multiple procedures, nor how to add a condition so that if an error occurs, an email with the description of the error is sent to me. Here is my script with only one procedure:
#!/usr/bin/python
import MySQLdb
# Open database connection
db = MySQLdb.connect("localhost","user","password","bddname" )
# prepare a cursor object using cursor() method
cursor = db.cursor()
# execute SQL query using execute() method.
cursor.execute("CALL proc_commande_clts_detail7();")
# Fetch a single row using fetchone() method.
data = cursor.fetchone()
print "Database version : %s " % data
# disconnect from server
db.close()
Thank you for your help.

You can use the callproc method to execute MySQL procedures:
for proc_name in proc_name_list:
    try:
        result_args = cursor.callproc(proc_name, args=())
    except Exception as e:
        send_mail(str(e))
If you want to call multiple procedures, you can put callproc in some kind of loop and use try...except for error handling.
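Putting the pieces together, here is a minimal sketch of a complete daily script under these assumptions: the procedure names, SMTP server, and email addresses are placeholders, and send_mail is a hypothetical helper built on smtplib.
#!/usr/bin/python
import smtplib
from email.mime.text import MIMEText
import MySQLdb

# Procedures to run each day (example names)
proc_name_list = ["proc_commande_clts_detail7", "proc_commande_clts_detail8"]

def send_mail(body):
    # Assumed local SMTP server and placeholder addresses; adjust to your setup
    msg = MIMEText(body)
    msg["Subject"] = "MySQL procedure error"
    msg["From"] = "script@example.com"
    msg["To"] = "admin@example.com"
    smtp = smtplib.SMTP("localhost")
    smtp.sendmail(msg["From"], [msg["To"]], msg.as_string())
    smtp.quit()

db = MySQLdb.connect("localhost", "user", "password", "bddname")
cursor = db.cursor()
for proc_name in proc_name_list:
    try:
        cursor.callproc(proc_name, args=())
        db.commit()
    except Exception as e:
        # Send the error description by email and continue with the next procedure
        send_mail("Error in %s: %s" % (proc_name, e))
db.close()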

Wrap them in a try/except block and trigger the email in the except block.
Scheduling can be done through a cron job (see the crontab sketch after the snippet below).
import traceback
try:
    cursor.execute("CALL proc_commande_clts_detail7();")
except Exception as e:
    email_msg = traceback.format_exc()
    # send email logic
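For the scheduling part, a crontab entry along these lines would run the script once a day; the 6:00 time and the script path are assumptions.
# m h dom mon dow command
0 6 * * * /usr/bin/python /home/user/run_procedures.py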

Related

How do I resolve a connection pool handle = 1 error when using teradatasql in python?

I am attempting to execute some basic SQL from Python using the teradatasql module. The code appears to run and the SQL is executed; however, the execution of the Python itself ends with an error at the end of the code reproduced below. I need to run additional data preprocessing steps using pandas on the output of the SQL, but the larger program will not continue past the OperationalError (not even via a try/except block catching teradatasql.OperationalError). So even though the SQL executes fine despite this issue, I need to resolve it.
Any suggestions? Thank you!
Error:
teradatasql.OperationalError: 1 is not a valid connection pool handle
Code:
import teradatasql
import os

def refresh_table():
    usr = ****1
    with open(f'C:\\Users\\{usr}\\Documents\\my_td_password.txt', 'r') as my_pwd_f:
        pw = my_pwd_f.read()
    with teradatasql.connect(host='*******2',
                             user=usr,
                             password=pw) as con:
        with con.cursor() as cur:
            with open('C:\\Users\\****1\\Documents\\test.sql', 'r') as my_sql:
                sql_script = my_sql.read()
            for sql_block in sql_script.split(';'):
                try:
                    cur.execute(sql_block)
                    print("Block executed")
                except ValueError:
                    print("Failure to execute block: ValueError")
                finally:
                    print(sql_block)
            my_sql.close()
            print("SQL file closed")
        con.close()
        print("Connection closed")

refresh_table()
Fixed by removing con.close() from the end. As Fred pointed out, the with block implicitly closes the connection when it finishes executing.
https://stackoverflow.com/users/11552426/fred
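In other words, a sketch of the corrected structure would look roughly like this (host and paths are redacted as in the original, and the pandas steps are omitted):
import teradatasql

def refresh_table():
    # read usr and pw as in the original code
    with teradatasql.connect(host='*******2', user=usr, password=pw) as con:
        with con.cursor() as cur:
            with open('C:\\Users\\****1\\Documents\\test.sql', 'r') as my_sql:
                sql_script = my_sql.read()
            for sql_block in sql_script.split(';'):
                cur.execute(sql_block)
    # no explicit con.close() or my_sql.close(): the with blocks already
    # close the file, the cursor, and the connection on exit

refresh_table()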

psycopg2 why is the schema not created?

I'm trying to create a schema in a PostgreSQL database using psycopg2.
For some reason the schema is not created, and later on the code crashes because it tries to refer to the missing schema. The connection is set to autocommit mode, which definitely works, because I can create a database with this specific connection.
For debugging purposes I have wrapped every step in its own try/except statement.
The code is below; as it stands, it does not raise any exceptions, but the follow-up code crashes because the schema is missing.
import logging
import psycopg2
from psycopg2 import sql

def createDB(dbName, connString):
    conn = psycopg2.connect(connString)
    # autocommit must be True, else CREATE DATABASE will fail
    # https://www.psycopg.org/docs/usage.html#transactions-control
    conn.set_session(autocommit=True)
    cursor = conn.cursor()
    createDB = sql.SQL('CREATE DATABASE {};').format(
        sql.Identifier(dbName)
    )
    createSchema = sql.SQL('CREATE SCHEMA IF NOT EXISTS schema2;')
    searchpath = sql.SQL('ALTER DATABASE {} SET search_path TO public, schema2;').format(
        sql.Identifier(dbName)
    )
    dropDB = sql.SQL('DROP DATABASE IF EXISTS {};').format(
        sql.Identifier(dbName)
    )
    try:
        cursor.execute(dropDB)
    except Exception as e:
        print('drop DB failed')
        logging.error(e)
        conn.close()
        exit()
    try:
        cursor.execute(createDB)
    except Exception as e:
        print('create DB failed')
        logging.error(e)
        conn.close()
        exit()
    try:
        cursor.execute(createSchema)
        print('schema created')
    except Exception as e:
        print('create schema failed')
        logging.error(e)
        conn.close()
        exit()
    try:
        cursor.execute(searchpath)
    except Exception as e:
        print('set searchpath failed')
        logging.error(e)
        conn.close()
        exit()
    conn.close()
Adding an explicit commit does not do the trick either.
What am I missing?
EDIT
I have added a small screenshot with the console logs. As you can see, the code below gets executed.
EDIT 2
Out of sheer curiosity, I have tried to execute this very SQL statement in pgAdmin:
CREATE SCHEMA IF NOT EXISTS schema2
and it works just fine, which shows that my SQL is not wrong, so back to square one.
EDIT 3 -- Solution
So I have come up with a solution, thanks to #jjanes for pointing me in the right direction. This function does not connect to a specific database, but to the server as a whole, since I'm using it to create new databases; hence the connection string looks something like this:
user=postgres password=12345 host=localhost port=5432
This allows me to perform server-level operations like creating and dropping databases. But schemas are a database-level operation. Moving the exact same logic to the part of the code which is connected to the newly created database works like a charm.
You create the schema in the original database specified by the connect string. Once you create the new database, you need to connect to it in order to work in it. Otherwise, you are just working in the old database.
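A minimal sketch of that fix, assuming the same connection-string format as above (credentials are placeholders) and a hypothetical helper named createSchemaIn:
import psycopg2
from psycopg2 import sql

def createSchemaIn(dbName):
    # Connect to the newly created database itself, not to the server-level
    # default database, because CREATE SCHEMA is a database-level operation.
    conn = psycopg2.connect('user=postgres password=12345 host=localhost port=5432 dbname=' + dbName)
    conn.set_session(autocommit=True)
    cursor = conn.cursor()
    cursor.execute(sql.SQL('CREATE SCHEMA IF NOT EXISTS schema2;'))
    conn.close()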

pyodbc not committing changes to db2 database

I am trying to update my DB2 database using pyodbc in Python. The SQL statement runs normally, without errors, when run directly against the database. When I run the code below, I get no errors and the code executes successfully, but when I query the database, the changes were not saved.
try:
    conn2 = pyodbc.connect("DRIVER={iSeries Access ODBC Driver};SYSTEM=" + Config_Main.iseriesServer + ";DATABASE=" + Config_Main.iseriesDB + ";UID=" + Config_Main.iseriesUser + ";PWD=" + Config_Main.iseriesPass)
    db2 = conn2.cursor()
    for row in encludeData:
        count = len(str(row[2]))
        srvid = row[2]
        if count < 10:
            sql3 = "UPDATE SVCEN2DEV.SRVMAST SET svbrch = ? WHERE svtype != '*DCS-' AND svacct = ? AND svcid LIKE '%?' and svbrch = ?"
            db2.execute(sql3, (row[4], row[1], "%" + str(srvid), row[5]))
        else:
            sql3 = "UPDATE SVCEN2DEV.SRVMAST SET svbrch = ? WHERE svtype != '*DCS-' AND svacct = ? AND svcid = ? and svbrch = ?"
            db2.execute(sql3, (row[4], row[1], srvid, row[5]))
    conn2.commit()
except pyodbc.Error as e:
    logging.error(e)
I have tried setting conn2.autocommit = True, and I have also tried moving the conn2.commit() inside the for loop to commit after each iteration. I also tried a different driver, {IBM i Access ODBC Driver}.
EDIT:
Sample of encludeData
['4567890001','4567890001','1234567890','1234567890','foo','bar']
After changing the except statement to catch general errors, the code above now produces this error:
IntegrityError('23000', '[23000] [IBM][System i Access ODBC Driver][DB2 for i5/OS]SQL0803 - Duplicate key value specified. (-803) (SQLExecDirectW)')
As the OP found out, the application-layer language, Python, may not surface specific database exceptions such as duplicate-index or foreign-key violations; these can fail silently or only be logged on the server side. Usually, errors that prevent the SQL from running at all, such as incorrect identifiers and syntax errors, do raise an error on the client side.
Therefore, as a best practice, use exception handling such as Python's try/except/finally (or the equivalent in other general-purpose languages that interface with an external API like a database connection) in order to catch and properly handle runtime issues.
The code below will print any exception raised by the statements in the try block, including connection and query execution, and, regardless of success or failure, will run the finally statements.
try:
    conn2 = pyodbc.connect(...)
    db2 = conn2.cursor()
    sql = "..."
    db2.execute(sql, params)
    conn2.commit()
except Exception as e:
    print(e)
finally:
    db2.close()
    conn2.close()
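To tie this back to the duplicate-key error from the question, a sketch of narrower handling inside the original loop might look like the following; it assumes the same db2 cursor, sql3 statement, and row/srvid variables as the question, and simply treats pyodbc.IntegrityError separately so one bad row does not stop the rest.
import logging
import pyodbc

# inside the question's `for row in encludeData:` loop
try:
    db2.execute(sql3, (row[4], row[1], srvid, row[5]))
    conn2.commit()
except pyodbc.IntegrityError as e:
    # e.g. SQL0803 duplicate key: log it and keep processing the remaining rows
    logging.error("Duplicate key for svcid %s: %s", srvid, e)
except pyodbc.Error as e:
    # any other database error: log it and re-raise
    logging.error(e)
    raise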

How to avoid crashing python script when executing faulty SQL query?

I am using Python 2.7.6 and the MySQLdb module. I have a MySQL query that crashes sometimes. It is hard to catch the root cause for the time being. How can I avoid crashing the Python script when executing the SQL query? How can I make it fail gracefully?
The code looks something like this:
cursor.execute(query)
You should catch the exception:
try:
    cursor.execute(query)
except mysql.connector.Error:
    """your handling here"""
The MySQL Python developer guide has more details on error handling.
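Since the question uses MySQLdb rather than mysql.connector, the equivalent with that module would look roughly like this; the connection parameters and the query are placeholders.
import MySQLdb

query = "SELECT 1"  # placeholder for the query that sometimes crashes

db = MySQLdb.connect("localhost", "user", "password", "bddname")
cursor = db.cursor()
try:
    cursor.execute(query)
except MySQLdb.Error as err:
    # handle the failure gracefully instead of letting the script crash
    print "Query failed: %s" % err
finally:
    db.close()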
You can handle runtime errors by using a try/except block.
At the end, use finally for cleanup: closing the connection, rolling back, freeing used resources, and so on.
Here is an example:
import mysql.connector

cnx = None
try:
    cnx = mysql.connector.connect(user='scott', database='employees')
    cursor = cnx.cursor()
    cursor.execute("SELECT * FORM employees")  # Syntax error in query
except mysql.connector.Error as err:
    print("Something went wrong: {}".format(err))
finally:
    # cleanup: close the connection, roll back, free resources, etc.
    if cnx is not None:
        cnx.close()

Set database connection timeout in Python

I'm creating a RESTful API which needs to access the database. I'm using Restish, Oracle, and SQLAlchemy. However, I'll try to frame my question as generically as possible, without taking Restish or other web APIs into account.
I would like to be able to set a timeout for a connection executing a query. This is to ensure that long running queries are abandoned, and the connection discarded (or recycled). This query timeout can be a global value, meaning, I don't need to change it per query or connection creation.
Given the following code:
import cx_Oracle
import sqlalchemy.pool as pool

conn_pool = pool.manage(cx_Oracle)
conn = conn_pool.connect("username/p4ss#dbname")
conn.ping()
try:
    cursor = conn.cursor()
    cursor.execute("SELECT * FROM really_slow_query")
    print cursor.fetchone()
finally:
    cursor.close()
How can I modify the above code to set a query timeout on it?
Will this timeout also apply to connection creation?
This is similar to what java.sql.Statement's setQueryTimeout(int seconds) method does in Java.
Thanks
For the query, you can use a timer together with a conn.cancel() call.
Something along these lines:
import threading

t = threading.Timer(timeout, conn.cancel)  # cancel the running statement after `timeout` seconds
t.start()
cursor = conn.cursor()
cursor.execute(query)
res = cursor.fetchall()
t.cancel()  # query finished in time, stop the timer
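Applied to the question's code, a self-contained sketch could look like this; the 10-second timeout value and the plain cx_Oracle connect call are assumptions.
import threading
import cx_Oracle

timeout = 10  # assumed global query timeout, in seconds

conn = cx_Oracle.connect("username/p4ss@dbname")
t = threading.Timer(timeout, conn.cancel)  # cancels the running call after `timeout` seconds
t.start()
try:
    cursor = conn.cursor()
    cursor.execute("SELECT * FROM really_slow_query")  # raises an error if cancelled
    print cursor.fetchone()
finally:
    t.cancel()       # stop the timer if the query finished in time
    cursor.close()
    conn.close()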
On Linux, see /etc/oracle/sqlnet.ora and set
sqlnet.outbound_connect_timeout = value
Other options are tcp.connect_timeout and sqlnet.expire_time. Good luck!
You could look at setting up PROFILEs in Oracle to terminate queries after a certain number of logical_reads_per_call and/or cpu_per_call.
Timing Out with the System Alarm
Here's how to use the operating-system timeout to do this. It's generic and works for things other than Oracle.
import signal

class TimeoutExc(Exception):
    """this exception is raised when there's a timeout"""
    def __init__(self):
        Exception.__init__(self)

def alarmhandler(signame, frame):
    """SIGALRM handler. raises a Timeout exception"""
    raise TimeoutExc()

nsecs = 5
signal.signal(signal.SIGALRM, alarmhandler)  # set the signal handler function
signal.alarm(nsecs)                          # in 5s, the process receives a SIGALRM
try:
    cx_Oracle.connect(blah blah)  # do your thing, connect, query, etc
    signal.alarm(0)               # if successful, turn off the alarm
except TimeoutExc:
    print "timed out!"  # timed out!!
