How do I promote PostgreSQL warnings to exceptions in psycopg2? - python

From the PostgreSQL docs on BEGIN:
Issuing BEGIN when already inside a transaction block will provoke a
warning message. The state of the transaction is not affected.
How can I make psycopg2 raise an exception on any such warning?

I am far from being a psycopg2 or Postgres expert, and I am sure there is a better way to raise the warning level, but here is something that worked for me: a custom cursor that looks into the connection's notices and throws an exception if a warning is found. The implementation is mostly for educational purposes and will likely need adjusting for your use case:
import psycopg2
# this "cursor" class needs to be used as a base for custom cursor classes
from psycopg2.extensions import cursor

class ErrorThrowingCursor(cursor):
    def execute(self, query, vars=None):
        result = super(ErrorThrowingCursor, self).execute(query, vars)
        # self.connection is provided by the base cursor class
        for notice in self.connection.notices:
            level, _, message = notice.partition(": ")
            if level == "WARNING":
                raise psycopg2.Warning(message.strip())
        return result
Usage sample:
conn = psycopg2.connect(user="user", password="secret")
cursor = conn.cursor(cursor_factory=ErrorThrowingCursor)
This would throw an exception (of a psycopg2.Warning type) if a warning was issued after a query execution. Sample:
psycopg2.Warning: there is already a transaction in progress
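The notice-parsing step can be exercised without a live PostgreSQL connection. Here is a minimal sketch with a stand-in Warning class (psycopg2 notices arrive as strings such as "WARNING:  there is already a transaction in progress\n"):

```python
# Stand-in for psycopg2.Warning, so the parsing logic can be shown
# without a live PostgreSQL connection.
class Warning_(Exception):
    pass

def raise_on_warning(notices):
    """Scan a list of notice strings (as found in connection.notices)
    and raise on the first WARNING-level entry."""
    for notice in notices:
        level, _, message = notice.partition(": ")
        if level == "WARNING":
            raise Warning_(message.strip())

notices = ["WARNING:  there is already a transaction in progress\n"]
try:
    raise_on_warning(notices)
except Warning_ as exc:
    print(exc)  # there is already a transaction in progress
```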

Related

How to extend sqlite3 connection object with own functions?

I have a project written in Python 2.7 where the main program needs frequent access to a sqlite3 db for writing logs, measurement results, getting settings,...
At the moment I have a db module with functions such as add_log(), get_setting(), and each function in there basically looks like:
def add_log(logtext):
    try:
        db = sqlite3.connect(database_location)
    except sqlite3.DatabaseError as e:
        db.close() # try to gracefully close the db
        return("ERROR (ADD_LOG): While opening db: {}".format(e))
    try:
        with db: # using context manager to automatically commit or roll back changes.
            # when using the context manager, the execute function of the db should be used instead of the cursor
            db.execute("insert into logs(level, source, log) values (?, ?, ?)", (level, source, logtext))
    except sqlite3.DatabaseError as e:
        return("ERROR (ADD_LOG): While adding log to db: {}".format(e))
    return "OK"
(some additional code and comments removed).
It seems I should write a class that extends the base sqlite3 connection object, so that the connection is created only once (at the beginning of the main program) and the object then contains functionality such as:
class Db(sqlite3.Connection):
    def __init__(self, db_location = database_location):
        try:
            self = sqlite3.connect(db_location)
            return self
        except sqlite3.DatabaseError as e:
            self.close() # try to gracefully close the db
    def add_log(self, logtext):
        self.execute("insert into logs(level, source, log) values (?, ?, ?)", (level, source, logtext))
It seems this should be fairly straightforward, but I can't seem to get it working.
There seems to be some useful advice here:
Python: How to successfully inherit Sqlite3.Cursor and add my customized method, but I can't seem to understand how to use a similar construct for my purpose.
You are not that far away.
First of all, a class initializer cannot return anything but None (emphasis mine):
Because __new__() and __init__() work together in constructing objects (__new__() to create it, and __init__() to customise it), no non-None value may be returned by __init__(); doing so will cause a TypeError to be raised at runtime.
Second, you overwrite the current instance self of your Db object with a sqlite3.Connection object right in the initializer. That makes subclassing SQLite's connection object a bit pointless.
You just need to fix your __init__ method to make this work:
class Db(sqlite3.Connection):
    # If you didn't use the default argument, you could omit overriding __init__ altogether
    def __init__(self, database=database_location, **kwargs):
        super(Db, self).__init__(database=database, **kwargs)
    def add_log(self, logtext, level, source):
        self.execute("insert into logs(level, source, log) values (?, ?, ?)", (level, source, logtext))
That lets you use instances of your class as context managers:
with Db() as db:
    print [i for i in db.execute("SELECT * FROM logs")]
    db.add_log("I LAUNCHED THAT PUG INTO SPACE!", 42, "Right there")
Maurice Meyer said in the comments of the question that methods such as execute() are cursor methods and, per the DB-API 2.0 specs, that's correct.
However, sqlite3's connection objects offer a few shortcuts to cursor methods:
This is a nonstandard shortcut that creates an intermediate cursor object by calling the cursor method, then calls the cursor’s execute method with the parameters given.
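The shortcut is easy to see in action with an in-memory database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs(level INTEGER, source TEXT, log TEXT)")

# connection.execute() creates an intermediate cursor behind the scenes
# and returns it, so it can be chained directly into a fetch call
conn.execute("INSERT INTO logs VALUES (?, ?, ?)", (1, "app", "started"))
rows = conn.execute("SELECT level, source, log FROM logs").fetchall()
print(rows)  # [(1, 'app', 'started')]
```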
To expand on the discussion in the comments:
The remark about the default argument in my code example above was targeted at the requirement to override sqlite3.Connection's __init__ method.
The __init__ in the class Db is only needed to define the default value database_location on the database argument for the sqlite3.Connection initializer.
If you were willing to pass such a value upon every instantiation of that class, your custom connection class could look like this, and still work the same way, except for that argument:
class Db(sqlite3.Connection):
def add_log(self, logtext, level, source):
self.execute("insert into logs(level, source, log) values (?, ?, ?)", (level, source, logtext))
However, the __init__ method has nothing to do with the context manager protocol as defined in PEP 343.
When it comes to classes, this protocol requires implementing the magic methods __enter__ and __exit__.
The sqlite3.Connection does something along these lines:
class Connection:
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc_val, exc_tb):
        if exc_val is None:
            self.commit()
        else:
            self.rollback()
Note: sqlite3.Connection is implemented in a C module and hence has no Python class definition. The above reflects roughly what the methods would look like if it did.
Let's say you don't want to keep the same connection open all the time, but rather have a dedicated connection per transaction, while maintaining the general interface of the Db class above.
You could do something like this:
# Keep this to have your custom methods available
class Connection(sqlite3.Connection):
    def add_log(self, level, source, log):
        self.execute("INSERT INTO logs(level, source, log) VALUES (?, ?, ?)",
                     (level, source, log))

class DBM:
    def __init__(self, database=database_location):
        self._database = database
        self._conn = None

    def __enter__(self):
        return self._connection()

    def __exit__(self, exc_type, exc_val, exc_tb):
        # Decide whether to commit or roll back
        if exc_val:
            self._connection().rollback()
        else:
            self._connection().commit()
        # close connection
        try:
            self._conn.close()
        except AttributeError:
            pass
        finally:
            self._conn = None

    def _connection(self):
        if self._conn is None:
            # Instantiate your custom sqlite3.Connection
            self._conn = Connection(self._database)
        return self._conn

    # add shortcuts to connection methods as seen fit
    def execute(self, sql, parameters=()):
        with self as temp:
            result = temp.execute(sql, parameters).fetchall()
        return result

    def add_log(self, level, source, log):
        with self as temp:
            temp.add_log(level, source, log)
This can be used in a context and by calling methods on the instance:
db = DBM(database_location)
with db as temp:
    print [i for i in temp.execute("SELECT * FROM logs")]
    temp.add_log(1, "foo", "I MADE MASHED POTATOES")
# The methods execute and add_log are only available from
# the outside because the shortcuts have been added to DBM
print [i for i in db.execute("SELECT * FROM logs")]
db.add_log(1, "foo", "I MADE MASHED POTATOES")
For further reading on context managers, refer to the official documentation. I also recommend Jeff Knupp's nice introduction. The aforementioned PEP 343 is worth a look as well, for the technical specification and rationale behind the protocol.

pymysql cursor doesn't close

In the PyMySQL library, cursors.py defines the following methods:
def __enter__(self):
    return self

def __exit__(self, *exc_info):
    del exc_info
    self.close()
That means that if I use the cursor class in a with statement, the cursor should be closed whenever execution leaves the nested block. Why does it remain set instead?
db = pymysql.connect(config)
with pymysql.cursors.Cursor(db) as cursor:
    print(cursor)
print(cursor)
also:
db = pymysql.connect(config)
with db.cursor() as cursor:
    print(cursor)
print(cursor)
Both forms print the cursor object twice (once inside the with statement and once outside it). Am I doing something wrong?
Closing a cursor doesn't null out the cursor, just detaches it from the database. Try printing cursor.connection instead.
Also, I think you're expecting the with keyword to delete the object in question, but it's really just syntactic sugar around the __enter__ and __exit__ methods.
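The same behaviour can be observed with sqlite3 (standing in here for any DB-API cursor): the name still points at a cursor object after close(), but actually using it fails:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.close()

# The name still refers to a cursor object after close() ...
print(cur)

# ... but it is detached from the database: using it raises an error
try:
    cur.execute("SELECT 1")
except sqlite3.ProgrammingError as exc:
    print(exc)  # Cannot operate on a closed cursor.
```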

Is there a "with conn.cursor() as..." way to work with Sqlite?

Instead of using:
import sqlite3
conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute(...)
c.close()
would it be possible to use the Pythonic idiom:
with conn.cursor() as c:
    c.execute(...)
It doesn't seem to work:
AttributeError: __exit__
Note: it's important to close a cursor because of this.
You can use contextlib.closing:
import sqlite3
from contextlib import closing

conn = sqlite3.connect(':memory:')
with closing(conn.cursor()) as cursor:
    cursor.execute(...)
This works because closing(object) returns a context manager that automatically calls the close() method of the passed-in object when the with block exits.
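A runnable version of the same pattern, with a trivial query in place of the elided one:

```python
import sqlite3
from contextlib import closing

conn = sqlite3.connect(":memory:")
# closing() calls cursor.close() for us when the block exits
with closing(conn.cursor()) as cursor:
    cursor.execute("SELECT 1 + 1")
    result = cursor.fetchone()[0]
print(result)  # 2
```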
A simpler alternative would be to use the connection object with the context manager, as specified in the docs.
with con:
    con.execute(...)
If you insist on working with the cursor (because reasons), then why not make your own wrapper class?
class SafeCursor:
    def __init__(self, connection):
        self.con = connection

    def __enter__(self):
        self.cursor = self.con.cursor()
        return self.cursor

    def __exit__(self, typ, value, traceback):
        self.cursor.close()
You'll then call your class like this:
with SafeCursor(conn) as c:
    c.execute(...)
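For example, with an in-memory sqlite3 database (the class is repeated so the snippet is self-contained):

```python
import sqlite3

class SafeCursor:
    def __init__(self, connection):
        self.con = connection

    def __enter__(self):
        self.cursor = self.con.cursor()
        return self.cursor

    def __exit__(self, typ, value, traceback):
        # always close the cursor, even if the block raised
        self.cursor.close()

conn = sqlite3.connect(":memory:")
with SafeCursor(conn) as c:
    c.execute("SELECT 40 + 2")
    answer = c.fetchone()[0]
print(answer)  # 42
```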
Adding to sudormrfbin's post. I've recently experienced an issue where an INSERT statement wasn't committing to the database. Turns out I was missing the with context manager for just the Connection object.
Also, it is a good practice to always close the Cursor object as well, as mentioned in this post.
Therefore, use two contextlib.closing() methods, each within a with context manager:
import contextlib
import sqlite3

# Auto-closes the Connection object
with contextlib.closing(sqlite3.connect("path_to_db_file")) as conn:
    # Auto-commit to the database
    with conn:
        # Auto-close the Cursor object
        with contextlib.closing(conn.cursor()) as cursor:
            # Execute method(s)
            cursor.execute(""" SQL statements here """)

how to use python cx_Oracle with spool

I'm using python3.4 to interact with oracle(11g)/sql developer.
Is it true that cx_Oracle cannot deal with SQL*Plus statements? It seems that the page https://sourceforge.net/p/cx-oracle/mailman/message/2932119/ says so.
So how could we execute 'spool' command by python?
The code:
import cx_Oracle

db_conn = cx_Oracle.connect(...)
cursor = db_conn.cursor()
cursor.execute('spool C:\\Users\Administrator\Desktop\mycsv.csv')
...
the error: cx_Oracle.DatabaseError: ORA-00900: invalid SQL statement
The "spool" command is very specific to SQL*Plus and is not available in cx_Oracle or any other application that uses the OCI (Oracle Call Interface). You can do something similar, however, without too much trouble.
You can create your own Connection class subclassed from cx_Oracle.Connection and your own Cursor class subclassed from cx_Oracle.Cursor that would perform any logging and have a special command "spool" that would turn it on and off at will. Something like this:
class Connection(cx_Oracle.Connection):

    def __init__(self, *args, **kwargs):
        self.spoolFile = None
        super(Connection, self).__init__(*args, **kwargs)

    def cursor(self):
        return Cursor(self)

    def spool(self, fileName):
        self.spoolFile = open(fileName, "w")

class Cursor(cx_Oracle.Cursor):

    def execute(self, statement, args=None):
        result = super(Cursor, self).execute(statement, args)
        if self.connection.spoolFile is not None:
            # use cursor.description to build the header line
            self.connection.spoolFile.write("Headers for query\n")
        return result

    def fetchall(self):
        rows = super(Cursor, self).fetchall()
        if self.connection.spoolFile is not None:
            for row in rows:
                self.connection.spoolFile.write("row details\n")
        return rows
That should give you some idea on where to go with this.
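For the actual file writing, the csv module together with cursor.description does most of the work. A sketch with sqlite3 standing in for cx_Oracle, since the DB-API surface is the same (the spool_query helper and the table are made up for the demonstration):

```python
import csv
import io
import sqlite3

def spool_query(cursor, statement, spool_file):
    """Run a query and write a header row plus all result rows to spool_file."""
    cursor.execute(statement)
    writer = csv.writer(spool_file)
    # cursor.description holds one 7-tuple per result column; element [0] is the name
    writer.writerow([col[0] for col in cursor.description])
    writer.writerows(cursor.fetchall())

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(a INTEGER, b TEXT)")
conn.execute("INSERT INTO t VALUES (1, 'x')")

# io.StringIO stands in for an open spool file on disk
out = io.StringIO()
spool_query(conn.cursor(), "SELECT a, b FROM t", out)
print(out.getvalue())
```

With a real spool, `out` would be the file object opened by the connection's spool() method.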

Perform pre and post processing with sqlite3

How can I perform post-processing on my sqlite3 database via Python? The following code doesn't work, but what I am trying to do is: first create a new database if one doesn't already exist, then insert some data, and finally execute the query and close the connection. I want to do each step separately, so I can add functionality later on, such as delete / update / etc. Any ideas?
class TitlesDB:
    # initiate global variables
    conn = None
    c = None

    # perform pre-processing
    def __init__(self, name):
        import os
        os.chdir('/../../')
        import sqlite3
        conn = sqlite3.connect(name)
        c = conn.cursor()
        c.execute('CREATE TABLE IF NOT EXISTS titles (title VARCHAR(100) UNIQUE)')

    # insert a bunch of new titles
    def InsertTitles(self, list):
        c.executemany('INSERT OR IGNORE INTO titles VALUES (?)', list)

    # perform post-processing
    def __fina__(self):
        conn.commit()
        conn.close()
You could create a context manager to do the pre- and post-processing.
import sqlite3
import contextlib

@contextlib.contextmanager
def titles_cursor():
    # perform pre-processing
    conn = sqlite3.connect(name)
    c = conn.cursor()
    c.execute('CREATE TABLE IF NOT EXISTS titles (title VARCHAR(100) UNIQUE)')
    yield c
    # perform post-processing
    conn.commit()
    conn.close()
Use it in a with statement:
with titles_cursor() as c:
    c.executemany('INSERT OR IGNORE INTO titles VALUES (?)', list)
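A self-contained version with an in-memory database, keeping the connection at module level so the result can be inspected afterwards:

```python
import contextlib
import sqlite3

conn = sqlite3.connect(":memory:")  # kept outside so results can be checked later

@contextlib.contextmanager
def titles_cursor():
    # pre-processing: get a cursor and make sure the table exists
    c = conn.cursor()
    c.execute("CREATE TABLE IF NOT EXISTS titles (title VARCHAR(100) UNIQUE)")
    yield c
    # post-processing: commit and close the cursor
    conn.commit()
    c.close()

with titles_cursor() as c:
    # the duplicate "foo" is silently skipped thanks to OR IGNORE + UNIQUE
    c.executemany("INSERT OR IGNORE INTO titles VALUES (?)",
                  [("foo",), ("bar",), ("foo",)])

print(conn.execute("SELECT COUNT(*) FROM titles").fetchone()[0])  # 2
```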
First, wouldn't it be better to avoid having the SQL connection inside __init__?
You will have a problem if you want to use the same instance again after __fina__ has been called.
You could open the connection in a separate method instead, call the closing method when needed, and commit after each method is executed.
Here is what I use: a class method that connects to the db and executes a query passed to it as an argument, then commits and closes the connection. You can simply call this method any time you want.
The best part is that you can create a method that runs multiple queries passed as arguments before closing the db connection.
This is especially useful if you have to use SQL connections to the same db in another class, without needing a set of methods each time you execute a query.
Here is a little example I used with the MySQLdb module. It's pretty simple, but it worked.
import MySQLdb

class DbQuery:
    '''Here is the class I talked about'''

    def __init__(self):
        '''You can define the main queries here but it's not necessary.
        They can be global variables.
        If you don't have a class dependency from which you get variables,
        you might not even need to define __init__'''

    def Sql_Connect(self):
        self.db = MySQLdb.connect("localhost", "root", "", "data_db")
        self.cursor = self.db.cursor()

    def Sql_Commit(self):
        try:
            self.db.commit()
            print "Info : Database updated"
        except:
            self.db.rollback()
            print "Error : Database rollback"
        self.db.close()

    def Query(self, query):
        self.Sql_Connect()
        try:
            self.cursor.execute(query)
        finally:
            # always commit (or roll back) and close, even if execute raised
            self.Sql_Commit()
The only important thing is to remember the query structure.
