I was doing a tutorial and came across a way to handle connections with sqlite3. Then I studied the with keyword and found out that it is an alternative to the try/except/finally way of doing things.
It was said that in the case of file handling, with automatically handles the closing of files, and I assumed the same would apply to the connection, as the zetcode tutorial says:
"With the with keyword, the Python interpreter automatically releases
the resources. It also provides error handling." http://zetcode.com/db/sqlitepythontutorial/
So I thought it would be good to use this way of handling things, but I couldn't figure out why both statements (inner scope and outer scope) work. Shouldn't the with release the connection?
import sqlite3

con = sqlite3.connect('test.db')

with con:
    cur = con.cursor()
    cur.execute('SELECT 1,SQLITE_VERSION()')
    data = cur.fetchone()
    print data

cur.execute('SELECT 2,SQLITE_VERSION()')
data = cur.fetchone()
print data
which outputs
(1, u'3.6.21')
(2, u'3.6.21')
I don't know what exactly the with is doing here (or does in general), so please elaborate on the use of with over try/except in this context.
And should connections be opened and closed on each query? (I am formulating queries inside a function which I call each time with an argument.) Would that be good practice?
In general, a context manager is free to do whatever its author wants it to do when used: setting or resetting some system state, cleaning up resources after use, acquiring or releasing a lock, and so on.
In particular, as Jon already writes, a database connection object creates a transaction when used as a context manager. If you want to auto-close the connection, you can do
import contextlib

with contextlib.closing(sqlite3.connect('test.db')) as con:
    with con as cur:
        cur.execute('SELECT 1,SQLITE_VERSION()')
        data = cur.fetchone()
        print data
    with con as cur:
        cur.execute('SELECT 2,SQLITE_VERSION()')
        data = cur.fetchone()
        print data

(Note that a sqlite3 connection's __enter__ returns the connection itself, so cur here is really the connection; cur.execute() still works because Connection.execute() is a shortcut that creates a cursor behind the scenes.)
You could also write your own wrapper around sqlite3 to support with:
class SQLite():
    def __init__(self, file='sqlite.db'):
        self.file = file

    def __enter__(self):
        self.conn = sqlite3.connect(self.file)
        self.conn.row_factory = sqlite3.Row  # rows behave like dicts as well as tuples
        return self.conn.cursor()

    def __exit__(self, type, value, traceback):
        self.conn.commit()  # note: commits even on error; add a rollback branch if you need transaction semantics
        self.conn.close()
with SQLite('test.db') as cur:
    print(cur.execute('select sqlite_version();').fetchall()[0][0])
https://docs.python.org/2.5/whatsnew/pep-343.html#SECTION000910000000000000000
From the docs: http://docs.python.org/2/library/sqlite3.html#using-the-connection-as-a-context-manager
Connection objects can be used as context managers that automatically commit or rollback transactions. In the event of an exception, the transaction is rolled back; otherwise, the transaction is committed:
So the context manager doesn't release the connection; instead, it ensures that any transaction occurring on the connection is rolled back if an exception occurs, and committed otherwise. This is useful for DELETE, UPDATE, and INSERT queries, for instance.
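A minimal sketch of that behaviour, reusing the con from the question (the throwaway table t is made up for illustration):

con.execute("CREATE TABLE IF NOT EXISTS t (x INTEGER)")

try:
    with con:  # opens a transaction, commits or rolls back on exit
        con.execute("INSERT INTO t VALUES (1)")
        raise RuntimeError("force a rollback")
except RuntimeError:
    pass

print(con.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 0 -- the INSERT was rolled back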
Related
I'm using the mysql-connector-python library in a script I wrote to access a MySQL 8 database:
def get_document_by_id(conn, id):
    cursor = conn.cursor(dictionary=True)
    try:
        cursor.execute("SELECT * FROM documents WHERE id = %s", (id,))
        return cursor.fetchone()
    except Exception as e:
        print(str(e))
        return {}
    finally:
        cursor.close()
Since I need to call this function multiple times during a loop, I was wondering about the following:
Does the act of creating/closing a cursor actually interact with my MySQL database in any way, or is it just an abstraction in Python for grouping SQL queries together?
Thanks a lot for your help.
The docs say it clearly:
For a connection obtained from a connection pool, close() does not
actually close it but returns it to the pool and makes it available
for subsequent connection requests.
You can also refer to Connector/Python Connection Pooling for further information.
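For illustration, a hedged sketch of pooling with mysql-connector-python (the pool name, size, and connection parameters are placeholders):

from mysql.connector import pooling

pool = pooling.MySQLConnectionPool(
    pool_name="mypool", pool_size=5,
    host="localhost", database="mydb", user="user", password="secret")

conn = pool.get_connection()
try:
    cursor = conn.cursor(dictionary=True)
    cursor.execute("SELECT * FROM documents WHERE id = %s", (1,))
    row = cursor.fetchone()
    cursor.close()  # frees client-side resources for this statement
finally:
    conn.close()    # returns the connection to the pool rather than closing it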
Hello everyone, I have the following issue.
I am trying to run a simple UPDATE query using sqlalchemy and psycopg2 for Postgres.
The query is
update = f"""
UPDATE {requests_table_name}
SET status = '{status}', {column_db_table} = '{dt.datetime.now()}'
WHERE request_id = '{request_id}'
"""
and then commit the changes using
cursor.execute(update).commit()
But it throws an error: AttributeError: 'NoneType' object has no attribute 'commit'.
My connection string is
engine = create_engine(
    f'postgresql://{self.params["user"]}:{self.params["password"]}@{self.params["host"]}:{self.params["port"]}/{self.params["database"]}')
conn = engine.connect().connection
cursor = conn.cursor()
The other thing is that the cursor is always closed: <cursor object at 0x00000253827D6D60; closed: 0>.
The connection with the database is OK; I can fetch tables and update them using the pandas pd_to_sql method, but committing via the cursor does not work. It works perfectly with SQL Server but not with Postgres.
In Postgres, however, it creates a PID with the status "idle in transaction" and Client: ClientRead every time I run cursor.execute(update).commit().
I cannot figure out whether the problem is in the code or in the database.
I tried different methods to initiate a cursor, like raw_connection(), but without result.
I looked into the "idle in transaction" / Client: ClientRead state but am not sure how to resolve it.
You have to call commit() on the connection object.
According to the documentation, execute() returns None.
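A minimal sketch of the fix, using the names from the question:

cursor.execute(update)  # execute() returns None, so nothing can be chained onto it
conn.commit()           # commit() lives on the connection, not the cursor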
Note that even if you use a context manager like this:
with my_connection.cursor() as cur:
    cur.execute('INSERT INTO ..')

You may find your database processes still getting stuck in the idle in transaction state. The COMMIT is handled at the connection level, as @laurenz-albe said, so you need to wrap that too:

with my_connection as conn:
    with conn.cursor() as cur:
        cur.execute('INSERT INTO ..')
It's spelled out clearly in the documentation, but I still managed to overlook it.
Being new to both Python and sqlite, I've been playing around with them both recently trying to figure things out. In particular with sqlite I've learned how to open/close/commit data to a db. But now I'm trying to clean things up a bit so that I can open/close the db via function calls. For instance, I'd like to do something like:
def open_db():
    conn = sqlite3.connect("path")
    c = conn.cursor()

def close_db():
    c.close()
    conn.close()

def create_db():
    open_db()
    c.execute("CREATE STUFF")
    close_db()
Then when I run the program, before I query or write to the table, I could do something like:
open_db()
c.execute('SELECT * DO STUFF')
OR
c.execute('DELETE * DO OTHER STUFF')
conn.commit()
close_db()
I've read about context managers but I'm not sure I entirely understand what's going on with them. What would be the easiest way to clean up how I open/close my DB connections, so that I'm not always having to retype the cursor commands?
This is because the connection you define is local to the open_db function. Change it as follows:

def open_db():
    conn = sqlite3.connect("path")
    return conn.cursor()
and then
c = open_db()
c.execute('SELECT * DO STUFF')
It should be noted that writing functions like this purely as a learning exercise might be OK, but generally it's not very useful to write a thin wrapper around a database connectivity API.
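If you do keep the wrapper, note that returning only the cursor leaves you no way to commit or close later; a minimal sketch that returns both (the path and query are placeholders from the question):

def open_db(path="path"):
    conn = sqlite3.connect(path)
    return conn, conn.cursor()

conn, c = open_db()
c.execute('SELECT * DO STUFF')  # placeholder SQL from the question
conn.commit()
conn.close()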
I don't know that there is an easy way. As already suggested, if you make the name of a database cursor or connection local to a function then these will be lost upon exit from that function. The answer might be to write code using the contextlib module (which is included with the Python distribution, and documented in the help file); I wouldn't call that easy. The documentation for sqlite3 does mention that connection objects can be used as context managers; I suspect you've already noticed that. I also see that there's some sort of context manager for MySQL but I haven't used it.
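For instance, a minimal sketch using contextlib.contextmanager, assuming you want commit-on-success and close-always semantics (the path is a placeholder):

import sqlite3
from contextlib import contextmanager

@contextmanager
def open_db(path):
    conn = sqlite3.connect(path)
    try:
        yield conn.cursor()  # hand a cursor to the with-block
        conn.commit()        # commit only if the block finished without an error
    finally:
        conn.close()         # always close the connection

with open_db("test.db") as c:
    c.execute("SELECT sqlite_version()")
    print(c.fetchone()[0])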
I am constructing a model that does large parts of its calculations in a Postgresql database (for performance reasons). It looks somewhat like this:
def sql_func1(conn):
    # prepare some data, crunch some numbers, etc.
    curs = conn.cursor()
    curs.execute("SOME SQL COMMAND")
    conn.commit()  # commit() belongs to the connection; cursors have no commit()
    curs.close()

if __name__ == "__main__":
    connection = psycopg2.connect(dbname='name', user='user', password='pass',
                                  host='localhost', port=1234)
    sql_func1(connection)
    sql_func2(connection)
    sql_func3(connection)
    connection.close()
The script uses around 30 individual functions like sql_func1. Obviously it is a little awkward to manage the connection and cursor in each function every time, so I started using a decorator as described here. Now I can simply wrap sql_func1 with a decorator @db_connect and pass the connection from there. However, that means I am opening and closing the connection all the time, which is not good practice either. The psycopg2 FAQ says:
Creating a connection can be slow (think of SSL over TCP) so the best
practice is to create a single connection and keep it open as long as
required. It is also good practice to rollback or commit frequently
(even after a single SELECT statement) to make sure the backend is
never left “idle in transaction”. See also psycopg2.pool for
lightweight connection pooling.
Could you please give me some insight into what would be the ideal practice in my case? Should I rather use a decorator that passes the cursor object instead of the connection? If so, please provide a code sample for the decorator. As I am rather new to programming, please also let me know if you think my overall approach is wrong.
What about storing the connection in a global variable without closing it in the finally block? Something like this (according to the example you linked):
cnn = None

def with_connection(f):
    def with_connection_(*args, **kwargs):
        global cnn
        if not cnn:
            cnn = psycopg2.connect(DSN)
        try:
            rv = f(cnn, *args, **kwargs)
        except Exception:
            cnn.rollback()
            raise
        else:
            cnn.commit()  # or maybe not
        return rv
    return with_connection_
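Usage would then look something like this (a sketch; the SQL string is a placeholder from the question):

@with_connection
def sql_func1(conn):
    # the decorator injects the shared connection as the first argument
    curs = conn.cursor()
    curs.execute("SOME SQL COMMAND")
    curs.close()

sql_func1()  # no connection passed; the decorator supplies (and reuses) it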
I'm writing a Python CGI script that will query a MySQL database. I'm using the MySQLdb module. Since the database will be queried repeatedly, I wrote this function...
def getDatabaseResult(sqlQuery, connectioninfohere):
    # connect to the database
    vDatabase = MySQLdb.connect(connectioninfohere)
    # create a cursor, execute an SQL statement and get the result as a tuple
    cursor = vDatabase.cursor()
    try:
        cursor.execute(sqlQuery)
    except:
        cursor.close()
        return None
    result = cursor.fetchall()
    cursor.close()
    return result
My question is... is this best practice, or should I reuse my cursor within my functions? For example, which is better...
def callsANewCursorAndConnectionEachTime():
    result1 = getDatabaseResult(someQuery1)
    result2 = getDatabaseResult(someQuery2)
    result3 = getDatabaseResult(someQuery3)
    result4 = getDatabaseResult(someQuery4)
or do away with the getDatabaseResult function altogether and do something like...
def reusesTheSameCursor():
    vDatabase = MySQLdb.connect(connectionInfohere)
    cursor = vDatabase.cursor()
    cursor.execute(someQuery1)
    result1 = cursor.fetchall()
    cursor.execute(someQuery2)
    result2 = cursor.fetchall()
    cursor.execute(someQuery3)
    result3 = cursor.fetchall()
    cursor.execute(someQuery4)
    result4 = cursor.fetchall()
The MySQLdb developer recommends building an application-specific API that does the DB access stuff for you, so that you don't have to worry about the MySQL query strings in the application code. It'll make the code a bit more extensible (link).
As for the cursors, my understanding is that the best thing is to create a cursor per operation/transaction. So a "check value -> update value -> read value" type of transaction could use the same cursor, but for the next one you would create a new one. This again points in the direction of building an internal API for DB access instead of having a generic executeSql method.
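A sketch of the cursor-per-transaction idea (the accounts table and column names are made up for illustration):

cursor = vDatabase.cursor()
cursor.execute("SELECT balance FROM accounts WHERE id = %s", (account_id,))
(balance,) = cursor.fetchone()
cursor.execute("UPDATE accounts SET balance = %s WHERE id = %s",
               (balance + 10, account_id))
cursor.execute("SELECT balance FROM accounts WHERE id = %s", (account_id,))
new_balance = cursor.fetchone()[0]
vDatabase.commit()  # commit the whole transaction on the connection
cursor.close()      # then discard the cursor; the next transaction gets a new one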
Also remember to close your cursors, and commit changes to the connection after the queries are done.
Your getDatabaseResult function doesn't need to connect for every separate query, though. You can share the connection between queries as long as you act responsibly with the cursors.
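For example, a hedged variant of the question's function that accepts an already-open connection instead of creating one per call:

def getDatabaseResult(vDatabase, sqlQuery):
    # reuse the caller's open connection; only the cursor is created per call
    cursor = vDatabase.cursor()
    try:
        cursor.execute(sqlQuery)
        return cursor.fetchall()
    finally:
        cursor.close()

vDatabase = MySQLdb.connect(connectionInfohere)
result1 = getDatabaseResult(vDatabase, someQuery1)
result2 = getDatabaseResult(vDatabase, someQuery2)
vDatabase.close()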