In the PyMySQL library, cursors.py defines the following methods:
def __enter__(self):
    return self

def __exit__(self, *exc_info):
    del exc_info
    self.close()
That means that if I use the cursor class in a with statement, the cursor should close when execution leaves the nested block. Why does it remain usable instead?
db = pymysql.connect(config)
with pymysql.cursors.Cursor(db) as cursor:
    print(cursor)
print(cursor)
also:
db = pymysql.connect(config)
with db.cursor() as cursor:
    print(cursor)
print(cursor)
Both forms print the cursor object twice (once inside the with statement and once outside it). Am I doing something wrong?
Closing a cursor doesn't null out the variable; it just detaches the cursor from the database. Try printing cursor.connection instead.
Also, I think you're expecting the with keyword to delete the object in question, but it's really just syntactic sugar around the __enter__ and __exit__ methods.
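This can be demonstrated with the standard-library sqlite3 module (used here so the sketch is self-contained; PyMySQL cursors behave analogously): the name stays bound after the with block, but the underlying cursor is closed.

```python
import sqlite3
from contextlib import closing

conn = sqlite3.connect(":memory:")

# The with block closes the cursor on exit, but the local name
# `cursor` remains bound to the (now closed) cursor object.
with closing(conn.cursor()) as cursor:
    print(cursor)  # an open cursor

print(cursor)      # same object, still printable after the block
try:
    cursor.execute("SELECT 1")  # but it can no longer be used
except sqlite3.ProgrammingError as exc:
    print("closed:", exc)
```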
I have a class which is an ndb.Model.
I am trying to add pagination so I added this:
@classmethod
def get_next_page(cls, cursor):
    q = cls.query()
    q_forward = q.order(cls.title)
    if cursor:
        cursor = ndb.datastore_query.Cursor(cursor)
    objects, cursor, more = q_forward.fetch_page(10, start_cursor=cursor)
    return objects, cursor.urlsafe(), more
However, fetch_page ALWAYS returns more == False, and the returned cursor is always empty. If I use offset=5 or offset=10 instead of a cursor, it works just fine. The cursor never updates, so fetching always starts from the first item.
I am testing this locally with stub context.
What am I missing? I'm very new to this.
I believe it should be ndb._datastore_query.Cursor (see the reference), or just use ndb.Cursor.
If the cursor came from the UI and you had previously made it urlsafe, then you should be doing ndb._datastore_query.Cursor(urlsafe=cursor) or ndb.Cursor(urlsafe=cursor).
Also, when you don't have a cursor, make sure it's explicitly set to None, or use ndb.Cursor() or ndb._datastore_query.Cursor().
Instead of using:
import sqlite3
conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute(...)
c.close()
would it be possible to use the Pythonic idiom:
with conn.cursor() as c:
    c.execute(...)
It doesn't seem to work:
AttributeError: __exit__
Note: it's important to close a cursor because of this.
You can use contextlib.closing:
import sqlite3
from contextlib import closing
conn = sqlite3.connect(':memory:')
with closing(conn.cursor()) as cursor:
    cursor.execute(...)
This works because closing(object) automatically calls the close() method of the passed-in object when the with block exits.
A simpler alternative would be to use the connection object with the context manager, as specified in the docs.
with conn:
    conn.execute(...)
If you insist on working with the cursor (because reasons), then why not make your own wrapper class?
class SafeCursor:
    def __init__(self, connection):
        self.con = connection

    def __enter__(self):
        self.cursor = self.con.cursor()
        return self.cursor

    def __exit__(self, typ, value, traceback):
        self.cursor.close()
You'll then call your class like this:
with SafeCursor(conn) as c:
    c.execute(...)
Adding to sudormrfbin's post. I recently hit an issue where an INSERT statement wasn't committing to the database; it turned out I was missing the with context manager for the Connection object.
Also, it is a good practice to always close the Cursor object as well, as mentioned in this post.
Therefore, use two contextlib.closing() methods, each within a with context manager:
import contextlib
import sqlite3

# Auto-closes the Connection object
with contextlib.closing(sqlite3.connect("path_to_db_file")) as conn:
    # Auto-commits to the database
    with conn:
        # Auto-closes the Cursor object
        with contextlib.closing(conn.cursor()) as cursor:
            # Execute method(s)
            cursor.execute(""" SQL statements here """)
Below is a database pooling example. There are two things I don't understand:
Why does the getcursor function use yield?
What is a context manager?
from psycopg2.pool import SimpleConnectionPool
from contextlib import contextmanager

dbConnection = "dbname='dbname' user='postgres' host='localhost' password='postgres'"

# pool defined with up to 10 live connections
connectionpool = SimpleConnectionPool(1, 10, dsn=dbConnection)

@contextmanager
def getcursor():
    con = connectionpool.getconn()
    try:
        yield con.cursor()
    finally:
        connectionpool.putconn(con)

def main_work():
    try:
        # "with" here takes care of returning the connection when done
        with getcursor() as cur:
            cur.execute('select * from "TableName"')
            result_set = cur.fetchall()
    except Exception as e:
        print("error in executing with exception:", e)
Both of your questions are related. In Python, a context manager is what's in use whenever you see a with statement. Classically, they're written like this:
class getcursor(object):
    def __enter__(self):
        self.con = connectionpool.getconn()
        return self.con

    def __exit__(self, *args):
        connectionpool.putconn(self.con)
Now when you use a context manager, it calls the __enter__ method on the with statement and the __exit__ method when it exits the context. Think of it like this.
cursor = getcursor()
with cursor as cur:  # calls cursor.__enter__()
    cur.execute('select * from "TableName"')
    result_set = cur.fetchall()
# We're now exiting the context, so it calls cursor.__exit__()
# with some exception info if relevant
x = 1
The @contextmanager decorator is some sugar to make creating a context manager easier. Basically, it uses the yield statement to give execution back to the caller: everything up to and including the yield statement acts as the __enter__ method, and everything after it as the __exit__ method.
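A minimal, self-contained sketch of that split (stand-in names, no real database), showing the order in which the pieces run:

```python
from contextlib import contextmanager

events = []

# Everything up to `yield` plays the role of __enter__;
# everything after it (here, in the finally) plays the role of __exit__.
@contextmanager
def managed():
    events.append("enter")     # runs when the with block is entered
    try:
        yield "fake-cursor"    # the value bound by `as`
    finally:
        events.append("exit")  # runs when the with block exits, even on error

with managed() as cur:
    events.append("using " + cur)

print(events)  # ['enter', 'using fake-cursor', 'exit']
```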
How can I perform post-processing on my SQLite3 database via Python? The following code doesn't work. What I am trying to do is first create a new database if it doesn't already exist, then insert some data, and finally execute the query and close the connection. But I want to do each step separately, so I can add more functionality later on, such as delete / update / etc. Any ideas?
class TitlesDB:
    # initiate global variables
    conn = None
    c = None

    # perform pre-processing
    def __init__(self, name):
        import os
        os.chdir('/../../')
        import sqlite3
        conn = sqlite3.connect(name)
        c = conn.cursor()
        c.execute('CREATE TABLE IF NOT EXISTS titles (title VARCHAR(100) UNIQUE)')

    # insert a bunch of new titles
    def InsertTitles(self, list):
        c.executemany('INSERT OR IGNORE INTO titles VALUES (?)', list)

    # perform post-processing
    def __fina__(self):
        conn.commit()
        conn.close()
You could create a context manager to do the pre- and postprocessing.
import contextlib
import sqlite3

@contextlib.contextmanager
def titles_cursor():
    # perform pre-processing
    conn = sqlite3.connect(name)
    c = conn.cursor()
    c.execute('CREATE TABLE IF NOT EXISTS titles (title VARCHAR(100) UNIQUE)')
    yield c
    # perform post-processing
    conn.commit()
    conn.close()
Use it in a with statement:
with titles_cursor() as c:
    c.executemany('INSERT OR IGNORE INTO titles VALUES (?)', list)
First, wouldn't it be better to avoid having the SQL connection inside __init__?
You will have a problem if you want to keep using the same instance after __fina__ has run.
You could put the connection in another method instead, call it when needed, close the connection there, and commit after each method is executed.
Here is what I use: create a class method that connects to the db, executes a query passed in as an argument, commits, and closes the connection. You can simply call this method any time you want.
Best of all, you can pass multiple queries as arguments before closing the db connection.
This is especially useful if you have to use SQL connections to the same db in another class, without duplicating a set of methods each time you need to execute a query.
Here is a little example I used with the MySQLdb module. It's pretty simple, but it worked.
import MySQLdb

class DbQuery:
    '''Here is the class I talked about'''

    def __init__(self):
        '''You can define the main queries here, but it's not necessary;
        they can be global variables. If you don't have class dependencies
        from which you get variables, you might not even need __init__.'''

    def Sql_Connect(self):
        self.db = MySQLdb.connect("localhost", "root", "", "data_db")
        self.cursor = self.db.cursor()

    def Sql_Commit(self):
        try:
            self.db.commit()
            print("Info : Database updated")
        except:
            self.db.rollback()
            print("Error : Database rollback")
        self.db.close()

    def Query(self, query):
        self.Sql_Connect()
        try:
            self.cursor.execute(query)
        finally:
            self.Sql_Commit()
The only important thing to remember is the query structure.
I'm using MySQLdb to connect to MySQL using python. My tables are all InnoDB and I'm using transactions.
I'm struggling to come up with a way to 'share' transactions across functions. Consider the following pseudocode:
def foo():
    db = connect()
    cur = db.cursor()
    try:
        cur.execute(...)
        db.commit()
    except:
        db.rollback()

def bar():
    db = connect()
    cur = db.cursor()
    try:
        cur.execute(...)
        foo()  # note this call
        db.commit()
    except:
        db.rollback()
At some points in my code, I need to call foo() and at some points I need to call bar(). What's the best practice here? How would I tell the call to foo() to commit() if called outside bar() but not inside bar()? This is obviously more complex if there are multiple threads calling foo() and bar() and the calls to connect() don't return the same connection object.
UPDATE
I found a solution which works for me. I've wrapped connect() to increment a value when called. Calling commit() decrements that value. If commit() is called and that counter's > 0, no commit happens and the value is decremented. You therefore get this:
def foo():
    db = connect()  # internal counter = 1
    ...
    db.commit()     # internal counter = 0, so commit

def bar():
    db = connect()  # internal counter = 1
    ...
    foo()           # counter goes to 2, then back to 1 when foo's commit() runs, so no commit happens
    db.commit()     # internal counter = 0, so commit
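A sketch of such a wrapper (hypothetical names; the real code wraps MySQLdb's connect() and commit()), demonstrated against a fake connection that only counts real commits:

```python
class CountingConnection:
    """Defer the real commit until the outermost caller commits."""

    def __init__(self, real_conn):
        self._conn = real_conn
        self._depth = 0

    def connect(self):
        # stands in for the wrapped connect(): bump the nesting counter
        self._depth += 1
        return self

    def commit(self):
        # decrement; commit for real only at the outermost level
        self._depth -= 1
        if self._depth == 0:
            self._conn.commit()

class FakeConn:
    def __init__(self):
        self.commits = 0
    def commit(self):
        self.commits += 1

wrapper = CountingConnection(FakeConn())
db = wrapper.connect()   # bar: counter = 1
db = wrapper.connect()   # foo, nested: counter = 2
db.commit()              # counter = 1, no real commit yet
db.commit()              # counter = 0, the real commit happens
print(wrapper._conn.commits)  # 1
```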
You can take advantage of Python's default function arguments in this case:
def foo(cur=None):
    own_connection = cur is None
    if own_connection:
        db = connect()
        cur = db.cursor()
    try:
        cur.execute(...)
        if own_connection:
            db.commit()
    except:
        if own_connection:
            db.rollback()
        raise  # let the enclosing transaction handle the failure
So, if bar is calling foo, it just passes the cursor object in as a parameter.
That said, I don't see much sense in creating a different cursor object for each small function. You should either write your several functions as methods of an object and have a cursor attribute, or always pass the cursor explicitly (in that case, use another named parameter to indicate whether the current function is part of a larger transaction).
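A sketch of the first suggestion (methods on an object sharing a cursor attribute), using the standard-library sqlite3 module so it is self-contained; the class, table, and statements are illustrative:

```python
import sqlite3

class Repo:
    def __init__(self, conn):
        self.conn = conn
        self.cur = conn.cursor()   # one cursor shared by all methods

    def foo(self):
        self.cur.execute("INSERT INTO t VALUES (1)")

    def bar(self):
        self.cur.execute("INSERT INTO t VALUES (2)")
        self.foo()                 # same cursor, same transaction

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
repo = Repo(conn)
repo.bar()
conn.commit()                      # one commit for the whole transaction
print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 2
```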
Another option is to create a context-manager class to perform your commits and encapsulate all transactions within it. Then none of your functions would commit; both the transaction commit and rollback calls live in the __exit__ method of this object.
class Transaction(object):
    def __enter__(self):
        self.db = connect()
        cursor = self.db.cursor()
        return cursor

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_type is None:
            self.db.commit()
        else:
            self.db.rollback()
And just use it like this:
def foo(cur):
    cur.execute(...)

def bar(cur):
    cur.execute(...)
    foo(cur)

with Transaction() as cursor:
    foo(cursor)

with Transaction() as cursor:
    bar(cursor)
The cleanest way IMO is to pass the connection object to foo and bar.
Declare your connections outside the functions and pass them in as arguments:
foo(cur, conn)
bar(cur, conn)
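A minimal, self-contained version of that idea, with sqlite3 standing in for MySQLdb (table and statements are illustrative):

```python
import sqlite3

def foo(cur, conn):
    cur.execute("INSERT INTO t VALUES (1)")
    conn.commit()

def bar(cur, conn):
    cur.execute("INSERT INTO t VALUES (2)")
    foo(cur, conn)   # shares the same connection and cursor
    conn.commit()

# Declare the connection once, outside the functions
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (x INTEGER)")

bar(cur, conn)
print(cur.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 2
```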