I'm trying to figure out how to chain class methods to improve a utility class I've been writing - for reasons I'd prefer not to get into :)
Now suppose I wanted to chain class methods on a class instance (in this case for setting the cursor), e.g.:
# initialize the class instance
db = CRUD(table='users', public_fields=['name', 'username', 'email'])

# the desired interface: class_instance.cursor(<cursor>).method(...)
with sql.read_pool.cursor() as c:
    db.cursor(c).get(target='username', where="omarlittle")
The part that's confusing is that I would prefer the cursor not to persist as a class attribute after .get(...) has been called and has returned; I'd also like to require that .cursor(cursor) be called first.
class CRUD(object):
    def __init__(self, table, public_fields):
        self.table = table
        self.public_fields = public_fields

    def fields(self):
        return ', '.join(self.public_fields)

    def get(self, target, where):
        # this is strictly for illustration purposes; I realize all
        # the vulnerabilities this leaves me exposed to.
        query = "SELECT {fields} FROM {table} WHERE {target} = {where}"
        query = query.format(fields=self.fields(), table=self.table,
                             target=target, where=where)
        self.cursor.execute(query)

    def cursor(self, cursor):
        pass  # this is where I get lost.
If I understand what you're asking, what you want is for the cursor method to return some object with a get method that works as desired. There's no reason the object it returns has to be self; it can instead return an instance of some cursor type.
That instance could have a back-reference to self, or it could get its own copy of whatever internals are needed to be a cursor, or it could be a wrapper around an underlying object from your low-level database library that knows how to be a cursor.
If you look at the DB API 2.0 spec, or implementations of it like the stdlib's sqlite3, that's exactly how they do it: A Database or Connection object (the thing you get from the top-level connect function) has a cursor method that returns a Cursor object, and that Cursor object has an execute method.
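As a quick illustration of that pattern (a sketch using the stdlib's sqlite3 with an in-memory database, purely for demonstration):

```python
import sqlite3

# connect() returns a Connection; Connection.cursor() returns a Cursor;
# the Cursor has execute and fetch methods.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (name TEXT, username TEXT)")
cur.execute("INSERT INTO users VALUES (?, ?)", ("Omar", "omarlittle"))
cur.execute("SELECT name FROM users WHERE username = ?", ("omarlittle",))
rows = cur.fetchall()
print(rows)  # [('Omar',)]
conn.close()
```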
So:
class CRUDCursor(object):
    def __init__(self, c, crud):
        self.crud = crud
        self.cursor = however_you_get_an_actual_sql_cursor(c)

    def get(self, target, where):
        # this is strictly for illustration purposes; I realize all
        # the vulnerabilities this leaves me exposed to.
        query = "SELECT {fields} FROM {table} WHERE {target} = {where}"
        query = query.format(fields=self.crud.fields(), table=self.crud.table,
                             target=target, where=where)
        self.cursor.execute(query)
        # you may want this to return something as well?

class CRUD(object):
    def __init__(self, table, public_fields):
        self.table = table
        self.public_fields = public_fields

    def fields(self):
        return ', '.join(self.public_fields)

    # no get method

    def cursor(self, cursor):
        return CRUDCursor(cursor, self)
However, there still seems to be a major problem with your example. Normally, after you execute a SELECT statement on a cursor, you want to fetch the rows from that cursor. You're not keeping the cursor object around in your "user" code, and you explicitly don't want the CRUD object to keep its cursor around, so… how do you expect to do that? Maybe get is supposed to return self.cursor.fetch_all() at the end or something?
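Pulling the idea together into a runnable sketch (with sqlite3 standing in for the real database library, and get returning fetchall(), both of which are assumptions about the intended behavior):

```python
import sqlite3

class CRUDCursor(object):
    def __init__(self, cursor, crud):
        self.crud = crud
        self.cursor = cursor

    def get(self, target, where):
        # Parameterize the value; fields/table/target still come from
        # trusted class configuration, as in the illustration above.
        query = "SELECT {fields} FROM {table} WHERE {target} = ?".format(
            fields=self.crud.fields(), table=self.crud.table, target=target)
        self.cursor.execute(query, (where,))
        return self.cursor.fetchall()

class CRUD(object):
    def __init__(self, table, public_fields):
        self.table = table
        self.public_fields = public_fields

    def fields(self):
        return ', '.join(self.public_fields)

    def cursor(self, cursor):
        # returns a short-lived wrapper; no cursor persists on the CRUD object
        return CRUDCursor(cursor, self)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, username TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?, ?)",
             ("Omar", "omarlittle", "omar@example.com"))
db = CRUD(table='users', public_fields=['name', 'username', 'email'])
rows = db.cursor(conn.cursor()).get(target='username', where='omarlittle')
print(rows)  # [('Omar', 'omarlittle', 'omar@example.com')]
```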
I have a class which is an ndb.Model.
I am trying to add pagination so I added this:
@classmethod
def get_next_page(cls, cursor):
    q = cls.query()
    q_forward = q.order(cls.title)
    if cursor:
        cursor = ndb.datastore_query.Cursor(cursor)
    objects, cursor, more = q_forward.fetch_page(10, start_cursor=cursor)
    return objects, cursor.urlsafe(), more
However, fetch_page ALWAYS returns more == False, and the cursor is always just empty. But if, instead of a cursor, I use offset=5 or offset=10 or whatever, it works just fine. The cursor does not update, so it always starts from the first item.
I am testing this locally with a stub context.
I wonder what I am missing? I'm very new to this.
I believe it should be ndb._datastore_query.Cursor (see reference), or just ndb.Cursor.
If the cursor came from the UI and you had previously made it urlsafe, then you should be doing ndb._datastore_query.Cursor(urlsafe=cursor) or ndb.Cursor(urlsafe=cursor).
Also, when you don't have a cursor, make sure it's explicitly set to None, or just use ndb.Cursor() or ndb._datastore_query.Cursor().
I am currently working on a huge project, which constantly executes queries. My problem is that my old code always created a new database connection and cursor, which decreased the speed immensely. So I thought it was time to write a new database class, which looks like this at the moment:
class Database(object):
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = object.__new__(cls)
            try:
                connection = Database._instance.connection = mysql.connector.connect(
                    host="127.0.0.1", user="root", password="", database="db_test")
                cursor = Database._instance.cursor = connection.cursor()
            except Exception as error:
                print("Error: Connection not established {}".format(error))
            else:
                print("Connection established")
        return cls._instance

    def __init__(self):
        self.connection = self._instance.connection
        self.cursor = self._instance.cursor

    # Do database stuff here
The queries will use the class like so:
def foo():
    with Database() as cursor:
        cursor.execute("STATEMENT")
I am not absolutely sure whether this creates the connection only once, regardless of how often the class is instantiated. Maybe someone knows how to initialize a connection only once and how to make use of it in the class afterwards, or knows whether my solution is correct. I am thankful for any help!
Explanation
The keyword here is clearly class variables. Taking a look at the official documentation, we can see that class variables, unlike instance variables, are shared by all class instances regardless of how many instances exist.
Generally speaking, instance variables are for data unique to each instance and class variables are for attributes and methods shared by all instances of the class:
So let us assume you have multiple instances of the class. The class itself is defined as shown below.
class Dog:
    kind = "canine"  # class variable shared by all instances

    def __init__(self, name):
        self.name = name  # instance variable unique to each instance
In order to better understand the differences between class variables and instance variables, I would like to include a small example here:
>>> d = Dog("Fido")
>>> e = Dog("Buddy")
>>> d.kind  # shared by all dogs
'canine'
>>> e.kind  # shared by all dogs
'canine'
>>> d.name  # unique to d
'Fido'
>>> e.name  # unique to e
'Buddy'
Solution
Now that we know that class variables are shared by all instances of the class, we can simply define the connection and cursor like shown below.
class Database(object):
    connection = None
    cursor = None

    def __init__(self):
        if Database.connection is None:
            try:
                Database.connection = mysql.connector.connect(
                    host="127.0.0.1", user="root", password="", database="db_test")
                Database.cursor = Database.connection.cursor()
            except Exception as error:
                print("Error: Connection not established {}".format(error))
            else:
                print("Connection established")
        self.connection = Database.connection
        self.cursor = Database.cursor
As a result, the connection to the database is created once at the beginning and can then be used by every further instance.
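To see the sharing in action, here is a sketch of the same pattern with the stdlib's sqlite3 standing in for mysql.connector, purely so the example runs anywhere:

```python
import sqlite3

class Database(object):
    connection = None
    cursor = None

    def __init__(self):
        if Database.connection is None:
            # sqlite3.connect stands in for mysql.connector.connect here
            Database.connection = sqlite3.connect(":memory:")
            Database.cursor = Database.connection.cursor()
        self.connection = Database.connection
        self.cursor = Database.cursor

a = Database()
b = Database()
print(a.connection is b.connection)  # True: one shared connection
```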
Kind of like this. It's a cheap way of using a global.
class Database(object):
    connection = None

    def __init__(self):
        if not Database.connection:
            Database.connection = mysql.connector.connect(
                host="127.0.0.1", user="root", password="", database="db_test")

    def query(self, sql):
        cursor = Database.connection.cursor()
        cursor.execute(sql)
        # Do database stuff here
This too works, and you are guaranteed to always have exactly one instance of the database.
def singleton(class_):
    instances = {}

    def get_instance(*args, **kwargs):
        if class_ not in instances:
            instances[class_] = class_(*args, **kwargs)
        return instances[class_]
    return get_instance

@singleton
class SingletonDatabase:
    def __init__(self) -> None:
        print('Initializing singleton database connection... ',
              random.randint(1, 100))
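A self-contained check that the decorator really hands back one shared instance (the decorator is restated here so the sketch runs on its own; the print is just illustrative):

```python
import random

def singleton(class_):
    instances = {}

    def get_instance(*args, **kwargs):
        if class_ not in instances:
            instances[class_] = class_(*args, **kwargs)
        return instances[class_]
    return get_instance

@singleton
class SingletonDatabase:
    def __init__(self) -> None:
        print('Initializing singleton database connection... ',
              random.randint(1, 100))

a = SingletonDatabase()  # prints once
b = SingletonDatabase()  # no second print; the cached instance is returned
print(a is b)  # True
```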
The reason you have to do all this is that if you just create a connection once and leave it at that, you can end up trying to use a connection which has been dropped. So you create a connection and attach it to your app; then, whenever you get a new request, check (with a before-request hook) whether the connection still exists, and if not, recreate it and proceed.
On create_app:
def create_app(self):
    if not app.config.get('connection_created'):
        app.database_connection = Database()
        app.config['connection_created'] = True
On app run:
@app.before_request
def check_database_connection():
    if not app.config.get('connection_created') or not app.database_connection:
        app.database_connection = Database()
        app.config['connection_created'] = True
This will ensure that your application always runs with an active connection, and that the connection gets created only once per app. If it is dropped on any subsequent call, it gets recreated again.
I have a project written in Python 2.7 where the main program needs frequent access to a sqlite3 db for writing logs and measurement results, getting settings, etc.
At the moment I have a db module with functions such as add_log(), get_setting(), and each function in there basically looks like:
def add_log(logtext):
    try:
        db = sqlite3.connect(database_location)
    except sqlite3.DatabaseError as e:
        db.close()  # try to gracefully close the db
        return "ERROR (ADD_LOG): While opening db: {}".format(e)
    try:
        # Using the context manager to automatically commit or roll back changes.
        # When using the context manager, the execute function of the db should
        # be used instead of the cursor's.
        with db:
            db.execute("insert into logs(level, source, log) values (?, ?, ?)",
                       (level, source, logtext))
    except sqlite3.DatabaseError as e:
        return "ERROR (ADD_LOG): While adding log to db: {}".format(e)
    return "OK"
(some additional code and comments removed).
It seems I should write a class that extends the base sqlite3 connection object, so that the connection is created only once (at the beginning of the main program) and the object then contains the functionality, such as:
class Db(sqlite3.Connection):
    def __init__(self, db_location=database_location):
        try:
            self = sqlite3.connect(db_location)
            return self
        except sqlite3.DatabaseError as e:
            self.close()  # try to gracefully close the db

    def add_log(self, logtext):
        self.execute("insert into logs(level, source, log) values (?, ?, ?)",
                     (level, source, logtext))
It seems this should be fairly straightforward, but I can't get it working.
It seems there is some useful advice here:
Python: How to successfully inherit Sqlite3.Cursor and add my customized method, but I can't seem to understand how to use a similar construct for my purpose.
You are not that far off.
First of all, a class initializer cannot return anything but None (emphasis mine):
Because __new__() and __init__() work together in constructing objects (__new__() to create it, and __init__() to customise it), no non-None value may be returned by __init__(); doing so will cause a TypeError to be raised at runtime.
Second, you overwrite the current instance self of your Db object with a sqlite3.Connection object right in the initializer. That makes subclassing SQLite's connection object a bit pointless.
You just need to fix your __init__ method to make this work:
class Db(sqlite3.Connection):
    # If you didn't use the default argument, you could omit overriding
    # __init__ altogether
    def __init__(self, database=database_location, **kwargs):
        super(Db, self).__init__(database=database, **kwargs)

    def add_log(self, logtext, level, source):
        self.execute("insert into logs(level, source, log) values (?, ?, ?)",
                     (level, source, logtext))
That lets you use instances of your class as context managers:
with Db() as db:
    print [i for i in db.execute("SELECT * FROM logs")]
    db.add_log("I LAUNCHED THAT PUG INTO SPACE!", 42, "Right there")
Maurice Meyer said in the comments of the question that methods such as execute() are cursor methods and, per the DB-API 2.0 specs, that's correct.
However, sqlite3's connection objects offer a few shortcuts to cursor methods:
This is a nonstandard shortcut that creates an intermediate cursor object by calling the cursor method, then calls the cursor’s execute method with the parameters given.
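In other words, a small sketch of that shortcut (using an in-memory database for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# execute() called directly on the connection creates an intermediate
# cursor behind the scenes and returns it, so no explicit cursor() call
# is needed for simple statements.
conn.execute("CREATE TABLE logs (level INTEGER, source TEXT, log TEXT)")
conn.execute("INSERT INTO logs VALUES (?, ?, ?)", (1, "app", "hello"))
rows = conn.execute("SELECT log FROM logs").fetchall()
print(rows)  # [('hello',)]
```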
To expand on the discussion in the comments:
The remark about the default argument in my code example above was targeted at the requirement to override sqlite3.Connection's __init__ method.
The __init__ in the class Db is only needed to define the default value database_location on the database argument for the sqlite3.Connection initializer.
If you were willing to pass such a value upon every instantiation of that class, your custom connection class could look like this, and still work the same way, except for that argument:
class Db(sqlite3.Connection):
    def add_log(self, logtext, level, source):
        self.execute("insert into logs(level, source, log) values (?, ?, ?)",
                     (level, source, logtext))
However, the __init__ method has nothing to do with the context manager protocol as defined in PEP 343.
When it comes to classes, this protocol requires implementing the magic methods __enter__ and __exit__.
The sqlite3.Connection does something along these lines:
class Connection:
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        if exc_val is None:
            self.commit()
        else:
            self.rollback()
Note: The sqlite3.Connection is provided by a C module, hence does not have a Python class definition. The above reflects what the methods would roughly look like if it did.
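One consequence worth noting, as a runnable check: the connection's context manager commits or rolls back the transaction, but does not close the connection.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
with conn:  # commits on success, rolls back on exception
    conn.execute("INSERT INTO t VALUES (1)")
# the connection is still open and usable after the with block
rows = conn.execute("SELECT x FROM t").fetchall()
print(rows)  # [(1,)]
```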
Let's say you don't want to keep the same connection open all the time, but rather have a dedicated connection per transaction, while maintaining the general interface of the Db class above.
You could do something like this:
# Keep this to have your custom methods available
# Keep this to have your custom methods available
class Connection(sqlite3.Connection):
    def add_log(self, level, source, log):
        self.execute("INSERT INTO logs(level, source, log) VALUES (?, ?, ?)",
                     (level, source, log))

class DBM:
    def __init__(self, database=database_location):
        self._database = database
        self._conn = None

    def __enter__(self):
        return self._connection()

    def __exit__(self, exc_type, exc_val, exc_tb):
        # Decide whether to commit or roll back
        if exc_val:
            self._connection().rollback()
        else:
            self._connection().commit()
        # close connection
        try:
            self._conn.close()
        except AttributeError:
            pass
        finally:
            self._conn = None

    def _connection(self):
        if self._conn is None:
            # Instantiate your custom sqlite3.Connection
            self._conn = Connection(self._database)
        return self._conn

    # add shortcuts to connection methods as seen fit
    def execute(self, sql, parameters=()):
        with self as temp:
            result = temp.execute(sql, parameters).fetchall()
        return result

    def add_log(self, level, source, log):
        with self as temp:
            temp.add_log(level, source, log)
This can be used in a context and by calling methods on the instance:
db = DBM(database_location)
with db as temp:
    print [i for i in temp.execute("SELECT * FROM logs")]
    temp.add_log(1, "foo", "I MADE MASHED POTATOES")

# The methods execute and add_log are only available from
# the outside because the shortcuts have been added to DBM
print [i for i in db.execute("SELECT * FROM logs")]
db.add_log(1, "foo", "I MADE MASHED POTATOES")
For further reading on context managers refer to the official documentation. I'll also recommend Jeff Knupp's nice introduction. Also, the aforementioned PEP 343 is worth having a look at for the technical specification and rationale behind that protocol.
I have a wrapper class that I subclass out for the many database connections we use. I want something that will behave similarly to try/finally functionality, but still give me the flexibility and subclassing potential of a class. Here is my base class; I would like to replace the __del__ method because I have read that there is no guarantee when it will run.
class DatabaseConnBase:
    def __init__(self, conn_func, conn_info: MutableMapping):
        self._conn = None
        self.conn_func = conn_func
        self.conn_info = conn_info

    def _initialize_connection(self):
        self._conn = self.conn_func(**self.conn_info)

    @property
    def conn(self):
        if not self._conn:
            self._initialize_connection()
        return self._conn

    @property
    def cursor(self):
        if not self._conn:
            self._initialize_connection()
        return self._conn.cursor()

    def __del__(self):
        if self._conn:
            self._conn.close()

    def commit(self):
        if not self._conn:
            raise AttributeError(
                'The connection has not been initialized, unable to commit '
                'transaction.'
            )
        self._conn.commit()

    def execute(
        self,
        query_or_stmt: str,
        verbose: bool = True,
        has_res: bool = False,
        auto_commit: bool = False
    ) -> Optional[Tuple[Any]]:
        """
        Creates a new cursor object, and executes the query/statement. If
        `has_res` is `True`, then it returns the list of tuple results.

        :param query_or_stmt: The query or statement to run.
        :param verbose: If `True`, prints out the statement or query to STDOUT.
        :param has_res: Whether or not results should be returned.
        :param auto_commit: Immediately commits the changes to the database
                            after the execute is performed.
        :return: If `has_res` is `True`, then a list of tuples.
        """
        cur = self.cursor
        if verbose:
            logger.info(f'Using {cur}')
            logger.info(f'Executing:\n{query_or_stmt}')
        cur.execute(query_or_stmt)
        if auto_commit:
            logger.info('Committing transaction...')
            self.commit()
        if has_res:
            logger.info('Returning results...')
            return cur.fetchall()
As per the top answer on this related question: What is the __del__ method, How to call it?
The __del__ method will be called when your object is garbage collected, but as you noted, there are no guarantees on how long after destruction of references to the object this will happen.
In your case, since you check for the existence of the connection before attempting to close it, your __del__ method is safe to be called multiple times, so you could simply call it explicitly before you destroy the reference to your object.
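One way to make the cleanup deterministic, rather than relying on __del__ at all, is to add an explicit close() plus the context-manager methods. This is a sketch under the assumption that a trimmed-down version of the class above (sans the commit/execute helpers) is acceptable:

```python
import sqlite3

class DatabaseConnBase:
    def __init__(self, conn_func, conn_info):
        self._conn = None
        self.conn_func = conn_func
        self.conn_info = conn_info

    @property
    def conn(self):
        # lazily open the connection on first use, as in the original
        if not self._conn:
            self._conn = self.conn_func(**self.conn_info)
        return self._conn

    def close(self):
        # safe to call multiple times, like the __del__ above
        if self._conn:
            self._conn.close()
            self._conn = None

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.close()

db = DatabaseConnBase(sqlite3.connect, {"database": ":memory:"})
with db:
    db.conn.execute("CREATE TABLE t (x INTEGER)")
print(db._conn is None)  # True: closed when the with block exited
```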
I'm trying to implement a wrapper around a redis database that does some bookkeeping, and I thought about using descriptors. I have an object with a bunch of fields: frames, failures, etc., and I need to be able to get, set, and increment the field as needed. I've tried to implement an Int-Like descriptor:
class IntType(object):
    def __get__(self, instance, owner):
        # issue a GET database command
        return db.get(my_val)

    def __set__(self, instance, val):
        # issue a SET database command
        db.set(instance.name, val)

    def increment(self, instance, count):
        # issue an INCRBY database command
        db.hincrby(instance.name, count)

class Stream:
    _prefix = 'stream'

    frames = IntType()
    failures = IntType()
    uuid = StringType()

s = Stream()
s.frames.increment(1)  # 'float' object has no attribute 'increment'
It seems like I can't access the increment() method in my descriptor. And I can't have increment defined on the object that __get__ returns, since that would require an additional db query when all I want to do is increment! I also don't want increment() on the Stream class, since later, when I want additional fields like strings or sets in Stream, I'd need to type-check the heck out of everything.
Does this work?
class Stream:
    _prefix = 'stream'

    def __init__(self):
        self.frames = IntType()
        self.failures = IntType()
        self.uuid = StringType()
Why not define the magic method __iadd__ as well as __get__ and __set__? This will allow you to do normal addition with assignment on the class. It also means you can treat the increment separately from the get function and thereby minimise database accesses.
So change:
def increment(self, instance, count):
    # issue an INCRBY database command
    db.hincrby(instance.name, count)
to:
def __iadd__(self, other):
    # your code goes here
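For context, here is a minimal sketch of the augmented-assignment flow with descriptors, using a plain dict as a stand-in for redis and __set_name__ so each descriptor knows its field name (both are assumptions for illustration). With plain int values, s.frames += 1 already desugars into one __get__ followed by one __set__; defining __iadd__ on a custom value type is what would let you collapse that into a single INCRBY-style call.

```python
db = {}  # stand-in for the redis database

class IntType(object):
    def __set_name__(self, owner, name):
        self.name = name  # remember which field this descriptor backs

    def __get__(self, instance, owner):
        return db.get(self.name, 0)  # stand-in for a GET command

    def __set__(self, instance, val):
        db[self.name] = val          # stand-in for a SET command

class Stream:
    frames = IntType()

s = Stream()
s.frames = 5
s.frames += 1  # augmented assignment: one __get__, then one __set__
print(s.frames)  # 6
```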
Try this:
class IntType(object):
    def __get__(self, instance, owner):
        # the inner class closes over `instance`, so its methods can
        # reach the owning object's name without an extra query
        class IntValue():
            def increment(inner_self, count):
                # issue an INCRBY database command
                db.hincrby(instance.name, count)

            def getValue(inner_self):
                # issue a GET database command
                return db.get(instance.name)
        return IntValue()

    def __set__(self, instance, val):
        # issue a SET database command
        db.set(instance.name, val)