I have a class that encapsulates database access by my application, and I want to allow other parts of the application to be notified when rows change in the database. Right now, I'm maintaining a list of callback functions, out of which I just-in-time create a DeferredList when I need to send a notification. This seems super kludgy -- is there a more idiomatic way?
Sample Code:
from twisted.internet.defer import Deferred, DeferredList

class Db(object):
    def __init__(self):
        self.observers = []

    def _on_notify(self, notify):
        # called by the db connection; fire one Deferred per observer
        deferreds = [Deferred().addCallback(observer) for observer in self.observers]
        results = DeferredList(deferreds)  # DeferredList takes a list of Deferreds
        for d in deferreds:
            d.callback(dict(notify=notify, db=self))
        return results

    def observe(self, callback):
        self.observers.append(callback)
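For reference, a minimal usage sketch (the lambda observer and the notify payload here are made up for illustration):

db = Db()
db.observe(lambda result: print("row changed:", result["notify"]))
db._on_notify("some_notify_payload")  # normally invoked by the db connection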
I'm trying to create a class for interacting with a MySQL database. This class connects to the database and stores the connection object and cursor, allowing multiple actions without reconnecting every time. Once you're done with it, the cursor and connection need to be closed, so I've implemented it as a context manager, and it works great.
I need to support multiple actions without reopening the connection each time, but most of the time I am only running one query/command, so I end up having to write the code below to run a single query.
with MySQLConnector() as connector:
connector.RunQuery(QUERY)
I thought it would be nice to be able to run single line commands like this
MySQLConnector().RunQuery(QUERY)
but this leaves the connection open.
I then thought this implementation would allow me to do single and multiple actions all nicely context managed.
class MySQLConnector():
    @staticmethod
def RunQuery(query):
with MySQLConnector() as connector:
connector.RunQuery(query)
def RunQuery(self, query):
self.cursor.execute(query)
self.connection.commit()
# Single Actions
MySQLConnector.RunQuery(QUERY)
# Multiple Actions
with MySQLConnector() as connector:
connector.RunQuery(QUERY_1)
connector.RunQuery(QUERY_2)
However, the instance method seems to override the static method, so only one of the two will work at a time. I know I could just have two separately named functions, but I was wondering if there is a better/more Pythonic way to implement this?
As this answer points out, you can use some __init__ trickery to redefine the method if it is being used as non-static (as __init__ will not run for static methods). Change the non-static RunQuery to something like _instance_RunQuery, then set self.RunQuery = self._instance_RunQuery. Like this:
class A:
def __init__(self, val):
self.add = self._instance_add
self.val = val
    @staticmethod
def add(a, b):
obj = A(a)
return obj.add(b)
def _instance_add(self, b):
return self.val + b
ten = A(10)
print(ten.add(5))
print(A.add(10, 5))
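Applied to the connector from the question, the same trick might look like this. This is only a sketch: the actual connection handling in __enter__/__exit__ is assumed and elided here.

class MySQLConnector:
    def __init__(self):
        # shadow the static RunQuery with the bound instance variant
        self.RunQuery = self._instance_RunQuery

    def __enter__(self):
        # assumed: open self.connection and self.cursor here
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # assumed: close self.cursor and self.connection here
        pass

    @staticmethod
    def RunQuery(query):
        # class-level path: one-shot, context-managed query
        with MySQLConnector() as connector:
            return connector.RunQuery(query)

    def _instance_RunQuery(self, query):
        self.cursor.execute(query)
        self.connection.commit()

Calling MySQLConnector.RunQuery(QUERY) on the class hits the static one-shot path, while connector.RunQuery(QUERY) on an instance finds the attribute bound in __init__ and uses the open connection.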
This question is more about how one uses OOP to read in databases. SQLite and sqlite3 are simply examples to work with, and are not the main thrust of the question:
I am creating a software package which allows users to query SQLite index files which have already been generated. It's basically syntactic sugar to make it super user-friendly to query SQLite files indexed in a certain way, for a very particular case. This should be quite straightforward, but I am somewhat confused about how to "automatically" read in the SQLite file.
Here's an example (with pseudo-code):
import sqlite3

class EasySQL:
    def __init__(self, filepath):
        self.filepath = filepath

    def connect(self, filepath):  ## perhaps this should be above in __init__?
        return sqlite3.connect(self.filepath)

    def query_indexA(self):
        ## query index A on the SQLite3 connection
        ...
I would prefer the connection to the SQLite database to be "automatic" upon instantiation of the class:
### instantiate class
my_table1 = EasySQL("path/to/file")
At the moment, users need to call the function .connect() after instantiation.
my_table = EasySQL("path/to/file")
the_object_to_do_queries = my_table.connect()
## now users can actually use this
the_object_to_do_queries.query_indexA()
This seems like bad form, and unnecessarily complicated.
How does one write the initialization method to immediately create the SQLite3 connection?
Hopefully this question is clear. I am happy to edit if not.
The main point here is that EasySQL should not return the connection (which makes it mostly useless) but use it internally by keeping a reference to it:
import sqlite3

class EasySQL(object):
    def __init__(self, filepath):
        self._filepath = filepath
        self._db = sqlite3.connect(self._filepath)

    def close(self):
        if self._db:
            self._db.close()
            self._db = None

    def query_indexA(self):
        # XXX example implementation
        cursor = self._db.cursor()
        cursor.execute("some query here")
        return cursor.fetchall()
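Usage then looks like this (a sketch, reusing the example query_indexA above):

easy = EasySQL("path/to/file")
try:
    rows = easy.query_indexA()
finally:
    easy.close()

If you'd rather use a with-statement, adding __enter__ and __exit__ methods that call close() would be a natural extension.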
Is this good Python practice?
import threading
import Queue
class Poppable(threading.Thread):
def __init__(self):
super(Poppable, self).__init__()
self._q = Queue.Queue()
# provide a limited subset of the Queue interface to clients
self.qsize = self._q.qsize
self.get = self._q.get
    def run(self):
        # <snip> -- do stuff that puts new items onto self._q
        # this is why clients don't need access to put functionality
        pass
Does this approach of "promoting" member's functions up to the containing class's interface violate the style, or Zen, of Python?
Mainly I'm trying to contrast this approach with the more standard one that would involve declaring wrapper functions normally:
def qsize(self):
return self._q.qsize()
def get(self, *args):
return self._q.get(*args)
I don't think this is Python-specific. In general, it's good OOP practice. You expose just the functions the client needs to know about, hiding the internals of the contained queue. This is a typical approach when wrapping an object, and totally compliant with the principle of least knowledge.
If, instead of self.qsize, the client had to call self._q.qsize, you could not easily replace _q later with a different data type that lacks a qsize method. So your approach makes the object more open to future changes.
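To make that concrete, here is a hypothetical sketch: _q is swapped for a collections.deque, which has no qsize or get, yet the public surface stays the same:

import collections
import threading

class Poppable(threading.Thread):
    def __init__(self):
        super(Poppable, self).__init__()
        self._q = collections.deque()
        # adapt the new internals to the same public names;
        # clients keep calling qsize() and get() unchanged
        self.qsize = lambda: len(self._q)
        self.get = self._q.popleft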
I need some help in terms of 'pythonic' way of handling a specific scenario.
I'm writing an Ssh class (wrapping paramiko) that provides the capability to connect to, and execute commands on, a device under test (DUT) over ssh.
class Ssh:
    def connect(self, some_params):
        # establishes connection
        ...

    def execute_command(self, command):
        # executes command and returns response
        ...

    def disconnect(self, some_params):
        # closes connection
        ...
Next, I'd like to create a Dut class that represents my device under test. It has other things besides the capability to execute commands on the device over ssh. It exposes a wrapper for command execution that internally invokes the Ssh object's execute_command. The Ssh class may change to something else in the future; hence the wrapper.
class Dut:
    def __init__(self, some_params):
        self.ssh = Ssh(some_params)

    def execute_command(self, command):
        return self.ssh.execute_command(command)
Next, the device supports a custom command line interface. So I have a class that accepts a Dut object as input and exposes a method to execute the customised command:
class CustomCli:
    def __init__(self, dut_object):
        self.dut = dut_object

    def _customize(self, command):
        # return customised command
        ...

    def execute_custom_command(self, command):
        return self.dut.execute_command(self._customize(command))
Each of the classes can be used independently (CustomCli would need a Dut object though).
Now, to simplify things for the user, I'd like to expose a wrapper for CustomCli in the Dut class. This'll allow the creator of the Dut object to execute a simple or custom command.
So, I modify the Dut class as below:
class Dut:
    def __init__(self, some_params):
        self.ssh = Ssh(some_params)
        self.custom_cli = CustomCli(self)  # how to avoid this circular reference in a pythonic way?

    def execute_command(self, command):
        return self.ssh.execute_command(command)

    def execute_custom_command(self, command):
        return self.custom_cli.execute_custom_command(command)
This will work, I suppose. But in the process I've created a circular reference: Dut points to CustomCli, and CustomCli has a reference to its creator Dut instance. This doesn't seem like the correct design.
What's the best/pythonic way to deal with this?
Any help would be appreciated!
Regards
Sharad
In general, circular references aren't a bad thing. Many programs have them, and people just don't notice because there's another instance in between, like A->B->C->A. Python's garbage collector will properly take care of such constructs.
You can make circular references a bit easier on your conscience by using weak references. See the weakref module. This won't work in your case, however.
If you want to get rid of the circular reference, there are two ways:
Have CustomCLI inherit from Dut, so you end up with just one instance. You might want to read up on Mixins.
class CLIMerger(Dut):
    def execute_custom_command(self, command):
        # use self instead of self.dut; assumes _customize is defined here too
        return self.execute_command(self._customize(command))

class CLIMixin(object):
    # inherits from object; won't work on its own
    def execute_custom_command(self, command):
        # use self instead of self.dut
        return self.execute_command(self._customize(command))

class CLIDut(Dut, CLIMixin):
    # now the mixin "works", but could still enhance other Duts the same way
    pass
The Mixin is advantageous if you need several cases of merging a CLI and Dut.
Have an explicit interface class that combines CustomCli and Dut.
class DutCLI(object):
def __init__(self, *bla, **blah):
self.dut = Dut(*bla, **blah)
        self.cli = CustomCli(self.dut)
This requires you to write boilerplate or magic to forward every call from DutCLI to either dut or cli.
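For illustration, the "magic" variant could be a __getattr__ fallback to the wrapped objects. This is only a sketch, and name clashes between dut and cli would need a policy of their own:

class DutCLI(object):
    def __init__(self, *bla, **blah):
        self.dut = Dut(*bla, **blah)
        self.cli = CustomCli(self.dut)

    def __getattr__(self, name):
        # only invoked when normal attribute lookup fails on DutCLI itself
        for target in (self.__dict__.get('dut'), self.__dict__.get('cli')):
            if target is not None and hasattr(target, name):
                return getattr(target, name)
        raise AttributeError(name)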
I have a small Pyramid web service.
I have also a python class that creates an index of items and methods to search fast across them. Something like:
class MyCorpus(object):
def __init__(self):
self.table = AwesomeDataStructure()
def insert(self):
self.table.push_back(1)
def find(self, needle):
return self.table.find(needle)
I would like to expose the above class to my API.
I can create only one instance of that class (memory limit).
So I need to be able to instantiate this class before the server starts.
And my threads should be able to access it.
I also need some locking mechanism (concurrent inserts are not supported).
What is the best way to achieve that?
Add an instance of your class to the global application registry during your Pyramid application's configuration:
config.registry.mycorpus = MyCorpus()
and later, for example in your view code, access it through a request:
request.registry.mycorpus
You could also register it as a utility with the Zope Component Architecture using registry.registerUtility, but then you'd need to define what interface MyCorpus provides and so on, which is a good thing in the long run. Either way, having the singleton instance as part of the registry makes testing your application easier: just create a configuration with a mock corpus.
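If you did go the ZCA route, the registration might look roughly like this (a sketch; IMyCorpus is a made-up marker interface):

from zope.interface import Interface, implementer

class IMyCorpus(Interface):
    """Marker interface describing what MyCorpus provides (hypothetical)."""

@implementer(IMyCorpus)
class MyCorpus(object):
    ...

config.registry.registerUtility(MyCorpus(), IMyCorpus)
# later, e.g. in a view:
corpus = request.registry.getUtility(IMyCorpus)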
Any locking should be handled by the instance itself:
from threading import Lock
class MyCorpus(object):
def __init__(self, Lock=Lock):
self.table = AwesomeDataStructure()
self.lock = Lock()
...
def insert(self):
with self.lock:
self.table.push_back(1)
Any global variable is shared between threads in Python, so this part is really easy: "... create only one instance of that class ... before the server starts ... threads should be able to access it":
corpus = MyCorpus()  # in global scope of any module, e.g. mydata.py
Done! Then import the instance from anywhere and call your class' methods:
from mydata import corpus
corpus.do_stuff()
No need for ZCA, plain pythonic Python :)
(The general approach of keeping something large and very database-like inside the webserver process feels quite suspicious, though; I hope you know what you're doing. I mean: persistence? locking? sharing data between multiple processes? Redis, MongoDB and 1001 other database products have those problems solved.)