I'm trying to create a class for interacting with a MySQL database. The class connects to the database and stores the connection object and cursor, so multiple actions can be performed without reconnecting every time. Once you're done with it, the cursor and connection need to be closed, so I've implemented it as a context manager, and it works great.
I need to support multiple actions without reopening the connection each time, but most of the time I am only running a single query/command, so I end up writing the code below just to run one query.
with MySQLConnector() as connector:
    connector.RunQuery(QUERY)
I thought it would be nice to be able to run single line commands like this
MySQLConnector().RunQuery(QUERY)
but this leaves the connection open.
I then thought this implementation would allow me to do single and multiple actions all nicely context managed.
class MySQLConnector():
    @staticmethod
    def RunQuery(query):
        with MySQLConnector() as connector:
            connector.RunQuery(query)

    def RunQuery(self, query):
        self.cursor.execute(query)
        self.connection.commit()
# Single Actions
MySQLConnector.RunQuery(QUERY)

# Multiple Actions
with MySQLConnector() as connector:
    connector.RunQuery(QUERY_1)
    connector.RunQuery(QUERY_2)
However, the instance method overrides the static method: the second definition of RunQuery replaces the first, so only one of the two call styles works at a time. I know I could just have two separately named functions, but I was wondering if there is a better/more Pythonic way to implement this?
As this answer points out, you can use some __init__ trickery to rebind the method when the class is used non-statically (since __init__ does not run for static calls). Rename the non-static RunQuery to something like _instance_RunQuery, then set self.RunQuery = self._instance_RunQuery in __init__. Like this:
class A:
    def __init__(self, val):
        self.add = self._instance_add
        self.val = val

    @staticmethod
    def add(a, b):
        obj = A(a)
        return obj.add(b)

    def _instance_add(self, b):
        return self.val + b
ten = A(10)
print(ten.add(5))    # 15
print(A.add(10, 5))  # 15
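An alternative that avoids the name collision entirely is to give the one-shot form its own classmethod, so both call styles coexist without any __init__ rebinding. A minimal sketch with a simulated connection (Connector, run_single_query, and the connected flag are illustrative names, not from the question):

```python
# Sketch: a separately named classmethod manages its own context,
# while the instance method works inside an explicit `with` block.
class Connector:
    def __enter__(self):
        self.connected = True   # stand-in for opening a real connection
        return self

    def __exit__(self, exc_type, exc, tb):
        self.connected = False  # stand-in for closing it

    def run_query(self, query):
        # instance form: assumes an open "connection"
        return f"ran {query} (connected={self.connected})"

    @classmethod
    def run_single_query(cls, query):
        # one-shot form: opens and closes its own context internally
        with cls() as connector:
            return connector.run_query(query)
```

Usage is then `Connector.run_single_query("SELECT 1")` for single commands, or a regular `with Connector() as c:` block for several.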
I have a class that has several methods and I would like to write unit tests for them. The problem I'm facing is that this class has an __init__ method that queries a database, imagine something like this:
class MyClass:
    accepted_values = ['a', 'b', 'c']

    def __init__(self, database_name):
        self.database = database_name
        self.data = self.query_database()

    def query_database(self):
        data = query_this_database(self.database)
        # clean data
        return data

    def check_values_in_db(self, column_name):
        column = self.data[column_name]
        if any(item not in self.accepted_values for item in column):
            print('Oh noes!')
        else:
            print('All good')
Now, given this, I would like to unit test the last method using some mock data, but I can't: initializing the class triggers the database query. This is further complicated by the fact that actually making the query requires an API key, permissions, etc., which is exactly what I want to avoid during unit testing.
I'm relatively new to OOP and unit testing in general, so I'm not even sure I structured the class properly: maybe query_database() should only be called at a later stage rather than in __init__?
EDIT:
I was asked to add some details, so here goes:
This class belongs to an AWS Lambda function that runs on a schedule. Every hour, the class queries the DB for the last hour of data and checks a specific column against some pre-defined values.
If any value in the column does not belong to those pre-defined values, it sends an alert email. I would like to test this specific functionality without querying the database, using mock values instead.
I edited the code accordingly to reflect what I mean.
You can still call a class's methods without making an instance of it, but you will have problems if those methods use attributes that are defined in __init__.
You might also consider declaring an empty data attribute outside of __init__ and then calling instance.data = instance.query_database() explicitly in your main code.
You can use unittest.mock to do this kind of stuff.
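A sketch of that approach: patch query_database on the class so __init__ never touches a real database. MyClass here is a simplified stand-in mirroring the structure above (check_values_in_db takes the column values directly, to keep the example short):

```python
# unittest.mock lets us replace query_database before __init__ runs,
# so no API key or real database is needed in the test.
from unittest import mock

class MyClass:
    accepted_values = ['a', 'b', 'c']

    def __init__(self, database_name):
        self.database = database_name
        self.data = self.query_database()

    def query_database(self):
        # stands in for the real query that needs credentials
        raise RuntimeError("needs API key")

    def check_values_in_db(self, column):
        return all(item in self.accepted_values for item in column)

with mock.patch.object(MyClass, 'query_database', return_value=['a', 'b']):
    obj = MyClass('fake_db')   # __init__ now calls the mock

result = obj.check_values_in_db(obj.data)
```

Because the patch is active only inside the `with` block, the real method is restored afterwards, while `obj.data` keeps the mocked values.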
An uglier way is to override your class's init method in a mock class like:
class MyMockedClass(MyClass):
    def __init__(self):
        self.database = not_a_real_database()
        self.data = not_real_data()
and then use this new class in your tests.
For your last question, it depends on your project structure and the frameworks you might be using. You could ask Code Review for advice on your project structure.
Please have a look at the code below:
class MyStudent:
    def __init__(self, student_id, student_name):
        self._student_id = student_id
        self._student_name = student_name

    def get_student_id(self):
        return self._student_id

    def get_student_name(self):
        return self._student_name
student1 = MyStudent(student_id=123, student_name="ABC")
student1.get_student_id()
student1.get_student_name()
I would like to run some code, such as adding the student to the DB, whenever student1.get_student_id() or student1.get_student_name() is invoked (i.e. whenever a getter is accessed; please correct me if I used the wrong term). And I have to do this via decorators only, for multiple classes, like below:
@save_student_to_db
class MyStudent:
    ...
How can this be achieved using decorators? I need a single decorator that works on multiple classes, each of which can have any methods. Whenever any method (apart from those starting with _ or __) of a decorated class is called, the decorator should save the data to the DB.
If all classes implement the same methods, say get_student_id and get_student_name, and have the same attributes, say _student_id and _student_name, then you can write a class decorator like so:
from functools import wraps
from somewhere import Database

def _method_decorator(fn):
    @wraps(fn)
    def getter_wrapper(self):
        db = Database()
        db.save(self._student_id, self._student_name)
        return fn(self)
    return getter_wrapper

def save_student_to_db(cls):
    cls.get_student_id = _method_decorator(cls.get_student_id)
    cls.get_student_name = _method_decorator(cls.get_student_name)
    return cls
About the database: you can instantiate it every time it's needed, as proposed above, or have a dependency-injection framework do it for you. I've been using injectable for a while and it is simple yet powerful; there is serum too.
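The decorator above can also be generalised to wrap every public method automatically, which matches the "any method apart from ones starting with _" requirement. A self-contained sketch where the database is faked with a plain list (saved, and the loop over vars(cls), are my additions, not from the answer above):

```python
# Generalised class decorator: wraps every public zero-argument method
# so each call also "saves" the student (here: appends to a list).
from functools import wraps

saved = []  # stands in for the database

def _method_decorator(fn):
    @wraps(fn)
    def getter_wrapper(self):
        saved.append((self._student_id, self._student_name))
        return fn(self)
    return getter_wrapper

def save_student_to_db(cls):
    # wrap every attribute that is callable and not underscore-prefixed
    for name, attr in vars(cls).copy().items():
        if callable(attr) and not name.startswith('_'):
            setattr(cls, name, _method_decorator(attr))
    return cls

@save_student_to_db
class MyStudent:
    def __init__(self, student_id, student_name):
        self._student_id = student_id
        self._student_name = student_name

    def get_student_id(self):
        return self._student_id

student = MyStudent(123, "ABC")
student_id = student.get_student_id()
```

Note the wrapper here assumes getters take no arguments; methods with parameters would need `*args, **kwargs` forwarding instead.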
This question is more about how one uses OOP to read databases; SQLite and sqlite3 are simply examples to work with, and are not the main thrust of the question.
I am creating a software package that lets users query SQLite index files which have already been generated. It's basically syntactic sugar to make it super user-friendly to query SQLite files indexed in a certain way, for one very particular case. This should be quite straightforward, but I am somewhat confused about how to "automatically" read in the SQLite file.
Here's an example (with pseudo-code):
import sqlite3

class EasySQL:
    def __init__(self, filepath):
        self.filepath = filepath

    def connect(self):  # perhaps this should happen in __init__?
        return sqlite3.connect(self.filepath)

    def query_indexA(self):
        ## query index A on the SQLite3 connection
        ...
I would prefer the connection to the SQLite database to be "automatic" upon instantiation of the class:
### instantiate the class
my_table1 = EasySQL("path/to/file")
At the moment, users need to call .connect() after instantiation:
my_table = EasySQL("path/to/file")
the_object_to_do_queries = my_table.connect()

## now users can actually run queries
the_object_to_do_queries.query_indexA()
This seems like bad form, and unnecessarily complicated.
How does one write the initialization method to immediately create the SQLite3 connection?
Hopefully this question is clear. I am happy to edit if not.
The main point here is that EasySQL should not return the connection (which makes it mostly useless) but use it internally by keeping a reference to it:
import sqlite3

class EasySQL(object):
    def __init__(self, filepath):
        self._filepath = filepath
        self._db = sqlite3.connect(self._filepath)

    def close(self):
        if self._db:
            self._db.close()
            self._db = None

    def query_indexA(self):
        # XXX example implementation
        cursor = self._db.cursor()
        cursor.execute("some query here")
        return cursor.fetchall()
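To see the pattern end to end, here is a runnable sketch using an in-memory database; the table, the generic query method, and the SQL are invented for the demo and are not part of the original question:

```python
# The connection is created in __init__ and used internally; callers
# never touch the sqlite3 connection object directly.
import sqlite3

class EasySQL:
    def __init__(self, filepath):
        self._filepath = filepath
        self._db = sqlite3.connect(self._filepath)  # connect immediately

    def close(self):
        if self._db:
            self._db.close()
            self._db = None

    def query(self, sql):
        cursor = self._db.cursor()
        cursor.execute(sql)
        return cursor.fetchall()

table = EasySQL(":memory:")
table.query("CREATE TABLE t (x INTEGER)")
table.query("INSERT INTO t VALUES (1), (2)")
rows = table.query("SELECT x FROM t ORDER BY x")
table.close()
```

Adding `__enter__`/`__exit__` methods would additionally let users write `with EasySQL(path) as table:` so close() is never forgotten.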
I need some help with the 'Pythonic' way of handling a specific scenario.
I'm writing an Ssh class (wrapping paramiko) that provides the capability to connect to, and execute commands on, a device under test (DUT) over SSH.
class Ssh:
    def connect(self, some_params):
        # establishes connection
        ...

    def execute_command(self, command):
        # executes command and returns response
        ...

    def disconnect(self, some_params):
        # closes connection
        ...
Next, I'd like to create a Dut class that represents my device under test. It does other things besides executing commands on the device over SSH, and it exposes a wrapper for command execution that internally invokes Ssh's execute_command. The Ssh class may be replaced by something else in future, hence the wrapper.
class Dut:
    def __init__(self, some_params):
        self.ssh = Ssh(blah, blah)

    def execute_command(self, command):
        return self.ssh.execute_command(command)
Next, the device supports a custom command-line interface. So there is a class that accepts a Dut object as input and exposes a method to execute customised commands:
class CustomCli:
    def __init__(self, dut_object):
        self.dut = dut_object

    def _customize(self, command):
        # return customised command
        ...

    def execute_custom_command(self, command):
        return self.dut.execute_command(self._customize(command))
Each of these classes can be used independently (though CustomCli needs a Dut object).
Now, to simplify things for the user, I'd like to expose a wrapper for CustomCli in the Dut class. This will allow the creator of a Dut instance to execute both simple and custom commands.
So I modify the Dut class as below:
class Dut:
    def __init__(self, some_params):
        self.ssh = Ssh(blah, blah)
        self.custom_cli = CustomCli(self)  # how to avoid this circular reference in a Pythonic way?

    def execute_command(self, command):
        return self.ssh.execute_command(command)

    def execute_custom_command(self, command):
        return self.custom_cli.execute_custom_command(command)
This will work, I suppose. But in the process I've created a circular reference: Dut points to CustomCli, and CustomCli has a reference to its creator Dut instance. This doesn't seem like the correct design.
What's the best/pythonic way to deal with this?
Any help would be appreciated!
Regards
Sharad
In general, circular references aren't a bad thing. Many programs will have them, and people just don't notice because there's another instance in-between like A->B->C->A. Python's garbage collector will properly take care of such constructs.
You can make circular references a bit easier on your conscience by using weak references. See the weakref module. This won't work in your case, however.
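For illustration, a minimal example of what weakref does: the weak reference does not keep its target alive, which is why a weakly-referenced back-pointer cannot cause a cycle (the Dut class here is a bare stand-in):

```python
# A weakref.ref must be called to dereference it; once the last strong
# reference is gone, the call returns None instead of the object.
import weakref

class Dut:
    pass

dut = Dut()
ref = weakref.ref(dut)       # weak reference to the instance
alive_before = ref() is dut  # dereference while the object exists

del dut                      # drop the only strong reference
alive_after = ref() is None  # the referent has been collected
```

A parent could hand its child `weakref.ref(self)` instead of `self`; the child then calls the ref each time it needs the parent.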
If you want to get rid of the circular reference, there are two ways:
Have CustomCLI inherit from Dut, so you end up with just one instance. You might want to read up on mixins.
class CLIMerger(Dut):
    def execute_custom_command(self, command):
        # use self instead of self.dut
        return self.execute_command(self._customize(command))

class CLIMixin(object):
    # inherits from object; won't work on its own
    def execute_custom_command(self, command):
        # use self instead of self.dut
        return self.execute_command(self._customize(command))

class CLIDut(CLIMixin, Dut):
    # now the mixin "works", and could enhance other Duts the same way
    pass
The Mixin is advantageous if you need several cases of merging a CLI and Dut.
Have an explicit interface class that combines CustomCli and Dut.
class DutCLI(object):
    def __init__(self, *bla, **blah):
        self.dut = Dut(*bla, **blah)
        self.cli = CustomCLI(self.dut)
This requires you to write boilerplate or magic to forward every call from DutCLI to either dut or cli.
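The "magic" forwarding can be done with __getattr__, which Python calls only when normal attribute lookup fails; the composite then delegates any unknown attribute to its parts. A sketch with minimal stand-in bodies (the real Dut/CustomCLI internals are assumed, not reproduced):

```python
# DutCLI composes Dut and CustomCLI and forwards unknown attribute
# lookups to them, so no per-method wrapper boilerplate is needed.
class Dut:
    def execute_command(self, command):
        return f"ran {command}"

class CustomCLI:
    def __init__(self, dut):
        self._dut = dut

    def execute_custom_command(self, command):
        return self._dut.execute_command(f"custom:{command}")

class DutCLI:
    def __init__(self):
        self._dut = Dut()
        self._cli = CustomCLI(self._dut)

    def __getattr__(self, name):
        # only invoked when normal lookup fails; try each part in turn
        for part in (self._dut, self._cli):
            if hasattr(part, name):
                return getattr(part, name)
        raise AttributeError(name)

dutcli = DutCLI()
plain = dutcli.execute_command("ls")
custom = dutcli.execute_custom_command("ls")
```

The caller sees one flat interface, while Dut and CustomCLI stay independent and free of back-references.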
I tried to manipulate __mro__, but it is read-only.
The use case is as follows:
The Connection object created by pyodbc (a DBAPI implementation) used to provide a property called autocommit. Lately I have wrapped a SQLAlchemy connection pool around pyodbc for better resource management. The pool returns a _ConnectionFairy, a connection proxy class, which no longer exposes the autocommit property.
I would very much like to leave the third-party code alone, so inheriting from _ConnectionFairy is not really an option (I might need to override the Pool class to change how it creates a connection proxy; for the source code, please see here).
A rather inelegant solution is to change every occurrence of
conn.autocommit = True
to
# original connection object is accessible via .connection
conn.connection.autocommit = True
So, I would like to know whether it is possible at all to inject a getter, setter, and property into an instance of _ConnectionFairy.
You can "extend" almost any class using the following syntax:
def new_func(self, param):
    print(param)

class A:
    pass

A.my_func = new_func

b = A()
b.my_func(10)
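A caveat for the question's actual goal: a property is a descriptor, so it only takes effect when set on the class, not on an individual instance. A sketch of patching the proxy class to forward autocommit (Connection and ConnectionFairy here are simplified stand-ins for the real pyodbc/SQLAlchemy classes, not their actual APIs):

```python
# Injecting a property onto a class after the fact: the property
# delegates autocommit to the wrapped .connection object.
class Connection:          # stands in for the real DBAPI connection
    def __init__(self):
        self.autocommit = False

class ConnectionFairy:     # stands in for SQLAlchemy's proxy class
    def __init__(self, connection):
        self.connection = connection

def _get_autocommit(self):
    return self.connection.autocommit

def _set_autocommit(self, value):
    self.connection.autocommit = value

# patch the class, not an instance: descriptors live on the class
ConnectionFairy.autocommit = property(_get_autocommit, _set_autocommit)

conn = ConnectionFairy(Connection())
conn.autocommit = True   # forwarded to the underlying connection
```

With this, existing `conn.autocommit = True` call sites keep working without touching the third-party source.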
UPDATE
If you want to create wrappers for some methods, you can use getattr and setattr to save the original method and replace it with your own implementation. I've done this in my project, though in a slightly different way.
Here is an example:
import sys
import threading

class A:
    def __init__(self):
        setattr(self, 'prepare_orig', getattr(self, 'prepare'))
        setattr(self, 'prepare', getattr(self, 'prepare_wrapper'))

    def prepare_wrapper(self, *args, **kwargs):
        def prepare_thread(*args, **kwargs):
            try:
                self.prepare_orig(*args, **kwargs)
            except Exception:
                print("Unexpected error:", sys.exc_info()[0])
        t = threading.Thread(target=prepare_thread, args=args, kwargs=kwargs)
        t.start()

    def prepare(self):
        pass
The idea of this code is that another developer can simply implement a prepare method in a derived class and it will be executed in the background. It is not exactly what you asked, but I hope it helps in some way.