OOP: Using conditional statement while initializing a class - python

This question is geared toward OOP best practices.
Background:
I've created a set of scripts that are either triggered automatically by cron jobs or run constantly in the background to collect data in real time. In the past, I've used Python's smtplib to send myself notifications when errors occur or a job completes successfully. Recently, I migrated these programs to the Google Cloud Platform, which by default blocks popular SMTP ports. To get around this, I used Linux's mail command to continue sending myself the reports.
Originally, my hacky solution was to have two separate modules for sending alerts that were initiated based on an argument I passed to the main script.
Ex:
$ python mycode.py my_arg
import sys

if sys.argv[1] == 'my_arg':
    mailer = Class1()
else:
    mailer = Class2()
I want to improve upon this and create a module that automatically handles this without the added code. The question I have is whether it is "proper" to include a conditional statement while initializing the class to handle the situation.
Ex:
import sys

class Alert(object):
    def __init__(self, platform=sys.platform, other_args=None):
        if platform == "linux":
            # Google Cloud Platform:
            # instantiate Class1 variables and methods
            ...
        else:
            # local copy:
            # instantiate Class2 variables and methods
            ...
My gut instinct says this is wrong but I'm not sure what the proper approach would be.
I'm mostly interested in answers regarding how to create OO classes/modules that handle environmental dependencies to provide the same service. In my case, a blocked port requires a different set of code altogether.
Edit: After some suggestions, here are my favorite readings on this topic.
http://python-3-patterns-idioms-test.readthedocs.io/en/latest/Factory.html

This seems like a wonderful use-case for a factory class, which encapsulates the conditional, and always returns an instance of one of N classes, all of which implement the same interface, so that the rest of your code can use it without caring about the concrete class being used.
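To make that concrete, here is a minimal sketch of the factory approach; the two alerter classes and their shared send() method are hypothetical stand-ins for the asker's Class1/Class2:

import subprocess
import sys

class SmtpAlerter(object):
    """Sends alerts via smtplib (works where SMTP ports are open)."""
    def send(self, message):
        ...  # smtplib logic would go here

class MailCommandAlerter(object):
    """Sends alerts by shelling out to the Linux mail command."""
    def send(self, message):
        # recipient address is illustrative
        subprocess.run(["mail", "-s", "alert", "me@example.com"],
                       input=message.encode())

def make_alerter():
    """Factory: encapsulates the platform check behind one interface."""
    if sys.platform == "linux":  # e.g. the Google Cloud VM
        return MailCommandAlerter()
    return SmtpAlerter()

mailer = make_alerter()  # callers never care which concrete class this is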

This is one way to do it, but I would rather create the instance dynamically. That way you have only one class instead of selecting between two different classes: the class takes some arguments and configures itself depending on the arguments provided. There are quite a few examples out there and I'm sure you can adapt them to your use case. Try searching for how to create a dynamic class in Python.
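One hedged reading of that suggestion is to assemble the class at runtime with type(), picking its methods based on the environment; the method bodies below are placeholders:

import sys

def _send_via_mail(self, message):
    print("would shell out to the mail command:", message)

def _send_via_smtp(self, message):
    print("would send via smtplib:", message)

_send = _send_via_mail if sys.platform == "linux" else _send_via_smtp
Alert = type("Alert", (object,), {"send": _send})  # class built dynamically

Alert().send("job finished")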

Related

monkey-patching/decorating/wrapping an entire python module

I would like, given a Python module, to monkey-patch all of the functions, classes and attributes it defines. Simply put, I would like to log every interaction that a script I do not directly control has with a module I do not directly control. I'm looking for an elegant solution that does not require prior knowledge of either the module or the code using it.
I found several high-level tools that help with wrapping, decorating, patching, etc., and I've gone over the code of some of them, but I cannot find an elegant way to create a proxy of any given module and proxy it automatically, as seamlessly as possible, except for appending logic to every interaction (recording input arguments and return values, for example).
In case someone else is looking for a more complete proxy implementation:
Although there are several Python proxy solutions similar to what the OP is looking for, I could not find one that also proxies classes and arbitrary class objects, as well as automatically proxying function return values and arguments, which is what I needed.
I've got some code written for that purpose as part of a full proxying/logging Python execution tool, and I may turn it into a separate library in the future. If anyone's interested, you can find the core of the code in a pull request. Drop me a line if you'd like this as a standalone library.
My code automatically returns wrapper/proxy objects for any proxied object's attributes, functions and classes. The purpose is to log and replay some of the code, so I've got the equivalent "replay" code and some logic to store all proxied objects to a JSON file.
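For readers who only need the one-level version of this idea, here is a minimal sketch of a module proxy that logs attribute reads and function calls; unlike the code described above, it does not recurse into return values:

import functools
import logging

logging.basicConfig(level=logging.INFO)

class ModuleProxy(object):
    """Wraps a module and logs every attribute read and function call."""
    def __init__(self, module):
        self._module = module

    def __getattr__(self, name):
        attr = getattr(self._module, name)
        if callable(attr):
            @functools.wraps(attr)
            def wrapper(*args, **kwargs):
                result = attr(*args, **kwargs)
                logging.info("%s.%s(*%r, **%r) -> %r",
                             self._module.__name__, name, args, kwargs, result)
                return result
            return wrapper
        logging.info("read %s.%s", self._module.__name__, name)
        return attr

import math
math = ModuleProxy(math)  # every use of math is now logged
math.sqrt(2)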

Why is merging Python system classes with custom classes less desirable than hooking the import mechanism?

I am working on a project that aims to augment the Python socket messages with partial ordering information. The library I'm building is written in Python, and needs to be interposed on an existing system's messages sent through the socket functions.
I have read some of the resources out there, namely the answer by @Omnifarious at this question: python-importing-from-builtin-library-when-module-with-same-name-exist
There is an extremely ugly and horrible thing you can do that does not involve hooking the import mechanism. This is something you should probably not do, but it will likely work. It turns your calendar module into a hybrid of the system calendar module and your calendar module.
I have implemented the import mechanism solution, but we have decided this is not the direction we'd like to take, since it relies too much on the environment. The solution to merge classes into a hybrid, rather than relying on the import mechanisms, seems to be the best approach in my case.
Why has the hybrid been called an ugly and horrible solution? I'd like to start implementing it in my project but I am wary of the warnings. It does seem a bit hackish, but since it would be part of an installation script, wouldn't it be OK to run this once?
Here is a code snippet where the interposition needs to intercept the socket message before it's sent:
class vector_clock:
    def __init__(self):
        """
        Initiate the clock with the object
        """
        self.clock = [0, 0]

    def sendMessage(self):
        """
        Send message to the server
        """
        self.msg = "This is the test message that will be interposed on"
        self.vector_clock.increment(0)  # We are clock position 0
        # Some extraneous formatting details removed for brevity...
        # connectAndSend needs interpositioning to include the vector clock
        self.client.connectAndSend(totalMsg)
        self.client.s.close()
From my understanding of your post, you wish to modify the existing socket library to inject your own functionality into it.
Yes, this is completely doable, and possibly it is even the easiest solution to your problem, but you have to consider all of the implications of what you are doing.
The most important point is that you are not just modifying socket for yourself, but for anything run in any part of your process that uses the socket library, unless it uses its own class loader. I understand that there is probably some existing library you are using that uses socket, and you want to inject this functionality into it, but this will affect EVERYTHING.
From this you have to consider the question: is your change 100% backwards compatible? Unless you can guarantee that you know every single use of socket by every library in your process (hint: you can't), you need to make sure that it completely preserves all existing functionality, or else somewhere down the road something in a core library is going to mysteriously break and you will have no idea why and no way to debug it. An example of something 100% backwards compatible (or as close as it is possible to get) is injecting a decorator that saves timing information to one of your own modules.
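As a rough sketch of that last example (the choice of create_connection and the storage list are illustrative, not part of the answer):

import functools
import socket
import time

timings = []  # one of your own modules would hold this

def timed(fn):
    """Record how long fn takes without changing its behavior."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            timings.append((fn.__name__, time.perf_counter() - start))
    return wrapper

socket.create_connection = timed(socket.create_connection)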
If you completely understand this and still think that your solution is a good one then I say "go for it". However, have you considered any alternatives?
If you just need to inject this functionality for a specific set of libraries that you use, then I would suggest doing something like patching: https://docs.python.org/3/library/unittest.mock.html#unittest.mock.patch
You could subclass whatever core library class you want to modify and then patch the library to use your class instead. At its core, what patch does is modify the global bindings in the target module so that they point to a different class/module than the one originally used.
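A hedged sketch of that approach for the vector-clock case, assuming a hypothetical existing_system module that does `from socket import socket`:

import socket
from unittest.mock import patch

class ClockedSocket(socket.socket):
    """socket subclass that prepends ordering info to outgoing data."""
    def sendall(self, data):
        return super().sendall(b"[clock:0,0]" + data)  # illustrative framing

# Rebind the name only where existing_system looks it up, and only for
# the duration of the with-block.
with patch("existing_system.socket", ClockedSocket):
    ...  # run the interposed code here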
PS. I don't think yours is a situation which calls for hooking the import mechanism.

Should I still create a class, if it can only have one instance?

I am new to OOP and am writing a small tool in Python that checks Bitcoin prices using a JSON load from the web (a Bitcoin() class), monitors the prices (Monitor()), notifies the user when thresholds are met (Notify()), and for now uses a console interface (Interface()) to do so.
I have created a Bitcoin() class that can read the prices and volumes from the JSON load. The __init__ method connects to the web using a socket. Since every instance of this class would open a new socket, I only need/want one instance of this class running.
Is a class still the best way to approach this?
What is the best way to get other classes and instances to interact with my Bitcoin() instance?
Should I global a Bitcoin() instance? Pass the instance as an argument to every class that needs it?
The first thing that concerns me is the SRP (single-responsibility principle) violation: your Bitcoin class probably shouldn't be responsible for:
opening the socket,
parsing the results,
rendering the output.
I don't know the details, but from my point of view you should split that functionality into smaller classes/functions (or plain modules), one of which is responsible for retrieving data from the web. Please also keep in mind that global state is evil (singletons in some contexts amount to global state).
Another thing that smells, from my point of view, is opening a socket inside the constructor. This isn't testable; of course you could mock/stub the socket, but it's better when a class takes all of its dependencies as constructor parameters. Doing it that way also helps you notice classes with too wide a responsibility (if your constructor requires more than 3 or 4 parameters, it could definitely be simplified).
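A minimal sketch of that constructor-injection advice; PriceFeed is a hypothetical name for the piece that owns the connection:

class PriceFeed(object):
    """Owns the socket/HTTP connection and returns parsed JSON."""
    def fetch(self):
        ...  # networking and parsing live here, not in Bitcoin

class Bitcoin(object):
    def __init__(self, feed):
        self.feed = feed  # injected, so tests can pass a stub instead

    def current_price(self):
        return self.feed.fetch()["price"]  # hypothetical JSON shape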
http://www.youtube.com/watch?v=o9pEzgHorH0
I'm not sure how relevant this video is to your project (there's no code to actually read), but maybe you'll pick up the answer to your question. At least you'll learn something new, and that's what we're here for.
If I were you, my code would be something like this (a class for every set of jobs, which is not what you are doing):

class Interface:
    '''Handle UI'''
    ...

class Connect:
    '''Handle web interface'''
    ...

class Bitcoin:
    '''Handle the calculations'''
    ...

class Notify:
    '''Notifier'''
    ...

In short, split your classes into smaller, simpler classes.
Now for your questions:
Yes, because you have a "complex-ish" problem at hand and you're using Python, so it's definitely easier to create an OOP version than a non-OOP one. So, unless you have a good reason not to, stick with OOP.
In your case, it might as well be passing the instance as an argument.
This is a good idea. It eliminates the problems caused by scopes if you don't have a very good understanding of them.
But remember that you pass the reference, not the value, so manipulating the instance can and will affect the other classes it is passed to.
Note: opening a socket in the constructor of the class is not a good idea. It might be better to do it in a method.
The answer is maybe; it depends on your whole architecture.
You should look at the Singleton pattern, because your description yells Singleton all over.
http://de.wikipedia.org/wiki/Singleton_%28Entwurfsmuster%29
If you don't find any good reason against creating a class in your given architecture, then just go for it.
OOP is a tool, not a goal; you can decide whether to use it or not. If you use a Python module, you can achieve encapsulation without ever writing "class".
Sure, you can use Python classes for this purpose. You can use module-level instances as well (no global keyword or explicit passing as arguments needed). It is a matter of taste, IMHO.
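A short sketch of the module-level-instance idea; the module name and endpoint are hypothetical:

# bitcoin_feed.py -- the module itself acts as the single shared instance
import json
import urllib.request

API_URL = "https://api.example.com/btc"  # hypothetical endpoint

def current_price():
    with urllib.request.urlopen(API_URL) as resp:
        return json.load(resp)["price"]

# elsewhere: import bitcoin_feed; print(bitcoin_feed.current_price())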
Basically, you're asking about a Python-specific implementation of the Singleton pattern, which has been answered here:
Python and the Singleton Pattern
A description of the pattern itself can be found here: http://en.wikipedia.org/wiki/Singleton_pattern

Passing custom Python objects to nosetests

I am attempting to re-organize our test libraries for automation and nose seems really promising. My question is, what is the best strategy for passing Python objects into nose tests?
Our tests are organized in a testlib with a bunch of modules that exercise different types of request operations. Something like this:
testlib
\-testmoda
\-testmodb
\-testmodc
In some cases a test module (e.g. testmoda) is nothing but test_something1(), test_something2() functions, while in others we have a TestModB class in testmodb with test_anotherthing1(), test_anotherthing2() methods. The cool thing is that nose easily finds both.
Most of those test functions are request-factory stuff that can easily share a single connection to our server farm, so we do a lot of test_something1(cnn), TestModB.test_anotherthing2(cnn), etc.
Currently we don't use nose; instead we have a hodge-podge of homegrown driver scripts with hard-coded lists of tests to execute. Each of those driver scripts creates its own connection object. Maintaining those scripts and the connection minutiae is painful.
I'd like to take full advantage of nose's beautiful discovery functionality while passing in a connection object of my choosing.
Thanks in advance!
Rob
P.S. The connection objects are not pickle-able. :(
Could you use a factory to create the connections, then have functions like test_something1() (taking no arguments) use the factory to get a connection?
As far as I can tell, there is no easy way to simply pass custom objects to Nose.
However, as Matt pointed out, there are some viable workarounds to achieve similar results.
Basically, do this:
Set up a data dictionary as a package-level global.
Add your custom objects to that dictionary.
Create some factory functions that return those custom objects when they're present/suitable, or create new ones otherwise.
Refactor the existing testlib\testmod* modules to use the factory (a sketch follows).
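A hedged sketch of that arrangement, with hypothetical names (make_connection stands in for the real, non-picklable connection constructor):

# testlib/__init__.py
_shared = {}  # package-level global holding the custom objects

def make_connection():
    """Stand-in for the real connection constructor."""
    return object()

def get_connection():
    """Factory: return the shared connection, creating it on first use."""
    if "cnn" not in _shared:
        _shared["cnn"] = make_connection()
    return _shared["cnn"]

# testlib/testmoda.py
# from testlib import get_connection
#
# def test_something1():  # takes no arguments, so nose can discover and call it
#     cnn = get_connection()
#     ...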

Propagating application settings

Probably a very common question, but I couldn't find a suitable answer yet.
I have a (Python w/ C++ modules) application that makes heavy use of an SQLite database and its path gets supplied by user on application start-up.
Every time some part of the application needs access to the database, I plan to acquire a new session and discard it when done. For that to happen, I obviously need access to the path supplied at startup. Here are a couple of ways I can see it happening:
1. Explicit arguments
The database path is passed everywhere it is needed as an explicit parameter, and the database session is instantiated with that explicit path. This is perhaps the most modular approach, but it seems incredibly awkward.
2. Database path singleton
The database session object would look like:
import foo.options

class DatabaseSession(object):
    def __init__(self, path=foo.options.db_path):
        ...
I consider this to be the lesser-evil singleton, since we're storing only constant strings, which don't change during application runtime. This leaves it possible to override the default and unit test the DatabaseSession class if necessary.
3. Database path singleton + static factory method
Perhaps slight improvement over the above:
def make_session(path=None):
    import foo.options
    if path is None:
        path = foo.options.db_path
    return DatabaseSession(path)

class DatabaseSession(object):
    def __init__(self, path):
        ...
This way the module doesn't depend on foo.options at all, unless we're using the factory method. Additionally, the method can perform stuff like session caching or whatnot.
And then there are other patterns, which I don't know of. I vaguely saw something similar in web frameworks, but I don't have any experience with those. My example is quite specific, but I imagine it also expands to other application settings, hence the title of the post.
I would like to hear your thoughts about what would be the best way to arrange this.
Yes, there are others. Your option 3 though is very Pythonic.
Use a standard Python module to encapsulate options (this is the way web frameworks like Django do it)
Use a factory to emit properly configured sessions.
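A minimal sketch of that combination, modeled on option 3 above (foo.options and db_path are the asker's names; the sqlite3 usage is an illustrative assumption):

# foo/options.py -- populated once at application startup
db_path = None

# foo/db.py
import sqlite3

import foo.options

def make_session(path=None):
    """Factory returning a configured session; caching could live here."""
    if path is None:
        path = foo.options.db_path
    return sqlite3.connect(path)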
Since SQLite already has a "connection", why not use that? What does your DatabaseSession class add that the built-in connection lacks?
