I have a logfile object that I would like to effectively be omnipresent in the rest of my code so that it can pick up comments throughout the code. How can I go about coding this into a nice OOP format?
At the moment I instantiate it in the first class it's used in and then pass it to every other class when they are instantiated (I haven't even started to think about how to hand the log back once I've finished using a class). This clearly seems unnecessarily messy!
class LogFile(object):
    def __init__(self):
        self.log = []

    def write_log(self, data):
        self.log.append(data)
class A(object):
    def __init__(self):
        self.logger = LogFile()

    def do_some_stuff(self, stuff):
        ...
        b = B(self.logger)
        b.do_some_more_stuff(stuff)
        self.logger.write_log(stuff)
class B(object):
    def __init__(self, logger):
        self.logger = logger

    def do_some_more_stuff(self, stuff):
        ...
        self.logger.write_log(more_stuff)
item = A()
item.do_some_stuff(stuff)
Thanks
I usually don't like to use Singletons, but for things like logging it is the easiest solution I know.
And the simplest way to implement it in Python is to use a module-level variable. In the module mylogger.py:
class LogFile(object):
    def __init__(self):
        self.log = []

    def write_log(self, data):
        self.log.append(data)

logger = LogFile()
And in another module:
from mylogger import logger

class A(object):
    def do_some_stuff(self, stuff):
        ...
        logger.write_log(stuff)
Python's standard logging module uses a similar approach: the root logger is defined as a module-level variable.
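For comparison, here is a minimal sketch of the same idea using the standard logging module instead of the hand-rolled LogFile (the logger name and message are just placeholders):

import logging

logging.basicConfig(level=logging.INFO)

# any module can grab a logger without one being passed around
logger = logging.getLogger(__name__)

class A(object):
    def do_some_stuff(self, stuff):
        logger.info('did some stuff: %s', stuff)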
I am building a piece of software and am using one class only to store data:
class Data():
    data = [1, 2, 3]
Now the data in this class can be accessed and changed from other classes without instantiating the Data class which is exactly what I need.
In order to update the rest of the software properly, I have to call functions in other classes whenever the data changes. I looked at the observer pattern in Python but could not get it to work without making data an attribute of the class that's only available when instantiated. In other words, all the observer pattern implementations I found required:
class Data():
    def __init__(self):
        self.data = [1, 2, 3]
Obviously, if Data is my publisher/observable it needs to be instantiated once to get the functionality (as far as I understand) but I am looking for an implementation like:
class Data():
    data = [1, 2, 3]

    def __init__(self):
        self.subscribers = {}

    def register(self, who, callback):
        self.subscribers[who] = callback

    def dispatch(self):
        for subscriber, callback in self.subscribers.items():
            callback()
For the sake of the example, let's use this other class as the subscriber/observer, which can also change the data with another function. As this will be the main class handling the software, this is where I instantiate Data to get the observer behavior. It is important, however, that I not be required to instantiate Data just to access data, as data will be changed from a lot of other classes:
class B():
    def __init__(self):
        self.data = Data()
        self.data.register(self, self.print_something)

    def print_something(self):
        print("Notification Received")

    def change_data(self):
        Data.data.append(100)
My question now is, how to automatically send the notification from the Publisher/Observable whenever data gets changed in any way?
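One way to get there without having to instantiate Data just to mutate it is to keep everything at class level and funnel all changes through a classmethod; a minimal sketch of that idea (my illustration, not a complete solution: direct appends to Data.data would still bypass the notification):

class Data:
    data = [1, 2, 3]
    subscribers = {}

    @classmethod
    def register(cls, who, callback):
        cls.subscribers[who] = callback

    @classmethod
    def change_data(cls, value):
        # all writers call Data.change_data(...) instead of touching Data.data
        cls.data.append(value)
        cls.dispatch()

    @classmethod
    def dispatch(cls):
        for callback in cls.subscribers.values():
            callback()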
I am running python 3.8 on Windows 10.
I am trying to execute the below code but I get errors.
class base:
    def callme(data):
        print(data)

class A(base):
    def callstream(self):
        B.stream(self)

    def callme(data):
        print("child ", data)

class B:
    def stream(data):
        # The statement below doesn't work, but I want something like it to
        # achieve run-time polymorphism, where the method call is not
        # hardcoded to a certain class reference.
        (base)data.callme("streaming data")
        # The statement below works, but it won't call the child class's
        # overridden method. I can use A.callme() to call the child class
        # method, but then it is hardcoded to A, which kills the purpose.
        # Any class (A or B or XYZ) that inherits base should be able to read
        # stream data from the stream class. How do I achieve this in Python?
        # Any class should be able to read the stream data as long as it
        # inherits from the base class. That would give my stream class the
        # generic ability to be used by any client class that inherits base.
        # base.callme("streaming data")

def main():
    ob = A()
    ob.callstream()

if __name__ == "__main__":
    main()
I got the output you say you're looking for (in a comment rather than the question -- tsk, tsk) with the following code, based on the code in your question:
class base:
    def callme(self, data):
        print(data)

class A(base):
    def callstream(self):
        B.stream(self)

    def callme(self, data):
        print("child", data)

class B:
    @classmethod
    def stream(cls, data):
        data.callme("streaming data")

def main():
    ob = A()
    ob.callstream()

if __name__ == "__main__":
    main()
Basically, I just made sure the instance methods had self parameters, and since you seem to be using B.stream() as a class method, I declared it as such.
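To see the polymorphism this buys you, any class inheriting base can be handed to B.stream(); class C below is hypothetical, added for illustration:

class C(base):
    pass  # inherits base.callme unchanged

B.stream(A())  # prints "child streaming data" (the override)
B.stream(C())  # prints "streaming data" (the base implementation)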
I'm writing a GUI library, and I'd like to let the programmer provide meta-information about their program which I can use to fine-tune the GUI. I was planning to use function decorators for this purpose, for example like this:
class App:
    @Useraction(description='close the program', hotkey='ctrl+q')
    def quit(self):
        sys.exit()
The problem is that this information needs to be bound to the respective class. For example, if the program is an image editor, it might have an Image class which provides some more Useractions:
class Image:
    @Useraction(description='invert the colors')
    def invert_colors(self):
        ...
However, since the concept of unbound methods has been removed in python 3, there doesn't seem to be a way to find a function's defining class. (I found this old answer, but that doesn't work in a decorator.)
So, since it looks like decorators aren't going to work, what would be the best way to do this? I'd like to avoid having code like
class App:
    def quit(self):
        sys.exit()

Useraction(App.quit, description='close the program', hotkey='ctrl+q')
if at all possible.
For completeness' sake, the @Useraction decorator would look somewhat like this:
from collections import defaultdict

class_metadata = defaultdict(dict)

def Useraction(**meta):
    def wrap(f):
        cls = get_defining_class(f)
        class_metadata[cls][f] = meta
        return f
    return wrap
You are using decorators to add metadata to methods. That is fine. It can be done e.g. this way:
def user_action(description):
    def decorate(func):
        func.user_action = {'description': description}
        return func
    return decorate
Now, you want to collect that data and store it in a global dictionary in the form class_metadata[cls][f] = meta. For that, you need to find all decorated methods and their classes.
The simplest way to do that is probably using metaclasses. In a metaclass, you can define what happens when a class is created. In this case, go through all attributes of the class, find the decorated methods and store them in the dictionary:
import collections

class UserActionMeta(type):
    user_action_meta_data = collections.defaultdict(dict)

    def __new__(cls, name, bases, attrs):
        rtn = type.__new__(cls, name, bases, attrs)
        for attr in attrs.values():
            if hasattr(attr, 'user_action'):
                UserActionMeta.user_action_meta_data[rtn][attr] = attr.user_action
        return rtn
I have put the global dictionary user_action_meta_data in the metaclass just because it felt logical. It can be anywhere.
Now, just use that in any class:
class X(metaclass=UserActionMeta):
    @user_action('Exit the application')
    def exit(self):
        pass
Static UserActionMeta.user_action_meta_data now contains the data you want:
defaultdict(<class 'dict'>, {<class '__main__.X'>: {<function exit at 0x00000000029F36C8>: {'description': 'Exit the application'}}})
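A small usage sketch for consuming the collected metadata later, e.g. when building menus (names as in the example above):

for cls, actions in UserActionMeta.user_action_meta_data.items():
    for func, meta in actions.items():
        print(cls.__name__, func.__name__, meta['description'])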
I've found a way to make decorators work with the inspect module, but it's not a great solution, so I'm still open to better suggestions.
Basically what I'm doing is to traverse the interpreter stack until I find the current class. Since no class object exists at this time, I extract the class's qualname and module instead.
import inspect

def get_current_class():
    """
    Returns the name of the current module and the name of the class that is
    currently being created. Has to be called in class-level code, for example:

        def deco(f):
            print(get_current_class())
            return f

        def deco2(arg):
            def wrap(f):
                print(get_current_class())
                return f
            return wrap

        class Foo:
            print(get_current_class())

            @deco
            def f(self):
                pass

            @deco2('foobar')
            def f2(self):
                pass
    """
    frame = inspect.currentframe()
    while True:
        frame = frame.f_back
        if '__module__' in frame.f_locals:
            break
    dict_ = frame.f_locals
    cls = (dict_['__module__'], dict_['__qualname__'])
    return cls
Then in a sort of post-processing step, I use the module and class names to find the actual class object.
import sys

def postprocess():
    global class_metadata

    def findclass(module, qualname):
        scope = sys.modules[module]
        for name in qualname.split('.'):
            scope = getattr(scope, name)
        return scope

    class_metadata = {findclass(cls[0], cls[1]): meta
                      for cls, meta in class_metadata.items()}
The problem with this solution is the delayed class lookup. If classes are overwritten or deleted, the post-processing step will find the wrong class or fail altogether. Example:
class C:
    @Useraction(hotkey='ctrl+f')
    def f(self):
        print('f')

class C:
    pass

postprocess()
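For what it's worth, on Python 3.6+ a descriptor's __set_name__ hook receives the defining class at class-creation time, which would avoid both the stack inspection and the delayed lookup; a minimal sketch of that alternative approach (not the one above):

from collections import defaultdict

class_metadata = defaultdict(dict)

class Useraction:
    def __init__(self, **meta):
        self.meta = meta

    def __call__(self, f):
        self.f = f
        return self  # the instance is what gets assigned in the class body

    def __set_name__(self, owner, name):
        # called with the defining class while it is being created
        class_metadata[owner][self.f] = self.meta
        setattr(owner, name, self.f)  # swap the plain function back in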
I am wondering if there is a way to do what I am trying, best explained with an example:
Contents of a.py:
class A(object):
    def run(self):
        print('Original')

class Runner(object):
    def run(self):
        a = A()
        a.run()
Contents of b.py:
import a

class A(a.A):
    def run(self):
        # Do something project-specific
        print('new class')

class Runner(a.Runner):
    def other_fcn_to_do_things(self):
        pass
Basically, I have a file with some base classes that I would like to use for a few different projects. What I would like is for b.Runner.run() to use the class A in b.py, without needing to override the run method. In the example above, I would like the following code
import b
r = b.Runner()
print(r.run())
to print "new class". Is there any way to do that?
This seems a little convoluted. The Runner classes are probably unnecessary, unless there's something else more complex going on that was left out of your example. If you're set on not overriding the original run(), you could call it in another method in B. Please take a look at this post and this post on super().
It would probably make more sense to do something like this:
a.py:
class A(object):
    def run(self):
        # stuff
        print('Original')
b.py:
import a

class B(a.A):
    def run(self):
        # super(B, self) (or plain super() in Python 3) resolves to a.A
        return super(B, self).run()
        # can also do: return a.A.run(self)

    def run_more(self):
        super(B, self).run()
        # other stuff
        print('new class')
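That said, if the goal is specifically for b.Runner.run() to pick up the project-specific A without overriding run(), one common pattern (a sketch of mine, not from the answer above) is to make the collaborator class an overridable class attribute:

# a.py
class A(object):
    def run(self):
        print('Original')

class Runner(object):
    a_cls = A  # subclasses override this attribute

    def run(self):
        a = self.a_cls()
        a.run()

# b.py
import a

class A(a.A):
    def run(self):
        print('new class')

class Runner(a.Runner):
    a_cls = A  # b.Runner().run() now prints "new class"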
I have a situation where I'm trying to modify the arguments passed to a decorator on one of my class methods. The code looks something like this:
class MyClass(object):
    @tryagain(retries=3)
    def mymethod(self, arg):
        ... do stuff ...
My problem is I'd like to alter the "retries" variable to something less than 3 when running my unit tests, but keep it at "3" for the production code. Unfortunately, it doesn't look like I can do something like this:
@tryagain(retries=self.retries)
def mymethod(self, arg):
    ... do stuff ...
or
@tryagain(retries=MyClass.retries)
def mymethod(self, arg):
    ... do stuff ...
because the class isn't defined at the point the arguments are passed to the decorator (as near as I can tell).
I also tried to add the variable within the module like so:
retries = 1

class MyClass(object):
    @tryagain(retries=retries)
    def mymethod(self, arg):
        ... do stuff ...
but then I can't seem to modify the value of "retries" from within my unit tests. Is there another way to accomplish what I'm trying to do?
I assume you are trying to reduce the number of retries to increase test speed.
If so, modifying the retries variable doesn't seem to be the best approach. Instead, you could unit test the function mymethod without the decorator first, and then create a mock function of mymethod. Let's call it mock_mymethod, decorate it with @tryagain, and test whether the logic of `tryagain` actually works.
Check the mock module to see how to create a mock instance; this article about mock is also worth reading.
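A minimal sketch of that idea, assuming a tryagain along the lines of the simple version shown at the bottom of this page (names are hypothetical; note the decorator arguments are evaluated once at class-definition time, which is why reassigning a module-level retries afterwards has no effect):

from unittest import mock

def test_tryagain_logic():
    # a mock standing in for the real method
    mock_mymethod = mock.Mock(return_value=None)
    decorated = tryagain(retries=2)(mock_mymethod)
    decorated("some arg")
    # with retries=2 the wrapped function should have run twice
    assert mock_mymethod.call_count == 2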
You could use an environment variable, set from your calling code (it might be good to supply a default here):
import os

# ...

class MyClass(object):
    @tryagain(retries=int(os.environ['project_num_retries']))
    def mymethod(self, arg):
        print("mymethod")
Or use a "globals"-type module, for example: project_settings.py containing:
num_retries = 3
Then
import project_settings

class MyClass(object):
    @tryagain(retries=project_settings.num_retries)
    def mymethod(self, arg):
        print("mymethod")
But I'm not sure decorating your code with test information is how you really should go about it -- what about:
class MyClass(object):
    def mymethod(self, arg):
        print("mymethod")
Then in something like unittests.py:
DEV_TESTS = True  # Change to False for production
num_retries = 3 if not DEV_TESTS else 1

import <your_class>

class UnitTests():
    def __init__(self):
        self.c = <your_class>.MyClass()

    @tryagain(retries=num_retries)
    def test_mymethod(self):
        self.c.mymethod("Foo")

t = UnitTests()
t.test_mymethod()
If you were so inclined, this unittests.py could be used with something like python's unittest package with:
DEV_TESTS = True  # Change to False for production
num_retries = 3 if not DEV_TESTS else 1

import unittest
import <your_class>

class UnitTests(unittest.TestCase):
    def setUp(self):
        self.c = <your_class>.MyClass()

    @tryagain(retries=num_retries)
    def test_mymethod(self):
        self.c.mymethod("Foo")
Note, I used the following simple example of a @tryagain decorator; yours may be more complicated and may require some tuning of the examples:
def tryagain(retries):
    def wrap(f):
        def wrapped_f(*args, **kwargs):
            for _ in range(retries):
                f(*args, **kwargs)
        return wrapped_f
    return wrap
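For a quick sanity check, applying it to a stand-in function (hypothetical example):

@tryagain(retries=2)
def hello():
    print("hi")

hello()  # prints "hi" twice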