I made decorators to cache data, and in particular to list the files contained in a cache directory. My code works, but I don't know if it is good practice, because I decorate a method of my class with my decorator @cache.listcachedir(...), which never actually calls my method but returns a result of its own (see the code below).
My decorator (in cache.py):
import os
from functools import wraps

def listcachedir(directory):
    def decorator(func):
        @wraps(func)
        def wrapper(self):
            # Join the base cache dir to directory
            fdir = self.locate(directory)
            if os.path.isdir(fdir):
                return os.listdir(fdir)
            else:
                raise CacheNotFoundError()
        return wrapper
    return decorator
In my other .py file:
class Analitics:
    def __init__(self):
        self.base_cache_dir = ".../..."
        ...

    def locate(self, directory):
        return os.path.join(self.base_cache_dir, directory)
    ...

class Analyzer(Analitics):
    def __init__(self):
        Analitics.__init__(self)

    @cache.listcachedir('my_cache')
    def getCacheList(self): return  # the body never runs; the wrapper returns the result

if __name__ == "__main__":
    ana = Analyzer()
    print(ana.getCacheList())  # works
Yes, this is bad practice because it's needlessly confusing. You can define the function more simply as:
(cache.py)
def listcachedir(analitics, directory):
    # Join the base cache dir to directory
    fdir = analitics.locate(directory)
    if os.path.isdir(fdir):
        return os.listdir(fdir)
    else:
        raise CacheNotFoundError()
and then:
class Analyzer(Analitics):
    def __init__(self):
        Analitics.__init__(self)

    def getCacheList(self):
        return listcachedir(self, 'my_cache')
This does exactly the same thing (including separating the listcachedir implementation into its own module), but without all the confusing layers of indirection.
I find the use of a decorator misleading here.
You don't use the func argument at all. I expect a decorator to do something with the function (or class) it decorates; if it doesn't, what's the point of defining the function that's being decorated?
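For contrast, a conventional decorator wraps and calls the function it receives. A minimal, illustrative sketch (not taken from your code):

import functools

def cached(func):
    results = {}
    @functools.wraps(func)
    def wrapper(*args):
        # call the wrapped function only on a cache miss
        if args not in results:
            results[args] = func(*args)
        return results[args]
    return wrapper

Here the decorated function still does the real work; the decorator only adds behaviour around it.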
You could write your code like this:
def make_cachemethod(directory):
    def cachemethod(self):
        fdir = self.locate(directory)
        if os.path.isdir(fdir):
            return os.listdir(fdir)
        else:
            raise CacheNotFoundError()
    return cachemethod

class Analyzer(Analitics):
    getCacheList = make_cachemethod('my_cache')
    # more code here
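Usage stays exactly the same as in the question, since getCacheList is now an ordinary method created at class-definition time:

ana = Analyzer()
print(ana.getCacheList())  # lists the files in <base_cache_dir>/my_cache, or raises CacheNotFoundError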
Python's threading module has two interfaces. In one you instantiate threading.Thread and pass to it the function you want to run, schematically:
import threading

class MyClass:
    def __init__(self):
        self.my_vars = {}

    def my_func(self):
        # does stuff
        self.my_vars = ...  # set the result of running the function here

mc = MyClass()
t = threading.Thread(target=mc.my_func)  # also, pass arguments
# set some options on t, like t.setDaemon(True)
t.start()
In the other you subclass threading.Thread and override the run method, schematically:
class MyThreadedClass(threading.Thread):
    def __init__(self):
        super(MyThreadedClass, self).__init__()
        self.my_vars = {}

    def run(self):
        # does stuff
        self.my_vars = ...  # set the result of running the function here

t = MyThreadedClass()
t.start()
I started out using the first one, but at some point realized that I was writing a lot of boilerplate every time I wanted to start a thread to run my_func: I kept having to remind myself of the syntax for passing arguments to my_func, and I had to write several lines to set thread options, etc. So I decided to move to the second style; that way I just instantiate my class and call .start(). Note that at this stage the difference is only in how easy these things are to use, since my_func and run are exactly the same.
But now I'm realizing that this made my code harder to test. Before, if I wanted to test my_func with particular arguments, I just had to import the file where it is defined and run it on some input. I could even do it from a Jupyter notebook and play with its inputs to see its outputs. But with the second style, every time I want to do something as simple as running my_func, it comes with a thread attached.
So the question: is there a way to organize my code so that it's clean for an application to run it on its own thread, but with no thread involved when I want to call it, for example, from a notebook?
Make objects of your class callable:
from abc import ABCMeta, abstractmethod
import threading

class Callable:
    __metaclass__ = ABCMeta

    @abstractmethod
    def __call__(self): raise NotImplementedError

class MyCallable(Callable):
    def __init__(self, x):
        self.x = x

    def __call__(self):
        print('x=', self.x)

# without a thread:
callable = MyCallable(7)
callable()

# on a thread:
callableThread = threading.Thread(target=callable)
callableThread.start()
callableThread.join()
You can do without the formality of abstract base classes and abstract methods -- just make sure your class defines a __call__ method that does the work currently being done by your my_func.
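A minimal sketch of that simpler form (assuming your my_func needs nothing beyond self; the stored result here is a placeholder):

import threading

class MyClass:
    def __init__(self):
        self.my_vars = {}

    def __call__(self):
        # does the work my_func used to do
        self.my_vars = {'result': 42}  # placeholder result

mc = MyClass()

# plain call, e.g. from a notebook -- no thread involved:
mc()

# or run the same object on a thread:
t = threading.Thread(target=mc)
t.start()
t.join()
print(mc.my_vars)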
I have a class in which I collect data into a list with the help of a wide range of methods (say 23). Every method uses the list and may modify it. My question is: how can I call all the methods of the class (from within the class) in a more generally accepted way?
class Example(object):
    def __init__(self):
        self.lst = []

    def multiply(self):
        for i in xrange(10):
            self.lst.append(i**2)

    def get_list(self):
        return self.lst

# Calling:
ex = Example()
ex.multiply()
print ex.get_list()

# What I want is to call the multiply method inside the class and just do this:
print ex.get_list()
The Example class illustrates my idea. I know that it is possible to solve my problem by iterating over Example.__dict__.values(), by calling all the methods in one method of the class, or with the inspect module, but I am not sure there isn't a more Pythonic way.
UPDATE:
All I want is to collect configuration data for the yapf formatter.
The main problem is how to call all the methods in the class - I don't want to implement all of the configuration analysis of the input file in one method. OOP and patterns are my guide.
UPDATE 2:
In answer to Jared Goguen: I want to create a class that collects data into a dictionary and sends it to the CreateStyleFromConfig method.
And when that is done, I want to just call the get_style method on the class without calling all the methods inside it myself:
config = ConfData() # Class which collects all configurations from file
config.get_style()
The ConfData class contains methods named after the specific piece of data they handle. For example:
def align_closing_bracket_with_visual_indent(self):
    # Do some work..
    pass
So, I guess there are two potential solutions to this, but I don't really like either of them. I think you might be approaching the problem the wrong way.
You could use an external decorator track and a class variable tracker to keep track of which methods you want to call.
def track(tracker):
    def wrapper(func):
        tracker.append(func)
        return func
    return wrapper

class Example:
    tracker = []

    @track(tracker)
    def method_a(self):
        return [('key_a1', 'val_a1'), ('key_a2', 'val_a2')]

    @track(tracker)
    def method_b(self):
        return [('key_b1', 'val_b1'), ('key_b2', 'val_b2')]

    def collect_data(self):
        return dict(tup for method in self.tracker for tup in method(self))

print Example().collect_data()
# {'key_b1': 'val_b1', 'key_b2': 'val_b2', 'key_a1': 'val_a1', 'key_a2': 'val_a2'}
With this approach, you can have utility methods in your class that you don't want to call.
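For example, an undecorated helper is simply never collected (a small illustrative variation, reusing the track decorator defined above):

class Example2:
    tracker = []

    @track(tracker)
    def method_a(self):
        return [('key_a', 'val_a')]

    def helper(self):
        # no @track(tracker), so collect_data never calls it
        return 'intermediate value'

    def collect_data(self):
        return dict(tup for method in self.tracker for tup in method(self))

print(Example2().collect_data())  # {'key_a': 'val_a'}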
Another approach would be to inspect the directory of your class and logically determine which methods you want to call.
from inspect import ismethod

class Example:
    def method_a(self):
        return [('key_a1', 'val_a1'), ('key_a2', 'val_a2')]

    def method_b(self):
        return [('key_b1', 'val_b1'), ('key_b2', 'val_b2')]

    def collect_data(self):
        data = {}
        for attr in dir(self):
            if not attr.startswith('_') and attr != 'collect_data':
                possible_method = getattr(self, attr)
                if ismethod(possible_method):
                    data.update(possible_method())
        return data
This approach is similar to the one mentioned in your post (i.e. iterating over __dict__) and is weak because any instance methods that you don't want to call need to start with '_'. You can adapt this approach to use some other naming convention, but it might not be readable to anyone else.
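For instance, the filter could look for a dedicated prefix instead; the 'conf_' prefix below is purely illustrative:

from inspect import ismethod

class Example:
    def conf_a(self):
        return [('key_a', 'val_a')]

    def other_helper(self):
        return 'ignored by collect_data'

    def collect_data(self):
        data = {}
        for attr in dir(self):
            if attr.startswith('conf_'):  # only collect explicitly prefixed methods
                possible_method = getattr(self, attr)
                if ismethod(possible_method):
                    data.update(possible_method())
        return data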
Either of these methods could implement the collect_data portion as a super-class, allowing you to create minimal sub-classes. This doesn't really help much with the first approach.
class MethodTracker(object):
    def collect_data(self):
        return dict(tup for method in self.tracker for tup in method(self))

class Example(MethodTracker):
    tracker = []

    @track(tracker)
    def method_a(self):
        return [('key_a1', 'val_a1'), ('key_a2', 'val_a2')]

    @track(tracker)
    def method_b(self):
        return [('key_b1', 'val_b1'), ('key_b2', 'val_b2')]
With the second approach, the resulting sub-class is minimal. Also, you can do a little reflection to allow the super-class to have utility methods that don't start with '_'.
from inspect import ismethod

class MethodTracker(object):
    def collect_data(self):
        data = {}
        for attr in dir(self):
            if not attr.startswith('_') and not hasattr(MethodTracker, attr):
                possible_method = getattr(self, attr)
                if ismethod(possible_method):
                    data.update(possible_method())
        return data

    def decoy_method(self):
        return 'This is not added to data.'

class Example(MethodTracker):
    def method_a(self):
        return [('key_a1', 'val_a1'), ('key_a2', 'val_a2')]

    def method_b(self):
        return [('key_b1', 'val_b1'), ('key_b2', 'val_b2')]
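Usage looks the same as before; decoy_method is skipped because it is defined on MethodTracker itself (key order may vary):

print(Example().collect_data())
# {'key_a1': 'val_a1', 'key_a2': 'val_a2', 'key_b1': 'val_b1', 'key_b2': 'val_b2'}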
If I have some (and only one) class, A, in my file with some basic method,
class A(B):
    def some_overrided_method(self):
        return {'output': True}
which of the following would be better to use?
Static method inside class
class A(B):
    def some_overrided_method(self):
        return {'output': self.do_smth()}

    @staticmethod
    def do_smth():
        return True
Function outside class
def do_smth():
    return True

class A(B):
    def some_overrided_method(self):
        return {'output': do_smth()}
Or just a nested function inside?
class A(B):
    def some_overrided_method(self):
        def do_smth():
            return True
        return {'output': do_smth()}
If it doesn't do anything with the class/instance, there is no point in it being a method of the class.
Just use a normal function.
A staticmethod is very rarely useful.
The only reason I can think of for using a staticmethod is if the function makes no sense to use outside of the class.
I have a class and I want to test it with the built-in unittest module. In particular, I want to test whether I can create instances without raising errors and whether I can use them.
The problem is that creating these objects is quite slow, so I create the object once in the setUpClass method and reuse it:
@classmethod
def setUpClass(cls):
    cls.obj = MyClass(argument)

def TestConstruction(self):
    obj = MyClass(argument)

def Test1(self):
    self.assertEqual(self.obj.method1(), 1)
The points are:
I am creating the expensive object twice.
setUpClass is called before TestConstruction, so I cannot catch a construction failure inside TestConstruction.
I would be happy, for example, if there were a way to make TestConstruction run before the other tests.
Why not test both initialization and functionality in the same test?
class MyTestCase(TestCase):
    def test_complicated_object(self):
        obj = MyClass(argument)
        self.assertEqual(obj.method(), 1)
Alternatively, you can have one test case for object initialization and one test case for the other tests. This does mean you create the object twice, but it might be an acceptable tradeoff:
class CreationTestCase(TestCase):
    def test_complicated_object(self):
        obj = MyClass(argument)

class UsageTestCase(TestCase):
    @classmethod
    def setUpClass(cls):
        cls.obj = MyClass(argument)

    def test_complicated_object(self):
        self.assertEqual(self.obj.method(), 1)
Do note that if your methods mutate the object, you're going to get into trouble.
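To see why, here is a contrived sketch (not your class): with setUpClass the same instance is shared, so one test's mutation leaks into the next:

import unittest

class Counter:
    def __init__(self):
        self.n = 0

    def bump(self):
        self.n += 1
        return self.n

class SharedStateTestCase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.obj = Counter()  # one instance shared by every test below

    def test_bump_once(self):
        self.assertEqual(self.obj.bump(), 1)

    def test_bump_twice(self):
        # whether this passes depends on whether the other test already ran
        self.assertEqual(self.obj.bump(), 1)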
Alternatively, you can do this, but again, I wouldn't recommend it:
class MyTestCase(TestCase):
    _test_object = None

    @classmethod
    def _create_test_object(cls):
        if cls._test_object is None:
            cls._test_object = MyClass(argument)
        return cls._test_object

    def test_complicated_object(self):
        obj = self._create_test_object()
        self.assertEqual(obj.method(), 1)

    def more_test(self):
        obj = self._create_test_object()
        # obj will be cached, unless creation failed
I'm working on a project where being able to discover the order of declaration of functions within a class would be quite useful. Basically, I'd like to be able to guarantee that all functions within a class are executed in the order they are declared.
The end result is a web page in which the order of the output of the functions matches the order in which the functions are declared. The class will inherit from a generic base class that defines it as a web page. The web application will dynamically load the .py file.
class Register(object):
    def __init__(self):
        self._funcs = []

    def __call__(self, func):
        self._funcs.append(func)
        return func

class MyClass(object):
    _register = Register()

    @_register
    def method(self, whatever):
        yadda()

    # etc
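Because the decorator appends each function as the class body executes, _funcs already holds the methods in declaration order; reading it back could look like this (attribute names are the ones defined above):

ordered = MyClass._register._funcs
print([f.__name__ for f in ordered])  # ['method', ...]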
from types import MethodType, FunctionType

methodtypes = set((MethodType, FunctionType, classmethod, staticmethod))

def methods_in_order(cls):
    "Given a class or instance, return its methods in the order they were defined."
    methodnames = (n for n in dir(cls) if type(getattr(cls, n)) in methodtypes)
    return sorted((getattr(cls, n) for n in methodnames),
                  key=lambda f: getattr(f, "__func__", f).func_code.co_firstlineno)
Usage:
class Foo(object):
    def a(self): pass
    def b(self): pass
    def c(self): pass

print methods_in_order(Foo)
# [<unbound method Foo.a>, <unbound method Foo.b>, <unbound method Foo.c>]
Also works on an instance:
print methods_in_order(Foo())
If any inherited methods were defined in a different source file, the ordering may not be consistent (since the sort relies upon each method's line number in its own source file). This could be rectified by manually walking the class's method resolution order. This would be a fair bit more complicated so I won't take a shot here.
Or if you want only the ones directly defined on the class, which seems like it might be useful for your described application, try:
from types import MethodType, FunctionType

methodtypes = set((MethodType, FunctionType, classmethod, staticmethod))

def methods_in_order(cls):
    "Given a class or instance, return its methods in the order they were defined."
    methodnames = (n for n in (cls.__dict__ if type(cls) is type else type(cls).__dict__)
                   if type(getattr(cls, n)) in methodtypes)
    return sorted((getattr(cls, n) for n in methodnames),
                  key=lambda f: getattr(f, "__func__", f).func_code.co_firstlineno)
This assumes a new-style class.
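For example, with a subclass only the directly defined methods come back (continuing the Foo example above; like the answer's code, this relies on the Python 2 func_code attribute):

class Bar(Foo):
    def d(self): pass

print methods_in_order(Bar)
# [<unbound method Bar.d>] -- a, b and c are inherited, not defined on Bar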