Is there a way in Python to either "type" functions or to have functions inherit test suites? I am evaluating several different implementations of various functions against different criteria (for example, I may evaluate different sort functions on speed for a given array size and on memory requirements), and I want to automate the testing of these functions. So I would like a way to identify a function as being an implementation of a certain operator, so that the test suite can grab all functions that implement that operator and run them through the tests.
My initial thought was to use classes and subclasses, but the class syntax is a bit finicky for this purpose, because I would first have to create an instance of the class before I could call it as a function... that is, unless there is a way to allow __init__ to return a type other than None.
Can metaclasses or objects be used in this fashion?
Functions are first class objects in Python and you can treat them as such, e.g. add some metadata via setattr:
>>> def function1(a):
...     return 1
...
>>> type(function1)
<type 'function'>
>>> setattr(function1, 'mytype', 'F1')
>>> function1.mytype
'F1'
Or the same using a simple parametrized decorator:
def mytype(t):
    def decorator(f):
        f.mytype = t
        return f
    return decorator

@mytype('F2')
def function2(a, b, c):
    return 2
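To tie this back to the original question: once functions carry a `mytype` tag, a test harness can collect every implementation of an operator and run them all. This is a minimal sketch; the sort implementations and the `implementations_of` helper are invented for illustration:

```python
def mytype(t):
    def decorator(f):
        f.mytype = t
        return f
    return decorator

@mytype('sort')
def bubble_sort(xs):
    # A deliberately naive implementation to have two candidates to compare.
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

@mytype('sort')
def builtin_sort(xs):
    return sorted(xs)

def implementations_of(t, namespace):
    """Grab every callable in a namespace tagged with mytype == t."""
    return [f for f in namespace.values()
            if callable(f) and getattr(f, 'mytype', None) == t]

# The test suite can now discover and exercise all 'sort' implementations:
sorters = implementations_of('sort', globals())
for f in sorters:
    assert f([3, 1, 2]) == [1, 2, 3]
```

From here, timing and memory measurements can be layered on top of the same loop.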
I apologize, as I cannot comment, but just to clarify: you stated "I would first have to create an instance of the class before I could call it as a function...". Does this not accomplish what you are trying to do?

class FunctionManager:
    def __init__(self, testFunction1=importedTests.testFunction1):
        self.testFunction1 = testFunction1

functionManager = FunctionManager()

Then just include the line from functionManagerFile import functionManager wherever you want to use it.
I have heard that you should not use magic methods directly, yet I think in some use cases I would have to. So, experienced devs: should I use Python magic methods directly?
I'd like to show some benefits of not using magic methods directly:
1- Readability:
Using a built-in function like len() is much more readable than the corresponding magic/special method __len__(). Imagine source code full of magic methods instead of built-in functions... thousands of underscores...
2- Comparison operators:
class C:
    def __lt__(self, other):
        print('__lt__ called')

class D:
    pass

c = C()
d = D()

d > c
d.__gt__(c)
I haven't implemented __gt__ for either of those classes, but in d > c, when Python sees that class D doesn't have __gt__, it checks whether class C implements the reflected operation __lt__. It does, so we get '__lt__ called' in the output, which isn't the case with d.__gt__(c).
3- Extra checks:
class C:
    def __len__(self):
        return 'boo'

obj = C()
print(obj.__len__())  # fine
print(len(obj))       # error
or:
class C:
    def __str__(self):
        return 10

obj = C()
print(obj.__str__())  # fine
print(str(obj))       # error
As you see, when Python calls that magic methods implicitly, it does some extra checks as well.
4- This is the least important, but calling, say, len() on built-in data types such as str is a little faster than calling __len__():
from timeit import timeit
string = 'abcdefghijklmn'
print(timeit("len(string)", globals=globals(), number=10_000_000))
print(timeit("string.__len__()", globals=globals(), number=10_000_000))
output:
0.5442426
0.8312854999999999
It's because of the lookup process (finding __len__ in the namespace). If you create a bound method before timing, it's going to be faster:
bound_method = string.__len__
print(timeit("bound_method()", globals=globals(), number=10_000_000))
I'm not a senior developer, but my experience says that you shouldn't call magic methods directly.
Magic methods should be used to override a behavior of your object. For example, if you want to define how your object is built, you override __init__. Afterwards, when you want to initialize it, you use MyNewObject() instead of MyNewObject.__init__().
For me, I tend to appreciate the answer given by Alex Martelli here:
When you see a call to the len built-in, you're sure that, if the program continues after that rather than raising an exception, the call has returned an integer, non-negative, and less than 2**31 -- when you see a call to xxx.__len__(), you have no certainty (except that the code's author is either unfamiliar with Python or up to no good;-).
If you want to know more about Python's magic methods, I strongly recommend taking a look at this documentation by Rafe Kettler: https://rszalski.github.io/magicmethods/
No, you shouldn't.
It's OK to use them in quick code problems, like on HackerRank, but not in production code. When I asked this question I was using them as first-class functions; that is, I used xlen = x.__mod__ instead of xlen = lambda y: x % y, which was more convenient. It's OK to use these kinds of snippets in simple programs, but not in any other case.
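As an aside, the same first-class-function convenience is available without touching dunders directly, via the standard library's operator module. A minimal sketch (the variable names here are invented):

```python
import operator
from functools import partial

x = 17

# Instead of xmod = x.__mod__, bind the left operand explicitly:
xmod = partial(operator.mod, x)

print(xmod(5))  # 17 % 5 == 2
print(xmod(4))  # 17 % 4 == 1
```

This reads as ordinary function composition and avoids the "unfamiliar with Python or up to no good" signal that direct dunder calls send.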
I'm working on some classes, and for testing it would be very useful to be able to run the class methods in a for loop. I'm adding methods and changing their names, and I want this to be picked up automatically in the file where I run the class for testing.
I use the function below to get a list of the methods I need to run automatically (there are some other conditional statements I deleted for the example to make sure that I only run certain methods that require testing and which only have self as an argument)
def get_class_methods(class_to_get_methods_from):
    import inspect
    methods = []
    for name, member in inspect.getmembers(class_to_get_methods_from):
        if 'method' in str(member) and not name.startswith('_'):
            methods.append(name)
    return methods
Is it possible to use the returned list 'methods' to run the class methods in a for loop?
Or is there any other way to make sure I can run my class methods in my test-running file without having to mirror every change I make in the class?
Thanks!
Looks like you want getattr(object, name[, default]):
class Foo(object):
    def bar(self):
        print("bar({})".format(self))

f = Foo()
method = getattr(f, "bar")
method()
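Combining getattr with a discovery helper like the one in the question gives the for loop you're after. A runnable sketch; the Calculator class and its methods are invented placeholders, and I've used inspect.getmembers with the isfunction predicate to do the filtering:

```python
import inspect

class Calculator:
    def test_add(self):
        return 1 + 1

    def test_mul(self):
        return 2 * 3

    def _helper(self):
        # Leading underscore, so discovery skips it.
        return None

def get_class_methods(cls):
    """Names of public functions defined on the class."""
    return [name for name, member in inspect.getmembers(cls, inspect.isfunction)
            if not name.startswith('_')]

calc = Calculator()
# Look up each discovered name on the instance and call it:
results = {name: getattr(calc, name)() for name in get_class_methods(Calculator)}
print(results)  # {'test_add': 2, 'test_mul': 6}
```

New methods added to the class are picked up automatically the next time the loop runs.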
As a side note: I'm not sure that dynamically generating lists of methods to test is such a good idea (it looks rather like an antipattern to me), but it's hard to tell without the whole project's context, so take this remark with the required grain of salt ;)
A Python class's __call__ method lets us specify how a class member should be behave as a function. Can we do the "opposite", i.e. specify how a class member should behave as an argument to an arbitrary other function?
As a simple example, suppose I have a WrappedList class that wraps lists, and when I call a function f on a member of this class, I want f to be mapped over the wrapped list. For instance:
x = WrappedList([1, 2, 3])
print(x + 1) # should be WrappedList([2, 3, 4])
d = {1: "a", 2: "b", 3:"c"}
print(d[x]) # should be WrappedList(["a", "b", "c"])
Calling the hypothetical __call__ analogue I'm looking for __arg__, we could imagine something like this:
class WrappedList(object):
    def __init__(self, to_wrap):
        self.wrapped = to_wrap

    def __arg__(self, func):
        return WrappedList(map(func, self.wrapped))
Now, I know that (1) __arg__ doesn't exist in this form, and (2) it's easy to get the behavior in this simple example without any tricks. But is there a way to approximate the behavior I'm looking for in the general case?
You can't do this in general.*
You can do something equivalent for most of the builtin operators (like your + example), and a handful of builtin functions (like abs). They're implemented by calling special methods on the operands, as described in the reference docs.
Of course that means writing a whole bunch of special methods for each of your types, but it wouldn't be too hard to write a base class (or decorator or metaclass, if that doesn't fit your design) that implements all those special methods in one place, by calling the subclass's __arg__ and then doing the default thing:
class ArgyBase:
    def __add__(self, other):
        return self.__arg__() + other

    def __radd__(self, other):
        return other + self.__arg__()

    # ... and so on
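To make the base-class sketch concrete, here is a minimal runnable version with a hypothetical subclass whose __arg__ simply unwraps a stored value (the Boxed name is invented for illustration):

```python
class ArgyBase:
    def __add__(self, other):
        return self.__arg__() + other

    def __radd__(self, other):
        return other + self.__arg__()

class Boxed(ArgyBase):
    """A trivial wrapper: __arg__ just hands back the plain value."""
    def __init__(self, value):
        self.value = value

    def __arg__(self):
        return self.value

print(Boxed(3) + 4)   # 7, via ArgyBase.__add__
print(10 + Boxed(3))  # 13, via ArgyBase.__radd__
```

The same pattern extends to __mul__/__rmul__, __sub__/__rsub__, and the rest of the operator protocol.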
And if you want to extend that to a whole suite of functions that you create yourself, you can give them special-method protocols similar to the builtin ones, and expand your base class to cover them. Or you can just short-circuit that and use the __arg__ protocol directly in those functions. To avoid lots of repetition, I'd use a decorator for that.
import functools

def argify(func):
    def _arg(arg):
        try:
            return arg.__arg__()
        except AttributeError:
            return arg
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        args = [_arg(arg) for arg in args]
        kwargs = {kw: _arg(arg) for kw, arg in kwargs.items()}
        return func(*args, **kwargs)
    return wrapper

@argify
def spam(a, b):
    return a + 2 * b
And if you really want to, you can go around wrapping other people's functions:
sin = argify(math.sin)
… or even monkeypatching their modules:
requests.get = argify(requests.get)
… or monkeypatching a whole module dynamically a la early versions of gevent, but I'm not going to even show that, because at this point we're getting into don't-do-this-for-multiple-reasons territory.
You mentioned in a comment that you'd like to do this to a bunch of someone else's functions without having to specify them in advance, if possible. Does that mean every function that ever gets constructed in any module you import? Well, you can even do that if you're willing to create an import hook, but that seems like an even worse idea. Explaining how to write an import hook and either AST-patch each function creation node or insert wrappers around the bytecode or the like is way too much to get into here, but if your research abilities exceed your common sense, you can figure it out. :)
As a side note, if I were doing this, I wouldn't call the method __arg__, I'd call it either arg or _arg. Besides being reserved for future use by the language, the dunder-method style implies things that aren't true here (special-method lookup instead of a normal call, you can search for it in the docs, etc.).
* There are languages where you can, such as C++, where a combination of implicit casting and typed variables instead of typed values means you can get a method called on your objects just by giving them an odd type with an implicit conversion operator to the expected type.
I am working on a piece of scientific code in which we want to implement several solvers and compare them. We are using a config file in which we can declare the name of the solver we wish to apply. As we are constantly adding new solvers, I've contained them in a class such that we can pull the name from the config file and apply that one with getattr, without having to add anything new to the code other than the solver method itself. I've not implemented any attributes or any methods other than the solvers in this class.
I've also implemented an error message in case the chosen solver doesn't exist. The whole block looks like this:
try:
    solver = getattr(Solvers, control['Solver'])
except AttributeError:
    print('\n Invalid Solver Choice! Implemented solvers are: \n'
          + str(set(dir(Solvers)) - set(dir(Empty))))  # Implemented solvers
    raise
solver(inputs)  # Call the desired solver
This is convenient as it automatically updates our error handling with the addition of a new method. My question relates to the error message there. Specifically, I want to return a list of the implemented solvers and only of the implemented solvers.
It doesn't suffice to simply list the output of dir(Solvers), since this includes a lot of other methods like __init__. Similarly, I can't set-subtract the results of dir(object), since this still ends up returning a few extra things like __dict__ and __module__. This is why I have the class Empty, which is just:
class Empty:
    pass
I'm wondering if there exists a more elegant way to implement this than the kludgey Empty class.
One way is:
set(A.__dict__) - set(type.__dict__)
However, it will still return __weakref__. Instead, you can use:
set(A.__dict__) - set(type("", (), {}).__dict__)
It compares your class to type("", (), {}). This creates a new class object, like your Empty, but is a bit more subtle. For an example class:
>>> class A: pass
...
It gives:
>>> set(A.__dict__) - set(type("", (), {}).__dict__)
set()
And for:
>>> class B:
...     def f(self): pass
...
It returns:
>>> set(B.__dict__) - set(type("", (), {}).__dict__)
{'f'}
You can do it with dir like:
>>> set(dir(B)) - set(dir(type("", (), {})))
{'f'}
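Another option, sidestepping the set subtraction entirely, is to let the inspect module do the filtering: inspect.getmembers with the isfunction predicate returns only functions defined on (or inherited by) the class, and a startswith check drops any dunders. A sketch, with invented solver names:

```python
import inspect

class Solvers:
    def euler(self, inputs):
        return 'euler'

    def runge_kutta(self, inputs):
        return 'rk'

# Only plain functions survive the predicate; object's slot wrappers do not.
solver_names = [name
                for name, member in inspect.getmembers(Solvers, inspect.isfunction)
                if not name.startswith('__')]
print(solver_names)  # ['euler', 'runge_kutta']
```

Note that getmembers returns names alphabetically sorted.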
Explicit is better than implicit, and you may want to have non-solver methods in your class. The simplest explicit-yet-DRY solution is to just use a decorator to mark which methods should be considered "solver" methods:
def solver(fun):
    fun.is_solver = True
    return fun

class Solvers(object):
    @solver
    def foo(self):
        return "foo"

    @solver
    def bar(self):
        return "bar"

    @classmethod
    def list_solvers(cls):
        return [name for name, attr
                in cls.__dict__.items()
                if getattr(attr, "is_solver", False)]
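For completeness, here is the same idea as a runnable snippet, with an extra unmarked helper method added (my invention) to show that only decorated methods get listed:

```python
def solver(fun):
    # Tag the function; the class will look for this marker later.
    fun.is_solver = True
    return fun

class Solvers(object):
    @solver
    def foo(self):
        return "foo"

    @solver
    def bar(self):
        return "bar"

    def helper(self):
        # Not decorated, so list_solvers skips it.
        return "not a solver"

    @classmethod
    def list_solvers(cls):
        return [name for name, attr in cls.__dict__.items()
                if getattr(attr, "is_solver", False)]

print(Solvers.list_solvers())  # ['foo', 'bar']
```

Since class __dict__ preserves definition order, the names come back in the order the solvers were written.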
I have a Python class full of static methods. What are the advantages and disadvantages of packaging these in a class rather than raw functions?
There are none. This is what modules are for: grouping related functions. Using a class full of static methods makes me cringe from Java-itis. The only time I would use a static method is if the function is an integral part of the class. (In fact, I'd probably want to use a class method anyway.)
No. It would be better to make them functions and if they are related, place them into their own module. For instance, if you have a class like this:
class Something(object):
    @staticmethod
    def foo(x):
        return x + 5

    @staticmethod
    def bar(x, y):
        return y + 5 * x
Then it would be better to have a module like,
# something.py

def foo(x):
    return x + 5

def bar(x, y):
    return y + 5 * x
That way, you use them in the following way:
import something

print(something.foo(10))
print(something.bar(12, 14))
Don't be afraid of namespaces. ;-)
If your functions are dependent on each other or global state, consider also the third approach:
class Something(object):
    def foo(self, x):
        return x + 5

    def bar(self, x, y):
        return y + 5 * self.foo(x)

something = Something()
Using this solution you can test a function in isolation, because you can override the behavior of another function or inject dependencies via the constructor.
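To make that testability claim concrete, here is a sketch: stubbing out foo on a subclass lets you verify bar's own arithmetic without foo's contribution (the stub class name is invented):

```python
class Something(object):
    def foo(self, x):
        return x + 5

    def bar(self, x, y):
        return y + 5 * self.foo(x)

class SomethingWithStubbedFoo(Something):
    """Test double: neutralise foo so bar can be checked in isolation."""
    def foo(self, x):
        return 0

s = SomethingWithStubbedFoo()
print(s.bar(3, 4))  # 4 + 5 * 0 == 4
```

With a module of free functions, bar's call to foo is hard-wired and can't be swapped out this cleanly.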
Classes are only useful when you have a set of functionality that interacts with a set of data (instance properties) which needs to persist between function calls and be referenced in a discrete fashion.
If your class contains nothing other than static methods, then your class is just syntactic cruft, and straight functions are much clearer and all that you need.
Not only are there no advantages, but it makes things slower than using a module full of functions. There's much less need for static methods in Python than there is in Java or C#; they are used in very special cases.
I agree with Benjamin. Rather than having a bunch of static methods, you should probably have a bunch of functions. And if you want to organize them, you should think about using modules rather than classes. However, if you want to refactor your code to be OO, that's another matter.
Depends on the nature of the functions. If they're largely unrelated (a minimal number of calls between them) and they don't have any state, then yes, I'd say dump them into a module. However, you could be shooting yourself in the foot if you ever need to modify the behavior, as you're throwing inheritance out the window. So my answer is maybe; be sure to look at your particular scenario rather than always assuming a module is the best way to collect a set of methods.