Nested function as attribute in Python

Not sure if this is a dupe or not. Here it goes.
I need to write some Python code that looks like:
class TestClass:
    def test_case(self):
        def get_categories(self):
            return ["abc", "bcd"]
        # do the test here
and then have a test engine class that scans all these test classes, loads all the test_case functions and for each invokes get_categories to find out if the test belongs to the group of interest for the specific run.
The problem is that get_categories is not seen as an attribute of test_case, and even if I manually assign it
class TestClass:
    def test_case(self):
        def get_categories(self):
            return ["abc", "bcd"]
        # do the test here
        test_case.get_categories = get_categories
this only happens when test_case first runs, which is too late for me.
The reason why this function can’t go on the class (or at least why I want it to be also available at the per-function level) is that a TestClass can have multiple test cases.
Since this is an already existing testing infrastructure, and the categories mechanism works (other than the categories-on-function scenario, which is of lesser importance), a rewrite is not in the plans.
Language tricks dearly appreciated.

Nested functions don't become attributes of the enclosing function, any more than any other local assignment does.
I suspect your test infrastructure is doing some severely weird things if this isn't supported (and uses old-style classes!), but you could just do this:
class TestClass:
    def test_case(self):
        ...  # the test body goes here

    def _get_categories(self):
        return [...]

    test_case.get_categories = _get_categories
    del _get_categories
Class bodies are executable code like any other block.
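With that in place, a scanning engine can read the categories without ever running the test. A minimal sketch of such a scan, assuming the engine works roughly like this (the loop below is illustrative, not part of the existing infrastructure):

for name in dir(TestClass):
    member = getattr(TestClass, name)
    if name.startswith("test_") and hasattr(member, "get_categories"):
        # self is never used by get_categories, so a placeholder is fine here
        print(name, member.get_categories(None))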

What you need is nested classes. Functions aren't made to do what you are trying to do, so you have to move up a notch. Function attributes are mainly used as markup, whereas classes can have anything you want.
class TestClass(object):
    class TestCase(object):
        @classmethod
        def get_categories(cls):
            return ['abc', 'efg']
Note that I used @classmethod so that you could use it without instantiating TestCase(); modify if you want to do test_case = TestCase().
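For what it's worth, a hedged sketch of how a scanning engine could discover these nested classes and query their categories without instantiating anything (the inspect-based loop is an assumption about how the scan might work, not part of the existing infrastructure):

import inspect

for name, member in inspect.getmembers(TestClass, inspect.isclass):
    if hasattr(member, "get_categories"):
        print(name, member.get_categories())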

Related

Can I add methods to class by condition and call them?

Here's my problem:
I have a class. And I have two objects of that class: ObjectOne and ObjectTwo
I'd like my class to have certain methods for ObjectOne and different methods for ObjectTwo.
I'd also like to choose those methods from a variety depending on some condition.
and of course, I need to call the methods I have 'from the outside code'
As I see the solution on my own (just logic, no code):
I make a default class. And I make a list of functions defined somewhere.
If 'some condition' is True, I construct a child class that takes one of those functions and adds it into the class as a class method. Otherwise I add some default set of methods. Then I make ObjectOne of this child class.
The question is: can I do that at all? And how do I do that? And how do I call such a method once it is added? They all would surely be named differently...
I do not ask for a piece of working code here. If you could give me a hint on where to look or maybe a certain topic to learn, this would do just fine!
PS: In case you wonder, the context is this: I am making a simple game prototype, and my objects represent two game units (characters) that fight each other automatically. Something like an auto-chess. Each unit may have unique abilities and therefore should act (make decisions on the battlefield) depending on the abilities it has. At first I tried to make a unified decision-making routine that would include all possible abilities at once (such as: if hasDoubleStrike else if... etc). But it turned out to be a very complex task, because there are tens of abilities overall, each unit may have any two, so the number of combinations is... vast. So, now I am trying to distribute this logic over separate units: each one would 'know' only of its own two abilities.
I mean I believe this is what would generally be referred to as a bad idea, but... you could have an argument passed into the class's constructor and then define the behavior/existence of a function depending on that condition. Like So:
class foo():
    def __init__(self, condition):
        if condition:
            self.func = lambda: print('baz')
        else:
            self.func = lambda: print('bar')

if __name__ == '__main__':
    obj1 = foo(True)
    obj2 = foo(False)
    obj1.func()
    obj2.func()
Outputs:
baz
bar
You'd likely be better off just having different classes or setting up some sort of class hierarchy.
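To make that alternative concrete, here is a minimal sketch of a small class hierarchy; the ability classes are made-up names for illustration only:

class Unit:
    def act(self):
        raise NotImplementedError

class DoubleStrikeUnit(Unit):
    def act(self):
        print("strike twice")

class HealerUnit(Unit):
    def act(self):
        print("heal the weakest ally")

for unit in (DoubleStrikeUnit(), HealerUnit()):
    unit.act()  # each subclass only knows about its own abilities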
So in the end the best solution was the classical factory method and factory class. Like this:
import abc
import Actions  # a module that works as a library of standard actions

def make_creature(some_params):
    creature_factory = CreatureFactory()
    tempCreature = creature_factory.make_creature(some_params)
    return tempCreature

class CreatureFactory:
    def make_creature(self, some_params):
        ...
        if "foo" in some_params:
            return fooChildCreature()

class ParentCreature(metaclass=abc.ABCMeta):
    someStaticParams = 'abc'

    @abc.abstractmethod
    def decisionMaking(self):
        pass

class fooChildCreature(ParentCreature):
    def decisionMaking(self):
        Actions.foo_action()
        Actions.bar_action()
        # some creature-specific decision making here that calls
        # the same static functions from 'Actions'

NewCreature = make_creature(some_params)
This is not ideal; it still requires much manual work to define decision making for various kinds of creatures, but it is still WAY better than anything else. Thank you very much for this advice.

Python static method wrapper is unnecessary? [duplicate]

I just can't see why we need to use @staticmethod. Let's start with an example.
class test1:
    def __init__(self, value):
        self.value = value

    @staticmethod
    def static_add_one(value):
        return value + 1

    @property
    def new_val(self):
        self.value = self.static_add_one(self.value)
        return self.value

a = test1(3)
print(a.new_val)  ## >>> 4

class test2:
    def __init__(self, value):
        self.value = value

    def static_add_one(self, value):
        return value + 1

    @property
    def new_val(self):
        self.value = self.static_add_one(self.value)
        return self.value

b = test2(3)
print(b.new_val)  ## >>> 4
In the example above, the method static_add_one in the two classes does not require the instance of the class (self) in its calculation.
The method static_add_one in the class test1 is decorated with @staticmethod and works properly.
But at the same time, the method static_add_one in the class test2, which has no @staticmethod decoration, also works properly, by using a trick that accepts a self argument but doesn't use it at all.
So what is the benefit of using @staticmethod? Does it improve performance? Or is it just due to the Zen of Python, which states that "Explicit is better than implicit"?
The reason to use staticmethod is if you have something that could be written as a standalone function (not part of any class), but you want to keep it within the class because it's somehow semantically related to the class. (For instance, it could be a function that doesn't require any information from the class, but whose behavior is specific to the class, so that subclasses might want to override it.) In many cases, it could make just as much sense to write something as a standalone function instead of a staticmethod.
Your example isn't really the same. A key difference is that, even though you don't use self, you still need an instance to call static_add_one --- you can't call it directly on the class with test2.static_add_one(1). So there is a genuine difference in behavior there. The most serious "rival" to a staticmethod isn't a regular method that ignores self, but a standalone function.
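To make that difference concrete, a quick check against the test1/test2 definitions above (Python 3):

print(test1.static_add_one(1))   # 2 -- callable on the class itself

try:
    test2.static_add_one(1)      # 1 gets bound to self, so 'value' is missing
except TypeError as exc:
    print(exc)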
Today I suddenly found a benefit of using @staticmethod.
If you created a staticmethod within a class, you don't need to create an instance of the class before using the staticmethod.
For example,
class File1:
    def __init__(self, path):
        out = self.parse(path)

    def parse(self, path):
        # ..parsing works..
        return x

class File2:
    def __init__(self, path):
        out = self.parse(path)

    @staticmethod
    def parse(path):
        # ..parsing works..
        return x

if __name__ == '__main__':
    path = 'abc.txt'
    File1.parse(path)  # TypeError: unbound method parse() ....
    File2.parse(path)  # Goal!!!!!!!!!!!!!!!!!!!!
Since the method parse is strongly related to the classes File1 and File2, it is more natural to put it inside the class. However, sometimes this parse method may also be used by other classes under some circumstances. If you want to do that with File1, you must create an instance of File1 before calling parse. With the staticmethod in the class File2, you can call the method directly with the syntax File2.parse.
This makes your work more convenient and natural.
I will add something other answers didn't mention. It's not only a matter of modularity, of putting something next to other logically related parts. It's also that the method could be non-static at another point of the hierarchy (i.e. in a subclass or superclass) and thus participate in polymorphism (type based dispatching). So if you put that function outside the class you will be precluding subclasses from effectively overriding it. Now, say you realize you don't need self in function C.f of class C; you have three options:
Put it outside the class. But we just decided against this.
Do nothing new: while unused, still keep the self parameter.
Declare that you are not using the self parameter, while still letting other C methods call f as self.f, which is required if you wish to keep open the possibility of further overrides of f that do depend on some instance state.
Option 2 demands less conceptual baggage (you already have to know about self and methods-as-bound-functions, because it's the more general case). But you still may prefer to be explicit about self not being used (and the interpreter could even reward you with some optimization, not having to partially apply a function to self). In that case, you pick option 3 and add @staticmethod on top of your function.
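A small side-by-side illustration of options 2 and 3 (the class and method names are made up for this sketch):

class C:
    def f_plain(self, x):       # option 2: keep the unused self
        return x + 1

    @staticmethod
    def f_static(x):            # option 3: declare the instance isn't used
        return x + 1

    def g(self):
        # both stay callable through self, so subclasses can still override them
        return self.f_plain(1) + self.f_static(1)

print(C().g())  # 4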
Use @staticmethod for methods that don't need to operate on a specific object, but that you still want located in the scope of the class (as opposed to module scope).
Your example in test2.static_add_one wastes its time passing an unused self parameter, but otherwise works the same as test1.static_add_one. Note that this extraneous parameter can't be optimized away.
One example I can think of is in a Django project I have, where a model class represents a database table, and an object of that class represents a record. There are some functions used by the class that are stand-alone and do not need an object to operate on, for example a function that converts a title into a "slug", which is a representation of the title that follows the character set limits imposed by URL syntax. The function that converts a title to a slug is declared as a staticmethod precisely to strongly associate it with the class that uses it.
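A rough, framework-free sketch of that idea (the Article class and make_slug are illustrative names, not the actual project code):

import re

class Article:
    def __init__(self, title):
        self.title = title
        self.slug = self.make_slug(title)

    @staticmethod
    def make_slug(title):
        # lowercase the title and join the alphanumeric runs with hyphens
        return "-".join(re.findall(r"[a-z0-9]+", title.lower()))

print(Article.make_slug("Hello, World!"))  # hello-world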

How can I separate the functions of a class into multiple files?

I have a main class that has a ton of different functions in it. It's getting hard to manage. I'd like to be able to separate those functions into a separate file, but I'm finding it hard to come up with a good way to do so.
Here's what I've done so far:
File main.py
import separate

class MainClass(object):
    self.global_var_1 = ...
    self.global_var_2 = ...

    def func_1(self, x, y):
        ...

    def func_2(self, z):
        ...

    # tons of similar functions, and then the ones I moved out:

    def long_func_1(self, a, b):
        return separate.long_func_1(self, a, b)
File separate.py
def long_func_1(obj, a, b):
    if obj.global_var_1:
        ...
    obj.func_2(z)
    ...
    return ...

# Lots of other similar functions that use info from MainClass
I do this because if I do:
obj_1 = MainClass()
I want to be able to do:
obj_1.long_func_1(a, b)
instead of:
separate.long_func_1(obj_1, a, b)
I know this seems kind of nit-picky, but I want just about all of the code to start with obj_1., so there isn't confusion.
Is there a better solution than what I'm currently doing? The only issues that I have with my current setup are:
I have to change arguments for both instances of the function
It seems needlessly repetitive
I know this has been asked a couple of times, but I couldn't quite understand the previous answers and/or I don't think the solution quite represents what I'm shooting for. I'm still pretty new to Python, so I'm having a tough time figuring this out.
Here is how I do it:
The class (or group of classes) is actually a full module. You don't have to do it this way, but if you're splitting a class across multiple files I think this is 'cleanest' (opinion).
The definition is in __init__.py, and the methods are split into files by meaningful grouping.
A method file is just a regular Python file with functions, except you can't forget 'self' as a first argument. You can have auxiliary methods here, both taking self and not.
Methods are imported directly into the class definition.
Suppose my class is some fitting GUI (this is actually what I did this for first time). So my file hierarchy may look something like
mymodule/
    __init__.py
    _plotstuff.py
    _fitstuff.py
    _datastuff.py
So plot stuff will have plotting methods, fit stuff contains fitting methods, and data stuff contains methods for loading and handling of data - you get the point. By convention I mark the files with a _ to indicate these really aren't meant to be imported directly anywhere outside the module. So _plotstuff.py for example may look like:
def plot(self, x, y):
    ...  # body

def clear(self):
    ...  # body
etc. Now the important thing is file __init__.py:
class Fitter(object):
    def __init__(self, whatever):
        self.field1 = 0
        self.field2 = whatever

    # Imported methods
    from ._plotstuff import plot, clear
    from ._fitstuff import fit
    from ._datastuff import load

    # static methods need to be set
    from ._static_example import something
    something = staticmethod(something)

    # Some more small functions
    def printHi(self):
        print("Hello world")
Tom Sawyer mentions PEP-8 recommends putting all imports at the top, so you may wish to put them before __init__, but I prefer it this way. I have to say, my Flake8 checker does not complain, so likely this is PEP-8 compliant.
Note the from ... import ... is particularly useful for hiding 'helper' functions for your methods that you don't want accessible through objects of the class. I usually also place the custom exceptions for the class in the different files, but import them directly so they can be accessed as Fitter.myexception.
If this module is in your path then you can access your class with
from mymodule import Fitter
f = Fitter()
f.load('somefile') # Imported method
f.plot() # Imported method
It is not completely intuitive, but not too difficult either. The short version for your specific problem is that you were close - just move the import into the class, and use
from separate import long_func_1
and don't forget your self!
How to use super addendum
super() is a nifty function allowing parent method access in a simple and readable manner from the child object. These kinds of classes are big to begin with, so inheritance doesn't always make sense, but if it does come up:
For methods defined in the class itself, within __init__.py, you can use super() normally, as is.
If you define your method in another module (which is kind of the point here), you can't use super() as is, since the function is not defined in the class body and doesn't get the __class__ cell, so the zero-argument form will fail. The way to handle this is to use the self argument and add the context yourself:
def print_super(self):
    print('Super is:', super(type(self), self))
Note you cannot omit the second argument, since out of context super does not bind the object method (which you usually want for calls like super(...).__init__()).
If this is something you want to do in many methods in different modules, you may want to provide a super method in the __init__.py file for use:
def MySuper(self):
    return super()
usable by self in all methods.
I use the approach I found here. It shows many different approaches, but if you scroll down to the end, the preferred method is to basically go in the opposite direction of @Martijn Pieters' suggestion, which is to have a base class that inherits other classes, with your methods in those classes.
So the folder structure is something like:
_DataStore/
    __init__.py
    DataStore.py
    _DataStore.py
So your base class would be:
File DataStore.py
import _DataStore

class DataStore(_DataStore.Mixin):  # Could inherit many more mixins
    def __init__(self):
        self._a = 1
        self._b = 2
        self._c = 3

    def small_method(self):
        return self._a
Then your Mixin class:
File _DataStore.py
class Mixin:
    def big_method(self):
        return self._b

    def huge_method(self):
        return self._c
Your separate methods would be located in other appropriately named files, and in this example it is just _DataStore.
I am interested to hear what others think about this approach. I showed it to someone at work and they were scared by it, but it seemed to be a clean and easy way to separate a class into multiple files.
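For reference, a hedged sketch of how this would be used from the outside, assuming the package resolves its internal imports as intended:

from _DataStore.DataStore import DataStore

store = DataStore()
print(store.small_method())  # 1, defined in DataStore.py
print(store.big_method())    # 2, inherited from the Mixin
print(store.huge_method())   # 3, also from the Mixin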
Here is an implementation of Martijn Pieters's comment to use subclasses:
File main.py
from separate import BaseClass

class MainClass(BaseClass):
    def long_func_1(self, a, b):
        if self.global_var_1:
            ...
        self.func_2(z)
        ...
        return ...

    # Lots of other similar functions that use info from BaseClass
File separate.py
class BaseClass(object):
    # You almost always want to initialize instance variables
    # in the `__init__` method.
    def __init__(self):
        self.global_var_1 = ...
        self.global_var_2 = ...

    def func_1(self, x, y):
        ...

    def func_2(self, z):
        ...

    # tons of similar functions, and then the ones I moved out:
    #
    # Why are there "tons" of _similar_ functions?
    # Remember that functions can be defined to take a
    # variable number of/optional arguments, lists/tuples
    # as arguments, dicts as arguments, etc.
from main import MainClass
m = MainClass()
m.func_1(1, 2)
....

How do you decide which level a function should be at in python?

I have a file called file_parsers.py and it contains the following class:
class FileParser():
    def __init__(self, file_text):
        self.file_text = file_text

    def do_something(self):
        my_value = func_with_no_state()
I'm not sure what questions to ask when deciding whether func_with_no_state() should be inside the class or outside of the class as a file-level function.
Also, is it easier to stub this function when it is at a file-level or inside the class?
So... Does any other class use func_with_no_state? If not, it should be hidden within FileParser. If something else does use it, you have a bigger question. If OtherClass uses func_with_no_state pretty frequently (on par with FileParser) then it would be a good idea to keep func_with_no_state outside so that both classes can use it. But if FileParser is by far the main user, then OtherClass could just pull the function from FileParser's definition.
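As for stubbing, a module-level function is straightforward to patch in a test. A minimal sketch using unittest.mock (the module name file_parsers comes from the question; the stub value is made up):

from unittest.mock import patch

from file_parsers import FileParser

with patch("file_parsers.func_with_no_state", return_value="stubbed"):
    parser = FileParser("some text")
    parser.do_something()  # do_something now sees the stubbed value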

Using a metaclass to substitute a class definition?

Python 3.6
I'm trying to modify the behavior of a third party library.
I don't want to directly change the source code.
Considering this code below:
class UselessObject(object):
    pass

class PretendClassDef(object):
    """
    A class to highlight my problem
    """
    def do_something(self):
        # A lot of code here
        result = UselessObject()
        return result
I'd like to substitute my own class for UselessObject
I'd like to know if using a metaclass in my module to intercept the creation of UselessObject is a valid idea?
EDIT
This answer posted by Ashwini Chaudhary on the same question may be of use to others, as well as the answer below.
P.S. I also discovered that module-level __metaclass__ doesn't work in Python 3. So my initial question of it 'being a valid idea' is False.
FWIW, here's some code that illustrates Rawing's idea.
class UselessObject(object):
    def __repr__(self):
        return "I'm useless"

class PretendClassDef(object):
    def do_something(self):
        return UselessObject()

# -------

class CoolObject(object):
    def __repr__(self):
        return "I'm cool"

UselessObject = CoolObject

p = PretendClassDef()
print(p.do_something())
output
I'm cool
We can even use this technique if CoolObject needs to inherit UselessObject. If we change the definition of CoolObject to:
class CoolObject(UselessObject):
    def __repr__(self):
        s = super().__repr__()
        return "I'm cool, but my parent says " + s
we get this output:
I'm cool, but my parent says I'm useless
This works because the name UselessObject has its old definition when the CoolObject class definition is executed.
This is not a job for metaclasses.
Rather, Python allows you to do this through a technique called "monkeypatching", in which you substitute one object for another at run time.
In this case, you'd be changing the thirdyparty.UselessObject for your.CoolObject before calling thirdyparty.PretendClassDef.do_something
The way to do that is a simple assignment.
So, supposing the example snippet you gave in the question is the thirdyparty module, your code would look like:
import thirdyparty

class CoolObject:
    ...  # Your class definition here

thirdyparty.UselessObject = CoolObject
The thing you have to take care of is to change the name UselessObject as it is actually used in your target module.
If, for example, your PretendClassDef and UselessObject are defined in different modules, you have to proceed one way if UselessObject is imported with from .useless import UselessObject (in this case the example above is fine), and another way if the module does import .useless and later uses it as useless.UselessObject - in this second case, you have to patch it on the useless module.
Also, Python's unittest.mock has a nice patch callable that can properly perform a monkeypatching and undo it, if for some reason you want the modification to be valid in a limited scope, like inside a function of yours, or inside a with block. That might be the case if you don't want to change the behavior of the thirdyparty module in other sections of your program.
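A minimal sketch of that scoped variant, reusing the module and class names from the example above:

from unittest.mock import patch

import thirdyparty

with patch("thirdyparty.UselessObject", CoolObject):
    print(thirdyparty.PretendClassDef().do_something())  # built from CoolObject
# outside the with block, the original UselessObject is back in place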
As for metaclasses, they would only be of any use if you needed to change the metaclass of a class you'd be replacing in this way - and then they could only help if you'd like to insert behavior in classes that inherit from UselessObject. In that case it would be used to create the local CoolObject, and you'd still proceed as above, but taking care that you perform the monkeypatching before Python runs the class body of any of the derived classes of UselessObject, and taking extreme care when doing any imports from the thirdparty library (that would be tricky if these subclasses were defined in the same file).
This is just building on PM 2Ring's and jsbueno's answers with more context:
If you happen to be creating a library for others to use as a third-party library (rather than you using the third-party library), and if you need CoolObject to inherit UselessObject to avoid repetition, the following may be useful to avoid an infinite recursion error that you might get in some circumstances:
module1.py
class Parent:
    def __init__(self):
        print("I'm the parent.")

class Actor:
    def __init__(self, parent_class=None):
        if parent_class is not None:  # in case you don't want it to actually literally be useless 100% of the time
            global Parent
            Parent = parent_class
        Parent()
module2.py
from module1 import *

class Child(Parent):
    def __init__(self):
        print("I'm the child.")

class LeadActor(Actor):  # There's not necessarily a need to subclass Actor, but in the situation I'm thinking of, it seems it would be a common thing.
    def __init__(self):
        Actor.__init__(self, parent_class=Child)

a = Actor(parent_class=Child)  # prints "I'm the child." instead of "I'm the parent."
l = LeadActor()                # prints "I'm the child." instead of "I'm the parent."
Just be careful that the user knows not to set a different value for parent_class with different subclasses of Actor. I mean, if you make multiple kinds of Actors, you'll only want to set parent_class once, unless you want it to change for all of them.
