I often use classmethods instead of the default constructor in Python, for example:
class Data(object):
    def __init__(self, x, y, z):
        self.x = x  # etc.

    @classmethod
    def from_xml(cls, xml_file):
        x, y, z = import_something_from_xml(xml_file)
        return cls(x, y, z)
This approach works well, but since I often have large classmethod constructors, I want to split them into smaller functions. My problem is that these smaller functions then show up in the class namespace. Is there any way to avoid this?
You can mark the smaller helper functions as private:
    @classmethod
    def __import_something_from_xml(cls, data):
        # logic
        return a, b, c
and you would run:
    @classmethod
    def from_xml(cls, xml_file):
        x, y, z = cls.__import_something_from_xml(xml_file)
        return cls(x, y, z)
Keep in mind this is only a naming convention (the double underscore adds name mangling), and the method can still be accessed from the Data namespace.
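For example (a minimal sketch, using a hypothetical __helper method): the double underscore triggers name mangling, so the helper is hidden under a mangled name rather than made truly private:

```python
class Data:
    @classmethod
    def __helper(cls, value):
        return value * 2

# not reachable under its plain name from outside the class...
print(hasattr(Data, '__helper'))        # False
# ...but still reachable via the mangled name:
print(Data._Data__helper(21))           # 42
```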
Or you can designate a helper class:
class XMLDataHelper:
    @staticmethod
    def import_something_from_xml(data):
        # logic
        return a, b, c
And the code would look like this:
    @classmethod
    def from_xml(cls, xml_file):
        x, y, z = XMLDataHelper.import_something_from_xml(xml_file)
        return cls(x, y, z)
from abc import abstractmethod

class parent:
    @abstractmethod
    def process(self, x, y, z):
        pass

    def post_process(self):
        # x, y, z obtained elsewhere
        self.process(x, y, z)

class child1(parent):
    def process(self, x, y, z):
        # do stuff with x and y, and ignore z
        ...

class child2(parent):
    def process(self, x, y, z):
        # do stuff with x, y, z
        ...
This is an existing design in the codebase I am working in. I don't like it for readability, because the variable z isn't used by all child classes, and I suspect it was designed this way so that post_process doesn't need to check which child class is calling.
I am thinking of changing it to:
class parent:
    def post_process(self):
        if isinstance(self, child1):
            self.process(x, y)
        elif isinstance(self, child2):
            self.process(x, y, z)

class child1(parent):
    def process(self, x, y):
        # do stuff with x and y
        ...

class child2(parent):
    def process(self, x, y, z):
        # do stuff with x, y, z
        ...
This does the case work of checking which child class is calling post_process, but I feel it is better for readability and avoids having to pass in unnecessary arguments.
Are there other ways besides this to resolve this issue?
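One other option I have seen suggested (a sketch with made-up bodies, not code from our codebase): keep one uniform call in post_process and let each child accept and quietly discard what it does not need via **kwargs:

```python
class parent:
    def post_process(self, x, y, z):
        # every child receives the same, uniform call
        return self.process(x=x, y=y, z=z)

class child1(parent):
    def process(self, x, y, **_unused):  # z lands in _unused and is ignored
        return x + y

class child2(parent):
    def process(self, x, y, z, **_unused):
        return x + y + z

print(child1().post_process(1, 2, 3))  # 3
print(child2().post_process(1, 2, 3))  # 6
```

This keeps post_process free of isinstance checks while still letting signatures document what each child actually uses.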
I have two classes that share a lot of common stuff except one function f(x).
class A(object):
    def __init__(self):
        ...  # some stuff

    def g(self):
        ...  # some other stuff

    def f(self, x):
        # many lines of computations
        q = ...
        y = ...
        return y

class B(A):
    def f(self, x):
        # same many lines as in A
        q = ...
        y = ...
        # a few extra lines
        z = ...  # z needs both y and q
        return z
In this case, do I have to define f(x) from scratch in class B? Is there some trick to re-use the code in A.f(x)?
One way I can think of is to make q an instance attribute self.q, then do the following:
def f(self, x):
    y = A.f(self, x)
    # a few extra lines
    z = ...  # using y and self.q
    return z
Or maybe let A.f(x) return both q and y, then call A.f(self, x) in B's definition of f(x).
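A sketch of that second idea, with illustrative computations standing in for the real ones:

```python
class A:
    def f(self, x):
        # many lines of computation (illustrative values here)
        q = x * 10
        y = x + 1
        return q, y

class B(A):
    def f(self, x):
        q, y = A.f(self, x)   # reuse A's computation
        # a few extra lines
        z = y + q             # z needs both y and q
        return z

print(A().f(2))  # (20, 3)
print(B().f(2))  # 23
```

The trade-off is that changing A.f's return value to a tuple also affects every existing caller of A.f.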
Are these approaches the standard way to do it? Is there something nicer?
Let's assume you want to organize your code around classes. If that's the case, then I would strongly recommend using super to reference the super class:
class MyA(object):
    def f(self, x):
        print('MyA')
        return x

class MyB(MyA):
    def f(self, x):
        print('MyB')
        print(super(MyB, self).f(x))
This approach allows you to stick with classes and is an idiomatic way of referencing inherited classes.
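Applied to the question's A/B pair, reusing A.f via super looks like this (with the intermediate stashed on the instance, and illustrative computations):

```python
class A:
    def f(self, x):
        self.q = x * 10   # stash the intermediate as instance state
        y = x + 1
        return y

class B(A):
    def f(self, x):
        y = super().f(x)  # reuse A's computation
        z = y + self.q    # z needs both y and q
        return z

print(B().f(2))  # 23
```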
If you don't need to organize your code this way or otherwise have reasons to break things out into functions that are usable by other parts of your code which don't care about these classes, then you can move your f logic into a function.
Here's an example:
def f(x):
    return x

class MyA(object):
    def f(self, x):
        return f(x)

class MyB(MyA):
    def f(self, x):
        y = f(x)
        ...
        return y
Say I have a class definition which takes some arguments and creates additional data with it, all within the __init__ method:
class Foo():
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z
        self.bar = self.generate_bar(x, y, z)

    def generate_bar(self, x, y, z):
        return x + y + z
I only want to run the generate_bar() method once, when an instance of the class is created, so it wouldn't make sense for that method to be callable on the instance. Is there a sleeker way to ensure that a method is only available to __init__, or do I just have to assume anyone who looks at my code will understand that there's never a situation in which it would be appropriate to call generate_bar() on an instance of the class?
If you are not using any instance state, just make it a separate function:
def _generate_bar(x, y, z):
    return x + y + z

class Foo():
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z
        self.bar = _generate_bar(x, y, z)
The leading underscore, by convention, signals it is an internal function not to be used by external consumers of your module.
You could nest the function inside the __init__ but this doesn't really help with readability:
class Foo():
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z

        def generate_bar():
            return x + y + z

        self.bar = generate_bar()
Here generate_bar() doesn't even need arguments, it could access x, y and z from the enclosing scope.
For "hidden" functions, it is customary to use a single underscore to signify your intent:
def _generate_bar(self, x, y, z): ...
These are still accessible, but are not visible via tab completion.
See this SO explanation:
class foo:
    def __init__(self):
        self.response = self._bar()

    def _bar(self):
        return "bar"

>>> f = foo()
>>> f.response
'bar'
You can verify for yourself that the function _bar is not offered by tab completion on the object.
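Both halves of that claim can be checked directly (a self-contained sketch):

```python
class foo:
    def __init__(self):
        self.response = self._bar()

    def _bar(self):
        return "bar"

f = foo()
print(f._bar())  # prints bar -- still reachable when asked for explicitly
# introspection that honors the convention skips underscore names:
print([name for name in dir(f) if not name.startswith('_')])  # ['response']
```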
NOTE on the question below. I think the 'proper' pythonic idiom is to a) create module functions, such as foo_math below, and then call their specific action against an instance within the class itself. The bottom piece of code reflects that approach.
I want to define a classmethod which takes two arguments and returns a value. I want the same method to be able to be called on a class instance with the instance value pass as one of the arguments. Can I do this without defining two distinct methods as I have done here?
class Foo(object):
    def __init__(self, x):
        self.x = x

    @classmethod
    def foo_math(cls, x, y):
        return x + y

    def math(self, y):
        return Foo.foo_math(self.x, y)
What I would like is:
>>> Foo.math(3, 4)
7
>>> f = Foo()
>>> f.x = 3
>>> f.math(4)
7
Short of subtyping int, here is my conclusion to this question:
def foo_math(x, y):
    return x + y

class Foo(object):
    def __init__(self, x):
        self.x = x

    def foo_math(self, y):
        return foo_math(self.x, y)
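A quick check that this module-function arrangement behaves as desired; note that the bare name inside the method resolves to the module-level function, not the method, so there is no infinite recursion:

```python
def foo_math(x, y):
    return x + y

class Foo(object):
    def __init__(self, x):
        self.x = x

    def foo_math(self, y):
        # this call looks up foo_math in the module globals,
        # not in the class namespace
        return foo_math(self.x, y)

print(foo_math(3, 4))      # 7
print(Foo(3).foo_math(4))  # 7
```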
I don't recommend doing this, but if you really want it, here is one way (thanks to another Stack Overflow answer for the first part):
import functools

class staticorinstancemethod(object):
    def __init__(self, func):
        self.func = func

    def __get__(self, instance, owner):
        return functools.partial(self.func, instance)
Then do something like:
class F(object):
    @staticorinstancemethod
    def math(instOrNone, v1, v2=None):
        return instOrNone.x + v1 if instOrNone else v1 + v2
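Putting the descriptor and the class together, both call styles work (the __init__ here is my addition so the instance call has an x):

```python
import functools

class staticorinstancemethod:
    def __init__(self, func):
        self.func = func

    def __get__(self, instance, owner):
        # bind the instance (or None, when accessed on the class)
        return functools.partial(self.func, instance)

class F:
    def __init__(self, x):
        self.x = x

    @staticorinstancemethod
    def math(instOrNone, v1, v2=None):
        return instOrNone.x + v1 if instOrNone else v1 + v2

print(F.math(3, 4))   # 7 -- called on the class, instOrNone is None
print(F(3).math(4))   # 7 -- called on an instance, uses its x
```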
but maybe you just want to define the __add__ and __radd__ methods...
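For completeness, a sketch of that last suggestion, which covers both f + 4 and 4 + f:

```python
class Foo:
    def __init__(self, x):
        self.x = x

    def __add__(self, other):
        return self.x + other

    __radd__ = __add__  # makes 4 + f work as well

f = Foo(3)
print(f + 4)  # 7
print(4 + f)  # 7
```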
An ordinary instance method can't be called usefully on the class without an instance (there is no self for it to use), so Foo.math(3, 4) won't do what you want. With this in mind, you could restructure the code like this (even then, some issues remain):
# A class method would probably go here somewhere.
class Foo(object):
    def __init__(self, x):
        self.x = x

    def foo_math(self, x, y):
        return x + y

    def math(self, y):
        return self.foo_math(self.x, y)
Then you can do:
>>> f = Foo(3)
>>> f.math(4)
7
I have two classes that I would like to merge into a composite. These two classes will continue to be used standalone and I don't want to modify them.
For various reasons, I want to let my composite class create the objects. I am thinking about something like the code below (it is just an example), but I think it is complex and I don't like it very much. I guess it could be improved by techniques and tricks that I am not aware of.
Please note that the composite is designed to manage a lot of different classes with different constructor signatures.
What would you recommend in order to improve this code?
class Parent:
    def __init__(self, x):
        self.x = x

class A(Parent):
    def __init__(self, x, a="a", b="b", c="c"):
        Parent.__init__(self, x)
        self.a, self.b, self.c = a, b, c

    def do(self):
        print(self.x, self.a, self.b, self.c)

class D(Parent):
    def __init__(self, x, d):
        Parent.__init__(self, x)
        self.d = d

    def do(self):
        print(self.x, self.d)

class Composite(Parent):
    def __init__(self, x, list_of_classes, list_of_args):
        Parent.__init__(self, x)
        self._objs = []
        for i in range(len(list_of_classes)):
            self._objs.append(self._make_object(list_of_classes[i], list_of_args[i]))

    def _make_object(self, the_class, the_args):
        if the_class is A:
            a = the_args[0] if len(the_args) > 0 else "a"
            b = the_args[1] if len(the_args) > 1 else "b"
            c = the_args[2] if len(the_args) > 2 else "c"
            return the_class(self.x, a, b, c)
        if the_class is D:
            return the_class(self.x, the_args[0])

    def do(self):
        for o in self._objs:
            o.do()

compo = Composite("x", [A, D, A], [(), ("hello",), ("A", "B", "C")])
compo.do()
You could shorten it by removing the type-checking _make_object and letting the class constructors take care of the default arguments, e.g.:
class Composite(Parent):
    def __init__(self, x, list_of_classes, list_of_args):
        Parent.__init__(self, x)
        self._objs = [
            the_class(self.x, *the_args)
            for the_class, the_args
            in zip(list_of_classes, list_of_args)
            if issubclass(the_class, Parent)
        ]

    def do(self):
        for o in self._objs:
            o.do()
This would also allow you to use it with new classes without modifying its code.
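To illustrate that last point, a self-contained sketch: a brand-new Parent subclass (the hypothetical class E below) plugs in without touching Composite. Note issubclass replaces the original isinstance check, which I believe is the intent:

```python
class Parent:
    def __init__(self, x):
        self.x = x

class A(Parent):
    def __init__(self, x, a="a", b="b", c="c"):
        Parent.__init__(self, x)
        self.a, self.b, self.c = a, b, c

class Composite(Parent):
    def __init__(self, x, list_of_classes, list_of_args):
        Parent.__init__(self, x)
        self._objs = [
            the_class(self.x, *the_args)
            for the_class, the_args in zip(list_of_classes, list_of_args)
            if issubclass(the_class, Parent)
        ]

# a new class works without any change to Composite:
class E(Parent):
    def __init__(self, x, e):
        Parent.__init__(self, x)
        self.e = e

compo = Composite("x", [A, E], [(), ("extra",)])
print(len(compo._objs))  # 2
print(compo._objs[1].e)  # extra
```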