What's the best style, class method or global function? [closed] - python

It's often the case that I write a class along with helper functions that are intimately connected to that class. For my current project, it's a Window class wrapping some win32api calls, along with functions to, say, find windows. Should those helper functions be globals in the given module, or should they be class methods of the Window class? That is, should I have, in my module:
class Window(object):
    def __init__(self, handle):
        self.handle = handle
        ...

    ...

    @classmethod
    def find_windows(cls, params):
        handles = ...
        return map(cls, handles)
with the usage being:
from window import Window
windows = Window.find_windows("Specialty")
or should I do:
class Window(object):
    def __init__(self, handle):
        self.handle = handle
        ...

    ...

def find_windows(params):
    handles = ...
    return map(Window, handles)
with the usage being:
from window import Window, find_windows
windows = find_windows("Speciality")
Put more succinctly: should the grouping be at the class-level (e.g. they would be static methods in Java) or at the module level?

The first approach has the advantage that if you subclass Window you can override your find_windows method (unlike static methods in Java). However, this is only useful if overriding would eventually make sense; otherwise I think it looks nicer as a plain function.
Edit: If you have multiple ways of finding Window objects, it would make sense to have an additional class called WindowFinder or WindowManager which encapsulates the query/finding logic.
This is a pattern used in Django: if your Window class is, let's say, a db model, then Window.objects points to a WindowManager. The window manager has methods for building SQL queries.
Then, you can do things like:
Window.objects.all()
or
Window.objects.filter(name="Speciality")
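A minimal sketch of that manager idea outside Django (WindowManager, its methods, and the placeholder handle list are illustrative assumptions, not real win32api calls):
class Window(object):
    def __init__(self, handle):
        self.handle = handle

class WindowManager(object):
    """Encapsulates query/finding logic, analogous to a Django manager."""

    def _enumerate_handles(self):
        # Placeholder: real code would enumerate handles via win32api.
        return [1, 2, 3]

    def all(self):
        return [Window(h) for h in self._enumerate_handles()]

    def filter(self, name):
        # Placeholder filter; real code would match on the window title.
        return [w for w in self.all() if str(w.handle) == name]

# Attach the manager as a class attribute, mirroring Window.objects in Django.
Window.objects = WindowManager()
windows = Window.objects.all()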

If find_windows() doesn't need access to or knowledge of the inner workings of your Window class, I would just make it a global function. There's little to be gained by increasing the dependencies between separate pieces of code, especially when it's basically just a question of where the source is located.

If I understand correctly, your find_windows function creates a list of Window instances from a list of handles.
It behaves like a constructor, so I would make it a function rather than a classmethod of the Window class. As I mentioned in a comment, it feels more natural that way, but it's just a hunch.
EDIT
@Ioan Alexandru Cucu's answer made me ponder the case where you subclass Window as, say, a SubWindow.
If find_windows (or, as suggested, create_windows) is a classmethod, it will return a list of SubWindow instances, whereas it would only return Window instances if it were an independent function as I suggested.
This can be considered as an interesting feature, and it would then make sense to keep find_windows as a classmethod. I would still put some kind of comment explaining the rationale in the docstring or elsewhere.
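A small sketch of that difference (the handle list is a stand-in for the real win32 lookup):
class Window(object):
    def __init__(self, handle):
        self.handle = handle

    @classmethod
    def find_windows(cls, params):
        handles = [1, 2, 3]              # stand-in for the real lookup
        return [cls(h) for h in handles]

class SubWindow(Window):
    pass

def find_windows(params):
    handles = [1, 2, 3]
    return [Window(h) for h in handles]  # always plain Window instances

print(type(SubWindow.find_windows("x")[0]))  # SubWindow, thanks to the classmethod
print(type(find_windows("x")[0]))            # always Window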
tl;dr: that depends.

Related

Creating subclasses vs. passing multiple arguments to specify variants of a function in Python [closed]

I previously asked about combining multiple similar functions (~10 variants) with lots of repeated or similar code into something more concise and OO. Following up on that, I've created a central class whose methods have enough flexibility to permute into most of the functions I need given the right arguments:
class ThingDoer:
    def __init__(self, settings):
        # sets up some attrs
        ...

    def do_first_thing(self, args):
        identical_code_1
        similar_code_1(args)

    def do_other_thing(self, otherargs):
        similar_code_2(otherargs)
        identical_code_2

    def generate_output(self):
        identical_code
        return some_output
The actual thing is rather longer of course, with more different sets of args and otherargs etc.
I then use this class in a relatively clean function to get my output:
def do_things(name, settings, args, otherargs):
    name = ThingDoer(settings)
    name.do_first_thing(args)
    name.do_other_thing(otherargs)
    return name.generate_output()
My question is how to handle the many variants. Two obvious options I see are 1) have a dictionary of options specified differently for each variant, which gets passed to the single do_things function, or 2) have different subclasses of ThingDoer for each variant which process some of the different args and otherargs ahead of time and have do_things use the desired subclass specified as an argument.
Option 1:
# Set up dictionaries of parameter settings
option_1 = {args: args1, otherargs: otherargs1 ...}
option_2 = {args: args2, otherargs: otherargs2 ...}
...
# And then call the single function passing appropriate dictionary
do_things(name, settings, **option)
Option 2:
# Set up subclasses for each variant
class ThingDoer1(ThingDoer):
def __init__(self):
super().__init__()
self.do_first_thing(args1)
self.do_other_thing(otherargs1)
class ThingDoer2(ThingDoer):
def __init__(self):
super().__init__()
self.do_first_thing(args2)
self.do_other_thing(otherargs2)
...
# And then call the single function passing specific subclass to use (function will be modified slightly from above)
do_things(name, subclass, settings)
There are other options too of course.
Which of these (or something else entirely) would be the best way to handle the situation, and why?
The questions you have to ask yourself are:
Who will be using this code?
Who will be maintaining the divergent code for different variants?
We went through a similar process for dealing with multiple different devices. The programmer maintaining the variant-specific code was also the primary user of this library.
Because of work priorities we did not flesh out device-specific code until it was needed.
We settled on using a class hierarchy.
We built a superclass modeled on the first device variant we wrote code for to address a particular piece of functionality, and wrapped this in automated testing.
When we extended functionality to a new device that did not pass testing with existing code, we created overriding, modified methods in the new device's subclass to address failures.
If the functionality was new, then we added it to the base class for whatever device model we were working on at the time and retrofitted changes to subclasses for old devices if testing failed and they needed the new functionality.
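As a rough illustration of that workflow (the device classes, commands, and scaling below are made up):
class DeviceBase(object):
    """Modeled on the first device variant; covered by automated tests."""

    def read_measurement(self):
        return self._query("MEASURE?")

    def _query(self, command):
        # Placeholder for the real transport layer.
        return 0.0

class DeviceModelB(DeviceBase):
    """Added when a new device failed the existing tests."""

    def read_measurement(self):
        # Override only the behavior that diverges from the base device.
        raw = self._query("MEAS:VOLT?")
        return raw * 0.001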
Generally speaking, it depends on what level of customization you want to expose to those who will consume your API. Suppose we're writing some code to send an HTTP request (just an example; obviously there are plenty of libraries for this).
If callers only care about easily configurable values, using keyword arguments with sane defaults is probably the way to go. I.e. your code might end up looking like:
from typing import Dict

def do_request(url: str,
               headers: Dict[str, str] = None,
               timeout: float = 10.0,
               verify_ssl: bool = False,
               raise_on_status_error: bool = False):
    ...

do_request('https://www.google.com')
If you want to expose more customized behavior you might benefit from defining a base class with several methods that can be overridden (or not) to provide more specific behavior. So something like this:
class HTTPClient(object):
    def do_request(self, url: str, *args, **kwargs):
        self.before_request(url, *args, **kwargs)
        result = self.send(url, *args, **kwargs)
        self.after_request(result)
        return result

    def send(self, url: str, *args, **kwargs):
        # Same stuff as the above do_request function.
        ...

    def before_request(self, *args, **kwargs):
        # Subclasses can override this to do things before making a request.
        pass

    def after_request(self, response):
        # Subclasses can override this to do things to the result of a request.
        pass

client = HTTPClient()
response = client.do_request('https://www.google.com')
Subclasses can implement before_request and after_request if they want, but by default, the behavior would be the same as the above functional equivalent.
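For instance, a subclass that customizes only the hooks might look like this (LoggingHTTPClient is a hypothetical name, reusing the HTTPClient sketched above):
class LoggingHTTPClient(HTTPClient):
    def before_request(self, url, *args, **kwargs):
        print('about to request %s' % url)

    def after_request(self, response):
        print('got response: %r' % response)

client = LoggingHTTPClient()
response = client.do_request('https://www.google.com')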
Hopefully this helps! Sorry in advance if this isn't really relevant for your use case.

Python 2.7 - Is this a valid use of __metaclass__? [closed]

The problem is as follows. There's a Base class that will be extended by several classes, which may themselves also be extended.
All these classes need to initialize certain class variables. By the nature of the problem, the initialization should be incremental and indirect. The "user" (the programmer writing Base extensions) may want to "add" certain "config" variables, which may or may not have a (Boolean) property "xdim", and provide default values for them. The way this will be stored in class variables is implementation-dependent. The user should be able to say "add these config vars, with these defaults, and this xdim" without concerning herself with such details.
With that in mind, I define helper methods such as:
class Base(object):
    @classmethod
    def addConfig(cls, xdim, **cfgvars):
        """Adds default config vars with identical xdim."""
        for k, v in cfgvars.items():
            cls._configDefaults[k] = v
        if xdim:
            cls._configXDims.update(cfgvars.keys())
(There are several methods like addConfig.)
The initialization must have a beginning and an end, so:
import inspect

class Base(object):
    @classmethod
    def initClassBegin(cls):
        if cls.__name__ == 'Base':
            cls._configDefaults = {}
            cls._configXDims = set()
            ...
        else:
            base = inspect.getmro(cls)[1]
            cls._configDefaults = base._configDefaults.copy()
            cls._configXDims = base._configXDims.copy()
            ...

    @classmethod
    def initClassEnd(cls):
        ...
        if 'methodX' in vars(cls):
            ...
There are two annoying problems here. For one thing, none of these methods can be called inside a class body, as the class does not exist yet. Also, the initialization must be properly begun and ended (forgetting to begin it will simply raise an exception; forgetting to end it will have unpredictable results, since some of the extended class variables may shine through). Furthermore, the user must begin and end the initialization even if there is nothing to initialize (because initClassEnd performs some initializations based on the existence of certain methods in the derived class).
The initialization of a derived class will look like this:
class BaseX(Base):
    ...

BaseX.initClassBegin()
BaseX.addConfig(xdim=True, foo=1, bar=2)
BaseX.addConfig(xdim=False, baz=3)
...
BaseX.initClassEnd()
I find this kind of ugly. So I was reading about metaclasses and I realized they can solve this kind of problem:
class BaseMeta(type):
    def __new__(meta, clsname, clsbases, clsdict):
        cls = type.__new__(meta, clsname, clsbases, clsdict)
        cls.initClassBegin()
        if 'initClass' in clsdict:
            cls.initClass()
        cls.initClassEnd()
        return cls

class Base(object):
    __metaclass__ = BaseMeta
    ...
Now I'm asking the user to provide an optional class method initClass and to call addConfig and the other initialization class methods inside it:
class BaseX(Base):
    ...

    @classmethod
    def initClass(cls):
        cls.addConfig(xdim=True, foo=1, bar=2)
        cls.addConfig(xdim=False, baz=3)
        ...
The user doesn't even need to know that initClassBegin/End exist.
This works fine in some simple test cases I wrote, but I'm new to Python (6 months or so) and I've seen warnings about metaclasses being dark arts to be avoided. They don't seem so mysterious to me, but I thought I'd ask.
Is this a justifiable use of metaclasses? Is it even correct?
NOTE: The question about correctness was not in my mind originally. What happened is that my first implementation seemed to work, but it was subtly wrong. I caught the mistake on my own. It wasn't a typo but a consequence of not understanding completely how metaclasses work; it got me thinking that there might be other things that I was missing, so I asked, unwisely, "Is it even correct?" I wasn't asking anybody to test my code. I should have said "Do you see a problem with this approach?"
BTW, the error was that initially I did not define a proper BaseMeta class, but just a function:
def baseMeta(clsname, clsbases, clsdict):
    cls = type.__new__(type, clsname, clsbases, clsdict)
    ...
The problem will not show in the initialization of Base; that works fine. But a class derived from Base will fail, because that class will take its metaclass from the class of Base, which is type, not BaseMeta.
Anyway, my main concern was (and is) about the appropriateness of the metaclass solution.
NOTE: The question was placed "on hold", apparently because some members did not understand what I was asking. It seems to me it was clear enough.
But I'll reword my questions:
Is this a justifiable use of metaclasses?
Is my implementation of BaseMeta correct? (No, I'm not asking "Does it work?"; it does. I'm asking "Is it in accordance with the usual practices?")
xyres had no trouble with the questions. He answered them respectively 'yes' and 'no', and contributed helpful comments and advice. I accepted his answer (a few hours after he posted it).
Are we happy now?
Generally, metaclasses are used to perform the following things:
To manipulate a class before it is created. Done by overriding the __new__ method.
To manipulate a class after it is created. Done by overriding the __init__ method.
To manipulate a class every time it is called. Done by overriding the __call__ method.
When I write manipulate I mean setting some attributes or methods on a class, or calling some methods when it's created, etc.
In your question you have mentioned that you need to call initClassBegin/End whenever a class inheriting Base is created. This sounds like a perfect case for using metaclasses.
Although, there are a few places where I'd like to correct you:
Override __init__ instead of __new__.
Inside __new__ you are calling type.__new__(...), which returns a class. That means you are actually manipulating the class after it is created, not before, so the better place to do this is __init__ (a sketch applying both corrections follows below).
Make initClassBegin/End private.
Since you mentioned that you're new to Python, I thought I should point this out. You said the user/programmer doesn't need to know about the initClassBegin and initClassEnd methods, so why not make them private? Just prefix an underscore and you're done: _initClassBegin and _initClassEnd are now private.
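Putting both corrections together, a rough sketch (assuming the same begin/end logic as in the question, abbreviated here with pass; Python 2 syntax to match the question):
class BaseMeta(type):
    def __init__(cls, clsname, clsbases, clsdict):
        super(BaseMeta, cls).__init__(clsname, clsbases, clsdict)
        cls._initClassBegin()          # now private
        if 'initClass' in clsdict:
            cls.initClass()
        cls._initClassEnd()            # now private

class Base(object):
    __metaclass__ = BaseMeta

    @classmethod
    def _initClassBegin(cls):
        # same begin logic as in the question, renamed with a leading underscore
        pass

    @classmethod
    def _initClassEnd(cls):
        # same end logic as in the question
        pass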
I found this blog post very helpful: Python metaclasses by example. The author has mentioned some use cases where you'd want to use metaclasses.

How to make objects accessible from anywhere within its class [closed]

As a general strategy, is there a way to add objects to __init__ without initializing them there, while the code is being executed? For example, instead of:
class Test:
    def __init__(self):
        self.a = False
        self.b = False

    def set_values(self, in_boolean):
        if in_boolean:
            self.a = True
        else:
            self.b = True
Do this:
class Test:
    def __init__(self):
        self.a = False

    def set_values(self, in_boolean):
        if in_boolean:
            self.a = True
        else:
            self.b = True
            # This object is only created if this condition is
            # met. Otherwise, this is not created in __init__.
Does one need to initialize any and all objects in __init__ if they want to save an object there?
If this is not possible, what is an alternative method for creating global objects that are created within a class method?
I'll explain a scenario when I would want to use this so as to better illustrate the question:
Say I am executing a method within a class. Depending on certain conditions, an object may or may not be generated within that method, and I would like to be able to access it from all other methods within the class. Because the object may or may not be created, I don't wish to initialize it in __init__.
To sum it up: If I want to 'save' an object on my class, do I need to initialize it in __init__?
EDIT
Ok, so my problem was that I believed one could only create "self." objects in __init__. As I understand it now, one can create a "self." object anywhere in the class, not just in __init__. That would make said object accessible from anywhere else in the class, which is ultimately what I am looking for here. Maybe the question should have been:
How do I make objects accessible from anywhere in their class?
In Python, you don't need to 'declare' a variable before you use it at all. If you try to access an attribute that doesn't exist, you can just wrap the access in a try...except AttributeError and call it a day.
The __init__ method on a class is just like any other method in Python; it doesn't have access to any functionality that the others don't. The only difference is that it has the benefit of being called automatically whenever you instantiate your class, saving you the trouble of having to write a constructor-like method every time and call it manually.
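A minimal sketch of both points, setting an attribute outside __init__ and guarding access with try...except AttributeError (reusing the question's Test class, minus the __init__):
class Test:
    def set_values(self, in_boolean):
        if in_boolean:
            self.a = True
        else:
            self.b = True   # created only when this branch runs

t = Test()
t.set_values(False)
print(t.b)                  # True

try:
    print(t.a)              # never assigned on this instance
except AttributeError:
    print('a was never set')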

Common practice of __new__ constructor? [closed]

I know (?) about the theory behind the __new__ constructor in Python, but what I'm asking about is common practice: for what purposes is this constructor really (!) used?
I've read about initializing immutable objects (the logic is moved from __init__ to __new__); anything else? The factory pattern?
Once again, please note the difference:
for what task __new__ can be used -- I am not interested
for what tasks __new__ is used -- I am :-)
I don't write anything in Python, my knowledge is from reading, not from experience.
Where you can actually answer the question: Common practice of the __new__ constructor?
The point of __new__ is to create the empty object instance that __init__ then initializes. By reimplementing __new__ you get full control of the instance you create, but you stop short of actually using the __init__ method to do any further processing. I can give you two cases where this is useful: automatic creation of attributes, and deserialization from disk of a class with a smart constructor. These are not the only ways to solve these two problems; metaclasses are another, more flexible way, but as with any tool, there are different degrees of complexity you may want to take on.
Automatic creation of attributes
Suppose you want a class that has a given set of properties. You can control how these properties are initialized with code like this:
class Foo(object):
    properties = []

    def __new__(cls, *args):
        instance = object.__new__(cls, *args)
        for p in cls.properties:
            setattr(instance, p, 0)
        return instance

class MyFoo(Foo):
    properties = ['bar', 'baz']

    def __init__(self):
        pass

f = MyFoo()
print dir(f)
The properties you want are directly initialized to zero. You can do a lot of smart tricks, like building the properties list dynamically. All objects instantiated will have those attributes. A more complex case of this pattern is present in Django models, where you declare the fields and get a lot of automatic stuff for free, thanks to __new__'s big brother, the metaclass.
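Just to illustrate that alternative, a minimal metaclass sketch of the same idea (the names are made up, and note it sets class-level defaults rather than per-instance attributes; Python 2 syntax to match the answer):
class PropertiesMeta(type):
    def __init__(cls, name, bases, clsdict):
        super(PropertiesMeta, cls).__init__(name, bases, clsdict)
        # Give every declared property a class-level default of zero.
        for p in clsdict.get('properties', []):
            setattr(cls, p, 0)

class MyFoo(object):
    __metaclass__ = PropertiesMeta
    properties = ['bar', 'baz']

f = MyFoo()
print f.bar, f.baz   # 0 0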
Deserialization from disk
Suppose you have a class with a given constructor that fills the fields of the class from an object, such as a process:
class ProcessWrapper(object):
    def __init__(self, process):
        self._process_pid = process.pid()

    def processPid(self):
        return self._process_pid
If you now serialize this information to disk and want to recover it, you can't initialize it via the constructor, because you no longer have the process object. So you write a deserialization function like this, effectively bypassing the __init__ method you can't run:
def deserializeProcessWrapperFromFile(filename):
    # Get process_pid from file
    process = ProcessWrapper.__new__(ProcessWrapper)
    process._process_pid = process_pid
    return process
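An equivalent way to package the same trick is an alternate constructor on the class itself (from_pid is a hypothetical name, not part of the original code):
class ProcessWrapper(object):
    def __init__(self, process):
        self._process_pid = process.pid()

    @classmethod
    def from_pid(cls, process_pid):
        # Bypass __init__, which expects a live process object.
        instance = cls.__new__(cls)
        instance._process_pid = process_pid
        return instance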

Inheritance best practice : *args, **kwargs or explicitly specifying parameters [closed]

I often find myself overriding methods of a parent class, and can never decide if I should explicitly list the given parameters or just use a blanket *args, **kwargs construct. Is one version better than the other? Is there a best practice? What (dis-)advantages am I missing?
class Parent(object):
    def save(self, commit=True):
        # ...
        ...

class Explicit(Parent):
    def save(self, commit=True):
        super(Explicit, self).save(commit=commit)
        # more logic

class Blanket(Parent):
    def save(self, *args, **kwargs):
        super(Blanket, self).save(*args, **kwargs)
        # more logic
Perceived benefits of explicit variant
More explicit (Zen of Python)
easier to grasp
function parameters easily accessed
Perceived benefits of blanket variant
more DRY
parent class is easily interchangeable
change of default values in parent method is propagated without touching other code
Liskov Substitution Principle
Generally you don't want your method signature to vary in derived types. This can cause problems if you want to swap the use of derived types. This is often referred to as the Liskov Substitution Principle.
Benefits of Explicit Signatures
At the same time I don't think it's correct for all your methods to have a signature of *args, **kwargs. Explicit signatures:
help to document the method through good argument names
help to document the method by specifying which args are required and which have default values
provide implicit validation (missing required args throw obvious exceptions)
Variable Length Arguments and Coupling
Do not mistake variable length arguments for good coupling practice. There should be a certain amount of cohesion between a parent class and derived classes otherwise they wouldn't be related to each other. It is normal for related code to result in coupling that reflects the level of cohesion.
Places To Use Variable Length Arguments
Use of variable length arguments shouldn't be your first option. It should be used when you have a good reason like:
Defining a function wrapper (i.e. a decorator); see the sketch after this list.
Defining a parametric polymorphic function.
When the arguments you can take really are completely variable (e.g. a generalized DB connection function). DB connection functions usually take a connection string in many different forms, both in single arg form, and in multi-arg form. There are also different sets of options for different databases.
...
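For the decorator case mentioned above, a minimal sketch of why *args, **kwargs is the natural signature (log_calls and save are made-up names):
import functools

def log_calls(func):
    """Wrap any function without knowing or repeating its signature."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print('calling %s' % func.__name__)
        return func(*args, **kwargs)
    return wrapper

@log_calls
def save(commit=True):
    return commit

save(commit=False)   # prints "calling save"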
Are You Doing Something Wrong?
If you find you are often creating methods which take many arguments or derived methods with different signatures you may have a bigger issue in how you're organizing your code.
My choice would be:
class Child(Parent):
    def save(self, commit=True, **kwargs):
        super(Child, self).save(commit, **kwargs)
        # more logic
It avoids accessing the commit argument from *args and **kwargs, and it keeps things safe if the signature of Parent.save changes (for example, adding a new default argument).
Update: In this case, having the *args can cause trouble if a new positional argument is added to the parent. I would keep only **kwargs and manage only new arguments with default values. That avoids errors propagating.
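A minimal sketch of that update, keeping only **kwargs in the child (Parent here is a stand-in with the same save signature as above):
class Parent(object):
    def save(self, commit=True):
        return commit

class Child(Parent):
    def save(self, **kwargs):
        # Only keyword arguments are forwarded, so new defaults added to
        # Parent.save propagate here without touching this code.
        result = super(Child, self).save(**kwargs)
        # more logic
        return result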
If you are certain that Child will keep the signature, surely the explicit approach is preferable, but when Child will change the signature I personally prefer to use both approaches:
class Parent(object):
    def do_stuff(self, a, b):
        # some logic
        ...

class Child(Parent):
    def do_stuff(self, c, *args, **kwargs):
        super(Child, self).do_stuff(*args, **kwargs)
        # some logic with c
This way, changes in the signature are quite readable in Child, while the original signature is quite readable in Parent.
In my opinion this is also the better way when you have multiple inheritance, because calling super a few times is quite disgusting when you don't have *args and **kwargs.
For what it's worth, this is also the preferred approach in quite a few Python libraries and frameworks (Django, Tornado, Requests, Markdown, to name a few). Although one should not base one's choices on such things, I'm merely pointing out that this approach is quite widespread.
Not really an answer but more a side note: If you really, really want to make sure the default values for the parent class are propagated to the child classes you can do something like:
class Parent(object):
    default_save_commit = True

    def save(self, commit=default_save_commit):
        # ...
        ...

class Derived(Parent):
    def save(self, commit=Parent.default_save_commit):
        super(Derived, self).save(commit=commit)
However I have to admit this looks quite ugly and I would only use it if I feel I really need it.
I prefer explicit arguments because autocompletion lets you see the method signature of the function while making the call.
In addition to the other answers:
Having variable arguments may "decouple" the parent from the child, but it creates a coupling between the created object and the parent, which I think is worse, because now you have created a "long distance" coupling (more difficult to spot and more difficult to maintain, because you may create several such objects in your application).
If you're looking for decoupling, take a look at composition over inheritance
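As a small illustration of the composition alternative (Saver and Document are made-up names):
class Saver(object):
    def save(self, commit=True):
        return commit

class Document(object):
    """Delegates saving to an injected collaborator instead of inheriting it."""

    def __init__(self, saver):
        self._saver = saver

    def save(self, commit=True):
        return self._saver.save(commit=commit)

doc = Document(Saver())
doc.save(commit=False)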
