In my app, I want all my datetime.__str__() to return differently to the default. Is it ok to simply inherit and overwrite the method?
from datetime import datetime

class datetime(datetime):
    def __str__(self):
        return self.strftime('%d-%m-%y %H:%M:%S')
Any advice would be great.
Generally, you will want to name your new class something other than one defined in the builtin modules, but yes, that is how you do it. Please for the sake of your sanity do not create a class definition using the same name as a predefined class.
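For instance, a minimal sketch of that advice (the class name AppDateTime is just an illustrative choice, not anything from the question):

from datetime import datetime

class AppDateTime(datetime):
    def __str__(self):
        return self.strftime('%d-%m-%y %H:%M:%S')

print(AppDateTime.now())  # e.g. 05-02-24 13:45:10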
I just tried the class datetime(datetime) bit, and it does work, at least in the interpreter, but any Python expert will probably laugh or shudder.
It's quite a philosophical question :) Generally in Python we don't like such things.
You now must use your own class and always remember to use it instead of the default datetime (which is hard to maintain if they have the same name). Ruby guys would just monkeypatch datetime, which I consider even worse.
Personally, I would not even inherit it under a different name (that would confuse me too), but write some shortcut function outside any class and use it directly.
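A sketch of that shortcut-function idea, assuming the same format string as in the question (format_dt is just an illustrative name):

from datetime import datetime

def format_dt(dt):
    """Format a datetime the way the app wants to display it."""
    return dt.strftime('%d-%m-%y %H:%M:%S')

print(format_dt(datetime.now()))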
In my research I found that in Python 3 these three types of class definition are synonymous:
class MyClass:
    pass

class MyClass():
    pass

class MyClass(object):
    pass
However, I was not able to find out which way is recommended. Which one should I use as a best practice?
I would say: Use the third option:
class MyClass(object):
    pass
It explicitly states that you want to subclass object (and doesn't the Zen of Python say: "Explicit is better than implicit."?), and you don't run into nasty errors if you (or someone else) ever run the code in Python 2, where these statements are not equivalent.
In Python 2, there are two types of classes. To use the new-style class, you have to inherit explicitly from object. If you don't, the old-style implementation is used.
In Python 3, all classes extend object implicitly, whether you say so yourself or not.
You probably will want to use the new-style class anyway, but if your code is supposed to work with both Python 2 and 3, you'll have to inherit explicitly from object:
class Foo(object):
    pass
To jump on the other answer: yes, the Zen of Python states that
Explicit is better than implicit.
I think this means we should avoid possible confusion in code, just as we should in language in general; remember, code is communication.
If you only work with Python 3, and your code/project explicitly states that, there is no possible confusion: all classes without explicit inheritance automatically inherit from object. If for some obscure reason the base class changes in the future (let's imagine from object to Object), the same code will keep working. And the Zen of Python also says that
Simple is better than complex.
(of course, "complex" is quite an overstatement in this example, but still...)
So again, if your code only supports Python 3, you should use the simplest form:
class Foo:
    pass
The form with just () is quite useless since it doesn't give any valuable information.
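You can verify the implicit inheritance yourself in a Python 3 interpreter; this is just a quick check, not part of the original answers:

class Foo:
    pass

print(Foo.__bases__)            # (<class 'object'>,)
print(issubclass(Foo, object))  # True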
Is there any difference in the following two pieces of code? If not, is one preferred over the other? Why would we be allowed to create class attributes dynamically?
Snippet 1
class Test(object):
    def setClassAttribute(self):
        Test.classAttribute = "Class Attribute"

Test().setClassAttribute()
Snippet 2
class Test(object):
    classAttribute = "Class Attribute"

Test()
First, setting a class attribute from an instance method is a weird thing to do. And ignoring the self parameter and going straight to Test is another weird thing to do, unless you specifically want all subclasses to share a single value.*
* If you did specifically want all subclasses to share a single value, I'd make it a @staticmethod with no params (and set it on Test). But in that case it isn't even really being used as a class attribute, and it might work better as a module global, with a free function to set it.
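To illustrate that footnote, here is a rough sketch of the module-global variant (all the names here are made up for the example):

# module-level state shared by everything that imports this module
_class_attribute = None

def set_class_attribute(value):
    """Free function that sets the shared value."""
    global _class_attribute
    _class_attribute = value

set_class_attribute("Class Attribute")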
So, even if you wanted to go with the first version, I'd write it like this:
class Test(object):
    @classmethod
    def setClassAttribute(cls):
        cls.classAttribute = "Class Attribute"

Test.setClassAttribute()
However, all that being said, I think the second is far more pythonic. Here are the considerations:
In general, getters and setters are strongly discouraged in Python.
The first one leaves a gap during which the class exists but has no attribute.
Simple is better than complex.
The one thing to keep in mind is that part of the reason getters and setters are unnecessary in Python is that you can always replace an attribute with a @property if you later need it to be computed, validated, etc. With a class attribute, that's not quite as perfect a solution, but it's usually good enough.
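As a minimal sketch of that point (the Account class and its validation rule are invented for the example), a plain attribute can later be swapped for a @property without changing any calling code:

class Account(object):
    def __init__(self, balance):
        self._balance = balance

    @property
    def balance(self):
        # computation, caching or logging could be added here later
        return self._balance

    @balance.setter
    def balance(self, value):
        # validation added after the fact; callers still just write acct.balance = x
        if value < 0:
            raise ValueError("balance cannot be negative")
        self._balance = value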
One last thing: class attributes (and class methods, except for alternate constructors) are often a sign of a non-Pythonic design at a higher level. Not always, of course, but often enough that it's worth explaining out loud why you think you need a class attribute and making sure it makes sense. (And if you've ever programmed in a language whose idioms make extensive use of class attributes, especially Java, go find someone who's never used Java and try to explain it to them.)
It's more natural to do it like #2, but notice that they do different things. With #2, the class always has the attribute. With #1, it won't have the attribute until you call setClassAttribute.
You asked, "Why would we be allowed to create class attributes dynamically?" With Python, the question often is not "why would we be allowed to", but "why should we be prevented?" A class is an object like any other, it has attributes. Objects (generally) can get new attributes at any time. There's no reason to make a class be an exception to that rule.
I think #2 feels more natural. #1's implementation means that the attribute doesn't get set until an actual instance of the class gets created, which to me seems counterintuitive to what a class attribute (vs. object attribute) should be.
I'm a newbie at writing OO programs, and I cannot find a good solution to the problem I'm facing. Can anyone please help?
I'm importing some modules which I cannot freely modify, and I would like to add a method to a superclass so that I can call it on instances of subclasses. So in my module:
import externalLib as myLib

class Superclass(myLib.Superclass):
    def myNewMethod(self):
        print(self.name)  # Print a member variable

def __main__():
    obj = myLib.Subclass(args)
    obj.myNewMethod()  # Expect this to print the same member variable on the subclass
Result: "Subclass" has no attribute or method named "myNewMethod".
Extending all the subclasses is not possible for me, as there are too many of them.
I could solve the problem by defining the function in my module instead of on the Superclass, but I just think that way doesn't fit an OO architecture.
Is there any better solution? Or any keywords or OO design concepts I can refer to?
Thanks!
Yes, there is one keyword: "wrong". OO is a model in which what you want to do should NOT be done.
If you have a REALLY good reason for it, you can do it much more simply:
import externalLib as myLib
def myNewMethod(self):
    print(self.name)
myLib.Superclass.myNewMethod = myNewMethod
Why didn't your code work?
When you defined Superclass inheriting from myLib.Superclass, it stayed ONLY in this module. When you defined your Superclass, the name "Superclass" was bound to your new class only in your module's global scope; the old value didn't change, so Superclass in the myLib/externalLib scope stayed the same. I can see how you got the impression that it might work if you have worked with classic-OO languages like Java or C++.
Little known fact: the Java/C++ OO model is not really object-oriented. It gives that impression, but the OOP model is REALLY implemented in Smalltalk.
You are looking to monkeypatch the original class. Because methods are just attributes on a class, you can always add more:
import externalLib as myLib
def myNewMethod(self):
    print(self.name)

myLib.Superclass.myNewMethod = myNewMethod
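Since externalLib isn't importable here, a self-contained sketch with stand-in classes shows why patching the base class is enough for every subclass to pick up the method:

class Superclass(object):
    def __init__(self, name):
        self.name = name

class Subclass(Superclass):
    pass

def myNewMethod(self):
    print(self.name)

Superclass.myNewMethod = myNewMethod  # patch the base class once

Subclass("demo").myNewMethod()  # prints "demo"; the subclass inherits the new method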
This is a style conventions question.
PEP8 convention for a class definition would be something like
class MyClass(object):
    def __init__(self, attri):
        self.attri = attri
So say I want to write a module-scoped function which takes some data, processes it, and then creates an instance of MyClass.
PEP8 says that my function definitions should have lowercase_underscore style names, like
def get_my_class(arg1, arg2, arg3):
    pass
But my inclination would be to make it clear that I'm talking about MyClass instances like so
def get_MyClass(arg1, arg2, arg3):
    pass
For this case, it looks trivially obvious that my_class and MyClass are related, but there are some cases where it's not so obvious. For example, I'm pulling data from a spreadsheet and have a SpreadsheetColumn class that takes the form of a heading attribute and a data list attribute. Yet, if you didn't know I was talking about an instance of the SpreadsheetColumn class, you might think that I'm talking about a raw column of cells as they might appear in an Excel sheet.
I'm wondering if it's reasonable to violate PEP8 to use get_MyClass. Being new to Python, I don't want to create a habit for a bad naming convention.
I've searched PEP8 and Stack Overflow and didn't see anything that addressed the issue.
Depending on the usage of the function, it might be more appropriate to turn it into a classmethod or staticmethod. Then its association with the class is clear, but you don't violate any naming conventions.
e.g.:
class MyClass(object):
    def __init__(self, arg):
        self.arg = arg

    @classmethod
    def from_sum(cls, *args):
        return cls(sum(args))

inst = MyClass.from_sum(1, 2, 3, 4)
print(inst.arg)  # 10
Let's take a step back. Usually, you don't want to do this at all, so the naming convention is the least of your worries.
First, normally, you don't care what actual class or type something is. This is what duck typing is all about. You don't want a SpreadsheetColumn instance, you want something that you can use as a spreadsheet column. It may be an instance of SpreadsheetColumn, or of a subclass, or of some proxy class, or of some mock class for testing—whatever it is, you don't care, as long as it looks and works like a column.
Notice that, even in static languages like Java and C#, factory functions (or objects) usually don't create an instance of a specific class, they create an instance of any class that implements a specific interface. In Python, that's usually implicit. (And, when it's not, it's usually because you're using something like PEAK or Twisted, and you should follow their coding style for protocols or interfaces.)
So, your factory function should be called get_column, not get_SpreadsheetColumn.
When the function is more of an "alternate constructor" than a factory, then mgilson's answer is the way to go. See chain() and chain.from_iterable() in itertools for a good standard library example.
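For reference, the itertools pair mentioned there behaves like this (both produce the same flattened sequence, they just take their input differently):

from itertools import chain

print(list(chain([1, 2], [3, 4])))                  # [1, 2, 3, 4]
print(list(chain.from_iterable([[1, 2], [3, 4]])))  # [1, 2, 3, 4]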
But notice that this isn't very common in the standard library, most of the popular modules on PyPI, etc. And there's a good reason. Usually, you can just use a single constructor with default-valued parameters, keyword parameters, or at worst *args and **kwargs. If this makes the API too confusing for human readers, or too ambiguous to code, that's when you need an alternate constructor. Otherwise, you don't.
Sometimes, you really do need a factory that creates objects of a concrete type, and that concrete type is a part of the interface that the caller needs to know about. As I mentioned above, this is pretty rare even in static languages, and it's even rarer in Python, but it does come up. And then, you really do need an answer to your original question.
In that case, I think I would name the function something ugly and unusual like get_MyClass or get_MyClass_instance. It ought to stick out immediately, because anyone reading my code will probably need to figure out why I'm explicitly getting a MyClass instead of a thing in order to understand the rest of my code.
I don't even know how to explain this, so here is the code I'm trying.
from couchdb.schema import Document, TextField

class Base(Document):
    type = TextField(default=self.__name__)
    # self doesn't work; how do I get a reference to Base?

class User(Base):
    pass
    # I want User.type to be defined as TextField(default="User")
The reason I'm even trying this is that I'm working on creating a base class for an ORM I'm using. I want to avoid defining the table name for every model I have. Also, knowing what the limits of Python are will help me avoid wasting time trying impossible things.
The class object does not (yet) exist while the class body is executing, so there is no way for code in the class body to get a reference to it (just as, more generally, there is no way for any code to get a reference to any object that does not exist). The class's __name__ attribute, however, already does what you're specifically looking for, so I don't think you need any workaround (such as metaclasses or class decorators) for your specific use case.
Edit: for the edited question, where you don't just need the name as a string, a class decorator is the simplest way to work around the problem (in Python 2.6 or later):
def maketype(cls):
    cls.type = TextField(default=cls.__name__)
    return cls
and put @maketype in front of each class you want to decorate that way. In Python 2.5 or earlier, you instead need to call maketype(Base) after each relevant class statement.
If you want this functionality to get inherited, then you have to define a custom metaclass that performs the same functionality in its __init__ or __new__ methods. Personally, I would recommend against defining custom metaclasses unless they're really indispensable -- instead, I'd stick with the simpler decorator approach.
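If you do end up needing the inherited behaviour, a rough sketch of the metaclass route might look like the following. It uses a stand-in TextField so it runs on its own; combining it with a library base class such as couchdb's Document may require merging metaclasses:

class TextField(object):
    """Stand-in for couchdb.schema.TextField, just for this illustration."""
    def __init__(self, default=None):
        self.default = default

class NamedTypeMeta(type):
    def __init__(cls, name, bases, namespace):
        super().__init__(name, bases, namespace)
        cls.type = TextField(default=name)  # every class gets a type field named after it

class Base(metaclass=NamedTypeMeta):
    pass

class User(Base):
    pass

print(User.type.default)  # "User", set automatically for every subclass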
You may want to check out the other question, python super class reflection.
In your case, User.__base__ will return the base class Base. If that doesn't work, you may need to use the new-style form: class Base(object)
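A quick check of that claim, using stand-in classes named like the ones in the question:

class Base(object):
    pass

class User(Base):
    pass

print(User.__base__)   # <class '__main__.Base'>
print(User.__bases__)  # (<class '__main__.Base'>,)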