In Python if you have a class that extends 2 or more classes, how does it know which class method to call if they all have a method titled save?
class File(models.Model, Storage, SomethingElse):
    def run(self):
        self.save()

What if Storage has a save(), and what if SomethingElse has a save()?
Can anyone briefly explain?
Python supports a limited form of multiple inheritance as well. A class definition with multiple base classes looks as follows:

class DerivedClassName(Base1, Base2, Base3):
    ...
The only rule necessary to explain the semantics is the resolution rule used for class attribute references. This is depth-first, left-to-right. Thus, if an attribute is not found in DerivedClassName, it is searched for in Base1, then (recursively) in the base classes of Base1, and only if it is not found there is it searched for in Base2, and so on. (For new-style classes Python actually uses the C3 linearization, which refines this so that each class in the hierarchy is visited only once; for a simple case like yours the practical effect is the same: left to right, leftmost base wins.)
So in your example, if all three classes have a save method, instances of File will use the save method from models.Model.
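You can see this for yourself by checking the class's __mro__ (method resolution order). The classes below are toy stand-ins, not Django's real ones:

class Model:                        # stand-in for models.Model
    def save(self):
        print('Model.save')

class Storage:
    def save(self):
        print('Storage.save')

class SomethingElse:
    def save(self):
        print('SomethingElse.save')

class File(Model, Storage, SomethingElse):
    def run(self):
        self.save()

File().run()         # prints "Model.save" -- the leftmost base that defines save() wins
print(File.__mro__)  # (File, Model, Storage, SomethingElse, object)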
In practice, when this occurs, you'll likely want to write your own save which either replaces or uses one of the base class methods.
Let's say you want to just call them:
class MyFile(models.Model, Storage, SomethingElse):  # "file" is a built-in name, so avoid it -- confusion will abound
    def run(self):
        self.save()

    def save(self):
        # super(X, self) resumes the MRO search *after* X
        super(MyFile, self).save()        # first save() after MyFile in the MRO (models.Model's, if it defines one)
        super(models.Model, self).save()  # first save() after models.Model (Storage's, if it defines one)
NOTE HOWEVER that if the MRO (see: http://www.python.org/download/releases/2.3/mro/) below models.Model doesn't contain a save and Storage does, both calls above will resolve to Storage.save and you'll end up calling the same method twice.
A fuller exploration is here: http://rhettinger.wordpress.com/2011/05/26/super-considered-super/ (and now linked to from the official docs).
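For completeness, the approach that article recommends is cooperative multiple inheritance: every class in the hierarchy defines save() and itself calls super().save(), so each implementation in the MRO runs exactly once. A minimal sketch with made-up stand-in classes (not Django's), using Python 3's zero-argument super():

class SaveBase:                 # common root that terminates the chain
    def save(self):
        print('SaveBase.save')

class Model(SaveBase):          # stand-in for models.Model
    def save(self):
        print('Model.save')
        super().save()

class Storage(SaveBase):
    def save(self):
        print('Storage.save')
        super().save()

class MyFile(Model, Storage):
    def save(self):
        print('MyFile.save')
        super().save()

MyFile().save()  # MyFile.save, Model.save, Storage.save, SaveBase.save -- each runs once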
I have been trying to get my head around classmethods for a while now. I know how they work, but I don't understand why you would use them or not use them.
For example:
I know I can use an instance method like this:
class MyClass():
    def __init__(self):
        self.name = 'Chris'
        self.age = 27

    def who_are_you(self):
        print('Hello {}, you are {} years old'.format(self.name, self.age))

c = MyClass()
c.who_are_you()
I also know that by using a classmethod I can call who_are_you() without creating an instance of my class:
class MyClass():
    name = 'Chris'
    age = 27

    @classmethod
    def who_are_you(cls):
        print('Hello {}, you are {} years old'.format(cls.name, cls.age))

MyClass.who_are_you()
I don't get why you would pick one method over the other.
In your second example, you've hard-coded the name and age into the class. If name and age are indeed properties of the class and not of a specific instance of the class, then using a class method makes sense. However, if your class were something like Human, of which there are many instances with different names and ages, then it wouldn't be possible to use a class method to access the unique name and age of a specific instance. In that case, you would want to use an instance method.
In general:
If you want to access a property of a class as a whole, and not the property of a specific instance of that class, use a class method.
If you want to access/modify a property associated with a specific instance of the class, then you will want to use an instance method.
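To make the distinction concrete, here is a small sketch along the lines of that Human example (the names are made up for illustration): the class method can only see data shared by the whole class, while the instance method sees data belonging to one particular object.

class Human:
    species = 'Homo sapiens'        # shared by every Human

    def __init__(self, name, age):  # unique to each instance
        self.name = name
        self.age = age

    def who_are_you(self):          # instance method: needs self.name / self.age
        print('Hello {}, you are {} years old'.format(self.name, self.age))

    @classmethod
    def what_species(cls):          # class method: only sees class-level data
        print('We are all {}'.format(cls.species))

Human.what_species()                # works without an instance
Human('Chris', 27).who_are_you()    # needs an instance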
Class methods are called when you don't have, or don't need, or can't have, an instance. Sometimes, a class can serve as a singleton when used this way. But probably the most common use of class methods is as a non-standard constructor.
For example, the Python dict class has a non-standard constructor called dict.fromkeys(seq, [value]). Clearly, there can be no instance involved - the whole point is to create an instance. But it's not the standard __init__() constructor, because it takes data in a slightly different format.
There are similar methods in the standard library: int.from_bytes(), bytes.fromhex(), bytearray.fromhex(), and float.fromhex().
If you think about the Unix standard library, the fdopen function is a similar idea: it constructs a file from a descriptor instead of a string path. Python's open() will accept a file descriptor instead of a path, so it doesn't need a separate constructor. But the concept is more common than you might suspect.
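A hand-rolled version of the same idea: an alternative constructor written as a classmethod (the Point class and from_tuple name are just illustrative, not from any library):

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    @classmethod
    def from_tuple(cls, pair):
        # accepts the data in a different shape, then defers to the normal constructor
        return cls(pair[0], pair[1])

p = Point.from_tuple((3, 4))   # no existing instance needed to call it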
@classmethod makes the method callable on the class itself (it receives the class rather than an instance), so you can use it without creating a new instance of the class. In the first example, on the other hand, you have to create an instance before you can use the method.
Static and class methods are very useful for things like controllers in the MVC pattern, while instance methods are typically what you use on models.
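To illustrate the difference between the two decorators, a toy sketch (names invented for the example):

class Example:
    label = 'example'

    @classmethod
    def cls_method(cls):
        # receives the class, so it can read class attributes and build instances
        return cls.label

    @staticmethod
    def static_method():
        # receives nothing implicitly; it is just a plain function namespaced inside the class
        return 'no class, no instance'

print(Example.cls_method())     # 'example'
print(Example.static_method())  # 'no class, no instance'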
More about @classmethod and @staticmethod here:
https://stackoverflow.com/a/12179752/5564059
I used to register sqlalchemy events in the classmethod __declare_last__.
My code looked like this:
@classmethod
def __declare_last__(cls):
    @event.listens_for(cls, 'after_update')
    def receive_after_update(mapper, conn, target):
        ...
This worked correctly until I upgraded to SQLAlchemy 1.0, after which this hook was no longer called and my events were thus not registered.
I've read the 1.0 documentation about __declare_last__ and found nothing related.
After searching the source code of SQLAlchemy 1.0.4 for __declare_last__, I located the place where both __declare_last__ and __declare_first__ are found and registered:
def _setup_declared_events(self):
    if _get_immediate_cls_attr(self.cls, '__declare_last__'):
        @event.listens_for(mapper, "after_configured")
        def after_configured():
            self.cls.__declare_last__()
    if _get_immediate_cls_attr(self.cls, '__declare_first__'):
        @event.listens_for(mapper, "before_configured")
        def before_configured():
            self.cls.__declare_first__()
Then I used pdb to step through this method and found that _get_immediate_cls_attr(self.cls, '__declare_last__') was returning None for a class with this hook method inherited.
So I jumped to the definition of _get_immediate_cls_attr which contained a docstring that solved my problem:
def _get_immediate_cls_attr(cls, attrname, strict=False):
    """return an attribute of the class that is either present directly
    on the class, e.g. not on a superclass, or is from a superclass but
    this superclass is a mixin, that is, not a descendant of
    the declarative base.

    This is used to detect attributes that indicate something about
    a mapped class independently from any mapped classes that it may
    inherit from.

    """
So I just added a mixin class, moved __declare_last__ to it and made the original base class inherit the mixin, and now __declare_last__ finally got called again.
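In outline, the fix looked roughly like this (the class names here are placeholders for my actual declarative base and model):

from sqlalchemy import event

class DeclareLastMixin(object):              # plain mixin, not derived from the declarative Base
    @classmethod
    def __declare_last__(cls):
        @event.listens_for(cls, 'after_update')
        def receive_after_update(mapper, conn, target):
            ...

class MyModel(DeclareLastMixin, Base):       # Base is the declarative base
    __tablename__ = 'my_model'
    ...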
Especially in unit tests, we use this "design pattern", which I call "get class from class level":
frameworktest.py:
class FrameWorkHttpClient(object):
    ....

class FrameWorkTestCase(unittest.TestCase):
    # Subclass can control the class which gets used in get_response()
    HttpClient = FrameWorkHttpClient

    def get_response(self, url):
        client = self.HttpClient()
        return client.get(url)
mytest.py:
class MyHttpClient(FrameWorkHttpClient):
    ....

class MyTestCase(FrameWorkTestCase):
    HttpClient = MyHttpClient

    def test_something(self):
        response = self.get_response('http://example.com/')
        ...
The method get_response() gets the class from self not by importing it. This way a subclass can modify the class and use a different HttpClient.
What's the name of this (get class from class level) "design pattern"?
Is this a way of "inversion of control" or "dependency injection"?
Your code is very similar to the Factory Method pattern. The only difference is that your variant uses a factory class variable instead of a factory method.
I believe this has the same purpose as simple polymorphism implemented with Python-specific syntax. Instead of having a virtual method returning a new instance, you have the instance type stored as "an overridable variable" in a class/subclass.
This can be rewritten as a virtual method (sorry, I am not fluent in Python, so this is just pseudocode):
virtual HttpClient GetClient()
    return new FrameworkHttpClient()
then in the subclass, you change the implementation of the method to return a different type:
override HttpClient GetClient()
    return new MyHttpClient()
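Rendered as actual Python (reusing FrameWorkHttpClient and MyHttpClient from the question), the pseudocode above would look something like:

import unittest

class FrameWorkTestCase(unittest.TestCase):
    def get_client(self):                 # the "virtual method"
        return FrameWorkHttpClient()

    def get_response(self, url):
        return self.get_client().get(url)

class MyTestCase(FrameWorkTestCase):
    def get_client(self):                 # the override that swaps in a different client
        return MyHttpClient()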
If you want to call this a pattern, I would say it is similar to the GoF Strategy pattern. In your particular case, the algorithm being abstracted away is the creation of the particular HttpClient implementation.
And on second thought, as you stated, this can indeed be looked at as an IoC example.
I'm not exactly a design pattern 'Guru', but to me it looks a bit like the Template Method Pattern. You are defining the 'skeleton' of the get_response method in your base class, and leaving one step (defining which class to use) to the subclasses.
If this can be considered the template pattern, it is an example of inversion of control.
You want to let the sub classes decide which class to instantiate.
This is what the factory method pattern already offers:
Define an interface for creating an object, but let the classes that implement the interface decide which class to instantiate. The Factory method lets a class defer instantiation to subclasses. (GoF)
You're solving the same problem by replacing a variable of the parent class.
It works but your solution has at least two drawbacks (compared to the classic pattern):
you introduce temporal coupling (a design smell): the client must call things in the right order (first set the HttpClient, then invoke get_response)
your test case is not immutable. Immutable classes are simpler than mutable ones, and in my opinion tests should always be simple.
I have a module (db.py) which loads data from different database types (sqlite, mysql, etc.). The module contains a class db_loader and subclasses (sqlite_loader, mysql_loader) which inherit from it.
The type of database being used is specified in a separate params file.
How does the user get the right object back?
i.e how do I do:
loader = db.loader()
Do I use a method called loader in the db.py module or is there a more elegant way whereby a class can pick its own subclass based on a parameter? Is there a standard way to do this kind of thing?
Sounds like you want the Factory Pattern. You define a factory method (either in your module, or perhaps in a common parent class for all the objects it can produce) that you pass the parameter to, and it will return an instance of the correct class. In Python the problem is a bit simpler than some of the details in the Wikipedia article suggest, since your types are dynamic.
class Animal(object):
    @staticmethod
    def get_animal_which_makes_noise(noise):
        if noise == 'meow':
            return Cat()
        elif noise == 'woof':
            return Dog()

class Cat(Animal):
    ...

class Dog(Animal):
    ...
is there a more elegant way whereby a class can pick its own subclass based on a parameter?
You can do this by overriding your base class's __new__ method. This will allow you to simply go loader = db_loader(db_type) and loader will magically be the correct subclass for the database type. This solution is mildly more complicated than the other answers, but IMHO it is surely the most elegant.
In its simplest form:
class Parent():
    def __new__(cls, feature):
        subclass_map = {subclass.feature: subclass for subclass in cls.__subclasses__()}
        subclass = subclass_map[feature]
        instance = super(Parent, subclass).__new__(subclass)
        return instance

class Child1(Parent):
    feature = 1

class Child2(Parent):
    feature = 2

type(Parent(1))  # <class '__main__.Child1'>
type(Parent(2))  # <class '__main__.Child2'>
(Note that as long as __new__ returns an instance of cls, the instance's __init__ method will automatically be called for you.)
This simple version has issues though and would need to be expanded upon and tailored to fit your desired behaviour. Most notably, this is something you'd probably want to address:
Parent(3) # KeyError
Child1(1) # KeyError
So I'd recommend either adding cls to subclass_map or using it as the default, like so subclass_map.get(feature, cls). If your base class isn't meant to be instantiated -- maybe it even has abstract methods? -- then I'd recommend giving Parent the metaclass abc.ABCMeta.
If you have grandchild classes too, then I'd recommend putting the gathering of subclasses into a recursive class method that follows each lineage to the end, adding all descendants.
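A sketch of those two extensions combined (falling back to cls for unknown features, and walking the subclass tree recursively; the helper name _all_subclasses is my own):

class Parent:
    @classmethod
    def _all_subclasses(cls):
        # recursively collect every descendant, not just direct children
        result = []
        for subclass in cls.__subclasses__():
            result.append(subclass)
            result.extend(subclass._all_subclasses())
        return result

    def __new__(cls, feature):
        subclass_map = {sub.feature: sub
                        for sub in cls._all_subclasses()
                        if hasattr(sub, 'feature')}
        subclass = subclass_map.get(feature, cls)   # unknown feature: fall back to cls itself
        return super(Parent, subclass).__new__(subclass)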
This solution is more beautiful than the factory method pattern IMHO. And unlike some of the other answers, it's self-maintaining because the list of subclasses is created dynamically, instead of being kept in a hardcoded mapping. And this will only instantiate subclasses, unlike one of the other answers, which would instantiate anything in the global namespace matching the given parameter.
I'd store the name of the subclass in the params file, and have a factory method that would instantiate the class given its name:
class loader(object):
    @staticmethod
    def get_loader(name):
        return globals()[name]()

class sqlite_loader(loader): pass
class mysql_loader(loader): pass

print type(loader.get_loader('sqlite_loader'))
print type(loader.get_loader('mysql_loader'))
Store the classes in a dict, instantiate the correct one based on your param:
db_loaders = dict(sqlite=sqlite_loader, mysql=mysql_loader)
loader = db_loaders.get(db_type, default_loader)()
where db_type is the parameter you are switching on, and sqlite_loader and mysql_loader are the "loader" classes.
I have read several pieces of documentation already, but the definitions of "class" and "instance" still aren't really clear to me.
It seems like a "class" is a combination of functions or methods that return some result. Is that correct? And what about the instance? I read that you work with the class you create through the instance, but wouldn't it be easier to just work directly with the class?
Sometimes getting the concepts of the language is harder than working with it.
Your question is really rather broad, as classes and instances/objects are vital parts of object-oriented programming, so this is not really Python-specific. I recommend you buy some books on this as, while initially basic, it can get pretty in-depth. In essence, however:
The most popular and developed model of OOP is a class-based model, as opposed to an object-based model. In this model, objects are entities that combine state (i.e., data), behavior (i.e., procedures, or methods) and identity (unique existence among all other objects). The structure and behavior of an object are defined by a class, which is a definition, or blueprint, of all objects of a specific type. An object must be explicitly created based on a class and an object thus created is considered to be an instance of that class. An object is similar to a structure, with the addition of method pointers, member access control, and an implicit data member which locates instances of the class (i.e. actual objects of that class) in the class hierarchy (essential for runtime inheritance features).
So you would, for example, define a Dog class, and create instances of particular dogs:
>>> class Dog():
...     def __init__(self, name, breed):
...         self.name = name
...         self.breed = breed
...     def talk(self):
...         print "Hi, my name is " + self.name + ", I am a " + self.breed
...
>>> skip = Dog('Skip','Bulldog')
>>> spot = Dog('Spot','Dalmatian')
>>> spot.talk()
Hi, my name is Spot, I am a Dalmatian
>>> skip.talk()
Hi, my name is Skip, I am a Bulldog
While this example is silly, you can then start seeing how you might define a Client class that sets a blueprint for what a Client is, has methods to perform actions on a particular client, then manipulate a particular instance of a client by creating an object and calling these methods in that context.
Sometimes, however, you have methods of a class that don't really make sense being accessed through an instance of the class, but more from the class itself. These are known as static methods.
I am not sure what level of knowledge you have, so I apologize if this answer is too simplified (if so, just ignore it).
A class is a template for an object. Like a blueprint for a car. The instance of a class is like an actual car. So you have one blueprint, but you can have several different instances of cars. The blueprint and the car are different things.
So you make a class that describes what an instance of that class can do and what properties it should have. Then you "build" the instance and get an object that you can work with.
It's fairly simple, actually. You know how in Python they say "everything is an object"? Well, in simplistic terms you can think of any object as being an 'instance' and the instructions for creating an object as the class. Or, in biological terms, DNA is the class and you are an instance of DNA.
class HumanDNA():             # class
    ... class attributes ...

you = HumanDNA()              # instance
See http://homepage.mac.com/s_lott/books/python/htmlchunks/ch21.html
Object-oriented programming permits us to organize our programs around the interactions of objects. A class provides the definition of the structure and behavior of the objects; each object is an instance of a class.
Objects ("instances") are things which interact, do work, persist in the file system, etc.
Classes are the definitions for the object's behavior.
Also, a class creates new objects that are members of that class (they share common structure and behavior).
In part it is confusing due to the dynamically typed nature of Python, which allows you to operate on a class and an instance in essentially the same way. In other languages, the difference is more concrete in that a class provides a template by which to create an object (instance) and cannot be as directly manipulated as in Python. The benefit of operating on the instance rather than the class is that the class can provide a prototype upon which instances are created.