What is the difference between a constructor and initializer in Python? [duplicate] - python

This question already has answers here:
Closed 11 years ago.
Possible Duplicate:
Python (and Python C API): __new__ versus __init__
I'm at college just now, and the lecturer was using the terms constructor and initializer interchangeably. I'm pretty sure that this is wrong, though.
I've tried googling for it but haven't found the answer I'm looking for.

In most OO languages they are a single step, so he's not wrong for languages like Java, C++, etc. In Python they are two separate steps: __new__ is the constructor; __init__ is the initializer.
Here is another answer that goes into more detail about the differences between them.

In almost all usual cases, Python does not have constructors in the same sense used by other OO languages because manually managing memory is generally discouraged. Instead, what you should usually do is define an __init__ method on the class. This method is called to initialize the new instance object automatically, first thing after it is constructed. Thus, it is not really a constructor, and talking about it as a constructor might confuse some people.
Of course some people want to call it a constructor, because it is used a little like one - fundamentally you can call it whatever you want, as long as everyone understands what you are actually referring to. But in general, to be explicit and make yourself understood, call it an init method or something other than a constructor. Different languages simply come with somewhat different terminology, and speaking clearly will always require adjusting to your subject matter and audience.
In Python it is possible to manage instance creation and destruction at a finer granularity, though you won't want to unless you know what you're doing. This is done by defining __new__ and __del__ methods to hook object creation and destruction (note that __del__ runs when the object is about to be reclaimed, not simply when a del statement executes). Whether these qualify as constructors and destructors precisely is a little more debatable (the Python docs call __del__ a destructor, but tend to be vaguer about what constitutes a constructor, e.g. counting many functions that return object instances). I'd still encourage you to use the specific terminology of the language at hand, and in comparative discussions to define your terms up front. As always, your choice of terms involves a trade-off between the audience following you easily and the audience being led into confusion: if you are talking about memory management, be as specific as possible; if you are talking loosely, use whatever word your audience understands and be ready to clarify.
Your instructor is being unclear at worst; I'm not aware of any single canonical definition of these terms, but using them interchangeably might confuse people who have learned very specific definitions from other languages.

http://docs.python.org/reference/datamodel.html#basic-customization
__new__ - constructor.
__init__ - initializer.
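As a rough, minimal sketch (the Widget class is made up, not taken from the linked docs), the two hooks fire in this order when you instantiate a class:

class Widget(object):
    def __new__(cls, *args, **kwargs):
        # constructor: allocates and returns the new, still-empty instance
        print("__new__ called")
        return super(Widget, cls).__new__(cls)

    def __init__(self, name):
        # initializer: fills in the instance that __new__ just constructed
        print("__init__ called")
        self.name = name

w = Widget("demo")   # prints "__new__ called", then "__init__ called"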


Main objective of class [duplicate]

This question already has answers here:
What is the purpose of the `self` parameter? Why is it needed?
(26 answers)
Closed 6 years ago.
I've recently learned about classes, methods, and the self parameter. While I seem to understand the syntax side of things, I find having to type self in front of variables bothersome, with very little apparent benefit.
In what scenario would this be beneficial compared to simply using individual functions without class reference?
Whoa… what a question! I believe you should start with a lecture on object-oriented programming.
Simply put: the benefit of using a class over laying out functions in a module is that it enables many features of object-oriented programming (encapsulation of data behind behaviour, class inheritance, properties…).
At more length: the main idea behind OOP is that you create a public programming interface that other developers (including you) will use, and you hide the inner workings; once the public interface is implemented, nobody needs to care how things work internally, and the internals can even change without breaking the code that uses them.
So in this case you create a class and implement your own algorithms over your own data, but all other people really care about is how to work with that data through those algorithms.
Those algorithms are called methods, and the data items are called members.
In many languages, when you create a method (i.e. a function that is bound to an "object"), the reference to the current instance of the object is implicit (and can optionally be made explicit in Java or C++ via the this variable).
In Python, the language designers chose to make it explicit; by convention it is called self. In rare cases you can choose to name it something else, such as this (which can be useful when working with nested classes).
Finally, I talked about "classes", "instances" and "objects". An object is an instance of a class. The class is there to lay out how things work (what the members are, what the methods are…). You then instantiate an object by calling the constructor. You can have plenty of objects that share the same class, which means they behave similarly but hold different data.
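To make that concrete, here is a tiny made-up Counter class: two objects built from the same class share behaviour but hold their own data, and self is simply the explicit handle to "the instance this method was called on":

class Counter(object):
    def __init__(self, start=0):
        self.value = start      # data lives on the instance, reached through self

    def increment(self):
        self.value += 1         # self is whichever instance the method was called on

a = Counter()
b = Counter(10)
a.increment()
print(a.value)   # 1
print(b.value)   # 10 -- same class, different data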
But here I am only scratching the surface; I just want to show you that there are a lot of concepts behind self and classes.
For more on the topic, go read books and take programming classes:
https://wiki.python.org/moin/BeginnersGuide
https://www.python.org/about/gettingstarted/
https://en.wikibooks.org/wiki/A_Beginner%27s_Python_Tutorial/Classes
http://www.bodenseo.com/course/python_training_course.html
https://www.udemy.com/python-for-beginners/
https://www.coursera.org/learn/python

Does this approach to Python duck-typing mixed with isinstance() make sense?

Let's say we have the following classes:
class Duck(object):
    pass

class OldFashionedDuck(Organism, Duck):
    def look(self):
        self.display_biological_appearance()

    def walk(self):
        self.keep_balance_on_two_feet()

    def quack(self):
        self.make_noise_with_lungs("Quack!")

class ArtificialDuck(Robot, Duck):
    def look(self):
        self.display_imitation_biological_appearance()

    def walk(self):
        self.engage_leg_clockwork()

    def quack(self):
        self.play_sound("quack.au")
In this example, OldFashionedDuck and ArtificialDuck have no common implementation, but by construction they will both return True for isinstance(..., Duck).
This is not perfect, but it is something I thought might respect duck typing while still (via empty mixin inheritance) allowing isinstance(). In essence it offers a contract to meet an interface, so isinstance() is not called against the class that does all the work, but against an interface that anyone can opt in to.
I've seen articles along the lines of "isinstance() considered harmful", because it breaks duck typing. However, as a programmer I would at least like to know, if not necessarily where an object gets a method from, then whether it implements a given interface.
Is this approach useful, and if so can it be improved upon?
I've seen articles along the lines of "isinstance() considered harmful", because it breaks duck typing. However, as a programmer I would at least like to know, if not necessarily where an object gets a method from, then whether it implements a given interface.
I think you're missing the point.
When we talk about "duck typing", what we really mean is not formalizing our interfaces. What you're trying to do, therefore, boils down to attempting to answer the question "how can I formalize my interface while still not formalizing my interface?".
We expect an object that was given to us to implement an interface - one that we described, not by making a base class, but by writing a bunch of documentation and describing behaviour (and if we're feeling especially frisky, setting up some kind of test suite) - because we said that this is what we expect (again in our documentation). We verify that the object implements the interface by attempting to use it as though it does, and treating any resulting errors as the caller's responsibility. Calling code that gives us the wrong object is naughty, and that's where the bug needs to be fixed. (Again, tests help us track these things down.)
In short, testing isinstance(this_fuzzball_that_was_handed_to_me, Duck) doesn't really help matters:
It could pass the isinstance check, but implement the methods in a way that violates our expectations (or, say, return NotImplemented). Only real testing will do here.
It could pass the check, but actually completely fail to implement one or more of the methods; after all, the base Duck doesn't contain any implementations, and Python has no reason to check for them in a derived class.
Perhaps more importantly, it could fail the check, even though it's a perfectly usable-as-a-duck fuzzball. Maybe it's some unrelated object that had quack, walk and look functions directly, manually attached to it as attributes (as opposed to them being attributes of its class, which become methods when looked up).
Okay, so, "don't do that", you say. But now you're making more work for everyone; if clients don't always opt in, then it's useless and dangerous for the duck-using code to make the check. And meanwhile, what are you gaining?
This is related to the EAFP principle: don't try to figure out whether something is a Duck by looking at it; figure out if it's a Duck by treating it as a Duck and dealing with the gory mess if it isn't.
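For illustration, a minimal EAFP-style sketch (the function and parameter names are invented):

def make_it_quack(fuzzball):
    try:
        fuzzball.quack()        # just treat it as a duck...
    except AttributeError:
        # ...and deal with the mess only if it turns out not to be one
        raise TypeError("expected something duck-like, got %r" % (fuzzball,))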
But if you don't care about the duck typing philosophy, and absolutely must force some semblance of rigor onto things, you might be interested in the standard library abc module.
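For instance, a sketch using abc (the class names here are illustrative; ABC and abstractmethod come from the standard library's abc module):

from abc import ABC, abstractmethod

class Duck(ABC):
    @abstractmethod
    def quack(self):
        pass

    @abstractmethod
    def walk(self):
        pass

class ArtificialDuck(Duck):
    def quack(self):
        print("quack.au")

    def walk(self):
        print("engaging leg clockwork")

d = ArtificialDuck()   # fine: every abstract method is implemented
# Duck() would raise TypeError, since abstract classes can't be instantiated

abc also lets you register unrelated classes as "virtual subclasses", so an isinstance() check can pass without any inheritance at all.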
Even though I don't want to overuse the term: your approach is not Pythonic. Either do duck typing, or don't.
If you want to be sure that your implementation of an "interface" implements everything it should: test it!
For smaller projects, it's easy to remember what you need. And you can simply try it.
I agree that for larger projects, or cooperation in teams, it's better to make sure that your type has everything it needs. In such a scenario, you definitely should use unit tests to make sure your type is complete. Even without duck typing you need tests, so you probably won't need any extra ones.
Guido van Rossum has shared some interesting thoughts about duck typing in this talk. It's inspiring and definitely worth watching.

When to use attributes vs. when to use properties in python? [duplicate]

This question already has answers here:
What's the difference between a Python "property" and "attribute"?
(7 answers)
Closed 2 months ago.
Just a quick question: I'm having a little difficulty understanding where to use properties vs. where to use plain old attributes. The distinction to me is a bit blurry. Any resources on the subject would be superb, thank you!
Properties are more flexible than attributes, since you can define functions that describe what is supposed to happen when setting, getting or deleting them. If you don't need this additional flexibility, use attributes – they are easier to declare and faster.
In languages like Java, it is usually recommended to always write getters and setters, in order to have the option to replace these functions with more complex versions in the future. This is not necessary in Python, since the client code syntax for accessing attributes and properties is the same, so you can always choose to use properties later on without breaking backwards compatibility.
The point is that the syntax is interchangeable. Always start with attributes. If you find you need additional calculations when accessing an attribute, replace it with a property.
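As a small sketch of that progression (the Circle class here is invented): start with a plain attribute, and later swap in a property without touching the calling code:

class Circle(object):
    def __init__(self, radius):
        self.radius = radius    # plain attribute: nothing special needed

    @property
    def area(self):             # added later: a computed, read-only "attribute"
        return 3.14159 * self.radius ** 2

c = Circle(2)
c.radius = 3       # plain attribute access
print(c.area)      # property access -- the dot syntax is identical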
In addition to what Daniel Roseman said, I often use properties when I'm wrapping something, i.e. when I don't store the information myself but the wrapped object does. Properties then make excellent accessors.
Properties are attributes + a posteriori encapsulation.
When you turn an attribute into a property, you just define a getter and a setter that you "attach" to it, which hook the data access. You then don't need to rewrite the rest of your code: the way of accessing the data is the same whether the attribute is a property or not.
Thanks to this very clever and powerful encapsulation mechanism, in Python you can usually go with plain attributes (without a priori encapsulation, so without any getter or setter), unless you need to do something special when accessing the data.
If so, you can then define getters and setters only where needed and "attach" them to the attribute, turning it into a property, without any impact on the rest of your code (whereas in Java, the first thing you usually do when creating a field, usually private, is to create its associated getter and setter methods).
Nice page about attributes, properties and descriptors here

Why has Python decided against constant references?

Note: I'm not talking about preventing the rebinding of a variable. I'm talking about preventing the modification of the memory that the variable refers to, and of any memory that can be reached from there by following the nested containers.
I have a large data structure, and I want to expose it to other modules, on a read-only basis. The only way to do that in Python is to deep-copy the particular pieces I'd like to expose - prohibitively expensive in my case.
I am sure this is a very common problem, and it seems like a constant reference would be the perfect solution. But I must be missing something. Perhaps constant references are hard to implement in Python. Perhaps they don't quite do what I think they do.
Any insights would be appreciated.
While the answers are helpful, I haven't seen a single reason why const would be either hard to implement or unworkable in Python. I guess "un-Pythonic" would also count as a valid reason, but is it, really? Python does do name mangling of private instance variables (those starting with __) to avoid accidental bugs, and const doesn't seem that different in spirit.
EDIT: I just offered a very modest bounty. I am looking for a bit more detail about why Python ended up without const. I suspect the reason is that it's really hard to implement to work perfectly; I would like to understand why it's so hard.
It's the same as with private methods: as consenting adults, authors of code should agree on an interface without the need for enforcement, because really enforcing the contract is hard, and doing it the half-assed way leads to hackish code in abundance.
Use get-only descriptors, and state clearly in your documentation that this data is meant to be read-only. After all, a determined coder can probably find a way to use your code in ways you didn't think of anyway.
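As a rough sketch of that approach (the class and attribute names are invented), a property with a getter but no setter exposes the data without allowing assignment:

class Catalog(object):
    def __init__(self, entries):
        self._entries = tuple(entries)   # stored "privately", by convention

    @property
    def entries(self):                   # getter only: no setter is defined
        return self._entries

c = Catalog(["a", "b"])
print(c.entries)      # ('a', 'b')
# c.entries = []      # would raise AttributeError: can't set attribute

Note this only protects the attribute binding itself: a determined caller can still reach c._entries directly, which is exactly the consenting-adults point.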
In PEP 351, Barry Warsaw proposed a protocol for "freezing" any mutable data structure, analogous to the way that frozenset makes an immutable set. Frozen data structures would be hashable and so capable being used as keys in dictionaries.
The proposal was discussed on python-dev, with Raymond Hettinger's criticism the most detailed.
It's not quite what you're after, but it's the closest I can find, and should give you some idea of the thinking of the Python developers on this subject.
There are many design questions about any language, the answer to most of which is "just because". It's pretty clear that constants like this would go against the ideology of Python.
You can make a read-only class attribute, though, using descriptors. It's not trivial, but it's not very hard. The way it works is that you can make properties (things that look like attributes but call a method on access) using the property decorator; if you define a getter but no setter, you get a read-only attribute. The reason for the metaclass programming in the recipe below is that, since __init__ receives a fully-formed instance of the class, you can't set up the read-only attributes at that stage; instead, you have to set them up when the class itself is created, which means you need a metaclass.
Code from this recipe:
# simple read only attributes with meta-class programming

# method factory for an attribute get method
def getmethod(attrname):
    def _getmethod(self):
        return self.__readonly__[attrname]
    return _getmethod

class metaClass(type):
    def __new__(cls, classname, bases, classdict):
        readonly = classdict.get('__readonly__', {})
        for name, default in readonly.items():
            classdict[name] = property(getmethod(name))
        return type.__new__(cls, classname, bases, classdict)

class ROClass(object):
    __metaclass__ = metaClass
    __readonly__ = {'a': 1, 'b': 'text'}

if __name__ == '__main__':
    def test1():
        t = ROClass()
        print t.a
        print t.b

    def test2():
        t = ROClass()
        t.a = 2

    test1()
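Note that the recipe above is written for Python 2 (the __metaclass__ class attribute and the print statements). Under Python 3 the metaclass is passed in the class header instead, roughly:

class ROClass(object, metaclass=metaClass):
    __readonly__ = {'a': 1, 'b': 'text'}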
While one programmer writing code is a consenting adult, two programmers working on the same code are seldom consenting adults, even more so if they value not the beauty of the code but their deadlines or research funds.
For such adults there is some type safety, provided by Enthought's Traits.
You could look into Constant and ReadOnly traits.
For some additional thoughts, there is a similar question posed about Java here:
Why is there no Constant feature in Java?
When asking why Python decided against constant references, I think it's helpful to consider how they would be implemented in the language. Should Python have some sort of special declaration, const, to create variable references that can't be changed? Why not allow variables to be declared as float/int/whatever then... these would surely help prevent programming bugs as well. While we're at it, adding class and method modifiers like protected/private/public/etc. would help enforce compile-time checking against illegal uses of these classes. ...pretty soon, we've lost the beauty, simplicity, and elegance that is Python, and we're writing code in some sort of bastard child of C++/Java.
Python also currently passes everything by object reference. This would have to be some sort of special pass-by-reference-but-flagged-to-prevent-modification... a pretty special case (and, as the Zen of Python suggests, just "un-Pythonic").
As mentioned before, without actually changing the language, this type of behaviour can be implemented via classes & descriptors. It may not prevent modification from a determined hacker, but we are consenting adults. Python didn't necessarily decide against providing this as an included module ("batteries included") - there was just never enough demand for it.

What is the difference between using decorators and extending a sub class by inheritance?

I was trying to wrap my brain around decorators in Python, but I can't understand why we cannot achieve the same thing by using subclasses.
You can achieve the same thing using subclasses, and in fact you don't even need subclasses - you can also achieve the same thing simply by wrapping a method in another method and reassigning it. There was a lot of discussion about whether or not the decorator syntax should be added to the language as it doesn't allow you to do anything new and requires programmers to learn one more new thing.
What the syntax does is formalize a pattern that many people were already using, and make it a standard idiom that has a name and guidelines for how to use it. It is not necessary for you to use decorators - you can achieve the same effect in other ways - but using the officially supported approach, with a concise, easy-to-read syntax, makes life a bit easier.
You do know that prepending a class or def definition of name with @spam just means "define name as written here, then rebind name to spam(name)"?
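In other words (a tiny sketch with made-up names), the two spellings below do exactly the same thing:

def spam(func):
    def wrapper(*args, **kwargs):
        print("before calling " + func.__name__)
        return func(*args, **kwargs)
    return wrapper

@spam                      # decorator syntax...
def greet():
    print("hello")

def farewell():
    print("bye")
farewell = spam(farewell)  # ...is just sugar for this explicit rebinding

greet()      # prints "before calling greet", then "hello"
farewell()   # prints "before calling farewell", then "bye"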
Decorators are very often applied to functions rather than classes. Sure, you could make a callable class and subclass that... you could also implement your own integer type. Neither is viable.
In quite a few cases you probably could do something very similar by subclassing... except that a decorator is defined once and can be applied to several classes or functions, as opposed to writing a new subclass yourself in every case. Any solution to this would inevitably end up being equivalent to, or very similar to, decorators.
As robert points out in a comment, if you had an example, the answers could be more specific...
