As function overloading says:
Function overloading is absent in Python.
As far as I can tell, this is a big handicap, since Python is also an object-oriented (OO) language. Initially I found that being unable to differentiate between argument types was difficult, but the dynamic nature of Python made it easy (e.g. lists, tuples, and strings can often be treated similarly).
However, counting the number of arguments passed and then branching on that count feels like overkill.
Now, unless you're trying to write C++ code using Python syntax, what would you need overloading for?
I think it's exactly the opposite. Overloading is only necessary to make strongly typed languages act more like Python. In Python you have keyword arguments, and you have *args and **kwargs.
See for example: What is a clean, Pythonic way to have multiple constructors in Python?
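For instance, here is a minimal sketch of how default values and *args/**kwargs stand in for overloads (the function names are made up for illustration):

def area(width, height=None):
    """Rectangle area, or square area if only width is given."""
    if height is None:          # the default value plays the role of a second "overload"
        height = width
    return width * height

def log(*args, **kwargs):
    """Accept any number of positional and keyword arguments."""
    sep = kwargs.get("sep", " ")
    print(sep.join(str(a) for a in args))

print(area(3))        # 9  -- the "square" overload
print(area(3, 4))     # 12 -- the "rectangle" overload
log("loaded", 42, sep=": ")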
As unwind noted, keyword arguments with default values can go a long way.
I'll also state that in my opinion, it goes against the spirit of Python to worry a lot about what types are passed into methods. In Python, I think it's more accepted to use duck typing -- asking what an object can do, rather than what it is.
Thus, if your method may accept a string or a tuple, you might do something like this:
def print_names(names):
    """Takes a space-delimited string or an iterable of names."""
    try:
        for name in names.split():   # string case
            print(name)
    except AttributeError:           # no .split(): assume an iterable of names
        for name in names:
            print(name)
Then you could do either of these:
print_names("Ryan Billy")
print_names(("Ryan", "Billy"))
Although an API like that sometimes indicates a design problem.
You don't need function overloading, as you have the *args and **kwargs arguments.
The fact is that function overloading is based on the idea that, by passing different types, you will execute different code. In a dynamically typed language like Python you should not distinguish by type; instead you should deal with interfaces and whether the objects you receive comply with the code you write.
For example, if you have code that can handle either an integer or a list of integers, you can try iterating over the argument, and if you are not able to, assume it's an integer and go forward. Of course it could be a float, but as far as the behavior is concerned, if a float and an int behave the same, they can be used interchangeably.
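As a rough sketch of that idea (the function is hypothetical, purely for illustration):

def total(values):
    """Sum a single number or any iterable of numbers."""
    try:
        return sum(values)   # works for lists, tuples, generators, ...
    except TypeError:
        return values        # not iterable: assume it is a single number

print(total([1, 2, 3]))   # 6
print(total(7))           # 7
print(total(2.5))         # 2.5 -- a float is interchangeable with an int here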
Oftentimes you see the suggestion to use keyword arguments, with default values, instead. Look into that.
You can pass a mutable container datatype into a function, and it can contain anything you want.
If you need a different functionality, name the functions differently, or if you need the same interface, just write an interface function (or method) that calls the functions appropriately based on the data received.
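A small sketch of such an interface function (the helper names are invented for illustration):

def describe_mapping(data):
    return "mapping with keys: " + ", ".join(map(str, data))

def describe_sequence(data):
    return "sequence of %d items" % len(data)

def describe(data):
    """Single entry point that hands the data to the appropriate helper."""
    if hasattr(data, "keys"):          # behaves like a dict
        return describe_mapping(data)
    return describe_sequence(data)

print(describe({"a": 1, "b": 2}))
print(describe([10, 20, 30]))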
It took me a while to adjust to this coming from Java, but it really isn't a "big handicap".
I come from C background and am learning Python. The lack of explicit type-safety is disturbing, but I am getting used to it. The lack of built-in contract-based programming (pure abstract classes, interfaces) is something to get used to, in the face of all the advantages of a dynamic language.
However, the inability to request const-correctness is driving me crazy! Why are there no constants in Python? Why are class-level constants discouraged?
C and Python belong to two different classes of languages.
The former is statically typed; the latter is dynamically typed.
In a statically typed language, the type checker is able to infer the type of each expression and check whether it matches the given declaration during the compilation phase.
In a dynamically typed language, the required type information is not available until run time, and the type of an expression may vary from one run to another. Of course, you could add type checking during program execution; that is not the choice made in Python. The advantage is that it allows "duck typing". The drawback is that the interpreter is not able to check for type correctness.
Concerning the const keyword: it is a type qualifier that restricts the allowed uses of a variable (and sometimes enables compiler optimizations). Checking that at run time in a dynamic language seems quite inefficient; at first analysis, it would mean checking whether a variable is const on every assignment. This could be optimized, but even so, is it worth the cost?
Beyond the technical aspects, don't forget that each language has its own philosophy. In Python the usual choice is to favor "convention" over "restriction". For example, constants should be spelled in ALL_CAPS. There is no technical enforcement of that; it is just a convention. If you follow it, your program will behave as expected by other programmers. If you decide to modify a "constant", Python won't complain, but you should feel like you are doing something wrong: you are breaking a convention. Maybe you have your reasons for doing so; maybe you shouldn't have. Your responsibility.
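For example (nothing below is enforced by the language; it is purely convention):

MAX_RETRIES = 5      # ALL_CAPS says "please treat this as a constant"

MAX_RETRIES = 10     # Python will not complain, but you are breaking the convention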
As a final note, in dynamic languages the "correctness" of a program is much more the responsibility of your unit tests than of the compiler. If you really have trouble making that step, you will find static code checkers such as PyLint, PyChecker, and PyFlakes.
I don't know why this design decision was made but my personal guess is that there's no explicit const keyword because the key benefits of constants are already available:
Constants are good for documentation purposes. If you see a constant, you know that you can't change it. This is also possible by naming conventions.
Constants are useful for function calls. If you pass a constant as a parameter to a function, you can be sure it isn't changed. Python passes arguments "by value", but since Python variables are references to objects, you effectively pass a copy of a reference. Inside the function you can mutate the object that reference points to, but if you rebind the parameter name, the change does not persist outside the function's scope. Therefore, if you pass a number, it effectively behaves like a constant: you can assign a new value to the parameter, but outside the function you still have the old number.
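A short sketch of that difference between rebinding a name and mutating an object (illustrative only):

def rebind(x):
    x = 99               # rebinds the local name; the caller never sees this

def mutate(items):
    items.append(99)     # mutates the object the caller also references

n = 1
rebind(n)
print(n)        # 1 -- unchanged, the rebinding stayed local

data = [1, 2, 3]
mutate(data)
print(data)     # [1, 2, 3, 99] -- the shared list object was changed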
Moreover if there was a const keyword, it would create an asymmetry: variables are declared without keyword but consts are declared with a keyword. The logical consequence would be to create a second keyword named var. This is probably a matter of taste. Personally I prefer the minimalistic approach to variable declarations.
You can probably achieve a little more safety if you work with immutable data structures like tuples. Be careful, however: the tuple itself cannot be modified, but if it contains references to mutable objects, those objects are still mutable even though they belong to a tuple.
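For example:

point = (1, 2)
# point[0] = 5          # TypeError: tuples do not support item assignment

mixed = ([1, 2], "label")
mixed[0].append(3)      # legal: the tuple still refers to the same (now longer) list
print(mixed)            # ([1, 2, 3], 'label')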
Finally, you might want to take a look at this recipe: http://code.activestate.com/recipes/65207-constants-in-python/?in=user-97991. I'm not sure whether it counts as an implementation of "class-level constants", but I thought it might be useful.
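A minimal sketch in the spirit of that recipe (not a copy of it): a small object that refuses to rebind an attribute once it has been set.

class _Const(object):
    class ConstError(TypeError):
        pass

    def __setattr__(self, name, value):
        if name in self.__dict__:
            raise self.ConstError("cannot rebind constant %r" % name)
        self.__dict__[name] = value

const = _Const()
const.PI = 3.14159
# const.PI = 3          # would raise _Const.ConstError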
I often write functions that take one argument with data to be manipulated, and one or more additional arguments with specifications for the manipulation. If the "specification" parameters are optional, it makes sense to place them after the obligatory data argument:
sort(data, key=..., reverse=True)
But suppose both arguments (or several of them) are obligatory? The functions of the re module place the regexp (the manipulation) before the string (the data to be manipulated). Optional arguments still come last, of course.
re.search(r"[regexp]+", text, flags=re.I)
So here's the question: Putting optional arguments aside, are there any clear conventions (official PEP or established common practice) for how to order obligatory arguments based on their function/purpose? Back when I was first learning Python, I remember reading some claim to the effect that one of Python's advantages is that it has clear conventions, inter alia on this particular thing. But I am unable to retrieve any such information now.
In case it is not clear: I am kindly asking for pointers to established conventions or standards, not for advice on which order is "best."
This is one hell of a good question and I'm already excited to see other answers. I've never come across any conventions about functions arguments in Python except in PEP 8 where it states that:
Always use self for the first argument to instance methods.
Always use cls for the first argument to class methods.
If a function argument's name clashes with a reserved keyword, it is generally better to append a single trailing underscore rather than use an abbreviation or spelling corruption. Thus class_ is better than clss. (Perhaps better is to avoid such clashes by using a synonym.)
In my opinion, function parameters should always be ordered by importance. I know importance is subjective, but you can generally determine which of the parameters is the most important to your function. Since Stack Overflow is not about opinion, take this with a grain of salt.
So, putting opinions aside, I think your question is relevant, but sometimes questions lead to other discussions. If you have numerous parameters, your function is probably doing too much. According to a lot of resources, you should try to use only one parameter and avoid using more than three, with the latter considered the worst case. So, most of the time, ordering parameters won't impact readability that much if you keep that in mind.
I try to order parameters to help code readability (literate programming).
ex:
system.add(user, user_database) # add a user to a database
system.add(user_database, user) # add a database to a user(?)
This usually coincides with #scharette's opinion: the "most important" parameter is the object the function was created for. But as they stated, it's subjective; it could be argued that the database is the most important object in a database-system library.
Reading code should reveal intent on first read; reading a set of function definitions should reveal what they do.
I do know that one shouldn't use eval, for all the obvious reasons (performance, maintainability, etc.). My question is about the other side: is there a legitimate use for it, where one should use it rather than implement the code in another way?
Since it is implemented in several languages and can lead to bad programming style, I assume there is a reason why it's still available.
First, here is MathWorks' list of alternatives to eval.
You could also be clever and use eval() in a compiled application to build your own MATLAB-code interpreter, but the MATLAB compiler doesn't allow that, for obvious reasons.
One place where I have found a reasonable use of eval is in obtaining small predicates of code that consumers of my software need to be able to supply as part of a parameter file.
For example, there might be an item called "Data" that has a location for reading and writing the data, but also requires some predicate applied to it upon load. In a Yaml file, this might look like:
Data:
  Name: CustomerID
  ReadLoc: some_server.some_table
  WriteLoc: write_server.write_table
  Predicate: "lambda x: x[:4]"
Upon loading and parsing the objects from Yaml, I can use eval to turn the predicate string into a callable lambda function. In this case, it implies that CustomerID is a long string and only the first 4 characters are needed in this particular instance.
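A minimal sketch of that load-then-eval step, assuming the PyYAML package and the field names from the example above:

import yaml   # PyYAML

doc = yaml.safe_load("""
Data:
  Name: CustomerID
  ReadLoc: some_server.some_table
  WriteLoc: write_server.write_table
  Predicate: "lambda x: x[:4]"
""")

predicate = eval(doc["Data"]["Predicate"])   # only acceptable with trusted parameter files
print(predicate("ABCD1234"))                 # ABCD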
Yaml offers some clunky ways to magically invoke object constructors (e.g. using something like !Data in my code above, and then having defined a class for Data in the code that appropriately uses Yaml hooks into the constructor). In fact, one of the biggest criticisms I have of the Yaml magic object construction is that it is effectively like making your whole parameter file into one giant eval statement. And this is very problematic if you need to validate things and if you need flexibility in the way multiple parts of the code absorb multiple parts of the parameter file. It also doesn't lend itself easily to templating with Mako, whereas my approach above makes that easy.
I think this simpler design, which can be parsed with standard YAML tools, is better, and using eval lets me allow the user to pass in whatever arbitrary callable they want.
A couple of notes on why this works in my case:
The users of the code are not Python programmers. They don't have the ability to write their own functions and then just pass a module location, function name, and argument signature (although, putting all that in a parameter file is another way to solve this that wouldn't rely on eval if the consumers can be trusted to write code.)
The users are responsible for their bad lambda functions. I can do some validation that eval works on the passed predicate, and maybe even create some tests on the fly or have a nice failure mode, but at the end of the day I am allowed to tell them that it's their job to supply valid predicates and to ensure the data can be manipulated with simple predicates. If this constraint wasn't in place, I'd have to shuck this for a different system.
The users of these parameter files comprise a small group mostly willing to conform to conventions. If that weren't true, there would be a risk that folks would hijack the predicate field to do all sorts of inappropriate things, and this would be hard to guard against. On big projects it would not be a great idea.
I don't know if my points apply very generally, but I would say that using eval to add flexibility to a parameter file is good if you can guarantee your users are a small group of convention-upholders (a rare feat, I know).
In MATLAB the eval function is useful when functions make use of the name of the input argument via the inputname function. For example, to overload the built-in display function (which is sensitive to the name of the input argument), eval is required. To call the built-in display from an overloaded display you would do:
function display(X)
    eval([inputname(1), ' = X;']);
    eval(['builtin(''display'', ', inputname(1), ');']);
end
In MATLAB there is also evalc. From the documentation:
T = evalc(S) is the same as EVAL(S) except that anything that would normally be written to the command window, except for error messages, is captured and returned in the character array T (lines in T are separated by '\n' characters).
If you still consider this eval, then it is very powerful when dealing with closed source code that displays useful information in the command window and you need to capture and parse that output.
Without explicit type declarations I struggle to figure out how things work. Are there any good rules of thumb or tips for reading Python code better? Thanks!
In spite of the first impression this question gives, I think it is actually a really intelligent one, because it reveals that you are subconsciously aware of something that should interest any Python developer, but which I find generally neglected, and in explanations in particular, if not misunderstood.
I mean that, in my opinion, the foundation of Python is terrifically elegant and intelligent: the data model on which the language was conceived.
In Python's data model there are no variables in the sense of "chunks of memory whose contents can change", contrary to other languages; that is, we do not manage that precise kind of variable in Python.
More precisely, everything in Python is an object, and every object is designated by an identifier, but neither the object nor the identifier is a "variable" in that sense.
That doesn't mean there are no little boxes, the so-called variables of other languages, temporarily holding values that go in and out of them, in the depths of the implementation.
Say an object is designated by the identifier XYA2.
Personally I use this typeface to designate any identifier. An identifier is nothing more than a word written in the code; it is what appears in the code.
Note that this typeface is the one used by stackoverflow.com to represent a code sample inside text, obtained by clicking on the {} button. That's easy to remember.
Now, the object whose name is XYA2 is a real thing, a concrete set of bits lying in the memory of the computer to represent the desired conceptual value that it stands for.
This set of bits is defined in C, the language in which the reference Python interpreter is implemented.
Personally, I write the letters in bold when I want to designate an object.
The object named XYA2 is then, for me, referred to as XYA2.
The identifier is XYA2
It is linked to an underlying and inaccessible pointer that points to the object.
This link is made by means of the symbol table. You will see very few references or allusions to the symbol table in general, here on Stack Overflow or elsewhere. However, it's very important, I think.
The pointer linked to the identifier XYA2 points to the object XYA2
So, XYA2 is directly linked to the pointer and indirectly linked to the object.
Instead of saying "indirectly linked", we say "assigned". An object and its identifier are reciprocally assigned one to the other, but the medium of this link is the underlying pointer.
And now, something important.
Strictly speaking, a variable is a "chunk of memory whose content can change".
I personally make an effort never to use the word 'variable' in any sense other than this one.
The problem is that, because of the use of the word 'variable' in mathematics, this word is very often used indiscriminately by many developers (not all), even when it isn't justified.
Thereby, it is commonly used by nearly everybody to designate the names, a.k.a. the identifiers, in a piece of code. This practice is horribly confusing and should be carefully avoided.
That said, an object in Python is not only an instance of some class; it is above all a concrete set of bits, a set which is not, as far as I know, a variable in the sense of "chunk of memory whose content can change".
Hence my opinion that there aren't variables in Python, since the only entities we can access and manipulate are identifiers and objects.
However, the processes under the hood in an executed Python program use quantities of pointers that are, as far as I know, real variables in the strict sense of this word.
So, in a sense, it could be said that my claim "there are no variables in Python" is false.
It's a matter of point of view.
As a developer in Python, conceptually speaking, I don't manage variables. When I think about an algorithm, I don't think at the level of the pointers, even if I know they exist and that it's very important to know they exist. Since I work not at the level of variables but at the level of Python's data model, I don't see why I should accept the claim that there are variables in a Python program. There are variables at the low level of the machine, and Python is a very high-level language.
Why did I write all this?
1) Because the nature of Python's data model has a host of consequences that can't be understood if the data model isn't known. Among them, some are interesting because they give you incredible possibilities, while others are traps (a well-known example: modifying an element inside a copied list also modifies the element in the original list, when the copy is shallow). That's why it's of the first importance to learn about this data model.
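A short illustration of that trap, and of the difference between an alias, a shallow copy, and a deep copy:

import copy

original = [[1, 2], [3, 4]]
alias = original                  # no copy: two names, one object
shallow = list(original)          # new outer list, but the inner lists are shared
deep = copy.deepcopy(original)    # fully independent copy

original[0].append(99)
print(alias[0])      # [1, 2, 99] -- same object
print(shallow[0])    # [1, 2, 99] -- shared inner list
print(deep[0])       # [1, 2]     -- untouched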
To explore it, I recommend reading these parts of the documentation:
3.1 of objects-values-and-types
4.1 of naming-and-binding
2) To address your perplexity: don't struggle over what happens under the hood.
There's a garbage collector, a reference counter, wagonloads of underlying dictionary-like entities, a thunderous ballet of values in the secrecy of the underlying pointers, and many verifications made by the interpreter. When something doesn't fit, a warning is given in the form of an exception message.
Python has all that machinery under control.
The only concern you must have is to think about the algorithm you want to implement, and for that, knowing the data model is essential.
Welcome to the Python universe.
Warning
I don't consider myself a very skilled Python developer; I'm just an amateur who had a lot of problems before understanding some essential things about Python.
All of the above is my personal view of Python's data model. If any point in this description is incorrect, I will be happy to learn more, provided the correction comes with a developed argument.
But I will underline that this vision of things allows me to understand and answer a lot of tough problems and to build some tricky mechanisms that Python is capable of. So it can't all be wrong.
You should take a look at the PEP 8 documentation. It describes Python formatting and style.
Read up on Duck Typing. One of the purposes of duck typing is that you shouldn't be thinking too much about the type of something anyway; what really matters is whether the variable can be used the way you want.
In Python, you don't need a type declaration because the name you assign is just a pointer to an object, and furthermore it can change at any time.
def my_function():
    return 42

class MyClass:
    pass

a = None
a = 1 + 5
a = my_function()   # calls my_function and binds the returned object to a
a = my_function     # binds the function object itself to a; it could be passed as a parameter
a = MyClass()       # creates an instance (running __init__) and binds that instance to a
a = MyClass         # binds the class object itself to a
This is all valid Python. You could run it sequentially, although changing a name's type like this is frowned upon unless it's totally clear why.
If you know C++11, this is similar to the auto type: the variable's type is decided on the basis of what is assigned to it.
What, in your opinion, is a meaningful docstring? What do you expect to be described there?
For example, consider this Python class's __init__:
def __init__(self, name, value, displayName=None, matchingRule="strict"):
    """
    name - field name
    value - field value
    displayName - nice display name, if empty will be set to field name
    matchingRule - I have no idea what this does, set to strict by default
    """
Do you find this meaningful? Post your good/bad examples for all to know (and a general answer, so it can be accepted).
I agree with "Anything that you can't tell from the method's signature". It might also mean to explain what a method/function returns.
You might also want to use Sphinx (and reStructuredText syntax) for documentation purposes inside your docstrings. That way you can include this in your documentation easily. For an example check out e.g. repoze.bfg which uses this extensively (example file, documentation example).
Another thing you can put in docstrings is doctests. This can make sense especially for module or class docstrings, since you can show how to use the code and have that usage be testable at the same time.
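For example, a docstring with a doctest that the doctest module can run (the function is made up for illustration):

def add(a, b):
    """Return the sum of a and b.

    >>> add(2, 3)
    5
    >>> add("a", "b")
    'ab'
    """
    return a + b

if __name__ == "__main__":
    import doctest
    doctest.testmod()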
From PEP 8:
Conventions for writing good documentation strings (a.k.a. "docstrings") are immortalized in PEP 257.
Write docstrings for all public modules, functions, classes, and methods. Docstrings are not necessary for non-public methods, but you should have a comment that describes what the method does. This comment should appear after the "def" line.
PEP 257 describes good docstring conventions. Note that most importantly, the """ that ends a multiline docstring should be on a line by itself, and preferably preceded by a blank line.
For one liner docstrings, it's okay to keep the closing """ on the same line.
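In code, those two conventions look roughly like this:

def one_liner():
    """Do one small thing, described on a single line."""

def multi_liner():
    """Summarize the function in one line.

    Then go into as much detail as needed. The closing quotes of a
    multi-line docstring go on a line of their own.
    """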
Check out numpy's docstrings for good examples (e.g. http://github.com/numpy/numpy/blob/master/numpy/core/numeric.py).
The docstrings are split into several sections and look like this:
Compute the sum of the elements of a list.

Parameters
----------
foo : sequence of ints
    The list of integers to sum up.

Returns
-------
res : int
    Sum of the elements of foo.

See also
--------
cumsum : compute the cumulative sum of elements
What should go there:
Anything that you can't tell from the method's signature. In this case the only useful bit is: displayName - if empty, will be set to the field name.
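Applied to the __init__ from the question, a more meaningful version might look something like this (the note on matchingRule is deliberately generic, since the original author did not know what it does):

def __init__(self, name, value, displayName=None, matchingRule="strict"):
    """Create a field.

    name         -- field name
    value        -- field value
    displayName  -- human-readable name; defaults to name when omitted
    matchingRule -- how values are compared; the exact rules are specific to
                    your project and are precisely what belongs on this line
    """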
The most striking things I can think of to include in a docstring are the things that aren't obvious. Usually this includes type information or capability requirements, e.g. "Requires a file-like object". In some cases this will be evident from the signature; in other cases not.
Another useful thing you can put into your docstrings is a doctest.
I like to use the documentation to describe in as much detail as possible what the function does, especially the behavior at corner cases (a.k.a. edge cases). Ideally, a programmer using the function should never have to look at the source code - in practice, that means that whenever another programmer does have to look at source code to figure out some detail of how the function works, that detail probably should have been mentioned in the documentation. As Freddy said, anything that doesn't add any detail to the method's signature probably shouldn't be in a documentation string.
Generally, the purpose of adding a docstring at the start of a function is to describe the function: what it does, what it returns, and what its parameters mean. You can add implementation details if required. You can even add details about the author who wrote the code, for the benefit of future developers.