This question already has answers here:
What does the Ellipsis object do?
(14 answers)
Closed 2 years ago.
What's the difference between the pass statement:
def function():
    pass
and 3 dots:
def function():
    ...
Which way is better and faster to execute (CPython)?
pass has been in the language for a very long time and is just a no-op. It is designed to explicitly do nothing.
... is a literal that evaluates to the singleton value Ellipsis, similar to how None is a singleton value. Putting ... as your function body has the same effect as, for example:
def foo():
    1
The ... can be interpreted as a sentinel value where it makes sense from an API-design standpoint, e.g. if you override __getitem__ to do something special when Ellipsis is passed, thereby giving foo[...] a special meaning. It is not specifically meant as a replacement for no-op stubs, though I have seen it used that way and it doesn't hurt either.
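For example, a minimal sketch of that sentinel use, with a hypothetical Grid class (the name and behaviour are just for illustration):
class Grid:
    """Hypothetical container that treats ... as "give me everything"."""

    def __init__(self, cells):
        self._cells = list(cells)

    def __getitem__(self, key):
        # Ellipsis is a singleton, so an identity check is enough.
        if key is Ellipsis:          # same as: if key is ...
            return list(self._cells)
        return self._cells[key]

grid = Grid([1, 2, 3])
print(grid[0])      # 1
print(grid[...])    # [1, 2, 3]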
Not exactly an answer to your question, but perhaps a useful clarification. The pass statement should be used to indicate that a block does nothing (a no-op). The ... (ellipsis) is actually a literal that can be used in different contexts.
An example of ellipsis usage would be with NumPy array indexing: a[..., 0]
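A short sketch of that, assuming NumPy is installed; ... stands in for as many full slices as needed:
import numpy as np

a = np.arange(24).reshape(2, 3, 4)   # shape (2, 3, 4)
# ... expands to ':' on every leading axis, so these are equivalent:
print(np.array_equal(a[..., 0], a[:, :, 0]))   # True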
This question already has answers here:
Return in generator together with yield
(2 answers)
Why can't I use yield with return?
(5 answers)
Closed 3 years ago.
A simple function with just the return keyword returns None:
def abc():
    return

print(abc())
Output: None
Similarly,
def abc():
    return None

print(abc())
Output: None
However, if we use this in a generator
def abc():
    yield 1
    return None

print(abc())
it gives
SyntaxError: 'return' with argument inside generator
whereas
def abc():
    yield 1
    return

print(abc())
gives
<generator object abc at 0x7f97d7052b40>
Why do we have this difference in behavior?
A bare return is useful to break out early from a generator.
Meanwhile return None is just a special case of return <a value>, and before yield from (PEP 380) there was no support or use case for returning a value from a generator. So it was forbidden in order to leave the design space open: by forbidding returning values in generators, Python's designers made it possible to allow it later with new semantics as that would not break existing code.
Had they allowed a return value without doing anything with it, there was a risk that userland code would start to depend on that behaviour, and changing it later would break that code. That's why, from a forward-compatibility perspective, it's often better to restrict APIs as much as possible: everything you leave open, users will take advantage of, and it then becomes risky to change.
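For completeness, on Python 3.3+ (after PEP 380) returning a value from a generator is allowed: the value is attached to the StopIteration exception and is what yield from evaluates to. A minimal sketch:
def inner():
    yield 1
    return "done"                 # allowed since 3.3; becomes StopIteration.value

def outer():
    result = yield from inner()   # result is "done"
    yield result

print(list(outer()))              # [1, 'done']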
This question already has answers here:
How to use Python decorators to check function arguments?
(10 answers)
How do Python functions handle the types of parameters that you pass in?
(14 answers)
Closed 6 years ago.
Currently Python allows you to define functions like
def f(x, A):
I would like to be able to define functions like
def f(x: int, A: list):
because I think it would cut down on programmer errors.
How can I do this?
Right now I am resorting to
def f(x, A):
    assert type(x) == int and type(A) == list, "Invalid parameters to function f"
But I think it might be easier if I could just express this in the function signature itself.
Modifying def itself is possible, but it is not at all the right (or simplest) approach.
The easiest way is to use Python 3. It supports type annotations, though it does not do anything with them by default.
This answer combines Python 3's annotations with a decorator and type-filtering helper functions. There are plenty of decorator-based solutions and recipes for type checking. Do a search for "python annotation type checking" if you want other examples.
Note that the above question includes answers with decorators for Python 2 as well, if you cannot use Python 3 for some reason.
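A rough sketch of the decorator-plus-annotations idea (sketch only: it assumes the annotations are plain classes usable with isinstance, and the enforce_annotations name is hypothetical):
import functools
import inspect

def enforce_annotations(func):
    """Check annotated arguments with isinstance at call time."""
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            annotation = sig.parameters[name].annotation
            if annotation is not inspect.Parameter.empty and not isinstance(value, annotation):
                raise TypeError(f"{name} must be {annotation.__name__}, "
                                f"got {type(value).__name__}")
        return func(*args, **kwargs)
    return wrapper

@enforce_annotations
def f(x: int, A: list):
    return [x] + A

f(1, [2, 3])      # fine
f("1", [2, 3])    # raises TypeError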
This question already has answers here:
What does ** (double star/asterisk) and * (star/asterisk) do for parameters?
(25 answers)
Closed 6 years ago.
This is a basic question. Is there a difference in doing
def foo(*args, **kwargs):
    """standard function that accepts variable length."""
    # do something
foo(v1...vn, nv1=nv1...nvn=nvn)
def foo(arg, kwargs):
    """convention, call with tuple and dict."""
    # do something
mytuple = (v1, ..., vn)
mydict = {'nv1': nv1, ..., 'nvn': nvn}
foo(mytuple, mydict)
I could do the same thing with both, except that the latter has the slightly awkward convention of building a tuple and a dictionary first. But basically, is there a difference? Can I solve the same computational problem of handling an arbitrary number of arguments either way, since a tuple and a dict can take care of that for me anyway?
Or is this more of an idiomatic part of Python, i.e. syntactic sugar for things you would do anyway, where the function handles the packing and unpacking for you?
PS: Not sure why there are so many downvotes, though I agree this is a copy of Why use packed *args/**kwargs instead of passing list/dict? and probably the duplicate information should be corrected. And that question has received upvotes. So am I being downvoted for not being able to find it?
args and kwargs are just names.
What really matters here is the * and **.
In your second example you can only call the function with exactly 2 arguments, whereas in the first example you can call the function with any number of positional and keyword arguments.
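A small sketch of that difference at the call site (the names packed and explicit are just for illustration):
def packed(*args, **kwargs):
    # args arrives as a tuple and kwargs as a dict, built by Python
    # from however many arguments the caller supplies.
    return len(args), sorted(kwargs)

def explicit(arg, kwargs):
    # Exactly two positional parameters; the caller must build the
    # containers themselves.
    return len(arg), sorted(kwargs)

print(packed(1, 2, 3, a=4, b=5))               # (3, ['a', 'b'])
print(explicit((1, 2, 3), {'a': 4, 'b': 5}))   # (3, ['a', 'b'])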
This question already has answers here:
Can you explain closures (as they relate to Python)?
(13 answers)
Closed 6 years ago.
I am trying to understand the background of why the following works:
def part_string(items):
    if len(items) == 1:
        item = items[0]
        def g(obj):
            return obj[item]
    else:
        def g(obj):
            return tuple(obj[item] for item in items)
    return g
my_indexes = (2,1)
my_string = 'ABCDEFG'
function_instance = part_string(my_indexes)
print(function_instance(my_string))
# also works: print(part_string(my_indexes)(my_string))
How come I can pass my_string to the function_instance object even though I already passed my_indexes to part_string() when creating function_instance? Why does Python accept my_string like this?
I guess it has something to do with the following, so more questions here:
What is obj in g(obj)? Can it be called something else, e.g. g(stuff) (like self, which is just a convention)?
What if I want to pass 2 objects to function_instance? How do I refer to them in g(obj)?
Can you recommend some reading on this?
What you're encountering is a closure.
When you call part_string(my_indexes) you're creating a new function; when you later call that function, it uses the variables you originally gave to part_string together with the new arguments given to function_instance.
You may name the parameter of the inner function whatever you want; there is no convention. (obj is used here, but it could just as well be pie. The only common convention is func, for closures that wrap functions, i.e. decorators.)
If you wish to pass two objects to the function, you can simply give g two parameters:
def g(var1, var2):
    ...
Here's some more info regarding closures in Python.
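As a sketch of both points, the captured variable and an inner function with two parameters, here is a hypothetical variant of part_string (the names make_picker and default are mine):
def make_picker(indexes):
    # 'indexes' lives on in the closure of g after make_picker returns.
    def g(obj, default):
        return tuple(obj[i] if i < len(obj) else default for i in indexes)
    return g

pick = make_picker((2, 1))
print(pick('ABCDEFG', '?'))   # ('C', 'B')
print(pick('A', '?'))         # ('?', '?')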
This question already has answers here:
What does Python's eval() do?
(12 answers)
Closed 9 years ago.
What does x = eval(input("hello")) mean? Isn't it supposed to be something like int() instead of eval()? I thought of x as a variable that belongs to some class that determines its type; does eval cover all the known classes like int, float, complex, ...?
eval, like the documentation says, evaluates its argument as if it were Python code. It can be anything that is a valid Python expression: a function call, a class instantiation, a plain value, something malicious...
Rule of thumb: Unless there is no other choice, don't use it. If there is no other choice, don't use it anyway.
eval() will interpret the content typed by the user during the input(). So if the user types x+1 and x equals 1 in the local namespace, it will evaluate to 2 (see below).
An extract from the documentation:
The expression argument is parsed and evaluated as a Python expression (technically speaking, a condition list) using the globals and locals dictionaries as global and local namespace.
>>> x = 1
>>> print(eval('x+1'))
2
It can be dangerous, since the user can type whatever they want, e.g. an expression that imports os and runs a shell command. Don't use it unless you know what you are doing (and even then it can lead to serious security flaws).
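If you just need to parse what the user typed, a sketch of two safer alternatives (explicit conversion, or ast.literal_eval for literals):
import ast

raw = input("hello")

# Option 1: if you expect a single number, convert explicitly
# (raises ValueError on bad input):
x = int(raw)

# Option 2: if you expect a Python literal (number, string, list,
# dict, tuple, ...), ast.literal_eval parses it without executing
# arbitrary code:
# x = ast.literal_eval(raw)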