The following works beautifully in Python:
def f(x,y,z): return [x,y,z]
a=[1,2]
f(3,*a)
The elements of a get unpacked as if you had called it like f(3,1,2) and it returns [3,1,2]. Wonderful!
But I can't unpack the elements of a into the first two arguments:
f(*a,3)
Instead of calling that like f(1,2,3), I get "SyntaxError: only named arguments may follow *expression".
I'm just wondering why it has to be that way and if there's any clever trick I might not be aware of for unpacking arrays into arbitrary parts of argument lists without resorting to temporary variables.
As Raymond Hettinger's answer points out, this may change (and indeed has changed) in Python 3, and here is a related proposal, which has been accepted.
Especially related to the current question, here's one of the possible changes to that proposal that was discussed:
Only allow a starred expression as the last item in the exprlist. This would simplify the
unpacking code a bit and allow for the starred expression to be assigned an iterator. This
behavior was rejected because it would be too surprising.
So there are implementation reasons for the restriction with unpacking function arguments but it is indeed a little surprising!
In the meantime, here's the workaround I was looking for, kind of obvious in retrospect:
f(*(a+[3]))
It doesn't have to be that way. It was just a rule that Guido found to be sensible.
In Python 3, the rules for unpacking have been liberalized somewhat:
>>> a, *b, c = range(10)
>>> a
0
>>> b
[1, 2, 3, 4, 5, 6, 7, 8]
>>> c
9
Depending on whether Guido feels it would improve the language, that liberalization could also be extended to function arguments.
See the discussion on extended iterable unpacking for some thoughts on why Python 3 changed the rules.
Thanks to PEP 448 - Additional Unpacking Generalizations,
f(*a, 3)
is now accepted syntax starting from Python 3.5. Likewise you can use the double-star ** for keyword argument unpacking anywhere and either one can be used multiple times.
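For instance, on Python 3.5 or newer (a quick sketch reusing the f from the question):

def f(x, y, z):
    return [x, y, z]

a = [1, 2]
print(f(*a, 3))                       # [1, 2, 3]
print(f(*[1], *[2, 3]))               # several * unpackings in one call: [1, 2, 3]
print(dict(**{'x': 1}, **{'y': 2}))   # ** can likewise be repeated: {'x': 1, 'y': 2}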
f is expecting 3 arguments (x, y, z, in that order).
Suppose L = [1,2]. When you call f(3, *L), what python does behind the scenes, is to call f(3, 1, 2), without really knowing the length of L.
So what happens if L was instead [1,2,3]?
Then, when you call f(3, *L), you'll end up calling f(3,1,2,3), which will be an error because f is expecting exactly 3 arguments and you gave it 4.
Now, suppose L = [1,2]. Look at what happens when you call f:
>>> f(3,*L) # works fine
>>> f(*L) # will give you an error when f(1,2) is called; insufficient arguments
Now, you implicitly know when you call f(*L, 3) that 3 will be assigned to z, but python doesn't know that. It only knows that some number of trailing arguments to f will be defined by the contents of L. Since it doesn't know the value of len(L), it can't make assumptions about whether f(*L, 3) would have the correct number of arguments.
This however, is not the case with f(3,*L). In this case, python knows that all the arguments EXCEPT the first one will be defined by the contents of L.
But if you have named arguments f(x=1, y=2, z=3), then the arguments being assigned to by name will be bound first. Only then are the positional arguments bound. So you do f(*L, z=3). In that case, z is bound to 3 first, and then, the other values get bound.
Now interestingly, if you did f(*L, y=3), that would give you an error for trying to assign to y twice (once with the keyword, once again with the positional)
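A quick sketch of that binding behaviour, using the f and L from above:

def f(x, y, z):
    return [x, y, z]

L = [1, 2]
print(f(*L, z=3))   # [1, 2, 3]: z is bound by keyword, *L fills x and y
# f(*L, y=3)        # TypeError: multiple values for argument 'y'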
Hope this helps
You can use f(*a, z=3). If you use f(*a, 3), Python does not know how to unpack the arguments for you: a provides two of the parameters, and it cannot tell which position the trailing 3 should take.
Nice. This also works for tuples. Don't forget the comma:
a = (1,2)
f(*(a+(3,)))
Related
Presumably both mylist.reverse() and list.reverse(mylist) end up executing reverse_slice in listobject.c via list_reverse_impl or PyList_Reverse. But how do they actually get there? What are the paths from the Python expressions to the C code in that C file? What connects them? And which of those two reverse functions (if any) do they go through?
Update for the bounty: Dimitris' answer (Update 2: I meant the original version, before it was expanded now) and the comments under it explain parts, but I'm still missing a few things and would like to see a comprehensive answer.
How do the two paths from the two Python expressions converge? If I understand things correctly, disassembling and discussing the byte code and what happens to the stack, particularly LOAD_METHOD, would clarify this. (As somewhat done by the comments under Dimitris' answer.)
What is the "unbound method" pushed onto the stack? Is it a "C function" (which one?) or a "Python object"?
How can I tell that it's the list_reverse function in the listobject.c.h file? I don't think the Python interpreter is like "let's look for a file that sounds similar and for a function that sounds similar". I rather suspect that the list type is defined somewhere and is somehow "registered" under the name "list", and that the reverse function is "registered" under the name "reverse" (maybe that's what the LIST_REVERSE_METHODDEF macro does?).
I'm not interested (for this question) about stack frames, argument handling, and similar things (so maybe not much of what goes on inside call_function). Really what interests me here is what I said originally, the path from the Python expressions to that C code in that C file. And preferably how I can find such paths in general.
To explain my motivation: For another question I wanted to know what C code does the work when I call list.reverse(mylist). I was fairly confident I had found it by browsing around and searching for the names. But I want to be more certain and generally understand the connections better.
PyList_Reverse is part of the C-API, you'd call it if you were manipulating Python lists in C, it isn't used in any of the two cases.
These both go through list_reverse_impl (actually list_reverse which wraps list_reverse_impl) which is the C function that implements both list.reverse and list_instance.reverse.
Both calls are handled by call_function in ceval, getting there after the CALL_METHOD opcode generated for them is executed (dis.dis the statements to see it). call_function has gone under a good deal of changes in Python 3.8 (with the introduction of PEP 590) so what happens from there on is probably too big of a subject to get into in a single question.
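If you want to see those opcodes yourself, dis can disassemble source strings directly (a sketch; the exact opcodes shown depend on the CPython version, e.g. 3.7/3.8 emit LOAD_METHOD/CALL_METHOD, newer versions differ):

import dis

dis.dis("l.reverse()")
dis.dis("list.reverse(l)")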
Additional Questions:
How do the two paths from the two python expressions converge? If I understand things correctly, disassembling and discussing the byte code and what happens to the stack, particularly LOAD_METHOD, would clarify this.
Let's start after both expressions compile to their respective bytecode representations:
l = [1, 2, 3, 4]
Case A, for l.reverse() we have:
1 0 LOAD_NAME 0 (l)
2 LOAD_METHOD 1 (reverse)
4 CALL_METHOD 0
6 RETURN_VALUE
Case B, for list.reverse(l) we have:
1 0 LOAD_NAME 0 (list)
2 LOAD_METHOD 1 (reverse)
4 LOAD_NAME 2 (l)
6 CALL_METHOD 1
8 RETURN_VALUE
We can safely ignore the RETURN_VALUE opcode; it doesn't really matter here.
Let's focus on the individual implementations for each op code, namely, LOAD_NAME, LOAD_METHOD and CALL_METHOD. We can see what gets pushed onto the value stack by viewing what operations are called on it. (Note, it is initialized to point to the value stack located inside the frame object for each expression.)
LOAD_NAME:
What is performed in this case is pretty straight-forward. Given our name, l or list in each case (each name is found in co->co_names, a tuple that stores the names we use inside the code object), the steps are:
1. Look for the name inside locals. If found, go to 4.
2. Look for the name inside globals. If found, go to 4.
3. Look for the name inside builtins. If found, go to 4.
4. If found, push the value denoted by the name onto the stack. Else, raise a NameError.
In case A, the name l is found in the globals. In case B, it is found in the builtins. So, after the LOAD_NAME, the stack looks like:
Case A: stack_pointer -> [1, 2, 3, 4]
Case B: stack_pointer -> <type list>
LOAD_METHOD:
First, I should note that this opcode is generated only if an attribute access is performed (i.e. obj.attr). You could also grab a method and call it via a = obj.attr and then a(), but that would result in a CALL_FUNCTION opcode being generated (see further down for a bit more).
After loading the name of the callable (reverse in both cases) we search the object on the top of the stack (either [1, 2, 3, 4] or list) for a method named reverse. This is done with _PyObject_GetMethod, its documentation states:
Return 1 if a method is found, 0 if it's a regular attribute
from __dict__ or something returned by using a descriptor
protocol.
A method is only found in Case A, when we access the attribute (reverse) through an instance of the list object. In case B, the callable is returned after the descriptor protocol is invoked, so the return value is 0 (but we do of-course get the object back!).
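A rough Python-level check of that difference (the exact type names are CPython implementation details):

print(type(list.reverse))         # <class 'method_descriptor'>
print(type([1, 2, 3].reverse))    # <class 'builtin_function_or_method'>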
Here we diverge on the value returned:
In Case A:
SET_TOP(meth);
PUSH(obj); // self
We have a SET_TOP followed by a PUSH: the method replaces the object at the top of the stack, and then the object (self) is pushed back on top of it. In this case, stack_pointer now looks like:
stack_pointer -> [1, 2, 3, 4]
<reverse method of lists>
In Case B we have:
SET_TOP(NULL);
Py_DECREF(obj);
PUSH(meth);
Again a SET_TOP followed by a PUSH. The reference count of obj (i.e list) is decreased because, as far as I can tell, it isn't really needed anymore. In this case, the stack now looks like so:
stack_pointer -> <reverse method of lists>
NULL
For case B, we have an additional LOAD_NAME. Following the previous steps, the stack for Case B now becomes:
stack_pointer -> [1, 2, 3, 4]
<reverse method of lists>
NULL
Pretty similar.
CALL_METHOD:
This doesn't make any modifications to the stack. Both cases result in a call to call_function passing the thread state, the stack pointer and the number of positional arguments (oparg).
The only difference is in the expression used to pass the positional arguments.
For case A we need to account for the implicit self that should be inserted as the first positional argument. Since the op code generated for it doesn't signal that a positional argument has been passed (because none have been explicitly passed):
4 CALL_METHOD 0
we call call_function with oparg + 1 = 0 + 1 = 1 to signal that one positional argument exists on the stack ([1, 2, 3, 4]).
In case B, where we explicitly pass the instance as a first argument, this is accounted for:
6 CALL_METHOD 1
so the call to call_function can immediately pass oparg as the value for the positional arguments.
What is the "unbound method" pushed onto the stack? Is it a "C function" (which one?) or a "Python object"?
It is a Python object that wraps around a C function. The Python object is a method descriptor and the C function it wraps is list_reverse.
All built-in methods and functions are implemented in C. During initialization, CPython initializes all builtins (see list here) and adds wrappers around all the methods. These wrappers (objects) are descriptors that are used to implement Methods and Functions.
When a method is retrieved from a class via one of its instances, it is said to be bound to that instance. This can be seen by peeking at the __self__ attribute assigned to it:
m = [1, 2, 3, 4].reverse
m() # use __self__
print(m.__self__) # [4, 3, 2, 1]
This method can still be called even without the instance that qualifies it. It is bound to that instance. (NOTE: This is handled by the CALL_FUNCTION opcode, not the LOAD/CALL_METHOD ones).
An unbound method is one that is not yet bound to an instance. list.reverse is unbound, it is waiting to be invoked through an instance to bind to it.
Something being unbound doesn't mean that it cannot be called, list.reverse is called just fine if you explicitly pass the self argument yourself as an argument. Remember that methods are just special functions that (among other things) implicitly pass self as a first argument after being bound to an instance.
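A small check of that claim, passing self explicitly:

l = [1, 2, 3, 4]
list.reverse(l)   # unbound: the instance is supplied as the explicit argument
print(l)          # [4, 3, 2, 1]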
How can I tell that it's the list_reverse function in the listobject.c.h file?
This is easy, you can see the list's methods getting initialized in listobject.c. LIST_REVERSE_METHODDEF is simply a macro that, when substituted, adds the list_reverse function to that list. The tp_methods of a list are then wrapped inside function objects as stated previously.
Things might seem complicated here because CPython uses an internal tool, argument clinic, to automate argument handling. This kinda moves definitions around, obfuscating slightly.
This question already has answers here:
Why does adding a trailing comma after an expression create a tuple?
I was looking at code in my course material and had to write a function which adds the value 99 to either a list or tuple. The final code looks like this:
def f(l):
    print(l)
    l += 99,
    print(l)
f([1,2,3])
f((1,2,3))
This was used to show something different, but I'm getting somewhat hung up on the line l += 99,. What this does is create an iterable that contains the 99, and both list and tuple support the simple "addition" of such an object to create a new instance / add a new element.
What I don't really get is what exactly is created using the syntax element,? If I do an assignment like x = 99, then type(x) will be tuple, but if I try to run x = tuple(99) it will fail because 99 is not iterable. So is there:
Some kind of intermediate iterable object created using the syntax element,?
Is there a special function defined that would allow the calling of tuple without an iterable and somehow , is mapped to that?
Edit:
In case anyone wonders why the accepted answer is the one it is: the explanation for my second question made it. I should've been more clear with my question, but that += is what actually got me confused, and this answer includes information on this.
If the left-hand argument of = is a simple name, the type of argument currently bound to that name is irrelevant. tuple(99) fails because tuple's argument is not iterable; it has nothing to do with whether or not x already refers to an instance of tuple.
99, creates a tuple with a single element; parentheses are only necessary to separate it from other uses of commas. For example, foo((99,100)) calls foo with a single tuple argument, while foo(99,100) calls foo with two distinct int arguments.
The syntax element, simply creates an "intermediate" tuple, not some other kind of object (though a tuple is of course iterable).
However, sometimes you need to use parentheses in order to avoid ambiguity. For this reason, you'll often see this:
l += (99,)
...even though the parentheses are not syntactically necessary. I also happen to think that is easier to read. But the parentheses ARE syntactically necessary in other situations, which you have already discovered:
list((99,))
tuple((99,))
set((99,))
You can also do these, since [] makes a list:
list([99])
tuple([99])
set([99])
...but you can't do these, since 99, is not a tuple object in these situations:
list(99,)
tuple(99,)
set(99,)
To answer your second question, no, there is not a way to make the tuple() function receive a non-iterable. In fact this is the purpose of the element, or (element,) syntax - very similar to [] for list and {} for dict and set (since the list, dict, and set functions all also require iterable arguments):
[99] #list
(99,) #tuple - note the comma is required
{99} #set
As discussed in the question comments, it is surprising that you can extend (+=) a list using a tuple object. Note that you cannot do this:
l = [1]
l + (2,) # error
This is inconsistent, so it is probably something that should not have been allowed. Instead, you would need to do one of these:
l += [2]
l += list((2,))
However, fixing it would create problems for people (not to mention remove a ripe opportunity for confusion exploitation by evil computer science professors), so they didn't.
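For what it's worth, the asymmetry exists because += on a list behaves like extend and accepts any iterable, while + insists on another list; a small sketch:

l = [1]
l += (2,)        # fine: roughly equivalent to l.extend((2,))
l += "ab"        # also fine (strings are iterable), which can be surprising
print(l)         # [1, 2, 'a', 'b']
# l = l + (2,)   # TypeError: can only concatenate list (not "tuple") to list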
The tuple constructor requires an iterable (like it says in your error message) so in order to do x = tuple(99), you need to include it in an iterable like a list:
x = tuple([99])
or
x = tuple((99,))
What is considered better programming practice when dealing with multiple objects at a time (but with the option to process just one object)?
A: LOOP INSIDE FUNCTION
The function can be called with one object or a list of objects; the iteration happens inside the function:
class Object:
    def __init__(self, a, b):
        self.var_a = a
        self.var_b = b

    var_a = ""
    var_b = ""


def func(obj_list):
    if type(obj_list) != list:
        obj_list = [obj_list]

    for obj in obj_list:
        # do whatever with an object
        print(obj.var_a, obj.var_b)
obj_list = [Object("a1", "a2"), Object("b1", "b2")]
obj_alone = Object("c1", "c2")
func(obj_list)
func(obj_alone)
B: LOOP OUTSIDE FUNCTION
The function deals with one object only; when dealing with more objects, it must be called multiple times.
class Object:
    def __init__(self, a, b):
        self.var_a = a
        self.var_b = b

    var_a = ""
    var_b = ""


def func(obj):
    # do whatever with an object
    print(obj.var_a, obj.var_b)
obj_list = [Object("a1", "a2"), Object("b1", "b2")]
obj_alone = Object("c1", "c2")
for obj in obj_list:
    func(obj)
func(obj_alone)
I personally like the first one (A) more, because for me it makes cleaner code when calling the function, but maybe it's not the right approach. Is there some method generally better than the other? And if not, what are the cons and pros of each method?
A function should have a defined input and output and follow the single responsibility principle. You need to be able to clearly define your function in terms of "I put foo in, I get bar back". The more qualifiers you need to make in this statement to properly describe your function probably means your function is doing too much. "I put foo in and get bar back, unless I put baz in then I also get bar back, unless I put a foo-baz in then it'll error".
In this particular case, you can pass an object or a list of objects. Try to generalise that to a value or a list of values. What if you want to pass a list as a value? Now your function behaviour is ambiguous. You want the single list object to be your value, but the function treats it as multiple arguments instead.
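A small sketch of that ambiguity, using the same type check as option A (func here is just a stand-in):

def func(obj_or_list):
    if type(obj_or_list) != list:
        obj_or_list = [obj_or_list]
    for obj in obj_or_list:
        print(obj)

func([1, 2])   # did the caller mean one value ([1, 2]) or two values (1 and 2)?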
Therefore, it's trivial to adapt a function which takes one argument to work on multiple values in practice. There's no reason to complicate the function's design by making it adaptable to multiple arguments. Write the function as simple and clearly as possible, and if you need it to work through a list of things then you can loop it through that list of things outside the function.
This might become clearer if you try to give an actual useful name to your function which describes what it does. Do you need to use plural or singular terms? foo_the_bar(bar) does something else than foo_the_bars(bars).
Move loops outside functions (when possible)
Generally speaking, keep loops that do nothing but iterate over the parameter outside of functions. This gives the caller maximum control and assumes the least about how the client will use the function.
The rule of thumb is to use the most minimal parameter complexity that the function needs to do its job.
For example, let's say you have a function that processes one item. You've anticipated that a client might conceivably want to process multiple items, so you changed the parameter to an iterable, baked a loop into the function, and are now returning a list. Why not? It could save the client from writing an ugly loop in the caller, you figure, and the basic functionality is still available -- and then some!
But this turns out to be a serious constraint. Now the caller needs to pack (and possibly unpack, if the function returns a list of results in addition to a list of arguments) that single item into a list just to use the function. This is confusing and potentially expensive on heap memory:
>>> def square(it): return [x ** 2 for x in it]
...
>>> square(range(6)) # you're thinking ...
[0, 1, 4, 9, 16, 25]
>>> result, = square([3]) # ... but the client just wants to square 1 number
>>> result
9
Here's a much better design for this particular function, intuitive and flexible:
>>> def square(x): return x ** 2
...
>>> square(3)
9
>>> [square(x) for x in range(6)]
[0, 1, 4, 9, 16, 25]
>>> list(map(square, range(6)))
[0, 1, 4, 9, 16, 25]
>>> (square(x) for x in range(6))
<generator object <genexpr> at 0x00000166D122CBA0>
>>> all(square(x) % 2 for x in range(6))
False
This brings me to a second problem with the functions in your code: they have a side-effect, print. I realize these functions are just for demonstration, but designing functions like this makes the example somewhat contrived. Functions typically return values rather than simply produce side-effects, and the parameters and return values are often related, as in the above example -- changing the parameter type bound us to a different return type.
When does it make sense to use an iterable argument? A good example is sort -- the smallest unit of operation for a sorting function is an iterable, so the problem of packing and unpacking in the square example above is a non-issue.
Following this logic a step further, would it make sense for a sort function to accept a list (or variable arguments) of lists? No -- if the caller wants to sort multiple lists, they should loop over them explicitly and call sort on each one, as in the second square example.
Consider variable arguments
A nice feature that bridges the gap between iterables and single arguments is support for variable arguments, which many languages offer. This sometimes gives you the best of both worlds, and some functions go so far as to accept either args or an iterable:
>>> max([1, 3, 2])
3
>>> max(1, 3, 2)
3
One reason max is nice as a variable argument function is that it's a reduction function, so you'll always get a single value as output. If it were a mapping or filtering function, the output is always a list (or generator) so the input should be as well.
To take another example, a sort routine wouldn't make much sense with varargs because it's a classically in-place algorithm that works on lists, so you'd need to unpack the list into the arguments with the * operator pretty much every time you invoke the function -- not cool.
There's no real need for a call like sort(1, 3, 4, 2) as there is with max, where the parameters are just as likely to be loose variables as they are a packed iterable. Varargs are usually used when you have a small number of arguments, or the thing you're unpacking is a small pair or tuple-type element, as often the case with zip.
There's definitely a "feel" to when to offer parameters as varargs, an iterable, or a single value (i.e. let the caller handle looping), but as long as you follow the rule of avoiding iterables unless they're essential to the function, it's hard to go wrong.
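As an illustration of the max-style dual interface mentioned earlier, here is one way it is often written (a sketch, not how the built-in max is actually implemented):

def my_max(first, *rest):
    items = (first,) + rest if rest else first   # loose args or a single iterable
    best = None
    for x in items:
        if best is None or x > best:
            best = x
    return best

print(my_max([1, 3, 2]))   # 3
print(my_max(1, 3, 2))     # 3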
As a final tip, try to write your functions with similar contracts to the library functions in your language or the tools you use frequently. These are pretty much always designed well; mimic good design.
If you implement B then you will make it harder for yourself to achieve A.
If you implement A then it isn't too difficult to achieve B. You also have many tools already available to apply this function to a list of arguments (the loop method you described, using something like map, or even a multiprocessing approach if needed)
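A sketch of those tools with a stand-in single-object func (hypothetical, just for illustration):

def func(obj):
    return str(obj).upper()

obj_list = ["a1", "b1", "c1"]

results = [func(obj) for obj in obj_list]   # plain loop / comprehension
results = list(map(func, obj_list))         # map

# For CPU-bound work, a process pool is another option:
# from multiprocessing import Pool
# with Pool() as pool:
#     results = pool.map(func, obj_list)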
Therefore I would choose to implement A, and if it makes things neater or easier in a given case you can think about also implementing B (using A) also so that you have both.
Why does python only allow named arguments to follow a tuple unpacking expression in a function call?
>>> def f(a,b,c):
... print a, b, c
...
>>> f(*(1,2),3)
File "<stdin>", line 1
SyntaxError: only named arguments may follow *expression
Is it simply an aesthetic choice, or are there cases where allowing this would lead to some ambiguities?
i am pretty sure that the reason people "naturally" don't like this is because it makes the meaning of later arguments ambiguous, depending on the length of the interpolated series:
def dangerbaby(a, b, *c):
    hug(a)
    kill(b)
>>> dangerbaby('puppy', 'bug')
killed bug
>>> cuddles = ['puppy']
>>> dangerbaby(*cuddles, 'bug')
killed bug
>>> cuddles.append('kitten')
>>> dangerbaby(*cuddles, 'bug')
killed kitten
you cannot tell from just looking at the last two calls to dangerbaby which one works as expected and which one kills little kitten fluffykins.
of course, some of this uncertainty is also present when interpolating at the end. but the confusion is constrained to the interpolated sequence - it doesn't affect other arguments, like bug.
[i made a quick search to see if i could find anything official. it seems that the * prefix for varargs was introduced in python 0.9.8. the previous syntax is discussed here and the rules for how it worked were rather complex. since the addition of extra arguments "had to" happen at the end when there was no * marker it seems like that simply carried over. finally there's a mention here of a long discussion on argument lists that was not by email.]
I suspect that it's for consistency with the star notation in function definitions, which is after all the model for the star notation in function calls.
In the following definition, the parameter *c will slurp all subsequent non-keyword arguments, so obviously when f is called, the only way to pass a value for d will be as a keyword argument.
def f(a, b, *c, d=1):
    print("slurped", len(c))
(Such "keyword-only parameters" are only supported in Python 3. In Python 2 there is no way to assign values after a starred argument, so the above is illegal.)
So, in a function definition the starred argument must follow all ordinary positional arguments. What you observed is that the same rule has been extended to function calls. This way, the star syntax is consistent for function declarations and function calls.
Another parallelism is that you can only have one (single-)starred argument in a function call. The following is illegal, though one could easily imagine it being allowed.
f(*(1,2), *(3,4))
First of all, it is simple to provide a very similar interface yourself using a wrapper function:
def applylast(func, arglist, *literalargs):
    return func(*(literalargs + arglist))
applylast(f, (1, 2), 3) # equivalent to f(3, 1, 2)
Secondly, enhancing the interpreter to support your syntax natively might add overhead to the very performance-critical activity of function application. Even if it only requires a few extra instructions in compiled code, due to the high usage of those routines, that might constitute an unacceptable performance penalty in exchange for a feature that is not called for all that often and easily accommodated in a user library.
Some observations:
Python processes positional arguments before keyword arguments (f(c=3, *(1, 2)) in your example still prints 1 2 3). This makes sense as (i) most arguments in function calls are positional and (ii) the semantics of a programming language need to be unambiguous (i.e., a choice needs to be made either way on the order in which to process positional and keyword arguments).
If we did have a positional argument to the right in a function call, it would be difficult to define what that means. If we call f(*(1, 2), 3), should that be f(1, 2, 3) or f(3, 1, 2) and why would either choice make more sense than the other?
For an official explanation, PEP 3102 provides a lot of insight on how function definitions work. The star (*) in a function definition indicates the end of positional arguments (section Specification). To see why, consider: def g(a, b, *c, d). There's no way to provide a value for d other than as a keyword argument (positional arguments would be 'grabbed' by c).
It's important to realize what this means: as the star marks the end of positional arguments, that means all positional arguments must be in that position or to the left of it.
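To make that concrete (Python 3):

def g(a, b, *c, d):
    return a, b, c, d

print(g(1, 2, 3, 4, d=5))   # (1, 2, (3, 4), 5)
# g(1, 2, 3, 4, 5)          # TypeError: missing required keyword-only argument: 'd'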
change the order:
def f(c, a, b):
    print(a, b, c)

f(3, *(1, 2))
If you have a Python 3 keyword-only parameter, like
def f(*a, b=1):
    ...
then you might expect something like f(*(1, 2), 3) to set a to (1 , 2) and b to 3, but of course, even if the syntax you want were allowed, it would not, because keyword-only parameters must be keyword-only, like f(*(1, 2), b=3). If it were allowed, I suppose it would have to set a to (1, 2, 3) and leave b as the default 1. So it's perhaps not syntactic ambiguity so much as ambiguity in what is expected, which is something Python greatly tries to avoid.
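For what it's worth, on Python 3.5+ (where PEP 448 does allow the call), that is exactly what happens:

def f(*a, b=1):
    return a, b

print(f(*(1, 2), b=3))   # ((1, 2), 3)
print(f(*(1, 2), 3))     # ((1, 2, 3), 1): the trailing 3 joins a, b keeps its default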
I was wondering why many functions - especially in numpy - utilize tuples as function parameters?
e.g.:
a = numpy.ones( (10, 5) )
What could possibly be the use for that? Why not simply have something such as the following, since clearly the first parameters will always denote the size of the array?
a = numpy.ones(10, 5)
Is it because there might be additional parameters, such as dtype? even if so,
a = numpy.ones(10, 5, dtype=numpy.int)
seems much cleaner to me, than using the convoluted tuple convention.
Thanks for your replies
Because you want to be able to do:
a = numpy.ones(other_array.shape)
and other_array.shape is a tuple. There are a few functions that are not consistent with this and work as you've described, e.g. numpy.random.rand()
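For example:

import numpy as np

other_array = np.zeros((10, 5))
a = np.ones(other_array.shape)   # the shape tuple is passed straight through
b = np.random.rand(10, 5)        # one of the exceptions that takes separate ints
print(a.shape, b.shape)          # (10, 5) (10, 5)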
I think one of the benefits of this is that it can lead to consistency between the various methods. I'm not that familiar with numpy, but the first use case that comes to mind is this: if numpy can return the size of an array, then that size, as one variable, can be passed directly to another numpy method, without having to know anything about the internals of how that size item is built.
The other part of it is that the size of an array may have two components, but it's discussed as one value, not as two.
My guess: this is because in functions like np.ones, shape can be passed as a keyword argument when it's a single value. Try
np.ones(dtype=int, shape=(2, 3))
and notice that you get the same value as you would have gotten from np.ones((2, 3), dtype=int).
[This works in Python more generally:
>>> def f(a, b):
... return a + b
...
>>> f(b="foo", a="bar")
'barfoo'
]
In order for python to tell the difference between foo(1, 2), foo(1, dtype='int') and foo(1, 2, dtype='int') you would have to use keyword-only arguments, which weren't formally introduced until python 3. It is possible to use **kwargs to implement keyword-only arguments in python 2.x, but it's unnatural and does not seem Pythonic. I think for that reason array does not allow array(1, 2), but reshape(1, 2) is OK because reshape does not take any keywords.
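A rough sketch of that **kwargs emulation (a hypothetical foo, not numpy's actual code):

def foo(*shape, **kwargs):
    # 'dtype' acts as a keyword-only argument: it can never be passed positionally
    dtype = kwargs.pop('dtype', float)
    if kwargs:
        raise TypeError('unexpected keyword arguments: %r' % list(kwargs))
    return shape, dtype

print(foo(1, 2))              # shape (1, 2), dtype defaults to float
print(foo(1, 2, dtype=int))   # shape (1, 2), dtype int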