Making a list of input variables within a function - Python

I am using the locals function to make an iterable object populated with the values of a, b, c provided when I call this function. When I iterate through the dictionary that locals returns, I iterate through the variable names rather than their values. But l.values is not iterable; I guess it points to a function, not a list like I was expecting.
Traceback (most recent call last):
file.py on line ?, in getUserOutputs
userOutput = _runaedlz(testInputs[i])
file.py on line ?, in _runaedlz
return extraNumber(*_fArgs_dfasruobxant)
file.py on line 5, in extraNumber
for i in l:
TypeError: 'builtin_function_or_method' object is not iterable
Is there a better way to make a list of these values? I could do it in a more brute-force way, but I'm trying to learn all these built-in functions and what they are good for.
Or is there a way to iterate through the dictionary's values that I'm missing?
Two different ways to accomplish the same thing, yes?
def extraNumber(a, b, c):
    l = locals()
    l = l.values
    print(l)
    for i in l:
        if l.count(i) == 1:
            return i

A potential problem with locals() is that it returns a dictionary, and in Python versions before 3.7 the order of values in a dictionary isn't defined, hence listing the values might permute them relative to the order in which they were passed to the function. If you really want to do something like this, you can get a defined order by using the function's __code__ object's co_varnames attribute. Something like:
def extraNumber(a, b, c):
    loc = locals()  # should be the first line, so that only parameters have values
    vals = list(loc.values())  # order might have been changed
    lvals = extraNumber.__code__.co_varnames
    orderedVals = [loc[x] for x in lvals if x in loc]
    return vals, orderedVals  # both returned for comparison
When I ran it I got:
>>> extraNumber(1,2,3)
([3, 2, 1], [1, 2, 3])
On those older versions, different runs (especially in fresh Python shells) may have different orders in the first returned list, but the second will always be [1, 2, 3].
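A related sketch (not from the original answers): on Python 3, inspect.signature gives the parameter names in their declared order, so you can pull the values out of locals() in that order regardless of any dict-ordering guarantees. The extraNumber name is reused here purely for illustration:
import inspect

def extraNumber(a, b, c):
    loc = locals()  # capture the parameter values before defining anything else
    # inspect.signature lists the parameters in declared order
    names = inspect.signature(extraNumber).parameters
    return [loc[name] for name in names]

print(extraNumber(1, 2, 3))  # [1, 2, 3]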

>>> def extraNumber(a, b, c):
...     for i in locals().values():
...         yield i
...
>>>
>>> for n in extraNumber(5, 10, 7):
...     n
...
10
5
7
I've used a yield statement because the extraNumber function will produce one value for each argument value.
As an alternative, if you want the entire list at once you could use something like this.
>>> def extraNumber(a, b, c):
...     parameterValues = list(locals().values())
...     return parameterValues
...
>>> extraNumber(5, 10, 7)
[10, 5, 7]

Related

Using map function with external dictionary (global)

I'm trying to improve the computing time of my code, so I want to replace for loops with map functions.
For each key in the dictionary I check whether its value is bigger than a specific value and, if so, insert it into a new dictionary under the same key:
My original code is:
dict1 = {'a': -1, 'b': 0, 'c': 1, 'd': 2, 'e': 3}
dict_filt = {}
for key in dict1.keys():
    if dict1[key] > 1:
        dict_filt[key] = dict1[key] * 10
print(dict_filt)
The output is {'d': 20, 'e': 30}, and this works;
but when I try with map:
dict1 = {'a': -1, 'b': 0, 'c': 1, 'd': 2, 'e': 3}
dict_filt = {}
def for_filter(key):
    if dict1[key] > 1:
        dict_filt[key] = dict1[key] * 10
map(for_filter, dict1.keys())
print(dict_filt)
I get an empty dictionary
I tried to make it work with lambda:
map (lambda x: for_filter(x) ,dict1.keys())
or to define the dictionaries as global, but it still doesn't work.
I'll be glad to get some help.
I don't need the original dictionary, so if it's simpler to work on one dictionary, that's still OK.
Use a dictionary-comprehension instead of map:
{k: v * 10 for k, v in dict1.items() if v > 1}
Code:
dict1 = {'a':-1,'b':0,'c':1,'d':2,'e':3}
print({k: v * 10 for k, v in dict1.items() if v > 1})
# {'d': 20, 'e': 30}
map is lazy: if you do not consume the values, the function for_filter is not applied. Since you are using a side effect to populate dict_filt, nothing will happen unless you force the evaluation of the map:
Replace:
map(for_filter, dict1.keys())
By:
list(map(for_filter, dict1)) # you don't need keys here
And you will get the expected result.
But note that this is a misuse of map. You should use a dict comprehension (see @Austin's answer).
EDIT: More on map and laziness.
TLDR;
Look at the doc:
map(function, iterable, ...)
Return an iterator that applies function to every item of iterable, yielding the results.
Explanation
Consider the following function:
>>> def f(x):
...     print("x =", x)
...     return x
...
This function returns its parameter and performs a side effect (printing the value). Let's try to apply this function to a simple range with the map function:
>>> m = map(f, range(5))
Nothing is printed! Let's look at the value of m:
>>> m
<map object at 0x7f91d35cccc0>
We were expecting [0, 1, 2, 3, 4] but we got a strange <map object at 0x7f91d35cccc0>. That's laziness: map does not really apply the function but creates an iterator. This iterator returns, on each next call, a value:
>>> next(m)
x = 0
0
That value is the result of applying the function f to the next element of the mapped iterable (range). Here, 0 is the returned value and x = 0 the result of the print side effect. What is important here is that this value does not exist before you pull it out of the iterator. Hence the side effect is not performed before you pull the value out of the iterator.
If we continue to call next, we'll exhaust the iterator:
...
>>> next(m)
x = 4
4
>>> next(m)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
Another way to get all the values of the iterator is to create a list. That's not a cast, but rather the construction of a new object and the consumption of the old one:
>>> m = map(f, range(5))
>>> list(m)
x = 0
x = 1
x = 2
x = 3
x = 4
[0, 1, 2, 3, 4]
We see that the side effect print is performed for every element of the range, and then the list [0, 1, 2, 3, 4] is returned.
In your case, the function doesn't print anything, but makes an assignment to an external variable dict_filt. The function is not applied unless you consume the map iterator.
I repeat: do not use map (or any list/dict comprehension) to perform a side effect (map comes from the functional world, where side effects do not exist).
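If you do want to stay with map for this, a side-effect-free sketch would be to have the mapped function return key/value pairs and let dict() consume the lazy iterator (filter drops the entries that don't qualify):
dict1 = {'a': -1, 'b': 0, 'c': 1, 'd': 2, 'e': 3}

# map produces (key, value * 10) pairs; filter keeps only values > 1;
# dict() consumes the lazy iterator, so no side effects are needed
dict_filt = dict(map(lambda kv: (kv[0], kv[1] * 10),
                     filter(lambda kv: kv[1] > 1, dict1.items())))
print(dict_filt)  # {'d': 20, 'e': 30}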

What does single(not double) asterisk * means when unpacking dictionary in Python?

Can anyone explain the difference between unpacking a dictionary with a single asterisk and with a double asterisk? You can mention how they differ when used in function parameters, but only if it is relevant here, which I don't think it is. However, there may be some relevance, because they share the same asterisk syntax.
def foo(a, b):
    return a + b
tmp = {1: 2, 3: 4}
foo(*tmp)   # you get 4
foo(**tmp)  # TypeError: keywords must be strings. Why does it bother to check the type of the keyword?
Besides, why are the keys of the dictionary not allowed to be non-strings when passed as function arguments in THIS situation? Are there any exceptions? Why did they design Python this way? Is it because the compiler can't deduce the types here, or something?
When dictionaries are iterated like lists, the iteration goes over their keys; for example,
for key in tmp:
    print(key)
is the same as
for key in tmp.keys():
    print(key)
In this case, unpacking as *tmp is equivalent to *tmp.keys(), ignoring the values. If you want to use the values you can use *tmp.values().
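For example (using the tmp dict from the question; the [*...] list-display syntax needs Python 3.5+):
tmp = {1: 2, 3: 4}

print([*tmp])           # [1, 3]            -> keys, same as [*tmp.keys()]
print([*tmp.values()])  # [2, 4]            -> values
print([*tmp.items()])   # [(1, 2), (3, 4)]  -> key/value pairs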
Double asterisk is used when you define a function with keyword parameters, such as
def foo(a, b):
or
def foo(**kwargs):
Here you can store the parameters in a dictionary and pass it as **tmp. In the first case the keys must be strings matching the names of the parameters defined in the function signature. In the second case you can work with kwargs as a dictionary inside the function.
def foo(a, b):
    return a + b
tmp = {1: 2, 3: 4}
foo(*tmp)   # you get 4
foo(**tmp)
In this case:
foo(*tmp) means foo(1, 3)
foo(**tmp) means foo(1=2, 3=4), which will raise an error since 1 can't be an argument name. Argument names must be strings and (thanks @Alexander Reynolds for pointing this out) must start with an underscore or an alphabetical character: an argument name must be a valid Python identifier. This means you can't even do something like this:
def foo(1=2, 3=4):
    <your code>
or
def foo('1'=2, '3'=4):
    <your code>
See python_basic_syntax for more details.
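To see the string-key requirement directly, one can compare a dict whose string keys match the parameter names with the non-string-key dict from the question; a minimal sketch (the exact TypeError wording varies slightly between Python versions):
def foo(a, b):
    return a + b

print(foo(**{'a': 2, 'b': 3}))  # 5: the string keys match the parameter names

try:
    foo(**{1: 2, 3: 4})         # non-string keys cannot become keyword arguments
except TypeError as exc:
    print(exc)                  # e.g. "keywords must be strings"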
It is an example of Extended Iterable Unpacking.
>>> def add(a=0, b=0):
...     return a + b
...
>>> d = {'a': 2, 'b': 3}
>>> add(**d)  # corresponding to add(a=2, b=3)
5
For a single *,
>>> def add(a=0, b=0):
...     return a + b
...
>>> d = {'a': 2, 'b': 3}
>>> add(*d)  # corresponding to add(a='a', b='b')
'ab'
Learn more here.
I think the ** double asterisk in function parameters and in dictionary unpacking can be understood intuitively this way:
# suppose you have this function
def foo(a, **b):
    print(a)
    for x in b:
        print(x, "...", b[x])

# suppose you call this function in the following form
foo(whatever, m=1, n=2)
# the m=1 syntax actually means "assign the parameter by name", like foo(a=whatever, m=1, n=2)
# so you can also do foo(whatever, **{"m": 1, "n": 2})
# the reason is that **b matches m=1, n=2, something like a pattern-matching mechanism,
# so b is {"m": 1, "n": 2}; note that "m" and "n" are now in string form
# the function is actually this:
def foo(a, **b):  # b = {"m": 1, "n": 2}
    print(a)
    for x in b:  # for x in b.keys(), thanks to @vlizana's answer
        print(x, "...", b[x])
All the syntax makes sense now. And it is the same for a single asterisk. It is only worth noting that if you use a single asterisk to unpack a dictionary, you are actually unpacking it the way you would a list, and only the keys of the dictionary are unpacked.
https://docs.python.org/3/reference/expressions.html#calls
A consequence of this is that although the *expression syntax may appear after explicit keyword arguments, it is processed before the keyword arguments (and any **expression arguments – see below). So:
def f(a, b):
    print(a, b)

f(b=1, *(2,))
f(a=1, *(2,))
# Traceback (most recent call last):
#   File "<stdin>", line 1, in <module>
# TypeError: f() got multiple values for keyword argument 'a'
f(1, *(2,))

Initializing a list and appending in one line

How can I initialize a list, if it is not already initialized, and append to it in one line? For example,
def example(a=None):
    a = a or []
    a.append(1)
    return another_function(a)
How can I combine those two statements in the function into one?
I am looking for something like this, but that does not work:
def example(a):
    a = (a or []).append(1)
    return another_function(a)
EDIT: I don't reference a elsewhere; it is just being passed from function to function. Practically, only the value is important, so it is OK if it is another object with the right value. I also added a default value of None.
def example(a):
    return a + [1] if a else [1]
>>> example([1,2,3])
[1, 2, 3, 1]
>>> example([])
[1]
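Since the EDIT says a fresh list object with the right value is acceptable, another one-line option is to fold the fallback and the append into the call itself; here another_function is just a stand-in for the real function from the question:
def another_function(lst):  # placeholder for the real function from the question
    return lst

def example(a=None):
    # (a or []) falls back to a new list when a is None (or empty),
    # and + [1] builds a new list instead of mutating the caller's list
    return another_function((a or []) + [1])

print(example([1, 2, 3]))  # [1, 2, 3, 1]
print(example())           # [1]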

How to use python generator expressions to create a oneliner to run a function multiple times and get a list output

I am wondering if there is a simple Pythonic way (maybe using generators) to run a function over each item in a list and collect the return values in a list?
Example:
def square_it(x):
    return x * x

x_set = [0, 1, 2, 3, 4]
squared_set = square_it(x for x in x_set)
I notice that when I do a line by line debug on this, the object that gets passed into the function is a generator.
Because of this, I get an error:
TypeError: unsupported operand type(s) for *: 'generator' and 'generator'
I understand that this generator expression created a generator to be passed into the function, but I am wondering if there is a cool way to accomplish running the function multiple times only by specifying an iterable as the argument? (without modifying the function to expect an iterable).
It seems to me that this ability would be really useful to cut down on lines of code, because you would not need to create a loop to run the function and a variable to save the output in a list.
Thanks!
You want a list comprehension:
squared_set = [square_it(x) for x in x_set]
There's a builtin function, map(), for this common problem.
>>> map(square_it, x_set)
[0, 1, 4, 9, 16]  # on Python 2; on Python 3, a lazy map iterator is returned instead of a list.
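On Python 3, since the result of map() is lazy, you can wrap the call in list() to get an actual list; a quick sketch with the question's function:
def square_it(x):
    return x * x

x_set = [0, 1, 2, 3, 4]

# list() forces the lazy map object to be evaluated on Python 3
squared_set = list(map(square_it, x_set))
print(squared_set)  # [0, 1, 4, 9, 16]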
Alternatively, one can use a generator expression, which is memory-efficient but lazy (meaning the values will not be computed now, only when needed):
>>> (square_it(x) for x in x_set)
<generator object <genexpr> at ...>
Similarly, one can also use a list comprehension, which computes all the values upon creation, returning a list.
Additionally, here's a comparison of generator expressions and list comprehensions.
You want to call the square_it function inside the generator, not on the generator.
squared_set = (square_it(x) for x in x_set)
As the other answers have suggested, I think it is best (most "pythonic") to call your function explicitly on each element, using a list or generator comprehension.
To actually answer the question though, you can wrap your function that operates over scalars with a function that sniffs the input and behaves differently depending on what it sees. For example:
>>> import types
>>> def scaler_over_generator(f):
...     def wrapper(x):
...         if isinstance(x, types.GeneratorType):
...             return [f(i) for i in x]
...         return f(x)
...     return wrapper
...
>>> def square_it(x):
...     return x * x
...
>>> square_it_maybe_over = scaler_over_generator(square_it)
>>> square_it_maybe_over(10)
100
>>> square_it_maybe_over(x for x in range(5))
[0, 1, 4, 9, 16]
I wouldn't use this idiom in my code, but it is possible to do.
You could also code it up with a decorator, like so:
>>> @scaler_over_generator
... def square_it(x):
...     return x * x
...
>>> square_it(x for x in range(5))
[0, 1, 4, 9, 16]
That version works if you didn't want/need a handle to the original, undecorated function.
Note that there is a difference between a list comprehension, which returns a list:
squared_set = [square_it(x) for x in x_set]
and a generator expression, which returns a generator you can iterate over:
squared_set = (square_it(x) for x in x_set)

Passing functions which have multiple return values as arguments in Python

So, Python functions can return multiple values. It struck me that it would be convenient (though a bit less readable) if the following were possible.
a = [[1,2],[3,4]]
def cord():
    return 1, 1

def printa(y, x):
    print a[y][x]
printa(cord())
...but it's not. I'm aware that you can do the same thing by dumping both return values into temporary variables, but it doesn't seem as elegant. I could also rewrite the last line as "printa(cord()[0], cord()[1])", but that would execute cord() twice.
Is there an elegant, efficient way to do this? Or should I just see that quote about premature optimization and forget about this?
printa(*cord())
The * here is an argument expansion operator (technically, iterable or argument unpacking): in this context it takes a list or tuple and expands it out so the function sees each list/tuple element as a separate argument.
It's basically the reverse of the * you might use to capture all non-keyword arguments in a function definition:
def fn(*args):
    # args is now a tuple of the non-keyword arguments
    print args
fn(1, 2, 3, 4, 5)
prints (1, 2, 3, 4, 5)
fn(*[1, 2, 3, 4, 5])
does the same.
Try this:
>>> def cord():
...     return (1, 1)
...
>>> def printa(y, x):
...     print a[y][x]
...
>>> a = [[1, 2], [3, 4]]
>>> printa(*cord())
4
The star basically says "use the elements of this collection as positional arguments." You can do the same with a dict for keyword arguments using two stars:
>>> a = {'a' : 2, 'b' : 3}
>>> def foo(a, b):
...     print a, b
...
>>> foo(**a)
2 3
Actually, Python doesn't really return multiple values; it returns one value, which can be multiple values packed into a tuple. That means you need to "unpack" the returned value in order to have multiple values.
A statement like
x,y = cord()
does that, but directly using the return value as you did in
printa(cord())
doesn't, which is why you need to use the asterisk. Perhaps a nice term for it might be "implicit tuple unpacking" or "tuple unpacking without assignment".
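Putting the pieces together, a small self-contained sketch (in Python 3 syntax) of both approaches, explicit unpacking into temporaries versus unpacking at the call site, reusing the names from the question:
a = [[1, 2], [3, 4]]

def cord():
    return 1, 1  # one value: the tuple (1, 1)

def printa(y, x):
    print(a[y][x])

# explicit tuple unpacking into temporaries
y, x = cord()
printa(y, x)      # 4

# "tuple unpacking without assignment" at the call site
printa(*cord())   # 4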
