Python - Printing Map Object Issue

I was playing with a map object and noticed that iterating over it printed nothing if I had called list() on it beforehand. When I only viewed the map object beforehand (without converting it), the printing worked. Why?

map returns an iterator and you can consume an iterator only once.
Example:
>>> a=map(int,[1,2,3])
>>> a
<map object at 0x1022ceeb8>
>>> list(a)
[1, 2, 3]
>>> next(a)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
>>> list(a)
[]
Another example, where I consume the first element and then create a list from the rest:
>>> a=map(int,[1,2,3])
>>> next(a)
1
>>> list(a)
[2, 3]

As per the answer from @newbie, this is happening because you are consuming the map iterator before you use it. (Here is another great answer on this topic from @LukaszRogalski.)
Example 1:
w = [[1,5,7],[2,2,2,9],[1,2],[0]]
m = map(sum,w) # map iterator is generated
list(m) # map iterator is consumed here (output: [13,15,3,0])
for v in m:
    print(v) # there is nothing left in m, so there's nothing to print
Example 2:
w = [[1,5,7],[2,2,2,9],[1,2],[0]]
m = map(sum,w) #map iterator is generated
for v in m:
    print(v) # map iterator is consumed here
# if you try and print again, you won't get a result
for v in m:
    print(v) # there is nothing left in m, so there's nothing to print
So you have two options here: if you only want to iterate the values once, Example 2 will work fine. However, if you want to continue using m as a list later in your code, you need to amend Example 1 like so:
Example 1 (amended):
w = [[1,5,7],[2,2,2,9],[1,2],[0]]
m = map(sum,w) # map iterator is generated
m = list(m) # map iterator is consumed here, but it is converted to a reusable list.
for v in m:
    print(v) # now you are iterating a list, so you should have no issue iterating
             # and reiterating to your heart's content!
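If you genuinely need two independent passes over the mapped values without building the list yourself, `itertools.tee` is another option; a minimal sketch (note that tee buffers the values internally, so for a full double pass a plain list is usually simpler):

```python
from itertools import tee

w = [[1, 5, 7], [2, 2, 2, 9], [1, 2], [0]]
# tee splits one iterator into two independent ones
m1, m2 = tee(map(sum, w))

first_pass = list(m1)   # consumes m1 only
second_pass = list(m2)  # m2 still yields every value
```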

It's because map returns an iterator that, like a generator, can only be consumed once. Here is a clearer example using a generator expression:
>>> gen=(i for i in (1,2,3))
>>> list(gen)
[1, 2, 3]
>>> for i in gen:
...     print(i)
...
>>>
Explanation:
Converting the generator to a list loops through it and exhausts it. When you try to loop again, the generator is already finished, so there are no more elements to yield.
So the best thing to do is to build the list once and reuse it (W here is the list of lists from the examples above):
>>> M=list(map(sum,W))
>>> M
[13, 15, 3, 0]
>>> for i in M:
...     print(i)
...
13
15
3
0

You can either use this:
list(map(sum,W))
or this:
{*map(sum,W)}
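Note that the two differ: the first builds a list, while the second star-unpacks into a set literal, which drops duplicates and does not preserve order. A quick sketch (w extended with an extra sublist to show the deduplication):

```python
w = [[1, 5, 7], [2, 2, 2, 9], [1, 2], [0], [3]]
as_list = list(map(sum, w))  # keeps order and duplicates
as_set = {*map(sum, w)}      # a set: [1, 2] and [3] both sum to 3, kept once
```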

Related

multi line v single line for-loop different results

This is an exercise on Kaggle/Python/Strings and Dictionaries. I wasn't able to solve it so I peeked at the solution and tried to write it in a way I would do it (i.e. not necessarily as sophisticated but in a way I understood). I use Python tutor to visualise what's going on behind the code and understand most things but the for-loop is getting me.
normalised = (token.strip(",.").lower() for token in tokens)
This works and gives me index [0], but if I rewrite it as:
for token in tokens:
    normalised = token.strip(",.").lower()
it doesn't work; it gives me [0, 2] (presumably because "casino" is in "casinoville"). Can someone write the multi-line equivalent: for token in tokens:...?
code is below for a bit more context.
def word_search(doc_list, keyword):
    """
    Takes a list of documents (each document is a string) and a keyword.
    Returns list of the index values into the original list for all documents
    containing the keyword.

    Example:
    doc_list = ["The Learn Python Challenge Casino.", "They bought a car", "Casinoville"]
    >>> word_search(doc_list, 'casino')
    >>> [0]
    """
    indices = []
    counter = 0
    for doc in doc_list:
        tokens = doc.split()
        normalised = (token.strip(",.").lower() for token in tokens)  # <-- the line in question
        if keyword.lower() in normalised:
            indices.append(counter)
        counter += 1
    return indices

#Test - output should be [0]
doc_list = ["The Learn Python Challenge Casino.", "They bought a car", "Casinoville"]
keyword = 'Casino'
print(word_search(doc_list,keyword))
normalised = (token.strip(",.").lower() for token in tokens) returns a generator, not a tuple. Let's explore this:
>>> a = [1,2,3]
>>> [x**2 for x in a]
[1, 4, 9]
This is a list comprehension. The multi-line equivalent is:
>>> a = [1,2,3]
>>> b = []
>>> for x in a:
...     b.append(x**2)
...
>>> print(b)
[1, 4, 9]
Using parentheses instead of square brackets does not return a tuple (as one might suspect naively, as I did earlier), but a generator:
>>> a = [1,2,3]
>>> (x**2 for x in a)
<generator object <genexpr> at 0x0000024BD6E33B48>
We can iterate over this object with next:
>>> a = [1,2,3]
>>> b = (x**2 for x in a)
>>> next(b)
1
>>> next(b)
4
>>> next(b)
9
>>> next(b)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
The generator expression can be written as a multi-line generator function like this:
>>> a = [1,2,3]
>>> def my_iterator(x):
...     for k in x:
...         yield k**2
...
>>> b = my_iterator(a)
>>> next(b)
1
>>> next(b)
4
>>> next(b)
9
>>> next(b)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
In the original example, an in comparison is used. This works for both the list and the generator, but for the generator it only works once:
>>> a = [1,2,3]
>>> b = [x**2 for x in a]
>>> 9 in b
True
>>> 5 in b
False
>>> b = (x**2 for x in a)
>>> 9 in b
True
>>> 9 in b
False
Here is a discussion of the issue with generator reset: Resetting generator object in Python
I hope that clarified the differences between list comprehensions, generators and multi-line loops.
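To answer the remaining part of the question directly: the asker's multi-line version failed because normalised was reassigned to a single string on every pass, keeping only the last token. A sketch of a multi-line equivalent that accumulates the tokens in a list instead:

```python
def word_search(doc_list, keyword):
    indices = []
    for counter, doc in enumerate(doc_list):
        tokens = doc.split()
        # multi-line equivalent of:
        # normalised = (token.strip(",.").lower() for token in tokens)
        normalised = []
        for token in tokens:
            normalised.append(token.strip(",.").lower())
        if keyword.lower() in normalised:
            indices.append(counter)
    return indices

doc_list = ["The Learn Python Challenge Casino.", "They bought a car", "Casinoville"]
print(word_search(doc_list, "Casino"))  # [0]
```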

What is the difference between these lines of code, and what are their uses?

First snippet of code:
for i in list:
    print(i)
Second snippet of code:
print(i for i in list)
what would I use each of them for?
You can see for yourself what the difference is.
The first one iterates over range and then prints integers.
>>> for i in range(4):
...     print(i)
...
0
1
2
3
The second one is a generator expression.
>>> print(i for i in range(4))
<generator object <genexpr> at 0x10b6c20f0>
How does iteration work in a generator? Python generators are a simple way of creating iterators.
Simply speaking, a generator is a function that returns an object (an iterator) which we can iterate over, one value at a time.
>>> g=(i for i in range(4))
>>> print(g)
<generator object <genexpr> at 0x100f015d0>
>>> print(next(g))
0
>>>
>>> print(next(g))
1
>>> print(next(g))
2
>>> print(next(g))
3
>>> print(next(g))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
>>> g=(i for i in range(4))
>>> for i in g:
...     print(i)
...
0
1
2
3
>>> for i in g:
...     print(i)
...
>>>
>>>
In Python 3, you can use iterable (star) unpacking to print the generator's values, if that's what you were going for.
>>> print(*(i for i in range(4)))
0 1 2 3
The first code snippet will iterate over your list and print the value of i for each pass through the loop. In most cases you will want to use something like this to print the values in a list:
my_list = list(range(5))
for i in my_list:
    print(i)
0
1
2
3
4
The second snippet will evaluate the expression inside the print call and print the result. Since the expression i for i in my_list evaluates to a generator expression, the string representation of that generator object is what gets printed. I cannot think of any real-world case where that is the result you would want.
my_list = list(range(5))
print(i for i in my_list)
<generator object <genexpr> at 0x0E9EB2F0>
The first way is just a loop going through a list and printing the elements one by one:
l = [1, 2, 3]
for i in l:
    print(i)
output:
1
2
3
The second way, list comprehension, creates an iterable you can store (list, dictionary, etc.)
l = [i for i in l]
print( l ) #[1, 2, 3]
print( l[0] ) #1
print( l[1:] ) #[2, 3]
output:
[1, 2, 3]
1
[2, 3]
The second is used for doing one thing to all the elements, e.g. turning all the elements from string to int:
l = ['1', '2', '3']
l = [int(i) for i in l] #now the list is [1, 2, 3]
Loops are better for doing a lot of things per element:
for i in range(4):
    #Code
    #more code
    #lots of more code
    pass
In response to the (since edited) answers that suggested otherwise:
the second one is NOT a list comprehension.
the two code snippets do NOT do the same thing.
>>> x = [1,2,3,4,5]
>>> print(i for i in x)
<generator object <genexpr> at 0x000002322FA1BA50>
The second one is printing a generator object because (i for i in x) is a generator. The first snippet simply prints the elements in the list one at a time.
BTW: don't use list as a variable name. It's the name of a built-in type in Python, so when you use it as a variable name you shadow the built-in, and you can no longer call list() to construct a list in that scope.
The second one is a generator expression. Both will give the same result if you convert the generator expression to a list comprehension. In short, comprehensions are concise and efficient, but they are best kept to small blocks of code, generally one or two lines.
For more information, see this link on official python website - https://docs.python.org/3/tutorial/datastructures.html?highlight=list%20comprehensions

Preventing a generator from yielding the same object twice

Assuming I have a generator yielding hashable values (str / int etc.) is there a way to prevent the generator from yielding the same value twice?
Obviously, I'm using a generator so I don't need to unpack all the values first so something like yield from set(some_generator) is not an option, since that will unpack the entire generator.
Example:
# Current result
for x in my_generator():
    print(x)
>>> 1
>>> 17
>>> 15
>>> 1 # <-- This shouldn't be here
>>> 15 # <-- This neither!
>>> 3
>>> ...
# Wanted result
for x in my_no_duplicate_generator():
    print(x)
>>> 1
>>> 17
>>> 15
>>> 3
>>> ...
What's the most Pythonic solution for this?
There is a unique_everseen recipe in the Python itertools module documentation that is roughly equivalent to @NikosOikou's answer.
The main drawback of these solutions is that they rely on the assumption that the elements of the iterable are hashable:
>>> L = [[1], [2,3], [1]]
>>> seen = set()
>>> for e in L: seen.add(e)
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'list'
The more-itertools module refines the implementation to accept unhashable elements, and its documentation gives a tip on how to keep good speed in some cases (disclaimer: I'm the "author" of the tip).
You can check the source code.
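For illustration, here is a rough sketch of that idea (mine, not the actual more-itertools source): keep a set for hashable elements and fall back to a slower list membership test when hashing raises TypeError:

```python
def unique_everseen(iterable):
    seen_set = set()   # fast path for hashable elements
    seen_list = []     # slow fallback for unhashable ones
    for x in iterable:
        try:
            if x not in seen_set:
                seen_set.add(x)
                yield x
        except TypeError:  # x is unhashable, e.g. a list
            if x not in seen_list:
                seen_list.append(x)
                yield x
```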
You can try this:
def my_no_duplicate_generator(iterable):
    seen = set()
    for x in iterable:
        if x not in seen:
            yield x
            seen.add(x)
You can use it by passing your generator as an argument:
for x in my_no_duplicate_generator(my_generator()):
    print(x)

Using map function with external dictionary (global)

I'm trying to improve the running time of my code, so I want to replace for loops with map calls.
For each key in the dictionary I check whether its value is bigger than a specific value, and if so I insert it into a new dictionary under the same key:
My original code is:
dict1={'a':-1,'b':0,'c':1,'d':2,'e':3}
dict_filt = {}
for key in dict1.keys():
    if dict1[key]>1:
        dict_filt[key] = dict1[key]*10
print (dict_filt)
output is: {'d': 20, 'e': 30}
and this works
but when I try with map:
dict1={'a':-1,'b':0,'c':1,'d':2,'e':3}
dict_filt = {}
def for_filter(key):
    if dict1[key]>1:
        dict_filt[key] = dict1[key]*10
map(for_filter, dict1.keys())
print (dict_filt)
I get an empty dictionary
I tried to make it work with lambda:
map(lambda x: for_filter(x), dict1.keys())
and I tried defining the dictionaries as global, but it still doesn't work.
I'd be glad to get some help. I don't need the original dictionary, so if it's simpler to work with just one dictionary, that's still OK.
Use a dictionary-comprehension instead of map:
{k: v * 10 for k, v in dict1.items() if v > 1}
Code:
dict1 = {'a':-1,'b':0,'c':1,'d':2,'e':3}
print({k: v * 10 for k, v in dict1.items() if v > 1})
# {'d': 20, 'e': 30}
map is lazy: if you do not consume the values, the function for_filter is not applied. Since you are using a side effect to populate dict_filt, nothing will happen unless you force the evaluation of the map:
Replace:
map(for_filter, dict1.keys())
By:
list(map(for_filter, dict1)) # you don't need keys here
And you will get the expected result.
But note that this is a misuse of map. You should use a dict comprehension (see @Austin's answer).
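Putting that together, a runnable version of the forced-evaluation fix (kept close to the question's code; the dict comprehension remains the better tool):

```python
dict1 = {'a': -1, 'b': 0, 'c': 1, 'd': 2, 'e': 3}
dict_filt = {}

def for_filter(key):
    # side effect: populate the outer dictionary
    if dict1[key] > 1:
        dict_filt[key] = dict1[key] * 10

list(map(for_filter, dict1))  # forces evaluation over every key
print(dict_filt)  # {'d': 20, 'e': 30}
```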
EDIT: More on map and laziness.
TLDR;
Look at the doc:
map(function, iterable, ...)
Return an iterator that applies function to every item of iterable, yielding the results.
Explanation
Consider the following function:
>>> def f(x):
... print("x =", x)
... return x
...
This function returns its parameter and performs a side effect (print the value). Let's try to apply this function to a simple range with the map function:
>>> m = map(f, range(5))
Nothing is printed! Let's look at the value of m:
>>> m
<map object at 0x7f91d35cccc0>
We were expecting [0, 1, 2, 3, 4] but we got a strange <map object at 0x7f91d35cccc0>. That's laziness: map does not really apply the function but creates an iterator. This iterator returns a value on each next call:
>>> next(m)
x = 0
0
That value is the result of applying the function f to the next element of the mapped iterable (the range). Here, 0 is the returned value and x = 0 is the result of the print side effect. What is important is that this value does not exist before you pull it out of the iterator; hence the side effect is not performed until you do.
If we continue to call next, we'll exhaust the iterator:
...
>>> next(m)
x = 4
4
>>> next(m)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
Another way to get all the values of the iterator is to create a list. That's not a cast, but rather the construction of a new object and the consumption of the old one:
>>> m = map(f, range(5))
>>> list(m)
x = 0
x = 1
x = 2
x = 3
x = 4
[0, 1, 2, 3, 4]
We see that the side effect print is performed for every element of the range, and then the list [0, 1, 2, 3, 4] is returned.
In your case, the function doesn't print anything, but makes an assignment to an external variable dict_filt. The function is not applied unless you consume the map iterator.
I repeat: do not use map (or any list/dict comprehension) to perform a side effect (map comes from the functional world, where side effects do not exist).
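If you still prefer the map/filter style over a comprehension, the side-effect-free approach is to build key/value pairs and feed them to dict(); a sketch equivalent to the comprehension in the other answer:

```python
dict1 = {'a': -1, 'b': 0, 'c': 1, 'd': 2, 'e': 3}

kept = filter(lambda kv: kv[1] > 1, dict1.items())           # keep items with value > 1
dict_filt = dict(map(lambda kv: (kv[0], kv[1] * 10), kept))  # scale each kept value
print(dict_filt)  # {'d': 20, 'e': 30}
```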

map,lambda and append.. why doesn't it work?

So I'm trying to do this.
a = []
map(lambda x: a.append(x),(i for i in range(1,5)))
I know map takes a function, so why doesn't it append to the list? Or is append not a function?
However, printing a shows that it is still empty.
Now, an interesting thing is that this works:
a = []
[a.append(i) for i in range(5)]
print(a)
aren't they basically "saying" the same thing?
It's almost as if that list comprehension became some sort of hybrid list-comprehension function thing
So why doesn't the lambda and map approach work?
I am assuming you are using Python 3.x. The actual reason your code with map() does not work is that in Python 3.x, map() returns a lazy iterator (a map object); unless you iterate over that object, the lambda function is never called. Try doing list(map(...)) and you should see a getting filled.
That being said, what you are doing does not make much sense; you can just use:
a = list(range(5))
append() returns None, so it doesn't make sense to use it with the map function. A simple for loop would suffice:
a = []
for i in range(5):
    a.append(i)
print(a)
Alternatively if you want to use list comprehensions / map function;
a = range(5) # Python 2.x
a = list(range(5)) # Python 3.x
a = [i for i in range(5)]
a = map(lambda i: i, range(5)) # Python 2.x
a = list(map(lambda i: i, range(5))) # Python 3.x
[a.append(i) for i in range(5)]
The above code does the appending too; however, it also creates a list of None values the size of range(5), which is a waste of memory.
>>> a = []
>>> b = [a.append(i) for i in range(5)]
>>> print(a)
[0, 1, 2, 3, 4]
>>> print(b)
[None, None, None, None, None]
The functions map and filter take as first argument a function reference that is called for each element in the sequence (list, tuple, etc.) provided as second argument, and the results of these calls are used to create the resulting list.
The function reduce takes as first argument a function reference that is called with the first 2 elements of the sequence provided as second argument; the result is then used together with the third element in another call, then that result with the fourth element, and so on. A single value results at the end.
>>> list(map(lambda e: e+10, [i for i in range(5)]))
[10, 11, 12, 13, 14]
>>> list(filter(lambda e: e%2, [i for i in range(5)]))
[1, 3]
>>> from functools import reduce
>>> reduce(lambda e1, e2: e1+e2, [i for i in range(5)])
10
Explanations:
map example: adds 10 to each elem of list [0,1,2,3,4]
filter example: keeps only elems that are odd of list [0,1,2,3,4]
reduce example: add first 2 elems of list [0,1,2,3,4], then the result and the third elem of list, then the result and fourth elem, and so on.
This map doesn't work because the append() method returns None and not a list:
>>> a = []
>>> type(a.append(1))
<class 'NoneType'>
To keep it functional, why not use reduce instead?
>>> from functools import reduce
>>> reduce(lambda p, x: p+[x], (i for i in range(5)), [])
[0, 1, 2, 3, 4]
The lambda function will not get triggered unless you wrap the call to map in list(), like below:
list(map(lambda x: a.append(x), (i for i in range(1,5))))
map only returns a lazy iterator, which needs to be iterated in order for the function to be applied. The code above will get the lambda called.
However, this code does not make much sense considering what you are trying to achieve.
