I want to convert each element of a list to a tuple, like the following:
l = ['abc','xyz','test']
and convert it to a list of tuples:
newl = [('abc',),('xyz',),('test',)]
Actually, I have a dict whose keys are tuples like this, so I need them in this form for lookups.
You can use a list comprehension:
>>> l = ['abc','xyz','test']
>>> [(x,) for x in l]
[('abc',), ('xyz',), ('test',)]
>>>
Or, if you are on Python 2.x, you could just use zip:
>>> # Python 2.x interpreter
>>> l = ['abc','xyz','test']
>>> zip(l)
[('abc',), ('xyz',), ('test',)]
>>>
However, the previous solution will not work in Python 3.x because zip now returns a zip object. Instead, you would need to explicitly turn the result into a list by passing it to list():
>>> # Python 3.x interpreter
>>> l = ['abc','xyz','test']
>>> zip(l)
<zip object at 0x020A3170>
>>> list(zip(l))
[('abc',), ('xyz',), ('test',)]
>>>
I personally prefer the list comprehension over this last solution though.
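Since the question mentions the tuples are needed as dictionary keys, here is a rough sketch of how the converted keys could be used for lookups (the actual dict is not shown in the question, so the keys and values below are made up):
>>> d = {('abc',): 1, ('xyz',): 2, ('test',): 3}  # hypothetical dict keyed by 1-tuples
>>> l = ['abc', 'xyz', 'test']
>>> [(x,) in d for x in l]
[True, True, True]
>>> d[('xyz',)]
2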
Just do this:
newl = [(i, ) for i in l]
In Java we have HashSet<Integer>; I need a similar structure in Python so I can use contains like below:
A = [1, 2, 3]
S = set()
S.add(2)
for x in A:
    if S.contains(x):
        print "Example"
Could you please help?
Just use a set:
>>> l = set()
>>> l.add(1)
>>> l.add(2)
>>> 1 in l
True
>>> 34 in l
False
The same works for lists:
>>> ll = [1,2,3]
>>> 2 in ll
True
>>> 23 in ll
False
Edit:
Note @bholagabbar's comment below: the time complexity of in checks on lists and tuples is O(n) on average (see the Python docs here), whereas for sets it is O(1) on average (worst case also O(n), but that is very uncommon and typically only happens if __hash__ is implemented poorly).
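If you want to see the difference yourself, here is a rough timing sketch with timeit (Python 3.5+, since it uses the globals argument; the absolute numbers depend on your machine and are only illustrative):
import timeit

data_list = list(range(100000))
data_set = set(data_list)

# membership test for an element near the end: O(n) for the list, O(1) for the set
print(timeit.timeit('99999 in data_list', globals=globals(), number=1000))
print(timeit.timeit('99999 in data_set', globals=globals(), number=1000))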
In Python, there is a built-in type, set.
The major difference from Java's HashSet is that a Python set is not typed,
i.e., it is legal to have a set {'2', 2} in Python.
Out of the box, the set class does not provide a contains() method.
We typically use the Python keyword in to do what you want, i.e.,
A = [1, 2, 3]
S = set()
S.add(2)
for x in A:
    if x in S:
        print("Example")
If that does not work for you, you can invoke the special method __contains__(), which is NOT encouraged.
A = [1, 2, 3]
S = set()
S.add(2)
for x in A:
    if S.__contains__(x):
        print("Example")
I am refreshing my Python (2.7) and I am discovering iterators and generators.
As I understand it, they are an efficient way of iterating over values without consuming too much memory.
So the following code does some kind of logical indexing on a list:
removing the values of the list L that trigger a False conditional statement, represented here by the function f.
I am not satisfied with my code because I feel it is not optimal, for three reasons:
I read somewhere that it is better to use a for loop than a while loop.
However, with the usual for i in range(10), I can't modify the value of i inside the loop, because the iteration ignores the change.
Logical indexing is pretty strong in matrix-oriented languages, and there should be a way to do the same in python (by hand granted, but maybe better than my code).
Third reason is just that i want to use generator/iterator on this example to help me understand.
TL;DR : Is this code a good pythonic way to do logical indexing ?
# f: string -> bool
def f(s):
    return 'c' in s

L = ['', 'a', 'ab', 'abc', 'abcd', 'abcde', 'abde']  # example
length = len(L)
i = 0
while i < length:
    if not f(L[i]):  # f is a conditional statement (input string, output bool)
        del L[i]
        length -= 1  # cut and push leftwise
    else:
        i += 1

print 'Updated list is :', L
print length
This code has a few problems, but the main one is that you must never modify a list you're iterating over. Rather, you create a new list from the elements that match your condition. This can be done simply in a for loop:
newlist = []
for item in L:
    if f(item):
        newlist.append(item)
which can be shortened to a simple list comprehension:
newlist = [item for item in L if f(item)]
It looks like filter() is what you're after:
newlist = filter(f, L)
filter() goes through an iterable and only keeps the items for which the predicate returns True. In your case the predicate is f itself: your loop deletes the items for which not f(...) holds, i.e. it keeps the ones where f(...) is True.
If you instead wanted the opposite filtering (dropping the items that contain 'c'), you could invert the condition inside f:
def f(s):
    return 'c' not in s
newlist = filter(f, L)
See: https://docs.python.org/2/library/functions.html#filter
Never modify a list with del, pop or other methods that mutate the length of the list while iterating over it. Read this for more information.
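As a small illustration of why (a sketch using a shortened version of the question's list): deleting during a forward iteration shifts the remaining indices, so elements get skipped.
>>> L = ['', 'a', 'ab', 'abde']
>>> for item in L:
...     if 'c' not in item:
...         L.remove(item)
...
>>> L
['a', 'abde']  # 'a' and 'abde' survived only because they were skipped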
The "pythonic" way to filter a list is to use reassignment and either a list comprehension or the built-in filter function:
List comprehension:
>>> [item for item in L if f(item)]
['abc', 'abcd', 'abcde']
i want to use generator/iterator on this example to help me understand
The for item in L part is implicitly making use of the iterator protocol. Python lists are iterable, and iter(somelist) returns an iterator.
>>> from collections import Iterable, Iterator
>>> isinstance([], Iterable)
True
>>> isinstance([], Iterator)
False
>>> isinstance(iter([]), Iterator)
True
__iter__ is not only being called when using a traditional for-loop, but also when you use a list comprehension:
>>> class mylist(list):
... def __iter__(self):
... print('iter has been called')
... return super(mylist, self).__iter__()
...
>>> m = mylist([1,2,3])
>>> [x for x in m]
iter has been called
[1, 2, 3]
Filtering:
>>> filter(f, L)
['abc', 'abcd', 'abcde']
In Python3, use list(filter(f, L)) to get a list.
Of course, to filter a list, Python needs to iterate over it, too:
>>> filter(None, mylist())
iter has been called
[]
"The python way" to do it would be to use a generator expression:
# list comprehension
L = [l for l in L if f(l)]
# alternative generator comprehension
L = (l for l in L if f(l))
It depends on your context if a list or a generator is "better" (see e.g. this so question). Because your source data is coming from a list, there is no real benefit of using a generator here.
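To see the difference, here is a minimal sketch (assuming the question's f) showing that the generator produces its items lazily and can only be consumed once:
>>> L = ['', 'a', 'ab', 'abc', 'abcd', 'abcde', 'abde']
>>> gen = (l for l in L if f(l))
>>> next(gen)       # items are produced one at a time
'abc'
>>> list(gen)       # consuming the rest exhausts the generator
['abcd', 'abcde']
>>> list(gen)       # a second pass yields nothing
[]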
For simply deleting elements, especially if the original list is no longer needed, just iterate backwards:
Python 2.x:
for i in xrange(len(L) - 1, -1, -1):
if not f(L[i]):
del L[i]
Python 3.x:
for i in range(len(L) - 1, -1, -1):
if not f(L[i]):
del L[i]
By iterating from the end, the "next" index does not change after a deletion, so a for loop is possible. Note that you should use xrange in Python 2 (or range in Python 3), which produce the indices lazily, to save memory*.
In cases where you must iterate forward, use your given solution above.
*Note that Python 2's xrange is limited to indices that fit in a C long, so it can break for very large ranges (around 2 ** 31 elements or more on 32-bit builds). Python 3's range, as well as Python 2's less memory-efficient range, do not have this limitation.
How can I iterate through items of two dictionaries in a single loop? This is not working:
for word, cls in self.spam.items() and self.ham.items():
    pass
Use itertools.chain:
from itertools import chain
for word, cls in chain(self.spam.items(), self.ham.items()):
    print(word, cls)
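For example, with two small stand-in dicts (self.spam and self.ham are not shown in the question, so these values are made up; before Python 3.7 the iteration order may differ):
>>> from itertools import chain
>>> spam = {'viagra': 'spam', 'winner': 'spam'}
>>> ham = {'meeting': 'ham'}
>>> for word, cls in chain(spam.items(), ham.items()):
...     print(word, cls)
...
viagra spam
winner spam
meeting ham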
In Python 2, dict.items() returns a list of (key, value) tuples, so you can simply concatenate the two lists. In Python 3, it returns a view object, so each view first needs to be converted to a list. The following is therefore another way to do it:
>>> d1 = {1:'ONE',2:'TWO'}
>>> d2 = {3:'THREE', 4:'FOUR'}
>>> dict_chained = d1.items() + d2.items()              # Python 2
>>> dict_chained = list(d1.items()) + list(d2.items())  # Python 3
>>> for x, y in dict_chained:
...     print x, y
...
1 ONE
2 TWO
3 THREE
4 FOUR
>>>
So I'm trying to do this.
a = []
map(lambda x: a.append(x),(i for i in range(1,5)))
I know map takes a function, so why doesn't it append to the list? Or is append not a function?
However, printing a shows that it is still empty.
Now, an interesting thing is that this works:
a = []
[a.append(i) for i in range(5)]
print(a)
aren't they basically "saying" the same thing?
It's almost as if that list comprehension became some sort of hybrid list-comprehension function thing
So why doesn't the lambda and map approach work?
I am assuming you are using Python 3.x. The actual reason your code with map() does not work is that in Python 3.x, map() returns an iterator; unless you iterate over the object returned by map(), the lambda function is never called. Try doing list(map(...)), and you should see a getting filled.
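A minimal sketch of that behaviour (Python 3):
>>> a = []
>>> m = map(lambda x: a.append(x), range(1, 5))
>>> a               # nothing has run yet
[]
>>> list(m)         # forcing iteration finally calls the lambda
[None, None, None, None]
>>> a
[1, 2, 3, 4]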
That being said, what you are doing does not make much sense. You can just use:
a = list(range(5))
append() returns None so it doesn't make sense using that in conjunction with map function. A simple for loop would suffice:
a = []
for i in range(5):
    a.append(i)
print a
Alternatively if you want to use list comprehensions / map function;
a = range(5) # Python 2.x
a = list(range(5)) # Python 3.x
a = [i for i in range(5)]
a = map(lambda i: i, range(5)) # Python 2.x
a = list(map(lambda i: i, range(5))) # Python 3.x
[a.append(i) for i in range(5)]
The above code does the appending too; however, it also creates a list of None values the size of range(5), which is simply a waste of memory.
>>> a = []
>>> b = [a.append(i) for i in range(5)]
>>> print a
[0, 1, 2, 3, 4]
>>> print b
[None, None, None, None, None]
The functions map and filter take as their first argument a function reference that is called for each element of the sequence (list, tuple, etc.) provided as the second argument; for map the results of these calls build the resulting list, and for filter they decide which elements are kept.
The function reduce takes as its first argument a function reference that is called with the first two elements of the sequence provided as the second argument; the result is then used together with the third element in another call, then that result with the fourth element, and so on. A single value results at the end.
>>> map(lambda e: e+10, [i for i in range(5)])
[10, 11, 12, 13, 14]
>>> filter(lambda e: e%2, [i for i in range(5)])
[1, 3]
>>> reduce(lambda e1, e2: e1+e2, [i for i in range(5)])
10
Explanations:
map example: adds 10 to each elem of list [0,1,2,3,4]
filter example: keeps only elems that are odd of list [0,1,2,3,4]
reduce example: add first 2 elems of list [0,1,2,3,4], then the result and the third elem of list, then the result and fourth elem, and so on.
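The reduce call above is equivalent to the following explicit accumulation (a sketch, just to make the order of the calls visible; Python 2 print syntax to match the examples above):
acc = 0                    # reduce starts with the first element of [0, 1, 2, 3, 4]
for e in [1, 2, 3, 4]:     # then walks through the rest
    acc = acc + e          # each step is one call to lambda e1, e2: e1 + e2
print acc                  # 10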
This map doesn't work because the append() method returns None and not a list:
>>> a = []
>>> type(a.append(1))
<class 'NoneType'>
To keep it functional why not use reduce instead?
>>> from functools import reduce
>>> reduce(lambda p, x: p+[x], (i for i in range(5)), [])
[0, 1, 2, 3, 4]
The lambda function will not get triggered unless you wrap the call to map() in list(), like below:
list(map(lambda x: a.append(x),(i for i in range(1,5))))
map only returns an iterator, which needs to be consumed in order to produce the results; the code above gets the lambda called.
However, this code does not make much sense considering what you are trying to achieve.
I'm just trying to figure out what is happening in this Python code. I was trying to use this answer here, and so I was tinkering with the console and my list elements just vanished.
What I was doing was loading the lines of a file into a list, then trying to remove the newline characters from each element using the lambda approach from the other answer.
Can anyone help me understand why the list became empty?
>>> x = ['a\n','b\n','c\n']
>>> x
['a\n', 'b\n', 'c\n']
>>> x = map(lambda s: s.strip(), x)
>>> x
<map object at 0x00000000035901D0>
>>> y = x
>>> y
<map object at 0x00000000035901D0>
>>> x
<map object at 0x00000000035901D0>
>>> list(x)
['a', 'b', 'c']
>>> x = list(x)
>>> x
[]
>>> y
<map object at 0x00000000035901D0>
>>> list(y)
[]
>>>
You are using Python 3, so map returns a map object, which is an iterator. You can only iterate over it once, so convert it to a list if you need to go over the items more than once (or if you need to look things up by index, etc.).
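The one-shot behaviour is easy to reproduce (a minimal sketch with the same data as the question):
>>> x = map(lambda s: s.strip(), ['a\n', 'b\n', 'c\n'])
>>> list(x)   # first pass consumes the iterator
['a', 'b', 'c']
>>> list(x)   # nothing is left for a second pass
[]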
Use
x = list(map(lambda s: s.strip(), x))
or, better, a list comprehension:
x = [s.strip() for s in x]