I have a generator like:
def not_nones(some_iterable):
    for item in some_iterable:
        if item is not None:
            yield item
But since "flat is better than nested", I would like to do this in one line, like:
def not_nones(some_iterable):
    for item in some_iterable:
        yield item if item is not None else None
But this will actually make None an item of the generator.
Is it possible to yield nothing in a one-liner anyway?
You could just return a generator expression:
def not_nones(iterable):
    return (item for item in iterable if item is not None)
Or for a real one-liner:
not_nones = lambda it: (i for i in it if i is not None)
which at this point is getting more into code-golf territory.
But really, there's not much wrong with your current code; it does what it needs to do, in a reasonable way. Your code is what I would have written in this situation.
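Either form is lazy: nothing is filtered until you actually iterate. A quick usage sketch of the generator-expression version:

```python
def not_nones(iterable):
    # lazy: items are filtered only as the result is consumed
    return (item for item in iterable if item is not None)

gen = not_nones([1, None, 2, None, 3])
print(list(gen))  # [1, 2, 3]
```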
You could use itertools.ifilter(). Given the right predicate function it provides exactly the functionality you are implementing here.
Example:
import itertools

# make up data
l = [1, None, 2, None, 3]

# predicate function
not_none = lambda x: x is not None

# filter out None values
not_nones = itertools.ifilter(not_none, l)

print list(not_nones)  # prints [1, 2, 3]
For reference:
https://docs.python.org/2/library/itertools.html#itertools.ifilter
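Note that itertools.ifilter() exists only in Python 2. In Python 3 the built-in filter() is already lazy and plays exactly the same role, so the example translates directly:

```python
# Python 3: the built-in filter() is lazy, like Python 2's itertools.ifilter()
l = [1, None, 2, None, 3]
not_none = lambda x: x is not None

not_nones = filter(not_none, l)  # a lazy filter object
print(list(not_nones))  # prints [1, 2, 3]
```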
Related
I feel this is a noob question, but I haven't used lambda functions much and I couldn't find what I needed online.
I wanted to write a lambda function which takes a series as an input and returns a list without any 'None's in it.
Basically, I want a lambda function for the following function:
def get_path(x):
    toRet = []
    for item in x.tolist():
        if item is not None:
            toRet.append(item)
    return toRet
Is it possible to get an inline lambda function which does what get_path() does?
Yes I do know that I can do the following:
lambda x: get_path(x)
It solves the problem but I would really love to know how to make an inline function for this.
You don't even need lambda functions here.
toRet = [item for item in x.tolist() if item is not None]
You can also use filter for example, which looks neater
toRet = list(filter(None, x.tolist()))
get_path = lambda x: [elem for elem in x.tolist() if elem is not None]
As others have pointed out, you can use filter() but be careful of what function you provide for the filter-check.
If function is None , the identity function is assumed, that is, all elements of iterable that are false are removed.
list(filter(None, something)) won't work correctly because it will filter out False-ish values too, not just None. Example:
>>> mylist = [1, 2, 3, 0, None, '', tuple(), 'last']
>>> list(filter(None, mylist)) # not the expected behaviour
[1, 2, 3, 'last']
OP had an explicit if item is not None check, rather than just if item - so the two are not equivalent. Only None needs to be filtered out. To get the correct filtering behaviour, provide an actual check for the function provided to filter:
>>> list(filter(lambda i: i is not None, mylist))
[1, 2, 3, 0, '', (), 'last']
Replace mylist with x.tolist(). And when you put that into a lambda, it gets messy:
get_path = lambda x: list(filter(lambda i: i is not None, x.tolist()))
Instead of all that, the list comprehension option is better:
get_path = lambda x: [i for i in x.tolist() if i is not None]
with an explicit is not None check.
I wrote a function to compute and map over a list, and it works fine for that. But when I try to use filter to drop integer values less than 5 from the map result and return a list, I get "TypeError: 'NoneType' object is not iterable". Can someone help me with this?
def compute(value):
    if type(value) == int or type(value) == float:
        return value ** 2
    elif type(value) == str:
        return value[::-1]

def map_compute(my_list):
    print(list(map(compute, my_list)))
It works fine up to this point; the problem is the filter part:
def filter_compute(my_list):
    number_list = map_compute(my_list)
    new_list = list(filter(lambda x: x > 5, number_list))
    print(new_list)

filter_compute(['cup', '321', 2, ['x'], 4])
What I want is this:
Example: function call:
filter_compute(['cup', '321', 2, ['x'], 4])
Expected returned output:
['puc', '123', None, 16]
Another question is that is there any other way, for example just use lambda to do all the above functions?
WARNING
Before anything else, there is an important matter to address: Why are you checking types? It should be avoided as much as possible, particularly in a situation as simple as this one. Is your program purely for educational purposes?
You ask: Another question is that is there any other way, for example just use lambda to do all the above functions? The answer to that is yes, there are other ways, and no, lambda is not a good one.
Code Review
Let's look at your code.
def compute(value):
    if type(value) == int or type(value) == float:
        return value ** 2
    elif type(value) == str:
        return value[::-1]
As I mentioned above, the type checking should be avoided. The name of the function and its parameter need improvement, they're generic, nondescript and provide no useful information.
def map_compute(my_list):
    print(list(map(compute, my_list)))
print() writes a value to stdout; you probably want return instead. I also strongly discourage the use of map(). That doesn't even matter here, however, since you can get rid of this function entirely.
def filter_compute(my_list):
    number_list = map_compute(my_list)
    new_list = list(filter(lambda x: x > 5, number_list))
    print(new_list)
Again, print() -> return. filter(), much like map, is unidiomatic. This function, too, seems unnecessary, although that depends on its intended purpose. Indeed, that code will crash on your example list, since you're comparing an int (5) to strings and a list.
Solution(ish)
Now, here is how I would rewrite your program:
def comp_value(val_in):
    if isinstance(val_in, (int, float)):
        return val_in ** 2
    elif isinstance(val_in, str):
        return val_in[::-1]
    else:
        return None

list_1 = ['cup', '321', 2, ['x'], 4]
list_2 = [comp_value(item) for item in list_1]
list_3 = [item for item in list_2 if item > 5]
print(list_3)
The two superfluous functions are replaced with simple list comprehensions. This code still doesn't make much sense and still crashes, of course; the important part is how it is written.
Change
def map_compute(my_list):
    print(list(map(compute, my_list)))
to
def map_compute(my_list):
    return list(map(compute, my_list))
Using print means map_compute returns None, which makes the number_list variable None. That causes your exception: filter() expects an iterable, but gets None instead.
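With the return in place, the whole pipeline runs. A minimal sketch (restricted to numeric input so the x > 5 comparison is valid; the question's mixed list would still raise a TypeError inside filter when a string is compared to 5):

```python
def compute(value):
    if isinstance(value, (int, float)):
        return value ** 2
    elif isinstance(value, str):
        return value[::-1]

def map_compute(my_list):
    return list(map(compute, my_list))  # return, not print

def filter_compute(my_list):
    number_list = map_compute(my_list)
    return list(filter(lambda x: x > 5, number_list))

# numeric-only input: squares are [1, 4, 9, 16], keep those > 5
print(filter_compute([1, 2, 3, 4]))  # [9, 16]
```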
Below is a version using an inline condition and a list comprehension.
It works, but it is not very readable, and I think you should avoid this kind of code.
lst = ['cup', '321', 2, ['x'], 4]
new_lst = [x ** 2 if isinstance(x, (int, float)) else x[::-1] if isinstance(x, str) else None for x in lst]
print(new_lst)
output
['puc', '123', 4, None, 16]
This question already has answers here:
Python for-in loop preceded by a variable [duplicate]
(5 answers)
Closed 8 years ago.
I am reading an article about removing duplicate elements from a list in Python.
There is a function defined as:
def f8(seq):  # Dave Kirby
    # Order preserving
    seen = set()
    return [x for x in seq if x not in seen and not seen.add(x)]
However, I don't really understand the syntax of
[x for x in seq if x not in seen and not seen.add(x)]
What is this syntax? How do I read it?
Thank you.
Firstly, list comprehensions are usually easy to read; here is a simple example:
[x for x in seq if x != 2]
translates to:
result = []
for x in seq:
    if x != 2:
        result.append(x)
The reason you can't read this code is that it is unreadable, hacky code:
def f8(seq):
    seen = set()
    return [x for x in seq if x not in seen and not seen.add(x)]
translates to:
def f8(seq):
    seen = set()
    result = []
    for x in seq:
        if x not in seen and not seen.add(x):  # not seen.add(...) is always True
            result.append(x)
    return result
and relies on the fact that set.add is an in-place method that always returns None so not None evaluates to True.
>>> s = set()
>>> y = s.add(1) # methods usually return None
>>> print s, y
set([1]) None
The reason why the code has been written this way is to sneakily take advantage of Python's list comprehension speed optimizations.
Python methods will usually return None if they modify the data structure (pop is one of the exceptions)
I also noted that the current accepted way of doing this (2.7+) which is more readable and doesn't utilize a hack is as follows:
>>> from collections import OrderedDict
>>> items = [1, 2, 0, 1, 3, 2]
>>> list(OrderedDict.fromkeys(items))
[1, 2, 0, 3]
Dictionary keys must be unique, therefore the duplicates are filtered out.
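On Python 3.7 and later, plain dicts preserve insertion order too, so the same trick works without the import:

```python
# dict.fromkeys() keeps the first occurrence of each key, in insertion order (Python 3.7+)
items = [1, 2, 0, 1, 3, 2]
print(list(dict.fromkeys(items)))  # [1, 2, 0, 3]
```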
It is called a list comprehension; comprehensions provide a syntactically more compact and more efficient way of writing a normal for-loop based solution.
def f8(seq):  # Dave Kirby
    # Order preserving
    seen = set()
    return [x for x in seq if x not in seen and not seen.add(x)]
The above list comprehension is roughly equivalent to:
def f8(seq):
    seen = set()
    lis = []
    for x in seq:
        if x not in seen:
            lis.append(x)
            seen.add(x)
    return lis
The construct is called a list comprehension:
[x for x in seq if some_condition]. In this case the condition is that x isn't already in the resulting list. You can't inspect the result of a list comprehension while you are building it, so it keeps track of the items already added using a set called seen.
The condition here is a bit tricky because it relies on a side effect:
x not in seen and not seen.add(x)
seen.add() always returns None. If the item is in seen,
x not in seen is False, so the and short-circuits.
If the item is not in seen,
x not in seen is True and not seen.add(x) is also True, so the item is included; as a side effect, it is added to the seen set.
While this type of thing can be fun, it's not a particularly clear way to express the intent.
I think the less tricky way is much more readable:
def f8(seq):
    seen = set()
    result = []
    for x in seq:
        if x not in seen:
            result.append(x)
            seen.add(x)
    return result
I am using generators to perform searches in lists like this simple example:
>>> a = [1,2,3,4]
>>> (i for i, v in enumerate(a) if v == 4).next()
3
(Just to frame the example a bit: I am using much longer lists than the one above, and the entries are a little more complicated than int. I do it this way so the entire list won't be traversed each time I search it.)
Now if I instead change that to v == 666, it raises StopIteration because there is no 666 entry in a.
How can I make it return None instead? I could of course wrap it in a try ... except clause, but is there a more pythonic way to do it?
If you are using Python 2.6+ you should use the next built-in function, not the next method (which was replaced with __next__ in 3.x). The next built-in takes an optional default argument to return if the iterator is exhausted, instead of raising StopIteration:
next((i for i, v in enumerate(a) if v == 666), None)
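Both the hit and the miss cases then look like this:

```python
a = [1, 2, 3, 4]

# value present: its index comes back
print(next((i for i, v in enumerate(a) if v == 4), None))  # 3

# value absent: the default is returned instead of raising StopIteration
print(next((i for i, v in enumerate(a) if v == 666), None))  # None
```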
You can chain the generator with (None,):
from itertools import chain
a = [1,2,3,4]
print chain((i for i, v in enumerate(a) if v == 6), (None,)).next()
But note that a.index(2) will not traverse the full list either; once 2 is found, the search stops. You can test this:
>>> timeit.timeit("a.index(0)", "a=range(10)")
0.19335955439601094
>>> timeit.timeit("a.index(99)", "a=range(100)")
2.1938486138533335
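The difference is what happens on a miss: list.index raises ValueError when the value is absent, whereas next() with a default does not. A small comparison sketch:

```python
a = [1, 2, 3, 4]

# list.index stops at the first match...
print(a.index(2))  # 1

# ...but raises ValueError when the value is absent
try:
    a.index(666)
except ValueError:
    print("not found")

# next() with a default avoids the exception entirely
print(next((i for i, v in enumerate(a) if v == 666), None))  # None
```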
What I'm trying to do, is, given a list with an arbitrary number of other nested lists, recursively descend through the last value in the nested lists until I've reached the maximum depth, and then append a value to that list. An example might make this clearer:
>>> nested_list1 = [1, 2, 3, [4, 5, 6]]
>>> last_inner_append(nested_list1, 7)
[1, 2, 3, [4, 5, 6, 7]]
>>> nested_list2 = [1, 2, [3, 4], 5, 6]
>>> last_inner_append(nested_list2, 7)
[1, 2, [3, 4], 5, 6, 7]
The following code works, but it seems excessively tricky to me:
def add_to_inner_last(nested, item):
    nest_levels = [nested]
    try:
        nest_levels.append(nested[-1])
    except IndexError:  # the empty list case
        nested.append(item)
        return
    while type(nest_levels[-1]) == list:
        try:
            nest_levels.append(nest_levels[-1][-1])
        except IndexError:  # the empty inner list case
            nest_levels[-1].append(item)
            return
    nest_levels[-2].append(item)
    return
Some things I like about it:
It works
It handles the cases of strings at the end of lists, and the cases of empty lists
Some things I don't like about it:
I have to check the type of objects, because strings are also indexable
The indexing system feels too magical; I won't be able to understand this tomorrow
It feels excessively clever to use the fact that appending to a referenced list affects all references
Some general questions I have about it:
At first I was worried that appending to nest_levels was space inefficient, but then I realized that this is probably just a reference, and a new object is not created, right?
This code is purely side effect producing (It always returns None). Should I be concerned about that?
Basically, while this code works (I think...), I'm wondering if there's a better way to do this. By better I mean clearer or more pythonic. Potentially something with more explicit recursion? I had trouble defining a stopping point or a way to do this without producing side effects.
Edit:
To be clear, this method also needs to handle:
>>> last_inner_append([1,[2,[3,[4]]]], 5)
[1,[2,[3,[4,5]]]]
and:
>>> last_inner_append([1,[2,[3,[4,[]]]]], 5)
[1,[2,[3,[4,[5]]]]]
How about this:
def last_inner_append(x, y):
    try:
        if isinstance(x[-1], list):
            last_inner_append(x[-1], y)
            return x
    except IndexError:
        pass
    x.append(y)
    return x
This function returns the deepest inner list, together with its depth:
def get_deepest_list(lst, depth=0):
    deepest_list = lst
    max_depth = depth
    for li in lst:
        if type(li) == list:
            tmp_deepest_list, tmp_max_depth = get_deepest_list(li, depth + 1)
            if max_depth < tmp_max_depth:  # change to <= to get the rightmost inner list
                max_depth = tmp_max_depth
                deepest_list = tmp_deepest_list
    return deepest_list, max_depth
And then use it as:
def add_to_deepest_inner(lst, item):
    inner_lst, depth = get_deepest_list(lst)
    inner_lst.append(item)
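Note that this targets the deepest nested list rather than the last one, so it only matches the question's behaviour when the last inner list is also the deepest. A quick check with the question's first example (the functions are reproduced here to keep the snippet self-contained):

```python
def get_deepest_list(lst, depth=0):
    # returns (deepest nested list, its depth); leftmost wins on ties
    deepest_list = lst
    max_depth = depth
    for li in lst:
        if isinstance(li, list):
            tmp_deepest_list, tmp_max_depth = get_deepest_list(li, depth + 1)
            if max_depth < tmp_max_depth:
                max_depth = tmp_max_depth
                deepest_list = tmp_deepest_list
    return deepest_list, max_depth

def add_to_deepest_inner(lst, item):
    inner_lst, depth = get_deepest_list(lst)
    inner_lst.append(item)

nested = [1, 2, 3, [4, 5, 6]]
add_to_deepest_inner(nested, 7)
print(nested)  # [1, 2, 3, [4, 5, 6, 7]]
```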
Here is my take:
def last_inner_append(cont, el):
    if type(cont) == list:
        if not len(cont) or type(cont[-1]) != list:
            cont.append(el)
        else:
            last_inner_append(cont[-1], el)
I think it's nice and clear, and passes all your tests.
It is also pure side-effect; if you want to change this, I suggest you go with BasicWolf's approach and create a 'selector' and an 'update' function, where the latter uses the former.
It's the same recursion scheme as Phil H's, but handles empty lists.
I don't think there is a good way around the two type tests, however you approach them (e.g. with 'type' or checking for 'append'...).
You can test whether append is callable, rather than using try/except, and recurse:
def add_to_inner_last(nested, item):
    if callable(getattr(nested, 'append', None)):
        if callable(getattr(nested[-1], 'append', None)):
            return add_to_inner_last(nested[-1], item)
        else:
            nested.append(item)
            return True
    else:
        return False
It's slightly annoying to have to have two callable tests, but the alternative is to pass a reference to the parent as well as the child.
def last_inner_append(sequence, element):
    def helper(tmp, seq, elem=element):
        if type(seq) != list:
            tmp.append(elem)
        elif len(seq):
            helper(seq, seq[-1])
        else:
            seq.append(elem)
    helper(sequence, sequence)
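For reference, a self-contained run of this version against one of the question's test cases (the function is reproduced so the snippet runs on its own):

```python
def last_inner_append(sequence, element):
    def helper(tmp, seq, elem=element):
        if type(seq) != list:
            tmp.append(elem)       # last element is not a list: append to its parent
        elif len(seq):
            helper(seq, seq[-1])   # descend into the last element
        else:
            seq.append(elem)       # empty inner list: append here
    helper(sequence, sequence)

nested = [1, [2, [3, [4]]]]
last_inner_append(nested, 5)
print(nested)  # [1, [2, [3, [4, 5]]]]
```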