Is there any way to optimize these two functions?
first function:
def searchList(list_, element):
    for i in range(0, len(list_)):
        if list_[i] == element:
            return True
    return False
second function:
return_list = []
for x in list_search:
    if searchList(list_users, x) == False:
        return_list.append(x)
Yes:
return_list = [x for x in list_search if x not in list_users]
The first function basically checks for membership, in which case you could use the in keyword. The second function can be reduced to a list comprehension that filters elements out of list_search based on your condition.
For the first function:
def searchList(list_, element):
    return element in list_
You can write it in one line:
searchList = lambda x,y: y in x
For the second function, use a list comprehension as shown in the other answer.
What you are doing with your two functions is building the complement, as ozgur pointed out.
Using sets is the easiest approach here:
>>> set([2, 2, 2, 3, 3, 4]) - set([1, 2, 2, 4, 5])
set([3])
Your list_search would be the first list and your list_users the second list.
The only difference is that each new user appears only once in the result, no matter how often it occurs in list_search.
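If you need to keep duplicates and the original order of list_search, a common pattern (just a sketch, not part of the original answers) is to build a set of list_users once and filter against it:
seen = set(list_users)
return_list = [x for x in list_search if x not in seen]
This keeps the O(1) average-time membership test of a set while still producing an ordinary list.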
Disclaimer: I assumed list_search has no duplicate elements. Otherwise, use this solution.
What you want is exactly the set complement of list_users in list_search.
As an alternative approach, you can use sets to get the difference between two lists, and I think it should be much more performant than the naive lookup, which takes O(n^2).
>>> list_search = [1, 2, 3, 4]
>>> list_users = [4, 5, 1, 6]
>>> print(set(list_search).difference(list_users))
{2, 3}
I'm trying to return True only if all the previous elements are true, up to the current position.
I have it set up with the all function, but I don't want to code it this way:
def check(lightsOnOff, light):
    for light in lights[:light]:
        if not on:
            return False
    return True

count = count + 1
In general, all is a useful construct to use. I can see why it looks wrong in this expression:
all(list(lightsOnOff.values())[:light])
but the smelly part is actually the list(iterable)[:number] construction, which forces construction of the whole list then truncates it.
As an important aside, if lightsOnOff is a dict (not e.g. an OrderedDict) your code will be non-deterministic (see notes at bottom).
If you don't want to create a list and slice it, you can leverage itertools:
from itertools import islice
...
all(islice(lightsOnOff.values(), light))
As a frame challenge, if your dict has an order and you know the keys, you can simply rewrite it as:
all(lightsOnOff[k] for k in keys[:light])
And if your dict has ordered keys that are, e.g., integers, why not just use a list?
all(listOfLights[:light])
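For illustration, a minimal sketch of the two approaches above (the dict contents, key list and light value are made up, not from the question):
from itertools import islice

# Hypothetical data: an insertion-ordered mapping of light index -> on/off state.
lightsOnOff = {0: True, 1: True, 2: False, 3: True}
keys = list(lightsOnOff)
light = 2

# Without building an intermediate list:
all(islice(lightsOnOff.values(), light))    # True (lights 0 and 1 are on)

# Equivalent lookup by key, if the key order is known:
all(lightsOnOff[k] for k in keys[:light])   # True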
Provided you want to implement all yourself on an arbitrary list, you can do something like:
my_list = [1, 7, 2, 1, None, 2, 3]
up_to_ix = 5

def my_all(some_list, up_to_index):
    for element in some_list[:up_to_index]:
        if not element:
            return False
    return True

my_all(my_list, up_to_ix)
The function will loop through all elements in the list up to, but excluding, up_to_index, and if it finds at least one falsy value it will return False, otherwise True.
I have a multidimensional list F, holding elements of some type. So, if for example the rank is 4, then the elements of F can be accessed by something like F[a][b][c][d].
Given a list L=[a,b,c,d], I would like to access F[a][b][c][d]. My problem is that my rank is going to be changing, so I cannot just have F[L[0]][L[1]][L[2]][L[3]].
Ideally, I would like to be able to do F[L] and get the element F[a][b][c][d]. I think something like this can be done with numpy, but for the types of arrays that I'm using, numpy is not suitable, so I want to do it with python lists.
How can I have something like the above?
Edit: For a specific example of what I'm trying to achieve, see the demo in Martijn's answer.
You can use the reduce() function to access consecutive elements:
from functools import reduce # forward compatibility
import operator
reduce(operator.getitem, indices, somelist)
In Python 3 reduce was moved to the functools module, but in Python 2.6 and up you can always access it in that location.
The above uses the operator.getitem() function to apply each index to the previous result (starting at somelist).
Demo:
>>> import operator
>>> somelist = ['index0', ['index10', 'index11', ['index120', 'index121', ['index1220']]]]
>>> indices = [1, 2, 2, 0]
>>> reduce(operator.getitem, indices, somelist)
'index1220'
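As a related usage (my own extension, not part of the question), the same reduce/operator.getitem idea also lets you set a value at a nested path: reduce over all indices except the last, then assign at the final index. The helper name set_by_path is hypothetical.
from functools import reduce
import operator

def set_by_path(somelist, indices, value):
    # Walk to the parent container, then assign at the final index.
    reduce(operator.getitem, indices[:-1], somelist)[indices[-1]] = value

data = ['index0', ['index10', 'index11', ['index120', 'index121', ['index1220']]]]
set_by_path(data, [1, 2, 2, 0], 'changed')
data[1][2][2][0]   # 'changed'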
Something like this?
def get_element(lst, indices):
    if indices:
        return get_element(lst[indices[0]], indices[1:])
    return lst
Test:
get_element([[["element1"]], [["element2"]], "element3"], [2])
'element3'
get_element([[["element1"]], [["element2"]], "element3"], [0, 0])
['element1']
Or if you want an iterative version:
def get_element(lst, indices):
    res = lst
    for index in indices:
        res = res[index]
    return res
Test:
get_element([[["element1"]], [["element2"]], "element3"], [1, 0])
['element2']
Given a list xs and a value item, how can I check whether xs contains item (i.e., if any of the elements of xs is equal to item)? Is there something like xs.contains(item)?
For performance considerations, see Fastest way to check if a value exists in a list.
Use:
if my_item in some_list:
    ...
Also, the inverse operation:
if my_item not in some_list:
    ...
It works fine for lists, tuples, sets and dicts (check keys).
Note that this is an O(n) operation in lists and tuples, but an O(1) operation in sets and dicts.
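A quick sketch of why that matters when membership is tested many times (the sizes here are illustrative):
# Convert once, then every lookup is O(1) on average instead of O(n).
big_list = list(range(100000))
big_set = set(big_list)

99999 in big_list   # scans the list element by element
99999 in big_set    # single hash lookup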
In addition to what others have said, you may also be interested to know that what in does is call the list.__contains__ method, which you can define on any class you write; this can be extremely handy for using Python to its full extent.
A dumb use may be:
>>> class ContainsEverything:
...     def __init__(self):
...         return None
...     def __contains__(self, *elem, **k):
...         return True
...
>>> a = ContainsEverything()
>>> 3 in a
True
>>> a in a
True
>>> False in a
True
>>> False not in a
False
>>>
I came up with this one-liner recently for getting True if a list contains any number of occurrences of an item, or False if it contains no occurrences or nothing at all. Using next(...) gives this a default return value (False) and, because it stops at the first match, it should run significantly faster than building the whole list with a list comprehension.
list_does_contain = next((True for item in list_to_test if item == test_item), False)
The list method index will raise a ValueError if the item is not present, and will return the index of the item in the list if it is present. Alternatively, in an if statement you can do the following:
if myItem in list:
    # do things
You can also check if an element is not in a list with the following if statement:
if myItem not in list:
    # do things
There is also the list method:
[2, 51, 6, 8, 3].__contains__(8)
# Out[33]: True
[2, 51, 6, 3].__contains__(8)
# Out[33]: False
There is another method that uses index(), though I am not sure whether it has any drawbacks.
list = [5, 4, 3, 1]
try:
    list.index(2)
    # code for when the item is expected to be in the list
    print("present")
except ValueError:
    # code for when the item is not expected to be in the list
    print("not present")
Output:
not present
Without any heavy libraries such as numpy, I want to uniformly handle a single list or multi-dimensional list in my code. For example, the function sum_up(list_or_matrix) should
return 6 for argument [1, 2, 3] and return 9 for [[1, 2, 3], [1, 2, 0]].
My question is:
1. Can I code in a way without explicitly detecting the dimension of my input such as by isinstance(arg[0], (tuple, list))?
2. If I have to do so, is there any elegant way of detecting the dimension of a list (of list of list ...), e.g. recursively?
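For question 2, here is a minimal recursive sketch of depth detection (my own illustration, not taken from the answers below):
def depth(x):
    # Depth 1 for a flat list, 2 for a list of lists, and so on; 0 for a scalar.
    if isinstance(x, (list, tuple)):
        return 1 + max((depth(item) for item in x), default=0)
    return 0

depth([1, 2, 3])            # 1
depth([[1, 2, 3], [1, 2]])  # 2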
As many users suggested, you can always use a dict instead of a list for an any-dimensional collection. Dictionaries accept tuples as keys because tuples are hashable, so you can easily fill up your collection like this:
>>> m = {}
>>> m[1] = 1
>>> m[1,2] = 12
>>> m[1,2,"three",4.5] = 12345
>>> sum(m.values()) #better use m.itervalues() in python 2.*
12358
You can solve this problem using recursion, like this:
#!/usr/bin/env python

def sum(seq_or_elem):
    if hasattr(seq_or_elem, '__iter__'):
        # We were passed a sequence so iterate over it, summing the elements.
        total = 0
        for i in seq_or_elem:
            total += sum(i)
        return total
    else:
        # We were passed an atomic element, the sum is identical to the passed value.
        return seq_or_elem
Test:
>>> print(sum([1, 2, [3, [4]], [], 5]))
15
Well, I don't see a way around it if you are planning to use a single function like sum_up(list_or_matrix) to sum up your list.
If you have a list of lists, I imagine you need to loop through it to find out whether it's a 1-D or a 2-D list. Anyway, what's wrong with looping?
def sum_up(matrix):
    is_2d = False
    for item in matrix:
        if type(item) == list:
            is_2d = True
    if is_2d:
        return sum(sum(row) for row in matrix)  # sum a 2-D matrix
    else:
        return sum(matrix)                      # sum a 1-D list
A simple way to sum up a matrix is as follows:
def sum_up(matrix):
    if isinstance(matrix[0], (tuple, list)):
        return sum(sum(x) for x in matrix)
    else:
        return sum(matrix)
The nested branch uses a generator expression, a powerful and quick tool.
You could sum recursively until you have a scalar value:
def flatten(x):
    if isinstance(x, list):
        return sum(map(flatten, x))
    return x
Note: you can use collections.abc.Iterable (or another base class) instead of list, depending on what you want to flatten.
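A sketch of that variant (my own illustration): note that strings are also iterable, so they usually need to be excluded explicitly.
from collections.abc import Iterable

def flatten_sum(x):
    # Recursively sum any iterable, treating strings and bytes as atoms.
    if isinstance(x, Iterable) and not isinstance(x, (str, bytes)):
        return sum(map(flatten_sum, x))
    return x

flatten_sum([1, (2, 3), [4, [5]]])   # 15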
What I'm trying to do, is, given a list with an arbitrary number of other nested lists, recursively descend through the last value in the nested lists until I've reached the maximum depth, and then append a value to that list. An example might make this clearer:
>>> nested_list1 = [1, 2, 3, [4, 5, 6]]
>>> last_inner_append(nested_list1, 7)
[1, 2, 3, [4, 5, 6, 7]]
>>> nested_list2 = [1, 2, [3, 4], 5, 6]
>>> last_inner_append(nested_list2, 7)
[1, 2, [3, 4], 5, 6, 7]
The following code works, but it seems excessively tricky to me:
def add_to_inner_last(nested, item):
    nest_levels = [nested]
    try:
        nest_levels.append(nested[-1])
    except IndexError:  # The empty list case
        nested.append(item)
        return
    while type(nest_levels[-1]) == list:
        try:
            nest_levels.append(nest_levels[-1][-1])
        except IndexError:  # The empty inner list case
            nest_levels[-1].append(item)
            return
    nest_levels[-2].append(item)
    return
Some things I like about it:
It works
It handles the cases of strings at the end of lists, and the cases of empty lists
Some things I don't like about it:
I have to check the type of objects, because strings are also indexable
The indexing system feels too magical--I won't be able to understand this tomorrow
It feels excessively clever to use the fact that appending to a referenced list affects all references
Some general questions I have about it:
At first I was worried that appending to nest_levels was space inefficient, but then I realized that this is probably just a reference, and a new object is not created, right?
This code is purely side effect producing (It always returns None). Should I be concerned about that?
Basically, while this code works (I think...), I'm wondering if there's a better way to do this. By better I mean clearer or more pythonic. Potentially something with more explicit recursion? I had trouble defining a stopping point or a way to do this without producing side effects.
Edit:
To be clear, this method also needs to handle:
>>> last_inner_append([1,[2,[3,[4]]]], 5)
[1,[2,[3,[4,5]]]]
and:
>>> last_inner_append([1,[2,[3,[4,[]]]]], 5)
[1,[2,[3,[4,[5]]]]]
How about this:
def last_inner_append(x, y):
    try:
        if isinstance(x[-1], list):
            last_inner_append(x[-1], y)
            return x
    except IndexError:
        pass
    x.append(y)
    return x
This function returns the deepest inner list:
def get_deepest_list(lst, depth=0):
    deepest_list = lst
    max_depth = depth
    for li in lst:
        if type(li) == list:
            tmp_deepest_list, tmp_max_depth = get_deepest_list(li, depth + 1)
            if max_depth < tmp_max_depth:  # change to <= to get the rightmost inner list
                max_depth = tmp_max_depth
                deepest_list = tmp_deepest_list
    return deepest_list, max_depth
And then use it as:
def add_to_deepest_inner(lst, item):
    inner_lst, depth = get_deepest_list(lst)
    inner_lst.append(item)
Here is my take:
def last_inner_append(cont, el):
    if type(cont) == list:
        if not len(cont) or type(cont[-1]) != list:
            cont.append(el)
        else:
            last_inner_append(cont[-1], el)
I think it's nice and clear, and passes all your tests.
It is also pure side-effect; if you want to change this, I suggest you go with BasicWolf's approach and create a 'selector' and an 'update' function, where the latter uses the former.
It's the same recursion scheme as Phil H's, but handles empty lists.
I don't think there is a good way around the two type tests, however you approach them (e.g. with 'type' or checking for 'append'...).
You can test whether append is callable, rather than using try/except, and recurse:
def add_to_inner_last(nested, item):
    if callable(getattr(nested, 'append', None)):
        if callable(getattr(nested[-1], 'append', None)):
            return add_to_inner_last(nested[-1], item)
        else:
            nested.append(item)
            return True
    else:
        return False
It's slightly annoying to have to have two callable tests, but the alternative is to pass a reference to the parent as well as the child.
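A rough sketch of that alternative (my own illustration, not from the answers above): carrying a reference to the parent list down the recursion means each call needs only one "does it have append?" test.
def add_to_inner_last(nested, item, parent=None):
    # Hypothetical variant: a non-list leaf is handled by appending to its parent.
    if not hasattr(nested, 'append'):   # reached a non-list value (e.g. an int or str)
        parent.append(item)
    elif not nested:                    # reached an empty list: append here
        nested.append(item)
    else:
        add_to_inner_last(nested[-1], item, parent=nested)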
def last_inner_append(sequence, element):
    def helper(tmp, seq, elem=element):
        if type(seq) != list:
            tmp.append(elem)
        elif len(seq):
            helper(seq, seq[-1])
        else:
            seq.append(elem)
    helper(sequence, sequence)