How exactly does Python check through a list?

I was doing one of the course exercises on Codecademy for Python and I had a few questions I couldn't seem to find an answer to:
For this block of code, how exactly does Python check whether something is "in" or "not in" a list? Does it run through each item in the list, or does it use a quicker process?
Also, how would this code be affected if it were run with a massive list of numbers (thousands or millions)? Would it slow down as the list size increases, and are there better alternatives?
numbers = [1, 1, 2, 3, 5, 8, 13]

def remove_duplicates(list):
    new_list = []
    for i in list:
        if i not in new_list:
            new_list.append(i)
    return new_list

remove_duplicates(numbers)
Thanks!
P.S. Why does this code not function the same?
numbers = [1, 1, 2, 3, 5, 8, 13]

def remove_duplicates(list):
    new_list = []
    new_list.append(i for i in list if i not in new_list)
    return new_list

In order to execute i not in new_list Python has to do a linear scan of the list. The scanning loop breaks as soon as the result of the test is known, but if i is actually not in the list, the whole list must be scanned to determine that. It does that at C speed, so it's faster than writing a Python loop to explicitly check each item. Doing the occasional in some_list test is OK, but if you need to do a lot of such membership tests it's much better to use a set.
On average, with random data, testing membership has to scan through half the list items, and in general the time taken to perform the scan is proportional to the length of the list. In the usual notation the size of the list is denoted by n, and the time complexity of this task is written as O(n).
In contrast, determining membership of a set (or a dict) can be done (on average) in constant time, so its time complexity is O(1). Please see TimeComplexity in the Python Wiki for further details on this topic. Thanks, Serge, for that link.
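As a rough illustration of the difference (my sketch, with arbitrary sizes; not from the original answer):

import timeit

data_list = list(range(100_000))
data_set = set(data_list)

# Worst case for the list: the probe is the last element, so the whole
# list must be scanned. The set lookup is a single hash probe.
print(timeit.timeit("99_999 in data_list", globals=globals(), number=100))
print(timeit.timeit("99_999 in data_set", globals=globals(), number=100))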
Of course, if you're using a set then you get de-duplication for free, since it's impossible to add duplicate items to a set.
One problem with sets is that they generally don't preserve order. But you can use a set as an auxiliary collection to speed up de-duping. Here is an illustration of one common technique to de-dupe a list, or other ordered collection, which does preserve order. I'll use a string as the data source because I'm too lazy to type out a list. ;)
new_list = []
seen = set()
for c in "this is a test":
    if c not in seen:
        new_list.append(c)
        seen.add(c)
print(new_list)
Output:
['t', 'h', 'i', 's', ' ', 'a', 'e']
Please see How do you remove duplicates from a list whilst preserving order? for more examples. Thanks, Jean-François Fabre, for the link.
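One more order-preserving idiom worth knowing, which is also covered in that linked question (it relies on dicts preserving insertion order, guaranteed in Python 3.7+):

new_list = list(dict.fromkeys("this is a test"))
print(new_list)  # ['t', 'h', 'i', 's', ' ', 'a', 'e']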
As for your PS, that code appends a single generator object to new_list; it doesn't append what the generator would produce.
I assume you already tried to do it with a list comprehension:
new_list = [i for i in list if i not in new_list]
That doesn't work, because the new_list doesn't exist until the list comp finishes running, so doing in new_list would raise a NameError. And even if you did new_list = [] before the list comp, it won't be modified by the list comp, and the result of the list comp would simply replace that empty list object with a new one.
BTW, please don't use list as a variable name (even in example code) since that shadows the built-in list type, which can lead to mysterious error messages.
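For example (a quick illustration, not from the original answer):

list = [1, 2, 3]    # shadows the built-in list type
list("abc")         # now raises TypeError: 'list' object is not callable
del list            # removes the shadowing name; the built-in works again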

You are asking multiple questions, and one of them is whether you can do this more efficiently. I'll answer that one.
OK, let's say you have thousands or millions of numbers. Where from, exactly? Say they were stored in some kind of text file; then you would probably want to use numpy (if you are sticking with Python, that is). Example:
import numpy as np
numbers = np.array([1, 1, 2, 3, 5, 8, 13], dtype=np.int32)
numbers = np.unique(numbers).tolist()
This will be more efficient (above all, more memory-efficient) than reading the data with plain Python and performing a list(set(...)), as below. (Note that neither approach preserves the original order: np.unique returns sorted values, and sets are unordered.)
numbers = [1, 1, 2, 3, 5, 8, 13]
numbers = list(set(numbers))

You are asking for the algorithmic complexity of this function. To find that you need to see what is happening at each step.
You are scanning the list one at a time, which takes 1 unit of work. This is because retrieving something from a list is O(1). If you know the index, it can be retrieved in 1 operation.
The list to which you are adding unique items grows by at most one element per step, so in the worst case (all items distinct) it reaches size n.
Now, checking whether the item you picked is already in the unique-items list takes up to n units of work, because in the worst case we have to scan every item to decide that.
So if you sum up the work across all the steps, it comes to 1 + 2 + 3 + ... + n, which is n(n + 1) / 2, i.e. O(n^2). If you have a million items, you can estimate the total work by plugging n = 1,000,000 into the formula.
This is not exact, because of details of how lists are implemented, but it is a helpful way to visualize the behavior.
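As a quick check of that formula (illustrative arithmetic only):

n = 1_000_000
print(n * (n + 1) // 2)  # 500000500000 units of work in the worst case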

To answer the question in the title: Python has more efficient data types, but the list object is just a plain dynamic array. If you want a faster way to look up values, you can use a dict (or a set), which hashes each stored object to place it in a hash table; I assume that's the "quicker process" you were thinking of.
As to the second code snippet:
list.append() inserts whatever value you give it at the end of the list. i for i in list if i not in new_list is a generator expression, so append() inserts that generator object itself into the list. list.extend() does what you want: it takes an iterable and appends each of its elements to the list.
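A small demonstration of the difference (my sketch, not part of the original answer):

lst = [1, 1, 2, 3]

new_list = []
new_list.append(x for x in lst)    # appends the generator object itself
print(new_list)                    # [<generator object <genexpr> at 0x...>]

new_list = []
new_list.extend(x for x in lst)    # consumes the generator, appending each element
print(new_list)                    # [1, 1, 2, 3]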

Related

Python lists - code & algorithm

I need some help with Python, a programming language that is new to me.
So, let's say that I have this list:
list = [3, 1, 4, 9, 8, 2]
And I would like to sort it, but without using the built-in function "sort"; otherwise, where's all the fun and the studying in it? I want to keep the code as simple and as basic as I can, even if it means working a bit harder. Therefore, if you want to help me and offer me some ideas and code, please try to keep them very "basic".
Anyway, back to my problem: in order to sort this list, I've decided to compare each number in the list to the last number. First, I'll check 3 and 2. If 3 is smaller than 2 (and it's false), do nothing.
Next, check if 1 is smaller than 2 (and it's true); then swap its position with the first element.
On the next run, it will again check whether the number is smaller than the last number in the list. But this time, if the number is smaller, it will swap places with the second number (and on the third run with the third number, if it's smaller, of course).
And so on and so on.
In the end, the function will return the sorted list.
Hope you've understood it.
So I want to use a recursive function to make the task a bit interesting, but still basic.
Therefore, I thought about this code:
def func(list):
    if not list:
        for i in range(len(list)):
            if list[-1] > lst[i]:
                # have no idea what to write here in order to change the locations
                i = i + 1
                # return func(lst[i+1:])?
    return list
Two questions:
1. How can I change the locations? Using pop/remove and then insert?
2. I don't know where to put the recursive part, or whether I've written it correctly (I think I haven't). The recursive part is the second "#", the first "return".
What do you think? How can I improve this code? What's wrong?
Thanks a lot!
Oh man, sorting. That's one of the most popular problems in programming, with many, many solutions that differ a little in every language. Anyway, the most straightforward algorithm is, I guess, bubble sort. However, it's not very efficient, so it's mostly used for educational purposes. If you want to try something more efficient and common, go for quicksort. I believe it's the most popular sorting algorithm. In Python, however, the default algorithm is a bit different - read here. And like I've said, there are many, many more sorting algorithms around the web.
Now, to answer your specific questions: in Python, replacing an item in a list is as simple as
list[-1] = list[i]
or, to swap two items through a temporary variable,
tmp = list[-1]
list[-1] = list[i]
list[i] = tmp
As to recursion - I don't think it's a good idea to use it, a simple while/for loop is better here.
Maybe you can try a quicksort this way:
def quicksort(array, up, down):
    # start sorting in your array from down to up:
    # is array[up] < array[down]? if yes, switch them
    # do it until up <= down
    # then call quicksort recursively:
    #   with the array, middle, up
    #   with the array, down, middle
    # where middle is the value found when the first sort ended
You can check this link: Quicksort on Wikipedia. It is nearly the same logic.
Hope it will help!
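To make that pseudocode concrete, here is a minimal in-place quicksort sketch along the same lines (my illustration, not the answerer's exact code):

def quicksort(array, lo, hi):
    # Hoare-style partition: pick a pivot, move smaller items left and
    # larger items right, then recurse on the two halves.
    if lo >= hi:
        return
    pivot = array[(lo + hi) // 2]
    i, j = lo, hi
    while i <= j:
        while array[i] < pivot:
            i += 1
        while array[j] > pivot:
            j -= 1
        if i <= j:
            array[i], array[j] = array[j], array[i]
            i += 1
            j -= 1
    quicksort(array, lo, j)
    quicksort(array, i, hi)

data = [3, 1, 4, 9, 8, 2]
quicksort(data, 0, len(data) - 1)
print(data)  # [1, 2, 3, 4, 8, 9]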
The easiest way to swap the two list elements is by using “parallel assignment”:
list[-1], list[i] = list[i], list[-1]
It doesn't really make sense to use recursion for this algorithm. If you call func(lst[i+1:]), that makes a copy of those elements of the list, and the recursive call operates on the copy, and then the copy is discarded. You could make func take two arguments: the list and i+1.
But your code is still broken. The not list test is incorrect, and the i = i + 1 is incorrect. What you are describing sounds like a variation of selection sort where you're doing a bunch of extra swapping.
Here's how a selection sort normally works.
Find the smallest of all elements and swap it into index 0.
Find the smallest of all remaining elements (all indexes greater than 0) and swap it into index 1.
Find the smallest of all remaining elements (all indexes greater than 1) and swap it into index 2.
And so on.
To simplify, the algorithm is this: find the smallest of all remaining (unsorted) elements, and append it to the list of sorted elements. Repeat until there are no remaining unsorted elements.
We can write it in Python like this:
def func(elements):
    for firstUnsortedIndex in range(len(elements)):
        # elements[0:firstUnsortedIndex] are sorted
        # elements[firstUnsortedIndex:] are not sorted
        bestIndex = firstUnsortedIndex
        for candidateIndex in range(bestIndex + 1, len(elements)):
            if elements[candidateIndex] < elements[bestIndex]:
                bestIndex = candidateIndex
        # Now bestIndex is the index of the smallest unsorted element
        elements[firstUnsortedIndex], elements[bestIndex] = elements[bestIndex], elements[firstUnsortedIndex]
        # Now elements[0:firstUnsortedIndex+1] are sorted, so it's safe to increment firstUnsortedIndex
    # Now all elements are sorted.
Test:
>>> testList = [3, 1, 4, 9, 8, 2]
>>> func(testList)
>>> testList
[1, 2, 3, 4, 8, 9]
If you really want to structure this so that recursion makes sense, here's how. Find the smallest element of the list. Then call func recursively, passing all the remaining elements. (Thus each recursive call passes one less element, eventually passing zero elements.) Then prepend that smallest element onto the list returned by the recursive call. Here's the code:
def func(elements):
    if len(elements) == 0:
        return elements
    bestIndex = 0
    for candidateIndex in range(1, len(elements)):
        if elements[candidateIndex] < elements[bestIndex]:
            bestIndex = candidateIndex
    return [elements[bestIndex]] + func(elements[0:bestIndex] + elements[bestIndex + 1:])
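Used the same way, but note this version returns a new list rather than sorting in place (my quick check, using the list from the question):
>>> func([3, 1, 4, 9, 8, 2])
[1, 2, 3, 4, 8, 9]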

Most efficient way to cycle over Python sublists while making them grow (insert method)?

My problem is about managing insert/append methods within loops.
I have two lists of length N: the first one (let's call it s) indicates the subset to which each element belongs, while the second one represents a quantity x that I want to evaluate. For the sake of simplicity, let's say that every subset has T elements.
cont = 0
for i in range(NSUBSETS):
    for j in range(T):
        subcont = 0
        if x[(i*T)+j] < 100:
            s.insert(((i+1)*T)+cont, s[(i*T)+j+cont])
            x.insert(((i+1)*T)+cont, x[(i*T)+j+cont])
            subcont += 1
        cont += subcont
While cycling over all the elements of the two lists, I'd like that, when a certain condition is fulfilled (e.g. x[i] < 100), a copy of that element is put at the end of its subset, and then the loop goes on until all of the original members of the subset have been analyzed. It is important to maintain the "order", i.e. to insert each copy right after the last element of the subset it comes from.
I thought one way could be to keep two counter variables holding the number of copies made within the subset and globally, respectively (see code): that way, I could shift the index of the element I was looking at accordingly. I wonder whether there exists some simpler way to do that, maybe using some Python magic.
If the idea is to interpolate your extra copies into the lists without making a complete copy of the whole list, you can try this with a generator expression. As you loop through your lists, collect the matches you want to append. Yield each item as you process it, then yield each collected item too.
This is a simplified example with only one list, but hopefully it illustrates the idea. You only get a copy if you do as I've done here and expand the generator with a comprehension. If you just wanted to stream or further analyze the processed items (e.g. to write them to disk), you would never need to hold the whole thing in memory at all.
def append_matches(input_list, start, end, predicate):
    # predicate is a filter function or lambda
    for item in input_list[start:end]:
        yield item
    for item in filter(predicate, input_list[start:end]):
        yield item

example = lambda p: p < 100
data = [1, 2, 3, 101, 102, 103, 4, 5, 6, 104, 105, 106]
print([k for k in append_matches(data, 0, 6, example)])
print([k for k in append_matches(data, 5, 11, example)])
[1, 2, 3, 101, 102, 103, 1, 2, 3]
[103, 4, 5, 6, 104, 105, 4, 5, 6]
I'm guessing that your desire not to copy the lists is based on your C background: an assumption that copying would be more expensive. But in Python, lists are not linked lists; they are more like vectors (dynamic arrays), so an insert is an O(n) operation, since every element after the insertion point has to be shifted.
Building a new copy with the extra elements would be more efficient than trying to update in-place. If you really want to go that way you would need to write a LinkedList class that held prev/next references so that your Python code really was a copy of the C approach.
The most Pythonic approach would not try to do an in-place update, as it is simpler to express what you want using values rather than references:
def expand(origLs):
    subsets = [origLs[i*T:(i+1)*T] for i in range(NSUBSETS)]
    result = []
    for s in subsets:
        copies = [e for e in s if e < 100]
        result += s + copies
    return result
The main thing to keep in mind is that the underlying cost model for an interpreted garbage-collected language is very different to C. Not all copy operations actually cause data movement, and there are no guarantees that trying to reuse the same memory will be successful or more efficient. The only real answer is to try both techniques on your real problem and profile the results.
I'd be inclined to make a copy of your lists and then, while looping over the originals, insert each element that meets the criterion into the copy at the place it needs to be. You can then output the copied, updated lists.
I think I have found a simple solution.
I cycle over the subsets backwards, starting from the last one, and put the copies at the end of each subset. This way I avoid encountering the "new" elements and get rid of the counters and the like.
for i in range(NSUBSETS-1, -1, -1):
    for j in range(T-1, -1, -1):
        if x[(i*T)+j] < 100:
            s.insert((i+1)*T, s[(i*T)+j])
            x.insert((i+1)*T, x[(i*T)+j])
One possibility would be using numpy's advanced indexing to provide the illusion of copying elements to the ends of the subsets. You build a list of "copy" indices into the original list and add it to the index/slice list that represents each subset. Then you combine all the index/slice lists at the end and use the final index list to access all your items (I believe there's support for doing so generator-style, too, which you may find useful since advanced indexing/slicing returns a copy rather than a view). Depending on how many elements meet the criteria to be copied, this should be decently efficient, as each subset can keep its base indices as a slice object, reducing the number of indices needed to keep track of.
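A rough sketch of that idea (my illustration; the names, sizes, and threshold are assumptions):

import numpy as np

x = np.array([1, 2, 300, 4, 500, 6])
NSUBSETS, T = 2, 3

indices = []
for i in range(NSUBSETS):
    base = list(range(i*T, (i+1)*T))          # the subset's own positions
    copies = [k for k in base if x[k] < 100]  # positions to "copy" to the subset's end
    indices.extend(base + copies)

print(x[indices])  # [  1   2 300   1   2   4 500   6   4   6]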

Deduping a complex list using a simplified copy of itself

I have two lists of strings that are passed into a function. They are more or less the same, except that one has been run through a regex filter to remove certain boilerplate substrings (e.g. removing 'LLC' from 'Blues Brothers LLC').
This function is meant to internally deduplicate the modified list and remove the associated item in the non-modified list. You can assume that these lists were sorted alphabetically before being run through the regex filter, and remain in the same order (i.e. original[x] and modified[x] refer to the same entity, even if original[x] != modified[x]). Relative order must be maintained between the two lists in the output.
This is what I have so far. It works 99% of the time, except for very rare combinations of inputs and boilerplate strings (1 in thousands) where some output strings are mismatched by a single list position. The input lists are 'original' and 'modified'.
# record positions of duplicates so we're not trying to modify the same lists we're iterating
dellist_modified = []
dellist_original = []

# probably not necessary, extra precaution against modifying lists being iterated.
# fwiw the problem still exists if I remove these and change their references in
# the last two lines directly to the input lists
modified_copy = modified
original_copy = original

for i in range(0, len(modified)-1):
    if modified[i] == modified[i+1]:
        dellist_modified.append(modified[i+1])
        dellist_original.append(original[i+1])

for j in dellist_modified:
    if j in modified:
        del modified_copy[agg_match.index(j)]
        del original_copy[agg_match.index(j)]

# return modified_copy and original_copy
It's ugly, but it's all I got. My testing indicates the problem is created by the last chunk of code.
Modifications or entirely new approaches would be greatly appreciated. My next step is to try using dictionaries.
Here is a clean way of doing this:
original = list(range(10))
modified = list(original)
modified[5] = "a"
modified[6] = "a"

def without_repeated(original, modified):
    seen = set()
    for (o, m) in zip(original, modified):
        if m not in seen:
            seen.add(m)
            yield o, m

original, modified = zip(*without_repeated(original, modified))
print(original)
print(modified)
Giving us:
(0, 1, 2, 3, 4, 5, 7, 8, 9)
(0, 1, 2, 3, 4, 'a', 7, 8, 9)
We iterate through both lists at the same time. We keep a set of items we have seen (sets have very fast membership checks) and yield any pairs whose modified value we haven't already seen.
We can then use zip again to give us the two sequences back (as tuples).
Note we could actually do this like so:
seen = set()
original, modified = zip(*((o, m) for (o, m) in zip(original, modified) if m not in seen and not seen.add(m)))
This works the same way, except using a single generator expression, with the set update hacked into the condition (set.add always returns None, which is falsy, so the and clause never rejects an item). However, this version is considerably harder to read, so I'd advise against it; it's just an example for the sake of it.
A set in Python is a collection of distinct elements. Is the order of these elements critical? If not, something like this may work:
distinct = list(set(original))
Why use parallel lists? Why not a single list of class instances? That keeps things grouped easily, and reduces your list lookups.

Most pythonic way to truncate a list to N indices when you can't guarantee the list is at least N length?

What is the most pythonic way to truncate a list to N indices when you can not guarantee the list is even N length? Something like this:
l = range(6)
if len(l) > 4:
    l = l[:4]
I'm fairly new to python and am trying to learn to think pythonicly. The reason I want to even truncate the list is because I'm going to enumerate on it with an expected length and I only care about the first 4 elements.
Python automatically handles out-of-range slice bounds gracefully.
In [5]: k = range(2)
In [6]: k[:4]
Out[6]: [0, 1]
In [7]: k = range(6)
In [8]: k[:4]
Out[8]: [0, 1, 2, 3]
BTW, the degenerate slice behavior is explained in The Python tutorial. That is a good place to start because it covers a lot of concepts very quickly.
You've got it here:
lst = lst[:4]
This works regardless of the number of items in the list, even if it's less than 4.
If you want it to always have 4 elements, padding it with (say) zeroes or None if it's too short, try this:
lst = (lst + [0] * 4)[:4]
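For instance (a quick illustration of the padding trick, not from the original answer):

lst = [1, 2]
lst = (lst + [0] * 4)[:4]
print(lst)  # [1, 2, 0, 0]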
When you have a question like this, it's usually feasible to try it and see what happens, or look it up in the documentation.
It's bad idea to name a variable list, by the way; this will prevent you from referring to the built-in list type.
All of the answers so far don't truncate the list. They follow your example in assigning the name to a new list which contains the first up to 4 elements of the old list. To truncate the existing list, delete elements whose index is 4 or higher. This is done very simply:
del lst[4:]
Moving on to what you really want to do, one possibility is:
for i, value in enumerate(lst):
    if i >= 4:
        break
    do_something_with(lst, i, value)
I think the most pythonic solution that answers your question is this:
for x in l[:4]:
    do_something_with(x)
The advantage: it's the most succinct and descriptive way of doing what you're after.
I suggest that the pythonic way to do this is to slice the source array rather than truncate it. Once you remove elements from the source array they are gone forever, and while in your trivial example that's fine, I think in general it's not a great practice.
Slicing the array to the first four is almost certainly more efficient than removing a possibly large number of elements from its tail.

python function slowing down for no apparent reason

I have a Python function, defined as follows, which I use to delete from list1 the items which are already in list2. I am using Python 2.6.2 on Windows XP.
def compareLists(list1, list2):
    curIndex = 0
    while curIndex < len(list1):
        if list1[curIndex] in list2:
            list1.pop(curIndex)
        else:
            curIndex += 1
Here, list1 and list2 are a list of lists
list1 = [ ['a', 11221, '2232'], ['b', 1321, '22342'] .. ]
# list2 has a similar format.
I tried this function with list1 having 38,000 elements and list2 having 150,000 elements. If I put in a print statement to print the current iteration, I find that the function slows down with each iteration. At first it processes around 1,000 or more items per second, and after a while it drops to around 20-50 per second. Why could that be happening?
EDIT: With my data, curIndex remains 0 or very close to 0, so the pop operation on list1 is almost always on the first item.
If possible, can someone also suggest a better way of doing the same thing?
Try a more pythonic approach to the filtering, something like:
[x for x in list1 if x not in set(list2)]
Converting both lists to sets is unnecessary, and would be slow and memory-hungry on large amounts of data.
Since your data is a list of lists, you need to do something in order to hash it.
Try out
list2_set = set([tuple(x) for x in list2])
diff = [x for x in list1 if tuple(x) not in list2_set]
I tested out your original function, and my approach, using the following test data:
list1 = [[x+1, x*2] for x in range(38000)]
list2 = [[x+1, x*2] for x in range(10000, 160000)]
Timings - not scientific, but still:
#Original function
real 2m16.780s
user 2m16.744s
sys 0m0.017s
#My function
real 0m0.433s
user 0m0.423s
sys 0m0.007s
There are two issues that cause your algorithm to scale poorly:
x in list is an O(n) operation.
pop(n) where n is in the middle of the list is an O(n) operation.
Both issues combine to make it scale as O(n^2) for large amounts of data. gnud's implementation is probably the best solution, since it solves both problems without changing the order of elements or removing potential duplicates.
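A quick way to see the second cost (a rough illustration, not from the original answer):

import timeit

setup = "lst = list(range(100000))"
print(timeit.timeit("lst.pop(0)", setup=setup, number=1000))  # O(n): shifts the whole tail each time
print(timeit.timeit("lst.pop()", setup=setup, number=1000))   # O(1): no shifting needed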
If we rule the data structure itself out, look at your memory usage next. If you end up asking the OS to swap in for you (i.e., the list takes up more memory than you have), Python's going to sit in iowait waiting on the OS to get the pages from disk, which makes sense given your description.
Is Python sitting in a jacuzzi of iowait when this slowdown happens? Anything else going on in the environment?
(If you're not sure, update with your platform and one of us will tell you how to tell.)
The only reason the code can become slower is that you have big elements in both lists which share a lot of common values, so each comparison in the list1[curIndex] in list2 test takes more time.
Here are a couple of ways to fix this:
If you don't care about the order, convert both lists into sets and use set1.difference(set2).
If the order in list1 is important, then at least convert list2 into a set, because in is much faster with a set.
Lastly, try a filter: filter(lambda x: x not in set2, list1)
[EDIT] Since set() doesn't work on lists of lists (lists are unhashable; didn't expect that), try:
result = filter(lambda x: x not in list2, list1)
It should still be much faster than your version. If it isn't, then your last option is to make sure that there can't be duplicate elements in either list. That would allow you to remove items from both lists as you go (and therefore make each compare ever cheaper as you find and remove elements from list2).
EDIT: I've updated my answer to account for lists being unhashable, as well as some other feedback. This one is even tested.
It probably relates to the cost of popping an item out of the middle of a list.
Alternatively have you tried using sets to handle this?
def difference(list1, list2):
    list2_set = set(tuple(y) for y in list2)  # build the lookup set once, not per element
    return [x for x in list1 if tuple(x) not in list2_set]
You can then set list1 to the resulting list, if that is your intention, by doing
list1 = difference(list1, list2)
The often-suggested set won't work directly here, because the two lists contain lists, which are unhashable. You need to change your data structure first. You can:
convert the sublists into tuples or class instances to make them hashable, then use sets, or
keep both lists sorted, in which case you only have to compare the lists' heads (see the sketch below).
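A minimal sketch of that second idea (my illustration; it assumes both lists are already sorted):

def sorted_difference(list1, list2):
    # Keep items of list1 that do not appear in list2, walking both
    # sorted lists in one merge-style pass: O(len(list1) + len(list2)).
    result = []
    i = j = 0
    while i < len(list1):
        if j >= len(list2) or list1[i] < list2[j]:
            result.append(list1[i])   # smaller than anything left in list2: keep it
            i += 1
        elif list1[i] > list2[j]:
            j += 1                    # advance list2 until it catches up
        else:
            i += 1                    # equal: drop the item from list1
    return result

print(sorted_difference([[1], [2], [3]], [[2]]))  # [[1], [3]]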
