Are nested for loops always slow? - python

It seems there are quite a few questions and answers related to the speed of nested for loops - I think I looked at every single one of them! But unfortunately I am still not exactly sure why my code is slow. I'm hoping I can get some guidance from you fine people.
I download a csv file daily that has ~116,000 entries. Items are added and taken away from it at inconsistent points in the file, so every day I want to see what was added and what was taken away.
Getting the entries from csv to a list takes no time at all, for both the old and new list, but I encounter a big speed decrease in the next part of the code, although at the end, it does what I want and spits out the difference - items added and items removed.
Each of the 116,000 items in the list is a dictionary like so:
old or new = [{'Date Stamped': '', 'Name': '', 'Registration Number': '', 'Type': '', 'Form Name': '', 'URL': ''}]
when I get to this point:
added = [i for i in new if not i in old]
removed = [i for i in old if not i in new]
It takes 25 minutes to finish! I feel like this is a long time, but also I may not be understanding exactly what I'm doing.
Each list (old and new) has ~116,000 items in it. Is that because i has to iterate through ~116,000 items 4 times?
It does what I want, in the end, but it seems awfully slow for what it's doing; that said, this is really the first time I've worked with a data set with this many items, so maybe it's par for the course.
Is this slow because it is a nested for loop? Is it slow because of the size? I am definitely an amateur and really appreciate everyone's help. Thanks so much.

Effectively, yes: it's slow because it amounts to a nested for loop, at that size.
Python's element in list operation works by searching the entire list, element by element, for the one it wants. If you have to do that for every single element in new, you may end up searching all of old for each element of new: roughly 116,000 × 116,000 comparisons in the worst case.
Lists are not a good data structure for searching through. What you should do instead, for a use case like this, is transform them into sets first: an unordered collection (but order probably doesn't matter here) that uses a hash table to determine whether elements are present. Now, instead of searching the entire data structure element by element, it can just hash the element being searched for, check whether anything is stored there, and say so if so.
In other words, element in set is O(1) on average, versus O(n) for element in list. For a relatively small overhead cost (creating the sets in the first place), this shaves a huge amount of time off the loops that follow:
old_set = set(old)
new_set = set(new)
added = [i for i in new if i not in old_set]
removed = [i for i in old if i not in new_set]
Furthermore, you can even dispense with the list comprehension, because set supports operations from set theory: taking the difference between two sets (elements in one set that are not in the other) is as easy as subtracting them:
added = list(new_set - old_set) # (new_set - old_set) is identical to new_set.difference(old_set)
removed = list(old_set - new_set)
which is probably even more efficient than a list comprehension, because it's optimized for exactly this use case.
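One caveat before applying this to the question's data: the rows there are dicts, and dicts are not hashable, so set(old) would raise a TypeError on them. A minimal sketch of a workaround, converting each row to a hashable tuple first (as_key is just an illustrative helper name):
def as_key(row):
    # Turn a row dict into a hashable, order-independent key.
    return tuple(sorted(row.items()))

old_keys = {as_key(row) for row in old}
new_keys = {as_key(row) for row in new}

# Same comprehensions as before, but each membership test is now O(1) on average.
added = [row for row in new if as_key(row) not in old_keys]
removed = [row for row in old if as_key(row) not in new_keys]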

Related

str.split() in the for-loop instantiation, does it cause slower execution?

I'm a sucker for reducing code to its bare minimum and love keeping it short and slim, but occasionally I get into the dilemma of whether I'm doing more harm than good. Below is an example of a situation I frequently encounter and where I start pondering if I am minifying at the expense of speed.
str = "my name is john"
##Alternative 1
for el in str.split(" "):
print(el)
##Alternative 2
splittedStr = str.split(" ")
for el in splittedStr:
print(el)
Which is faster? I'd assume it's the second one, because we don't split the string on every iteration (but I'm not even sure we do that)?
str.split(" ") does the exact same thing in both cases. It creates an anonymous list of the split strings. In the second case you have the minor overhead of assigning it to a variable and then fetching the value of the variable. Its wasted time if you don't need to keep the object for other reasons. But this is a trivial amount of time compared to other object referencing taking place in the same loop. Alternative 2 also leaves the data in memory which is another small performance issue.
The real reason Alternative 1 is better than 2, IMHO, is that it doesn't leave the hint that splittedStr is going to be needed later.
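One quick way to convince yourself that the split happens only once in Alternative 1 is to disassemble it; a minimal sketch:
import dis

def alt1():
    for el in "my name is john".split(" "):
        print(el)

dis.dis(alt1)  # the call to split appears once, before the loop's FOR_ITER begins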
If you want to actually reduce the time spent in the code in general, loop over a tuple instead of a list (see the comparison below). Assigning the result to a variable and then using the variable is not the best approach, since you just reserve a memory location to store the value. But sometimes you do that anyway for the sake of clean code, for example when more than one operation happens in one line, like
min(str.split(" ")[3:10])
In this case, it is better to have a variable called min_value, for example, just to make things cleaner.
Returning to the performance issue: you can actually notice the difference in performance if you loop through a tuple rather than a list.
This is looping through a tuple:
for i in (1, 2, 3):
    print(i)
And this is looping through a list:
for i in [1, 2, 3]:
    print(i)
You will find that using the tuple is faster!
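You can test that claim on your own machine with timeit; to the extent the tuple wins, it's because CPython can store a tuple of constants as a single constant, while the list literal is rebuilt each time the statement runs. A rough sketch (absolute numbers will vary by machine and Python version):
import timeit

list_time = timeit.timeit("for i in [1, 2, 3]: pass", number=1000000)
tuple_time = timeit.timeit("for i in (1, 2, 3): pass", number=1000000)
print("list: %.3fs  tuple: %.3fs" % (list_time, tuple_time))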

why isn't len returning correct values?

def dbl_linear(n):
    u = [1]
    i = 0
    for a in u:
        u.append(2 * a + 1)
        u.append(3 * a + 1)
        u = set(u)
        u = list(u)
        if len(u) >= n:
            print(len(u))
            break
    return len(u)
I want this code to return n elements in list u, but that isn't happening. Can someone help? I gave input n=20, and len(u) comes out as 15 or 7: different answers on every run.
Modifying an object you're iterating over is basically undefined behaviour: you can't assume whether the iteration will or won't take the new items into account, especially in the face of resizes (list append is amortized O(1): it's O(1) while reserved space remains, but the list regularly has to reallocate the whole thing to make room for new elements). Not to mention that here you're only modifying the initial list during the first iteration; after u = list(set(u)) rebinds the name, you're updating a different, unrelated list while the loop keeps walking the original.
There's no reason to even use for a in u; just use an explicit loop (and probably remember the last element, as your uniquification via set will scramble the list; alternatively, just check before inserting whether the element is already present: in is O(n), but so are set(u) and list(u)).
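A minimal sketch of that second approach, keeping a set purely for the O(1) membership test and never mutating the list mid-iteration (my reconstruction, not the original poster's code):
def dbl_linear(n):
    u = [1]
    seen = {1}  # set mirror of u, for O(1) membership tests
    i = 0
    while len(u) < n:
        a = u[i]
        i += 1
        for candidate in (2 * a + 1, 3 * a + 1):
            if candidate not in seen:
                seen.add(candidate)
                u.append(candidate)
    return len(u)

print(dbl_linear(20))  # deterministic: the same answer on every run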

iterating over a growing set in python

I have a set, setOfManyElements, which contains n elements. I need to go through all those elements and run a function on each one:
for s in setOfManyElements:
    elementsFound = EvilFunction(s)
    setOfManyElements |= elementsFound
EvilFunction(s) returns the set of elements it has found. Some of them will already be in setOfManyElements, some will be new, and some will already have been tested.
The problem is that each time I run EvilFunction, setOfManyElements expands (up to a maximum set, at which point it stops growing). So I am essentially iterating over a growing set. Also, EvilFunction takes a long time to compute, so I do not want to run it twice on the same data.
Is there an efficient way to approach this problem in Python 2.7?
LATE EDIT: changed the name of the variables to make them more understandable. Thanks for the suggestion
I suggest an incremental version of 6502's approach:
seen = set(initial_items)
active = set(initial_items)
while active:
    next_active = set()
    for item in active:
        for result in evil_func(item):
            if result not in seen:
                seen.add(result)
                next_active.add(result)
    active = next_active
This visits each item only once, and when finished seen contains all visited items.
For further research: this is a breadth-first graph search.
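To see it run end to end, here is a toy stand-in for evil_func (hypothetical; any function returning a set of "discovered" items fits the pattern):
def evil_func(item):
    # Pretend discovery: two neighbours per item, capped below 20 so growth stops.
    return {item * 2, item + 3} & set(range(20))

seen = {1}
active = {1}
while active:
    next_active = set()
    for item in active:
        for result in evil_func(item):
            if result not in seen:
                seen.add(result)
                next_active.add(result)
    active = next_active

print(sorted(seen))  # every reachable item, each visited exactly once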
You can just keep a set of already-visited elements and pick a not-yet-visited element each time:
visited = set()
todo = set(setOfManyElements)  # work on a copy so the original set is preserved
while todo:
    s = todo.pop()
    visited.add(s)
    todo |= EvilFunction(s) - visited
Iterating over a set in your scenario is a bad idea: you have no guarantee on the ordering, and set iterators are not meant to be used while the set is being modified. So you do not know what will happen to the iterator, nor will you know the position of a newly inserted element.
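In CPython this fails loudly rather than silently misbehaving:
s = {1, 2, 3}
for x in s:
    s.add(x + 10)  # RuntimeError: Set changed size during iteration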
However, using a list and a set may be a good idea:
list_elements = list(set_elements)
for s in list_elements:
    elementsFound = EvilFunction(s)
    new_subset = elementsFound - set_elements  # set difference needs a set on the right, not a list
    list_elements.extend(new_subset)
    set_elements |= new_subset
Edit
Depending on the size of everything, you could even drop the set entirely:
for s in list_elements:
    elementsFound = EvilFunction(s)
    list_elements.extend(i for i in elementsFound if i not in list_elements)
However, I am not sure about the performance of this; I think you should profile. If the list is huge, the set-based solution seems good, since set operations are cheap. For moderate sizes, though, maybe EvilFunction is expensive enough that the difference doesn't matter.

Python: Nested for loops or "next" statement

I'm a rookie hobbyist and I nest for loops when I write python, like so:
dict = {
    key1: {subkey/value1: value2},
    ...
    keyn: {subkeyn/valuen: valuen+1},
}
for key in dict:
    for subkey/value in dict[key]:
        do it to it
I'm aware of a next function that would accomplish the same goal in one line (I asked a question about how to use it but I didn't quite understand it).
So to me, a nested for loop is much more readable. Why, then, do people use next? I read somewhere that Python is a dynamically typed and interpreted language, and because + both concatenates strings and sums numbers, it must check variable types for each loop iteration in order to know what the operators do, etc. Does using next prevent this in some way, speeding up execution, or is it just a matter of style/preference?
next is precious to advance an iterator when necessary, without that advancement controlling an explicit for loop. For example, if you want "the first item in S that's greater than 100", next(x for x in S if x > 100) will give it to you, no muss, no fuss, no unneeded work (as everything terminates as soon as a suitable x is located) -- and you get an exception (StopIteration) if unexpectedly no x matches the condition. If a no-match is expected and you want None in that case, next((x for x in S if x > 100), None) will deliver that. For this specific purpose, it might be clearer to you if next was actually named first, but that would betray its much more general use.
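A runnable illustration of that "first match" idiom, with made-up data:
S = [12, 55, 99, 107, 42, 256]
print(next(x for x in S if x > 100))          # 107
print(next((x for x in S if x > 500), None))  # None, since nothing matches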
Consider, for example, the task of merging multiple sequences (e.g., a union or intersection of sorted sequences; say, sorted files, where the items are lines). Again, next is just what the doctor ordered, because none of the sequences can be allowed to dominate over the others by controlling a "main" for loop. So, assuming for simplicity that no duplicates exist (a condition that's not hard to relax if needed), you keep pairs (currentitem, itsfile) in a list managed by heapq, and the merging becomes easy... but only thanks to the magic of next to advance the correct file once its item has been used, and that file only.
import heapq

def merge(*theopentextfiles):
    theheap = []
    for afile in theopentextfiles:
        theitem = next(afile, '')
        if theitem: theheap.append((theitem, afile))
    heapq.heapify(theheap)
    while theheap:
        theitem, afile = heapq.heappop(theheap)
        yield theitem
        theitem = next(afile, '')
        if theitem: heapq.heappush(theheap, (theitem, afile))
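Hypothetical usage, assuming a.txt and b.txt are pre-sorted text files:
import sys

with open('a.txt') as f1, open('b.txt') as f2:
    for line in merge(f1, f2):
        sys.stdout.write(line)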
Just try to do anything anywhere this elegant without next...!-)
One could go on for a long time, but the two use cases "advance an iterator by one place (without letting it control a whole for loop)" and "get just the first item from an iterator" account for most important uses of next.

Excel CSV into Nested Dictionary; List Comprehensions

I have Excel CSV files with employee records in them. Something like this:
mail,first_name,surname,employee_id,manager_id,telephone_number
blah#blah.com,john,smith,503422,503423,+65(2)3423-2433
foo#blah.com,george,brown,503097,503098,+65(2)3423-9782
....
I'm using DictReader to put this into a nested dictionary:
import csv
gd_extract = csv.DictReader(open('filename 20100331 original.csv'), dialect='excel')
employees = dict([(row['employee_id'], row) for row in gd_extract])
Is the above the proper way to do it - it does work, but is it the Right Way? Something more efficient? Also, the funny thing is, in IDLE, if I try to print out "employees" at the shell, it seems to cause IDLE to crash (there's approximately 1051 rows).
2. Remove employee_id from inner dict
The second issue: I'm putting it into a dictionary indexed by employee_id, with the value as a nested dictionary of all the values. However, employee_id is also a key:value pair inside the nested dictionary, which is a bit redundant? Is there any way to exclude it from the inner dictionary?
3. Manipulate data in comprehension
Thirdly, we need do some manipulations to the imported data - for example, all the phone numbers are in the wrong format, so we need to do some regex there. Also, we need to convert manager_id to an actual manager's name, and their email address. Most managers are in the same file, while others are in an external_contractors CSV, which is similar but not quite the same format - I can import that to a separate dict though.
Are these two items things that can be done within the single list comprehension, or should I use a for loop? Or do multiple comprehensions work? (Sample code would be really awesome here.) Or is there a smarter way in Python to do it?
Cheers,
Victor
Your first part has one simple issue (which might not even be an issue): you don't handle key collisions at all (unless you intend to simply overwrite):
>>> dict([('a', 'b'), ('a', 'c')])
{'a': 'c'}
If you're guaranteed that employee_id is unique, there isn't an issue though.
2) Sure you can exclude it, but there's no real harm in keeping it. Actually, especially in Python, if employee_id is a string or int (or some other primitive), the inner dict's value and the outer key reference the same object; they both point to the same spot in memory. The only duplication is the reference itself (which isn't that big). If you're worried about memory consumption, you probably don't need to be.
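That said, if the redundancy bothers you, dropping it is a one-liner; a sketch that replaces the original dict() construction, reusing the DictReader from the question:
employees = {
    row['employee_id']: {k: v for k, v in row.items() if k != 'employee_id'}
    for row in gd_extract
}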
3) Don't try to do too much in one list comprehension. Just use a for loop after the first list comprehension.
To sum it all up, it sounds like you're really worried about the performance of iterating over the loop twice. Don't worry about performance initially. Performance problems come from algorithm problems, not specific language constructs like for loops vs list comprehensions.
If you're familiar with Big O notation, the list comprehension and for loop after (if you decide to do that) both have a Big O of O(n). Add them together and you get O(2n), but as we know from Big O notation, we can simplify that to O(n). I've over simplified a lot here, but the point is, you really don't need to worry.
If there are performance concerns, raise them after you've written the code, and prove them to yourself with a code profiler.
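For example, cProfile makes that a two-liner around whatever you want to measure; a sketch, with build_employees as a hypothetical wrapper around the import code from the question:
import cProfile
import csv

def build_employees(path):
    # Same import loop as in the question, wrapped so it can be profiled.
    gd_extract = csv.DictReader(open(path), dialect='excel')
    return dict((row['employee_id'], row) for row in gd_extract)

cProfile.run("build_employees('filename 20100331 original.csv')")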
response to comments
As for your #2 reply, Python really doesn't have a lot of mechanisms for making one-liners cute and extra snazzy. It's meant to force you into simply writing the code out rather than sticking it all on one line. That being said, it's still possible to do quite a bit of work in one line. My suggestion is to not worry about how much code you can fit on one line. Python looks a lot more beautiful (IMO) when it's written out, not jammed onto one line.
As for your #1 reply, you could try something like this:
employees = {}
for row in gd_extract:
    if row['employee_id'] in employees:
        ... handle duplicates in employees dictionary ...
    else:
        employees[row['employee_id']] = row
As for your #3 reply, I'm not sure what you're looking for or what about the telephone numbers you'd like to fix, but this may give you a start:
import re
retelephone = re.compile(r'[-\(\)\s]')  # remove dashes, open/close parens, and spaces
for empid, row in employees.iteritems():
    row['telephone_number'] = retelephone.sub('', row['telephone_number'])  # sub returns a new string; assign it back
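For the manager_id part of #3, a hypothetical second pass works once employees is built; external_contractors here is assumed to be a dict keyed the same way, as described in the question:
for empid, row in employees.iteritems():
    manager = employees.get(row['manager_id']) or external_contractors.get(row['manager_id'])
    if manager:
        row['manager_name'] = '%s %s' % (manager['first_name'], manager['surname'])
        row['manager_mail'] = manager['mail']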
