I have two lists of users (users1 and users2) and I am comparing them with the following code:
def lev(seq1, seq2):
    oneago = None
    thisrow = range(1, len(seq2) + 1) + [0]
    for x in xrange(len(seq1)):
        twoago, oneago, thisrow = oneago, thisrow, [0] * len(seq2) + [x + 1]
        for y in xrange(len(seq2)):
            delcost = oneago[y] + 1
            addcost = thisrow[y - 1] + 1
            subcost = oneago[y - 1] + (seq1[x] != seq2[y])
            thisrow[y] = min(delcost, addcost, subcost)
    return thisrow[len(seq2) - 1]
for x in users1_list:
    for y in users2_list:
        if 3 >= lev(x, y) > 1:
            print x, "seems a lot like", y
Can I use a list comprehension to improve the nested for loop?
Can you use a list comprehension to improve the nested for loop?
In the lev function, I don't think so--at least not in the sense of "this is bad, and a list comprehension is the natural and direct thing that would clean it up."
Yes, you could use a list comprehension there, but several factors argue against comprehensions:
You're calculating a lot of things. This means many characters are required for the resulting expressions (or sub-expressions). It would be a very long comprehension, making quality formatting difficult and making it harder to hold all of the pieces in your head at once.
You've nicely named the sub-expression components in ways that make logical sense. Spread out into multiple statements, the code is clear about how the deletion, addition, and substitution costs are calculated. That's nice. It aids comprehension, especially for you or someone else who comes back to this code after some time and has to understand it all over again. If you shortened it into one long expression to make a list comprehension neat, you'd lose the clarity of those sub-expressions.
You do a lot of indexing. That is usually an anti-pattern / bad practice in Python, which has good "iterate over loop items" features. But there are algorithms--and this seems to be one of them--where indexing is the clear method of access. It's very consistent with what you will find in similar programs from other sources, or in reference materials. So using a more primitive indexing approach--something that often doesn't make sense in many Python contexts--works pretty well here.
In the second section, where you can loop over items not indices neatly, you do so. It's not like you're trying to avoid Pythonic constructs.
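For what it's worth, the nested user loop (the second section) could be written with a comprehension, roughly along these lines; whether that reads better than the plain nested loop is a matter of taste:
similar_pairs = [(x, y) for x in users1_list     # similar_pairs is just an illustrative name
                        for y in users2_list
                        if 3 >= lev(x, y) > 1]
for x, y in similar_pairs:
    print x, "seems a lot like", y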
It does jump out at me that you're recalculating len(seq2) all the time, even though it seems to be a constant during this function. I'd calculate it once and reuse a stored value. And do you ever really use twoago? I didn't see it. So a revised snippet might be:
def lev(seq1, seq2):
    oneago = None
    len2 = len(seq2)
    thisrow = range(1, len2 + 1) + [0]
    for x in xrange(len(seq1)):
        oneago, thisrow = thisrow, [0] * len2 + [x + 1]
        for y in xrange(len2):
            delcost = oneago[y] + 1
            addcost = thisrow[y - 1] + 1
            subcost = oneago[y - 1] + (seq1[x] != seq2[y])
            thisrow[y] = min(delcost, addcost, subcost)
    return thisrow[len2 - 1]
Finally, Stack Overflow tends to be problem-related. It has a sister site, Code Review, that might be more appropriate for detailed code-improvement suggestions (much as Programmers is better for more theoretical programming questions).
>>> list1 = ['Bret', 'Jermaine', 'Murray']
>>> list2 = ['Jermaine', 'Murray', 'Mel']
If the entries in the lists are unique, it might make sense to convert them into sets. You could then see which things are common:
>>> set(list1).intersection(set(list2))
{'Jermaine', 'Murray'}
The union of both sets can be returned:
>>> set(list1).union(set(list2))
{'Bret', 'Jermaine', 'Mel', 'Murray'}
To measure the commonality between the two sets, you could calculate the Jaccard index (see http://en.wikipedia.org/wiki/Jaccard_index for more details):
>>> len(set(list1).intersection(set(list2))) / float(len(set(list1).union(set(list2))))
0.5
This is the number of common elements divided by the total number of distinct elements.
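Wrapped up as a small helper (jaccard_index is just an illustrative name, not something from the question), that calculation might look like:
>>> def jaccard_index(a, b):  # illustrative helper, not a built-in
...     a, b = set(a), set(b)
...     return len(a & b) / float(len(a | b))
...
>>> jaccard_index(list1, list2)
0.5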
I am trying to learn functional programming and algorithms at the same time, and I've implemented a merge sort in Haskell. I then converted the style into Python and ran a test on a learning platform, but the feedback I get is that it takes too long to sort a list of 1000 integers.
Is there a way I can optimize my Python code and still keep my functional style, or do I have to solve the problem iteratively?
Thanks in advance.
So here is the code I made in Haskell first.
merge :: Ord a => [a] -> [a] -> [a]
merge [] xs = xs
merge ys [] = ys
merge (x:xs) (y:ys)
    | (x <= y)  = x : (merge xs (y:ys))
    | otherwise = y : (merge (x:xs) ys)

halve :: [a] -> ([a], [a])
halve [x] = ([x], [])
halve xs = (take n xs, drop n xs)
  where n = length xs `div` 2

msort :: Ord a => [a] -> [a]
msort [x] = [x]
msort [] = []
msort xs = merge (msort n) (msort m)
  where (n, m) = halve xs
Then I made this code in Python based on the Haskell style.
import sys
sys.setrecursionlimit(1002)  # the recursion will go 1002 levels deep when I have a list of 1000 numbers
def merge(xs, ys):
    if len(xs) == 0:
        return ys
    elif len(ys) == 0:
        return xs
    else:
        if xs[0] <= ys[0]:
            return [xs[0]] + merge(xs[1:], ys)
        else:
            return [ys[0]] + merge(xs, ys[1:])

def halve(xs):
    return (xs[:len(xs)//2], xs[len(xs)//2:])

def msort(xss):
    if len(xss) <= 1:
        return xss
    else:
        xs, ys = halve(xss)
        return merge(msort(xs), msort(ys))
Is there a smarter way I can optimize the Python version and still keep a functional style?
Haskell lists are lazy. [x] ++ xs first produces the x, and then it produces all the elements in xs.
In Lisp, for example, lists are singly linked, and appending copies only the first list, so prepending a singleton is an O(1) operation.
In Python, though, [x] + xs builds a brand-new list, copying the whole of xs (as @chepner confirmed in the comments), so it is an O(n) operation (where n is the length of xs).
This means that both your [xs[0]] + merge(xs[1:], ys) and [ys[0]] + merge(xs, ys[1:]) lead to quadratic behavior which you observe as the dramatic slowdown you describe.
Python's equivalent to Haskell's lazy lists is not lists, it's generators, which produce their elements one by one on each yield. Thus the rewrite could look something like
def merge(xs, ys):
    if len(xs) == 0:
        return ys
    elif len(ys) == 0:
        return xs
    else:
        a = (x for x in xs)  # or maybe iter(xs)
        b = (y for y in ys)  # or maybe iter(ys)
        return list(merge_gen(a, b))
Now what's left is to re-implement your merge logic as merge_gen, which expects two generators (or should that be iterators? do find out) as its input and generates the ordered stream of elements, pulling them one by one from the two sources as needed. The resulting stream of elements is converted back to a list, as the function's caller expects. No redundant copying will be performed.
If I've made some obvious Python errors, please treat the above as pseudocode.
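For what it's worth, here is a minimal sketch of what merge_gen might look like under the assumptions above (merge_gen is the helper named in this answer, not a standard function; the standard library's heapq.merge does essentially the same job):
_DONE = object()  # sentinel marking an exhausted source

def merge_gen(a, b):
    x, y = next(a, _DONE), next(b, _DONE)
    while x is not _DONE and y is not _DONE:
        if x <= y:
            yield x
            x = next(a, _DONE)
        else:
            yield y
            y = next(b, _DONE)
    while x is not _DONE:  # drain whichever source still has elements
        yield x
        x = next(a, _DONE)
    while y is not _DONE:
        yield y
        y = next(b, _DONE)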
Your other option is to pre-allocate a second list of the same length and copy the elements between the two lists back and forth while merging, using indices to reference the elements of the arrays and mutating the contents to store the results.
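A sketch of that two-buffer idea might look like the following (msort_inplace is an illustrative name; this version mutates xs, as described above):
def msort_inplace(xs):
    aux = xs[:]                  # the pre-allocated second list
    def sort(lo, hi):            # sorts xs[lo:hi] in place
        if hi - lo <= 1:
            return
        mid = (lo + hi) // 2
        sort(lo, mid)
        sort(mid, hi)
        aux[lo:hi] = xs[lo:hi]   # copy the two sorted halves into aux
        i, j = lo, mid
        for k in range(lo, hi):  # merge them back into xs
            if i < mid and (j >= hi or aux[i] <= aux[j]):
                xs[k] = aux[i]
                i += 1
            else:
                xs[k] = aux[j]
                j += 1
    sort(0, len(xs))
    return xs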
I am no Haskell expert, so I might be missing something. Here's my best guess:
Haskell lists are immutable; they carry no mutable state. One implication of that is that lists can be shared, which makes halving leaner on memory allocation: to produce 'drop n xs' you don't have to copy anything; the result simply points at the node that sits (n 'div' 2) + 1 positions into the pre-halved list.
Note that 'take' cannot do this little trick: it is not allowed to change the state of any node in the list, and hence it has to allocate new nodes holding the same values as the first n 'div' 2 elements of the pre-halved list.
Now look at the Python equivalent of that function. To halve the list, you use list slicing:
def halve(xs):
    return (xs[:len(xs)//2], xs[len(xs)//2:])
Here you allocate two new lists instead of one, at every level of the recursion tree! (A Python list is also a much more complex structure than a Haskell list, so the allocation is probably slower, too.)
What I would do:
Check my guess: use the time module to see whether your code spends too long allocating those lists, compared to the overall running time.
If the guess proves correct, avoid those allocations. A not very elegant, but probably fast, way to work around it: pass the list along with indices that indicate where each half begins and where it ends, and work with offsets instead of allocating a new list each time. (EDIT: you can avoid similar allocations elsewhere as well; whenever you want to slice, pass indices for the beginning and end of the new sublist. A sketch of this idea appears below.)
And a last word - one of the requirements you've mentioned is keeping the functional approach. One can interpret that as keeping your code side-effect free.
To do so, I'd define an output list, and store the elements you merge in it. Combined with the index approach, that will not change the state of the input list, and will produce a new sorted output list.
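One way the offsets-plus-output-list idea could look is sketched below (msort_range is an illustrative name, not something from the original post; the input list is never mutated):
def msort_range(xs, lo, hi):
    # return a new sorted list of xs[lo:hi], without slicing xs into halves
    if hi - lo <= 1:
        return xs[lo:hi]
    mid = (lo + hi) // 2
    left = msort_range(xs, lo, mid)
    right = msort_range(xs, mid, hi)
    out = []                    # the output list; merging stores elements here
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

def msort(xs):
    return msort_range(xs, 0, len(xs))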
EDIT:
Another thing worth mentioning here: Python lists are not singly linked lists like Haskell lists; they are a data structure more commonly called a dynamic array. This means that operations like slicing or deleting an object from the middle of the list are expensive, since they involve copying or shifting many of the elements in the array. On the other hand, you can access the object at the i-th index in O(1). You should keep that in mind; it is closely related to the problem you ran into.
If there are no other loops inside of that while loop, is it possible to have O(n^2) runtime?
Oh, sure! Your runtime will be determined by how long it takes to meet your termination condition on the loop. If your program makes a small amount of progress on each iteration, it might well take worse than linear time. For example, consider the following bubble sort implementation:
to_sort = [1,5,2,3,7,6,4]
sorted = True
i = 0
while not sorted or i < len(to_sort):
    if i == len(to_sort):
        sorted = True
        i = 0
    if i < len(to_sort) - 1 and to_sort[i] > to_sort[i + 1]:
        to_sort[i], to_sort[i + 1] = to_sort[i + 1], to_sort[i]
        sorted = False
    i += 1
print(to_sort)
You'll notice that I'm mistreating my i variable a little bit. That's because bubble sort is usually written as nested loops. But it's often possible to rewrite nested loops as one, more complicated, less readable loop, as I did here.
for i in range(n * n):
    foo(i)

for i in range(2 ** n):
    pass  # look at the bits of i
Loops with runtime O(n^2) and O(2^n). You need to be a lot more specific about your question.
For one of my programming questions, I am required to define a function that accepts two variables, a list of length l and an integer w. I then have to find the maximum sum of a sublist with length w within the list.
Conditions:
1 <= w <= l <= 100000
Each element in the list is in the range [1, 100]
Currently, my solution runs in O(n^2) (correct me if I'm wrong; code attached below), which the autograder does not accept, since we are required to find an even simpler solution.
My code:
def find_best_location(w, lst):
    best = 0
    n = 0
    while n <= len(lst) - w:
        lists = lst[n: n + w]
        cur = sum(lists)
        best = cur if cur > best else best
        n += 1
    return best
If anyone is able to find a more efficient solution, please do let me know! Also, if I computed my big-O notation wrongly, do let me know as well!
Thanks in advance!
1) Find the sum, current, of the first w elements and assign it to best.
2) Then, starting from i = w: current = current + lst[i] - lst[i-w], best = max(best, current).
3) Done.
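In code, those steps might look something like this (a sketch reusing the function signature from the question):
def find_best_location(w, lst):
    current = sum(lst[:w])            # step 1: sum of the first w elements
    best = current
    for i in range(w, len(lst)):      # step 2: slide the window one element at a time
        current += lst[i] - lst[i - w]
        best = max(best, current)
    return best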
Your solution is indeed O(n^2) (or O(n*w), if you want a tighter bound).
You can do it in O(n) by creating an aux array sums, where:
sums[0] = l[0]
sums[i] = sums[i-1] + l[i]
Then, by iterating over it and checking sums[i] - sums[i-w], you can find your solution in linear time.
You can even calculate the sums array on the fly to reduce space complexity, but if I were you, I'd start with this and see if I can upgrade my solution afterwards.
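Spelled out as code, the prefix-sums idea might look something like this (a sketch reusing the question's function signature):
def find_best_location(w, lst):
    sums = [0] * len(lst)             # sums[i] == lst[0] + ... + lst[i]
    sums[0] = lst[0]
    for i in range(1, len(lst)):
        sums[i] = sums[i - 1] + lst[i]
    best = sums[w - 1]                # the first window is lst[0:w]
    for i in range(w, len(lst)):
        best = max(best, sums[i] - sums[i - w])
    return best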
x = [1,2,3,4,5,6,7,8,9,10]
#Random list elements
for i in range(int(len(x)/2)):
    value = x[i]
    x[i] = x[len(x)-i-1]
    x[len(x)-i-1] = value
#Confusion on efficiency
print(x)
This is a first-year uni course, so no Python shortcuts are allowed.
Not sure what counts as "a shortcut" (reversed and the "Martian Smiley" [::-1] being obvious candidates -- but does either count as "a shortcut"?!), but at least a couple small improvements are easy:
L = len(x)
for i in range(L//2):
    mirror = L - i - 1
    x[i], x[mirror] = x[mirror], x[i]
This gets len(x) only once -- it's a fast operation but there's no reason to keep repeating it over and over -- also computes mirror but once, does the swap more directly, and halves L (for the range argument) directly with the truncating-division operator rather than using the non-truncating division and then truncating with int. Nanoseconds for each case, but it may be considered slightly clearer as well as microscopically faster.
x = [1,2,3,4,5,6,7,8,9,10]
x = x.__getitem__(slice(None,None,-1))
slice is a Python built-in object (like the range and len you used in your example).
__getitem__ is a method belonging to sequence types (which x, being a list, is).
There are absolutely no shortcuts here :) and it's effectively one line.
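For what it's worth, the explicit __getitem__ call is exactly what the slice syntax does behind the scenes, so the two lines below produce the same reversed copy:
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(x.__getitem__(slice(None, None, -1)))  # [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
print(x[::-1])                               # the slice syntax calls __getitem__ with the same slice object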
I am creating a fast method of generating a list of primes in range(0, limit+1). In the function, I end up removing all integers in the list named removable from the list named primes. I am looking for a fast and Pythonic way of removing those integers, knowing that both lists are always sorted.
I might be wrong, but I believe list.remove(n) iterates over the list comparing each element with n, meaning that the following code runs in O(n^2) time.
# removable and primes are both sorted lists of integers
for composite in removable:
    primes.remove(composite)
Based on my assumption (which could be wrong, so please confirm whether it is correct) and the fact that both lists are always sorted, I would think that the following code runs faster, since it only loops over the lists once, for O(n) time. However, it is not at all Pythonic or clean.
i = 0
j = 0
while i < len(primes) and j < len(removable):
    if primes[i] == removable[j]:
        primes = primes[:i] + primes[i+1:]
        j += 1
    else:
        i += 1
Is there perhaps a built in function or simpler way of doing this? And what is the fastest way?
Side notes: I have not actually timed the functions or code above. Also, it doesn't matter if the list removable is changed/destroyed in the process.
For anyone interested, the full function is below:
import math

# returns a list of primes in range(0, limit+1)
def fastPrimeList(limit):
    if limit < 2:
        return list()
    sqrtLimit = int(math.ceil(math.sqrt(limit)))
    primes = [2] + range(3, limit+1, 2)
    index = 1
    while primes[index] <= sqrtLimit:
        removable = list()
        index2 = index
        while primes[index] * primes[index2] <= limit:
            composite = primes[index] * primes[index2]
            removable.append(composite)
            index2 += 1
        for composite in removable:
            primes.remove(composite)
        index += 1
    return primes
This is quite fast and clean: it does O(n) set membership checks and runs in O(n) amortized time (the first line is O(n) amortized, and the second line is O(n * 1) amortized, because a set membership check is O(1) amortized):
removable_set = set(removable)
primes = [p for p in primes if p not in removable_set]
Here is a modification of your second solution. It does O(n) basic operations (worst case):
tmp = []
i = j = 0
while i < len(primes) and j < len(removable):
    if primes[i] < removable[j]:
        tmp.append(primes[i])
        i += 1
    elif primes[i] == removable[j]:
        i += 1
    else:
        j += 1
primes[:i] = tmp
del tmp
Please note that constants also matter. The Python interpreter is quite slow (i.e. with a large constant) to execute Python code. The 2nd solution has lots of Python code, and it can indeed be slower for small practical values of n than the solution with sets, because the set operations are implemented in C, thus they are fast (i.e. with a small constant).
If you have multiple working solutions, run them on typical input sizes, and measure the time. You may get surprised about their relative speed, often it is not what you would predict.
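As a rough sketch of such a measurement (the names and sizes here are made up for illustration; you would time each candidate the same way):
import random
import timeit

primes = sorted(random.sample(range(10 ** 6), 10000))
removable = sorted(random.sample(primes, 5000))

def with_set():
    removable_set = set(removable)
    return [p for p in primes if p not in removable_set]

print(timeit.timeit(with_set, number=100))  # seconds for 100 runs of the set-based version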
The most important thing here is to remove the quadratic behavior. You have this for two reasons.
First, calling remove searches the entire list for values to remove. Doing this takes linear time, and you're doing it once for each element in removable, so your total time is O(NM) (where N is the length of primes and M is the length of removable).
Second, removing elements from the middle of a list forces you to shift the whole rest of the list up one slot. So, each one takes linear time, and again you're doing it M times, so again it's O(NM).
How can you avoid these?
For the first, you either need to take advantage of the sorting, or just use something that allows you to do constant-time lookups instead of linear-time, like a set.
For the second, you either need to create a list of indices to delete and then do a second pass to move each element up the appropriate number of indices all at once, or just build a new list instead of trying to mutate the original in-place.
So, there are a variety of options here. Which one is best? It almost certainly doesn't matter; changing your O(NM) time to just O(N+M) will probably be more than enough of an optimization that you're happy with the results. But if you need to squeeze out more performance, then you'll have to implement all of them and test them on realistic data.
The only one of these that I think isn't obvious is how to "use the sorting". The idea is to use the same kind of staggered-zip iteration that you'd use in a merge sort, like this:
def sorted_subtract(seq1, seq2):
    # yield the elements of sorted seq1 that do not appear in sorted seq2
    i1, i2 = 0, 0
    while i1 < len(seq1) and i2 < len(seq2):
        if seq1[i1] < seq2[i2]:
            yield seq1[i1]       # not in seq2, keep it
            i1 += 1
        elif seq1[i1] == seq2[i2]:
            i1 += 1              # present in seq2, drop it
        else:
            i2 += 1              # seq2 is behind, catch it up
    yield from seq1[i1:]         # everything left in seq1 is larger than anything in seq2
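Used in the sieve function from the question, the removal loop could then become something like the line below (a sketch; note that yield from needs Python 3):
primes = list(sorted_subtract(primes, removable))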