I created this program for an assignment in which we were required to create an implementation of Quichesort. This is a hybrid sorting algorithm that uses Quicksort until it reaches a certain recursion depth (log2(N), where N is the length of the list), then switches to Heapsort, to avoid exceeding the maximum recursion depth.
While testing my implementation, I discovered that although it generally performed better than regular Quicksort, Heapsort consistently outperformed both. Can anyone explain why Heapsort performs better, and under what circumstances Quichesort would be better than both Quicksort and Heapsort?
Note that for some reason, the assignment referred to the algorithm as "Quipsort".
Edit: Apparently, "Quichesort" is actually identical to Introsort.
I also noticed that a logic error in my medianOf3() function was causing it to return the wrong value for certain inputs. Here is an improved version of the function:
def medianOf3(lst):
    """
    From a lst of unordered data, find and return the median value from
    the first, middle and last values.
    """
    first, last = lst[0], lst[-1]
    if len(lst) <= 2:
        return min(first, last)
    middle = lst[(len(lst) - 1) // 2]
    return sorted((first, middle, last))[1]
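For illustration (this is my own example input, not from the original post), here is a case where the old max/min expression disagrees with the true median, which the sorted() version returns correctly:

first, median, last = 3, 1, 2
print(max(min(first, median), min(median, last)))  # old expression -> 1
print(sorted((first, median, last))[1])            # true median    -> 2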
Would this explain the algorithm's relatively poor performance?
Code for Quichesort:
import heapSort  # heapSort
import math      # log2 (for quicksort depth limit)

def medianOf3(lst):
    """
    From a lst of unordered data, find and return the median value from
    the first, middle and last values.
    """
    first, last = lst[0], lst[-1]
    if len(lst) <= 2:
        return min(first, last)
    median = lst[len(lst) // 2]
    return max(min(first, median), min(median, last))
def partition(pivot, lst):
    """
    partition: pivot (element in lst) * List(lst) ->
    tuple(List(less), List(same), List(more)).
    Where:
        List(less) has values less than the pivot,
        List(same) has pivot value/s, and
        List(more) has values greater than the pivot.
    e.g. partition(5, [11,4,7,2,5,9,3]) == ([4,2,3], [5], [11,7,9])
    """
    less, same, more = [], [], []
    for val in lst:
        if val < pivot:
            less.append(val)
        elif val > pivot:
            more.append(val)
        else:
            same.append(val)
    return less, same, more
def quipSortRec(lst, limit):
    """
    A non in-place, depth limited quickSort, using median-of-3 pivot.
    Once the limit drops to 0, it uses heapSort instead.
    """
    if lst == []:
        return []
    if limit == 0:
        return heapSort.heapSort(lst)
    limit -= 1
    pivot = medianOf3(lst)
    less, same, more = partition(pivot, lst)
    return quipSortRec(less, limit) + same + quipSortRec(more, limit)
def quipSort(lst):
    """
    The main routine called to do the sort. It should call the
    recursive routine with the correct values in order to perform
    the sort.
    """
    depthLim = int(math.log2(len(lst)))
    return quipSortRec(lst, depthLim)
Code for Heapsort:
import heapq  # mkHeap (for adding/removing from heap)

def heapSort(lst):
    """
    heapSort(List(Orderable)) -> List(Ordered)
    performs a heapsort on 'lst' returning a new sorted list
    Postcondition: the argument lst is not modified
    """
    heap = list(lst)
    heapq.heapify(heap)
    result = []
    while len(heap) > 0:
        result.append(heapq.heappop(heap))
    return result
The basic facts are as follows:
Heapsort has worst-case O(n log(n)) performance but tends to be slow in practice.
Quicksort has O(n log(n)) performance on average and O(n^2) in the worst case, but it is fast in practice.
Introsort is intended to harness the fast-in-practice performance of quicksort, while still guaranteeing the worst-case O(n log(n)) behavior of heapsort.
One question to ask is, why is quicksort faster "in practice" than heapsort? This is a tough one to answer, but most answers point to how quicksort has better spatial locality, leading to fewer cache misses. However, I'm not sure how applicable this is to Python, as it is running in an interpreter and has a lot more junk going on under the hood than other languages (e.g. C) that could interfere with cache performance.
As to why your particular introsort implementation is slower than Python's heapsort - again, this is difficult to determine. First of all, note that the heapq module is written in Python, so it's on a relatively even footing with your implementation. It may be that creating and concatenating many smaller lists is costly, so you could try rewriting your quicksort to act in-place and see if that helps. You could also try tweaking various aspects of the implementation to see how that affects performance, or run the code through a profiler and see if there are any hot spots. But in the end I think it's unlikely you'll find a definite answer. It may just boil down to which operations are particularly fast or slow in the Python interpreter.
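If you want to experiment with the in-place suggestion, here is a rough sketch (my own code, not the original assignment's API) of a depth-limited quicksort that sorts the list in place via index arguments and falls back to the heapSort module above once the limit is exhausted; for brevity it uses a plain last-element pivot rather than median-of-3:

import math
import heapSort  # assumed to be the same heapSort module shown below

def quipSortInPlace(lst):
    """Sort lst in place; a sketch only, not benchmarked."""
    if len(lst) > 1:
        _quipSortRec(lst, 0, len(lst) - 1, int(math.log2(len(lst))))

def _quipSortRec(lst, lo, hi, limit):
    if hi - lo < 1:
        return
    if limit == 0:
        # Fall back to heapSort on this slice (heapSort returns a new list).
        lst[lo:hi + 1] = heapSort.heapSort(lst[lo:hi + 1])
        return
    # Lomuto-style partition around the last element as pivot.
    pivot = lst[hi]
    store = lo
    for i in range(lo, hi):
        if lst[i] < pivot:
            lst[i], lst[store] = lst[store], lst[i]
            store += 1
    lst[store], lst[hi] = lst[hi], lst[store]
    _quipSortRec(lst, lo, store - 1, limit - 1)
    _quipSortRec(lst, store + 1, hi, limit - 1)

Whether this is actually faster in CPython than the list-building version is exactly the kind of thing you would need to measure.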
Here's the question: https://leetcode.com/problems/coin-change/
I'm having some trouble understanding two different methods of dynamic programming used to solve this problem. I'm currently going through the Grokking Dynamic Programming course from educative.io, and their approach is to use subsets to search for each combination. They test whether a coin is viable and, if so, try it in the DFS; if not, they skip the coin, go to the next index, and try the next coin.
Here's Grokking's approach with memoization:
def coinChange(self, coins: List[int], amount: int) -> int:
    def dfs(i, total, memo):
        key = (i, total)
        if key in memo:
            return memo[key]
        if total == 0:
            return 0
        if len(coins) == 0 or i >= len(coins):
            return inf
        count = inf
        if coins[i] <= total:
            res = dfs(i, total - coins[i], memo)
            if res != inf:
                count = res + 1
        memo[key] = min(count, dfs(i + 1, total, memo))
        return memo[key]
    return dfs(0, amount, {}) if dfs(0, amount, {}) != inf else -1
It doesn't do very well on Leetcode; it runs very slowly (but passes, nonetheless). The efficient algorithm that was in the discussions was this:
def coinChange(self, coins: List[int], amount: int) -> int:
    @lru_cache(None)
    def dp(sum):
        if sum == 0: return 0
        if sum < 0: return float("inf")
        count = float('inf')
        for coin in coins:
            count = min(count, dp(sum - coin))
        return count + 1
    return dp(amount) if dp(amount) != float("inf") else -1
Does this second code have the same logic as "testing the subsets of coins?" What's the difference between the two? Is the for-loop a way of testing the different subsets, like with backtracking?
I tested the second algorithm with memoization in a dictionary, like the first, using sum as the key, and it tanked in efficiency. But then I tried using @lru_cache with the first algorithm, and it didn't help.
Could anyone explain why the second algorithm is so much faster? Is it my memoization that sucks?
Does this second code have the same logic as "testing the subsets of coins?"
If by "subset" you mean the subset of the coins that is still available for selection, then no. The second algorithm does not reduce the problem in terms of coins; it reasons that at any time any coin can be selected, irrespective of previous selections. Although this may seem inefficient, as it tries the same combinations in all possible permutations, this downside is minimised by the effect of memoization.
What's the difference between the two?
The first one takes coins in the order they are given, never going back to take an earlier coin once it has decided to move on to the next one. In doing so, it tries to reduce the problem in terms of available coins. The second one doesn't care about the order and looks at any permutation; it only reduces the problem in terms of the amount.
The first one has a larger memoization collection because the index is part of the key, whereas the second uses a memoization collection that is keyed only by the amount.
The first one makes a recursive call even when no coin is selected (the one at the end of the inner function), since that fits in the logic of reducing the problem to fewer coins. The second one only makes a recursive call when the amount is further reduced.
Is the for-loop a way of testing the different subsets, like with backtracking?
If by "subset" you mean that the problem is reduced to fewer coins, then no: the second algorithm doesn't attempt to apply that methodology.
The for loop is just a way to consider every coin. It doesn't reduce the problem size in terms of available coins, only in terms of remaining amount.
Could anyone explain why the second algorithm is so much faster?
It is faster because the memoization key is smaller, leading to more hits, leading to fewer recursive calls. You can experiment with this and add global counters that count the number of executions of both inner functions (dfs and dp) and you'll see a dramatic difference there.
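A minimal, self-contained sketch of that experiment (my own re-implementation of both inner functions, with a small amount so the un-memoized recursion depth stays within Python's default limit):

from functools import lru_cache
from math import inf

def count_calls(coins, amount):
    """Run both approaches on the same input and report how many times each inner function executes."""
    counters = {"dfs": 0, "dp": 0}

    def dfs(i, total, memo):
        counters["dfs"] += 1
        key = (i, total)
        if key in memo:
            return memo[key]
        if total == 0:
            return 0
        if i >= len(coins):
            return inf
        count = inf
        if coins[i] <= total:
            res = dfs(i, total - coins[i], memo)
            if res != inf:
                count = res + 1
        memo[key] = min(count, dfs(i + 1, total, memo))
        return memo[key]

    @lru_cache(None)
    def dp(total):
        counters["dp"] += 1
        if total == 0:
            return 0
        if total < 0:
            return inf
        return min(dp(total - coin) for coin in coins) + 1

    dfs(0, amount, {})
    dp(amount)
    return counters

print(count_calls([1, 2, 5], 200))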
Is it my memoization that sucks?
You could say that, but it is too harsh.
I have this function:
def rec(lst):
    n = len(lst)
    if n <= 1:
        return 1
    return rec(lst[n // 2:]) + rec(lst[:n // 2])
How can I find the time complexity of this function?
Usually in such problems drawing the recursion tree helps.
If you draw the recursion tree for this function, note how the work at each level sums up to N (since the slicing is what does the work here), and the depth of the tree is log(N) (this is easy to show, since we divide by 2 each time; you can find an explanation here). So the function does O(n) work on each of the log(n) levels, which means in total we have O(n*log(n)).
Now another way of understanding this is using the "Master Theorem" (I encourage you to look it up and learn about it).
We have here T(n) = 2T(n/2) + O(n), so according to the theorem a = 2 and b = 2, so log_b(a) is equal to 1, and therefore we have (according to the 2nd case of the theorem):
T(n) = O(n**(log_b(a)) * log(n)) = O(n*log(n))
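As a quick sanity check (my own addition, not part of the original answer), you can count how many elements all the slices copy in total and compare that against n*log2(n):

import math

def rec_cost(lst):
    """Total number of elements copied by the slices across all recursive calls."""
    n = len(lst)
    if n <= 1:
        return 0
    # The two slices at this level copy n elements in total.
    return n + rec_cost(lst[n // 2:]) + rec_cost(lst[:n // 2])

for size in (1 << 8, 1 << 10, 1 << 12):
    print(size, rec_cost(list(range(size))), size * int(math.log2(size)))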
I tried a problem on Project Euler where I needed to find the sum of the even Fibonacci terms below 4 million. It took me a long time, but then I found out that I can use memoization to do it; it still seemed to take a long time, though. After a lot of research, I found out that I can use the built-in lru_cache decorator from functools. My question is: why isn't my memoization as fast as lru_cache?
Here's my code:
from functools import lru_cache

@lru_cache(maxsize=1000000)
def fibonacci_memo(input_value):
    global value
    fibonacci_cache = {}
    if input_value in fibonacci_cache:
        return fibonacci_cache[input_value]
    if input_value == 0:
        value = 1
    elif input_value == 1:
        value = 1
    elif input_value > 1:
        value = fibonacci_memo(input_value - 1) + fibonacci_memo(input_value - 2)
    fibonacci_cache[input_value] = value
    return value
def sumOfFib():
    SUM = 0
    for n in range(500):
        if fibonacci_memo(n) < 4000000:
            if fibonacci_memo(n) % 2 == 0:
                SUM += fibonacci_memo(n)
    return SUM

print(sumOfFib())
The code works by the way. It takes less than a second to run it when I use the lru_cache module.
The other answer is the correct way to calculate the fibonacci sequence, indeed, but you should also know why your memoization wasn't working. To be specific:
fibonacci_cache = {}
This line being inside the function means you were emptying your cache every time fibonacci_memo was called.
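For completeness, a minimal corrected version (my own sketch, keeping the question's convention that the sequence starts 1, 1) simply moves the cache outside the function so it persists across calls:

fibonacci_cache = {0: 1, 1: 1}  # module-level cache, so it survives between calls

def fibonacci_memo(input_value):
    if input_value not in fibonacci_cache:
        fibonacci_cache[input_value] = (fibonacci_memo(input_value - 1)
                                        + fibonacci_memo(input_value - 2))
    return fibonacci_cache[input_value]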
You shouldn't be computing the Fibonacci sequence at all, not even by dynamic programming. Since the Fibonacci sequence satisfies a linear recurrence relation with constant coefficients and constant order, the sequence of its partial sums does so as well.
Definitely don't cache all the values. That is an unnecessary consumption of memory. When the recurrence has constant order, you only need to remember as many previous terms as the order of the recurrence.
Furthermore, there is a way to turn recurrences of constant order into systems of recurrences of order one. The solution of the latter is given by a power of a matrix. This gives a faster algorithm for large values of n, although each step is more expensive. So, the best method would use a combination of the two, choosing the first method for small values of n and the matrix method for large inputs.
O(n) using the recurrence for the sum
Denote S_n=F_0+F_1+...+F_n the sum of the first Fibonacci numbers F_0,F_1,...,F_n.
Observe that
S_{n+1}-S_n=F_{n+1}
S_{n+2}-S_{n+1}=F_{n+2}
S_{n+3}-S_{n+2}=F_{n+3}
Since F_{n+3}=F_{n+2}+F_{n+1} we get that S_{n+3}-S_{n+2}=S_{n+2}-S_n. So
S_{n+3}=2S_{n+2}-S_n
with the initial conditions S_0=F_0=1, S_1=F_0+F_1=1+1=2, and S_2=S_1+F_2=2+2=4.
One thing that you can do is compute S_n bottom up, remembering the values of only the previous three terms at each step. You don't need to remember all of the values of S_k, from k=0 to k=n. This gives you an O(n) algorithm with O(1) amount of memory.
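A minimal sketch of that bottom-up computation (my own code; sum_fib_linear is an illustrative name, using the convention F_0 = F_1 = 1 and the initial values S_0 = 1, S_1 = 2, S_2 = 4 from above):

def sum_fib_linear(n):
    """Return S_n = F_0 + ... + F_n via S_{k+3} = 2*S_{k+2} - S_k, in O(n) time and O(1) memory."""
    if n == 0:
        return 1
    if n == 1:
        return 2
    s0, s1, s2 = 1, 2, 4  # S_0, S_1, S_2
    for _ in range(n - 2):
        s0, s1, s2 = s1, s2, 2 * s2 - s0
    return s2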
O(ln(n)) by matrix exponentiation
You can also get an O(ln(n)) algorithm in the following way:
Let X_n be the column vector with components S_{n+2}, S_{n+1}, S_n.
So, the recurrence above gives the recurrence
X_{n+1}=AX_n
where A is the matrix
[
[2,0,-1],
[1,0,0],
[0,1,0],
]
Therefore, X_n=A^nX_0. We have X_0. To multiply by A^n we can do exponentiation by squaring.
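Here is a small sketch of that idea (my own code; mat_pow and sum_fib_matrix are illustrative names, and the matrix is the one written out above):

def mat_mul(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, e):
    """Compute A**e by repeated squaring: O(log e) multiplications."""
    n = len(A)
    result = [[int(i == j) for j in range(n)] for i in range(n)]  # identity matrix
    while e:
        if e & 1:
            result = mat_mul(result, A)
        A = mat_mul(A, A)
        e >>= 1
    return result

def sum_fib_matrix(n):
    """Return S_n from X_n = A**n X_0, where X_0 = (S_2, S_1, S_0) = (4, 2, 1)."""
    A = [[2, 0, -1],
         [1, 0, 0],
         [0, 1, 0]]
    bottom_row = mat_pow(A, n)[2]  # the last component of X_n is S_n
    return bottom_row[0] * 4 + bottom_row[1] * 2 + bottom_row[2] * 1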
For the sake of completeness, here are implementations of the general ideas described in @NotDijkstra's answer, plus my humble optimizations, including the "closed form" solution implemented in integer arithmetic.
We can see that the "smart" methods are not only an order of magnitude faster but also seem to scale better, which is compatible with the fact (thanks @NotDijkstra) that Python big ints use better-than-naive multiplication.
import numpy as np
import operator as op
from simple_benchmark import BenchmarkBuilder, MultiArgument

B = BenchmarkBuilder()

def pow(b, e, mul=op.mul, unit=1):
    if e == 0:
        return unit
    res = b
    for bit in bin(e)[3:]:
        res = mul(res, res)
        if bit == "1":
            res = mul(res, b)
    return res

def mul_fib(a, b):
    return (a[0]*b[0] + 5*a[1]*b[1]) >> 1, (a[0]*b[1] + a[1]*b[0]) >> 1

def fib_closed(n):
    return pow((1, 1), n + 1, mul_fib)[1]

def fib_mat(n):
    return pow(np.array([[1, 1], [1, 0]], 'O'), n, op.matmul)[0, 0]

def fib_sequential(n):
    t1, t2 = 1, 1
    for i in range(n - 1):
        t1, t2 = t2, t1 + t2
    return t2

def sum_fib_direct(n):
    t1, t2, res = 1, 1, 1
    for i in range(n):
        t1, t2, res = t2, t1 + t2, res + t2
    return res

def sum_fib(n, method="closed"):
    if method == "direct":
        return sum_fib_direct(n)
    return globals()[f"fib_{method}"](n + 2) - 1

methods = "closed mat sequential direct".split()

def f(method):
    def f(n):
        return sum_fib(n, method)
    f.__name__ = method
    return f

for method in methods:
    B.add_function(method)(f(method))

B.add_arguments('N')(lambda: (2*(1 << k,) for k in range(23)))

r = B.run()
r.plot()

import matplotlib.pylab as P
P.savefig("fib.png")
I am not sure how you are taking anything near a second. Here is the memoized version without fanciness:
class fibs(object):
    def __init__(self):
        self.thefibs = {0: 0, 1: 1}

    def __call__(self, n):
        if n not in self.thefibs:
            self.thefibs[n] = self(n - 1) + self(n - 2)
        return self.thefibs[n]

dog = fibs()
sum([dog(i) for i in range(40) if dog(i) < 4000000])
I'm currently doing some work around Big-O complexity and calculating the complexity of algorithms.
I seem to be struggling to work out the steps to calculate the complexity and was looking for some help to tackle this.
The function:
index = 0
while index < len(self.items):
    if self.items[index] == item:
        self.items.pop(index)
    else:
        index += 1
The actual challenge is to rewrite this function so that it has O(n) worst-case complexity.
My problem with this is: as far as I understood, assignment statements and if statements have a complexity of O(1), whereas the while loop has a complexity of O(n), and in the worst case any statement within the while loop could execute n times. So I work this out as 1 + n + 1 = 2 + n = O(n).
I figure I must be working this out incorrectly as there'd be no point in rewriting the function otherwise.
Any help with this is greatly appreciated.
If self.items is a list, the pop operation has complexity O(k), where k is the index, so the pop operation is the only reason this is not O(N) overall.
The exercise is probably intended for you to use some other method of iterating over and removing from the list.
To make it O(N) you can do:
self.items = [x for x in self.items if x != item]
If you are using Python's built-in list data structure, the pop() operation is not constant in the worst case; it is O(N). So your overall complexity is O(N^2). You will need to use some other data structure, like a linked list, if you cannot use auxiliary space.
With no arguments, pop is O(1).
With an argument to pop:
    Average time complexity: O(k) (k represents the number passed in as an argument to pop)
    Amortized worst-case time complexity: O(k)
    Worst-case time complexity: O(n)
Time Complexity - Python Wiki
So to make your code efficient, allow popping from the end of the list, for example:

def pop(lst):
    return lst.pop(-1)  # popping the last element is O(1)
Reference
Since you are passing an index to self.items.pop(index), it's NOT O(1).
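Putting the suggestions from these answers together, a possible O(n) worst-case rewrite (my own sketch; remove_all and items are illustrative names) builds the kept elements in a single pass instead of popping from the middle:

def remove_all(items, item):
    """Return a new list with every occurrence of item removed, in O(n) time."""
    kept = []
    for x in items:
        if x != item:
            kept.append(x)
    return kept

# Or, equivalently, rebinding the original list object in place:
# self.items[:] = [x for x in self.items if x != item]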
I am creating a fast method of generating a list of primes in the range(0, limit+1). In the function I end up removing all integers in the list named removable from the list named primes. I am looking for a fast and pythonic way of removing the integers, knowing that both lists are always sorted.
I might be wrong, but I believe list.remove(n) iterates over the list comparing each element with n, meaning that the following code runs in O(n^2) time.
# removable and primes are both sorted lists of integers
for composite in removable:
    primes.remove(composite)
Based on my assumption (which could be wrong, so please confirm whether or not this is correct) and the fact that both lists are always sorted, I would think that the following code runs faster, since it only loops over the lists once, for O(n) time. However, it is not at all pythonic or clean.
i = 0
j = 0
while i < len(primes) and j < len(removable):
    if primes[i] == removable[j]:
        primes = primes[:i] + primes[i+1:]
        j += 1
    else:
        i += 1
Is there perhaps a built in function or simpler way of doing this? And what is the fastest way?
Side notes: I have not actually timed the functions or code above. Also, it doesn't matter if the list removable is changed/destroyed in the process.
For anyone interested the full functions is below:
import math

# returns a list of primes in range(0, limit+1)
def fastPrimeList(limit):
    if limit < 2:
        return list()
    sqrtLimit = int(math.ceil(math.sqrt(limit)))
    primes = [2] + list(range(3, limit+1, 2))
    index = 1
    while primes[index] <= sqrtLimit:
        removable = list()
        index2 = index
        while primes[index] * primes[index2] <= limit:
            composite = primes[index] * primes[index2]
            removable.append(composite)
            index2 += 1
        for composite in removable:
            primes.remove(composite)
        index += 1
    return primes
This is quite fast and clean: it does O(n) set membership checks and runs in O(n) amortized time (the first line is O(n) amortized, the second line is O(n * 1) amortized, because a set membership check is O(1) amortized):
removable_set = set(removable)
primes = [p for p in primes if p not in removable_set]
Here is the modification of your 2nd solution. It does O(n) basic operations (worst case):
tmp = []
i = j = 0
while i < len(primes) and j < len(removable):
    if primes[i] < removable[j]:
        tmp.append(primes[i])
        i += 1
    elif primes[i] == removable[j]:
        i += 1
    else:
        j += 1
primes[:i] = tmp
del tmp
Please note that constants also matter. The Python interpreter is quite slow (i.e. with a large constant) to execute Python code. The 2nd solution has lots of Python code, and it can indeed be slower for small practical values of n than the solution with sets, because the set operations are implemented in C, thus they are fast (i.e. with a small constant).
If you have multiple working solutions, run them on typical input sizes, and measure the time. You may get surprised about their relative speed, often it is not what you would predict.
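A minimal way to run that comparison (my own sketch using timeit; the stand-in input data and sizes are arbitrary, not actual primes):

import timeit

def with_set(primes, removable):
    removable_set = set(removable)
    return [p for p in primes if p not in removable_set]

def with_merge(primes, removable):
    tmp, i, j = [], 0, 0
    while i < len(primes) and j < len(removable):
        if primes[i] < removable[j]:
            tmp.append(primes[i])
            i += 1
        elif primes[i] == removable[j]:
            i += 1
        else:
            j += 1
    return tmp + primes[i:]

primes = list(range(1, 100000, 2))     # sorted stand-in data
removable = list(range(1, 100000, 6))  # sorted values to remove
for fn in (with_set, with_merge):
    print(fn.__name__, timeit.timeit(lambda: fn(primes, removable), number=10))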
The most important thing here is to remove the quadratic behavior. You have this for two reasons.
First, calling remove searches the entire list for values to remove. Doing this takes linear time, and you're doing it once for each element in removable, so your total time is O(NM) (where N is the length of primes and M is the length of removable).
Second, removing elements from the middle of a list forces you to shift the whole rest of the list up one slot. So, each one takes linear time, and again you're doing it M times, so again it's O(NM).
How can you avoid these?
For the first, you either need to take advantage of the sorting, or just use something that allows you to do constant-time lookups instead of linear-time, like a set.
For the second, you either need to create a list of indices to delete and then do a second pass to move each element up the appropriate number of indices all at once, or just build a new list instead of trying to mutate the original in-place.
So, there are a variety of options here. Which one is best? It almost certainly doesn't matter; changing your O(NM) time to just O(N+M) will probably be more than enough of an optimization that you're happy with the results. But if you need to squeeze out more performance, then you'll have to implement all of them and test them on realistic data.
The only one of these that I think isn't obvious is how to "use the sorting". The idea is to use the same kind of staggered-zip iteration that you'd use in a merge sort, like this:
def sorted_subtract(seq1, seq2):
    """Yield the elements of sorted seq1 that are not in sorted seq2."""
    i1, i2 = 0, 0
    while i1 < len(seq1):
        # Skip past elements of seq2 that are smaller than the current element.
        while i2 < len(seq2) and seq2[i2] < seq1[i1]:
            i2 += 1
        if i2 == len(seq2):
            # Nothing left to subtract; keep the rest of seq1.
            yield from seq1[i1:]
            return
        if seq1[i1] != seq2[i2]:
            yield seq1[i1]
        i1 += 1
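A possible way to use it in fastPrimeList (my own usage note) would be to replace the remove() loop with:

    primes = list(sorted_subtract(primes, removable))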