Problem with comparing Merge and Insertion Sort - python

I'm trying to run insertion sort and merge sort, time them for 5 different values of N, and plot the results. I want to do this three times so that insertion time < merge time, insertion time = merge time, and insertion time > merge time. However, no matter what I set as N, insertion sort is always much faster. This is my output for N = 5000:
N Values: [1, 1001, 2001, 3001, 4001]
Merge Sort: [0.005, 11.198, 21.965, 35.996, 49.971000000000004]
Insertion Sort: [0.002, 0.268, 0.545, 0.9129999999999999, 1.177]
I have tried different N up to 10000000 and merge sort is always slower.
import datetime

def insertion_sort(array):
    start_time = datetime.datetime.now()
    for j in range(1, len(array)):
        key = array[j]
        i = j - 1
        while i >= 0 and array[i] > key:
            array[i + 1] = array[i]
            i -= 1
        array[i + 1] = key
    time_diff = datetime.datetime.now() - start_time
    return time_diff.total_seconds() * 1000
def merge_sort(arr, p, r):
    start_time = datetime.datetime.now()
    if p < r:
        m = (p + (r - 1)) // 2
        merge_sort(arr, p, m)
        merge_sort(arr, m + 1, r)
        merge(arr, p, m, r)
    time_diff = datetime.datetime.now() - start_time
    return time_diff.total_seconds() * 1000
def merge(arr, p, q, r):
    n1 = q - p + 1
    n2 = r - q
    L = [0] * (n1 + 1)
    R = [0] * (n2 + 1)
    for i in range(0, n1):
        L[i] = arr[p + i]
    for j in range(0, n2):
        R[j] = arr[q + 1 + j]
    i = 0
    j = 0
    k = p
    while i < n1 and j < n2:
        if L[i] <= R[j]:
            arr[k] = L[i]
            i += 1
        else:
            arr[k] = R[j]
            j += 1
        k += 1
    while i < n1:
        arr[k] = L[i]
        i += 1
        k += 1
    while j < n2:
        arr[k] = R[j]
        j += 1
        k += 1
    return arr
x, y1, y2 = [], [], []
N = 5000
for i in range(1, N, N // 5):
    array = [j for j in range(i)]
    array = array[::-1]  # Array in reversed order
    x.append(i)
    y1.append(merge_sort(array, 0, len(array) - 1))
    y2.append(insertion_sort(array))

What am I missing here?
Your test is not correct, because you don't give the two sorting algorithms the same array order.
Since the first algorithm sorts the array in place, the second one receives an already sorted array.
To make a fair comparison, make a copy of the original array for each call, e.g. using [:]:
y1.append(merge_sort(array[:], 0, len(array) - 1))
y2.append(insertion_sort(array[:]))
And now the results will show what you really expected.
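Putting it together, the timing loop from the question could look like this (a sketch that assumes the same merge_sort and insertion_sort functions defined above; each sort is timed on its own reversed copy):

x, y1, y2 = [], [], []
N = 5000
for i in range(1, N, N // 5):
    array = list(range(i))[::-1]                        # worst case: reversed order
    x.append(i)
    y1.append(merge_sort(array[:], 0, len(array) - 1))  # each sort gets its own copy
    y2.append(insertion_sort(array[:]))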

Insertion sort runs very fast on sorted arrays. It just does n comparisons, and that's it.
So, a question for you: Why is insertion sort called with a sorted array in your code?
Hint: Try changing the order of these two lines and see how the running times change:
y1.append(merge_sort(array, 0, len(array) - 1))
y2.append(insertion_sort(array))

You're missing several significant factors:
Asymptotic complexity describes behavior as N approaches infinity. You're nowhere near infinity. :-) Some algorithms have high constant overhead, so their asymptotic advantage doesn't start to dominate execution time until the lists get much larger.
If you want to see these efficiency effects, you have to implement the algorithms efficiently. Your code has a lot of superfluous overhead, especially in the merge sort. I recommend researching a better implementation, as you're doing extra copying and list building that adds nothing to the final result.
Whatever you choose to implement, research its properties. As you can see from the raw figures and the graph, both of your functions are still dominated by the linear and constant components of the implementation, and have not yet reached the parts of the curve dominated by the N^2 and N log N terms.
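As an illustration only (this sketch is mine, not from the post), here is one common leaner formulation of merge sort that returns a new list; an index-based in-place merge would avoid even the slicing copies:

def merge_sort_clean(a):
    # Returns a new sorted list; recursion bottoms out on lists of length <= 1.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort_clean(a[:mid])
    right = merge_sort_clean(a[mid:])
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged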

Related

Is there a faster way to solve the following problem?

A is an m*n matrix
B is an n*n matrix
I want to return a matrix C of size m*n such that C[i][j] = sum over k of max(0, A[i][j] - B[j][k]).
In Python it could be computed like below:
for i in range(m):
    for j in range(n):
        C[i][j] = 0
        for k in range(n):
            C[i][j] += max(0, A[i][j] - B[j][k])

This runs in O(m * n^2) time.
If A[i][j] - B[j][k] were always > 0, it could easily be improved as
C[i][j] = n * A[i][j] - sum(B[j])
but is it possible to improve it as well when there are cases where A[i][j] - B[j][k] < 0? I think some divide-and-conquer algorithms might help here, but I am not familiar with them.
For each j, you can sort the column B[j][:] and compute its cumulative sums.
Then for a given A[i][j] you can find the sum of the B[j][k] that are smaller than A[i][j] in O(log n) time using binary search. If there are x elements of B[j][:] that are smaller than A[i][j] and their sum is S, then C[i][j] = A[i][j] * x - S.
This gives you an overall O((m+n)n log n) time algorithm.
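A vectorized NumPy sketch of this idea (my own illustration; the name fast_C and the variable names are assumptions, not from the answer):

import numpy as np

def fast_C(A, B):
    # A: (m, n), B: (n, n); C[i][j] = sum_k max(0, A[i][j] - B[j][k])
    Bs = np.sort(B, axis=1)        # sort each row B[j][:]
    pref = np.cumsum(Bs, axis=1)   # prefix sums per sorted row
    m, n = A.shape
    C = np.empty((m, n))
    for j in range(n):
        cnt = np.searchsorted(Bs[j], A[:, j])       # how many B[j][k] are smaller than A[i][j]
        s = np.where(cnt > 0, pref[j][cnt - 1], 0)  # sum of those smaller elements
        C[:, j] = A[:, j] * cnt - s
    return C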
I would look at a much simpler construct first and go from there.
Let's say the max with 0 wasn't there. Then the answer would simply be: a(i,j) * n - sum(b(j,:)).
For that you could just go linearly: sum each vector b(j,:) once and subtract it from a(i,j) * n. Because each vector of b only needs to be summed once per j, this can be done in O(max(m*n, n*n)).
Now think about a simple fix for the max problem: if you knew which elements of b(j,:) are bigger than a(i,j), you could just ignore their sum and subtract their count from the multiplier of a(i,j).
All of that can be done by sorting each vector b(j,:) by size and building a running-sum array for each one (this takes O(n*n*log(n)) because you sort each b(j,:) vector once).
Then you only need a binary search for where a(i,j) falls in the sorted vector, take the sum you already computed, and subtract it from a(i,j) times the position found by the binary search.
Eventually you'll get O(max(m*n*log(n), n*n*log(n))).
I also wrote an implementation for you:

import numpy as np

M = 4
N = 7
array = np.random.randint(100, size=(M, N))
array2 = np.random.randint(100, size=(N, N))

def matrixMacossoOperation(a, b, N, M):
    # Naive O(m*n^2) reference used to check the result
    cSlow = np.empty((M, N))
    for i in range(M):
        for j in range(N):
            cSlow[i][j] = 0
            for k in range(N):
                cSlow[i][j] += max(0, a[i][j] - b[j][k])
    # Sort each row of b and build its running sums
    for i in range(N):
        b[i].sort()
    sumArr = np.copy(b)
    for j in range(N):
        for i in range(N - 1):
            sumArr[j][i + 1] += sumArr[j][i]
    # Binary search per entry: count and sum of the b[j][k] smaller than a[i][j]
    c = np.empty((M, N))
    for i in range(M):
        for j in range(N):
            sumIndex = np.searchsorted(b[j], a[i][j])
            if sumIndex == 0:
                c[i][j] = 0
            else:
                c[i][j] = (sumIndex * a[i][j]) - sumArr[j][sumIndex - 1]
    print(c)
    assert np.array_equal(cSlow, c)

matrixMacossoOperation(array, array2, N, M)

Passing big arguments in change-making problem

I have a problem with a change-making algorithm.
My function coin_change_solutions works well with small numbers.
For example, if we pass [1, 10, 25] as coins and 32 as S (the change we want to make), it returns [10, 10, 10, 1, 1]. The problem occurs when I pass bigger numbers, because I want to operate on cents, not dollars, so that I have fixed-point arithmetic (a must, since floating-point arithmetic won't be a good idea later on).
So when I pass an array with all the denominations in cents, [1, 5, 10, 25, 50, 100, 200, 500, 1000, 2000, 10000, 50000], and 50000 as the change, my program stalls and never shows a result.
What should I do so that the algorithm is efficient enough to handle all the denominations in cents?
def coin_change_solutions(coins, S):
    # create an S x N table for memoization
    N = len(coins)
    sols = [[[] for n in range(N + 1)] for s in range(S + 1)]
    for n in range(0, N + 1):
        sols[0][n].append([])
    # fill table using bottom-up dynamic programming
    for s in range(1, S + 1):
        for n in range(1, N + 1):
            without_last = sols[s][n - 1]
            if (coins[n - 1] <= s):
                with_last = [list(sol) + [coins[n - 1]] for sol in sols[s - coins[n - 1]][n]]
            else:
                with_last = []
            sols[s][n] = without_last + with_last
    x = min(sols[S][N], key=len)
    return x
Not the solution to your query, but a better approach that uses much less space (wrapped in a function here so it runs as-is):

def coin_change(coins, S):
    # dp[i] = fewest coins needed to make amount i
    dp = [0] + [float('inf') for i in range(S)]
    for i in range(1, S + 1):
        for coin in coins:
            if i - coin >= 0:
                dp[i] = min(dp[i], dp[i - coin] + 1)
    if dp[-1] == float('inf'):
        return -1
    return dp[-1]
Assume dp[i] is the fewest number of coins making up amount i; then for every coin in coins with coin <= i, dp[i] = min(dp[i], dp[i - coin] + 1).
The time complexity is O(amount * coins.length) and the space complexity is O(amount).
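If you also need the list of coins rather than just their number (as your original function returns), one way to extend the same dp idea is to remember, for every amount, the last coin used, and walk back from S. The sketch below is illustrative (the names coin_change_with_coins and last are mine, not from the answer):

def coin_change_with_coins(coins, S):
    # dp[i] = fewest coins to make amount i; last[i] = last coin used to reach i
    INF = float('inf')
    dp = [0] + [INF] * S
    last = [None] * (S + 1)
    for i in range(1, S + 1):
        for coin in coins:
            if coin <= i and dp[i - coin] + 1 < dp[i]:
                dp[i] = dp[i - coin] + 1
                last[i] = coin
    if dp[S] == INF:
        return None  # amount cannot be made with these coins
    result = []
    while S > 0:
        result.append(last[S])
        S -= last[S]
    return result

# e.g. coin_change_with_coins([1, 10, 25], 32) -> [1, 1, 10, 10, 10] (order may differ)

Both versions run in O(S * len(coins)) time and O(S) space, so S = 50000 with a dozen denominations finishes almost instantly, unlike the list-of-lists table in the question, which grows combinatorially.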

How to obtain the result of n(n-1)(n-2) / 6

In my Python book, the question asks to prove the value of x after running the following code:
x = 0
for i in range(n):
    for j in range(i+1, n):
        for k in range(j+1, n):
            x += 1
What I could see is that:
i = 0; j=1; k=2: from 2 to n, x+=1, (n-2) times 1
i = 1; j=2; k=3: from 3 to n, x+=1, (n-3) times 1
...
i=n-3; j=n-2; k=n-1: from n-1 to n, x+=1, just 1
i=n-2; j=n-1; k=n doesn't add 1
So it seems that the x is the sum of series of (n-2) + (n-3) + ... + 1?
I am not sure how to get to the answer of n(n-1)(n-2)/6.
One way to view this is that you have n values and three nested loops constructed so that their index ranges never overlap. Thus the number of iterations is equal to the number of ways to choose three distinct values from n items, i.e. n choose 3 = n!/(3!(n-3)!) = n(n-1)(n-2)/(3*2*1) = n(n-1)(n-2)/6.
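A quick brute-force check of that identity (my own snippet, not part of the original answer):

from math import comb

def count_triples(n):
    # replicate the triple loop from the question
    x = 0
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                x += 1
    return x

for n in range(3, 12):
    assert count_triples(n) == n * (n - 1) * (n - 2) // 6 == comb(n, 3)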
Just write the for loops as a sigma: S = sum_{i=1}^{n} sum_{j=i+1}^{n} sum_{k=j+1}^{n} 1.
Try to expand the sum from the innermost to the outermost:
S = sum_{i=1}^{n} sum_{j=i+1}^{n} (n - j)
  = sum_{i=1}^{n} [ n(n-i) - ((i+1) + (i+2) + ... + n) ]
  = sum_{i=1}^{n} [ n(n-i) - (n(n+1)/2 - i(i+1)/2) ]
  = sum_{i=1}^{n} [ n(n-1)/2 + i(i+1)/2 - n*i ]
  = n^2(n-1)/2 + sum_{i=1}^{n} (i^2/2 + i/2 - n*i).
If you open this last sum and simplify it (using sum i = n(n+1)/2 and sum i^2 = n(n+1)(2n+1)/6), you will get S = n(n-1)(n-2)/6.

Count all sub-arrays having sum divisible by k

While I was studying for interviews, I found this question and solution on GeeksForGeeks, but don't understand the solution.
What it says is
Let there be a subarray (i, j) whose sum is divisible by k
sum(i, j) = sum(0, j) - sum(0, i-1)
The sum of any subarray can be written as q*k + rem, where q is the quotient and rem is the remainder. Thus,
sum(i, j) = (q1 * k + rem1) - (q2 * k + rem2)
sum(i, j) = (q1 - q2) * k + (rem1 - rem2)
We see, for sum(i, j) i.e. for sum of any subarray to be
divisible by k, the RHS should also be divisible by k.
(q1 - q2)k is obviously divisible by k, for (rem1-rem2) to
follow the same, rem1 = rem2 where
rem1 = Sum of subarray (0, j) % k
rem2 = Sum of subarray (0, i-1) % k
First of all, I don't get what q1 and q2 indicate.
def subCount(arr, n, k):
    # create auxiliary hash array to count frequency of remainders
    mod = []
    for i in range(k + 1):
        mod.append(0)
    cumSum = 0
    for i in range(n):
        cumSum = cumSum + arr[i]
        mod[((cumSum % k) + k) % k] = mod[((cumSum % k) + k) % k] + 1
    result = 0  # Initialize result
    # Traverse mod[]
    for i in range(k):
        if (mod[i] > 1):
            result = result + (mod[i] * (mod[i] - 1)) // 2
    result = result + mod[0]
    return result
And in this solution code, I don't get the role of mod. What is the effect of incrementing the count at index ((cumSum % k) + k) % k of the array?
It would be great if this can be explained step by step easily. Thanks.
Are you familiar with the integer modulo/remainder operation?
7 modulo 3 = 1, because 7 = 2 * 3 + 1.
Compare with the general form N % M = r, where N can be represented as N = q * M + r; here r is the remainder and q is the result of integer division, e.g. 7 // 3 = 2.
For modulo k there are k distinct remainders, 0..k-1.
The mod array contains a counter for every possible remainder. As the remainder of each prefix (cumulative) sum is calculated, the corresponding counter is incremented, so the resulting mod array might look like [3, 2, 5, 0, 7]: three prefix sums had remainder 0, two had remainder 1, and so on.
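One step the answer leaves implicit: two prefix sums with the same remainder differ by a multiple of k, so each pair of equal remainders marks one valid subarray. That is why the code adds mod[i]*(mod[i]-1)/2 for every remainder, plus mod[0] for the prefixes that are divisible by k on their own. A small check with a made-up array (assuming the subCount function from the question):

arr = [4, 5, 0, -2, -3, 1]
k = 5
# prefix sums are 4, 9, 9, 7, 4, 5 -> remainders 4, 4, 4, 2, 4, 0
# remainder 4 appears 4 times -> 6 pairs; remainder 0 appears once -> +1
print(subCount(arr, len(arr), k))  # 7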

Finding c so that sum(x+c) over positives = K

Say I have a 1D array x with positive and negative values in Python, e.g.:
x = random.rand(10) * 10
For a given positive value of K, I would like to find the offset c that makes the sum of positive elements of the array y = x + c equal to K.
How can I solve this problem efficiently?
How about binary search to determine which elements of x + c are going to contribute to the sum, followed by solving the linear equation? The running time of this code is O(n log n), but only O(log n) work is done in Python. The running time could be dropped to O(n) via a more complicated partitioning strategy. I'm not sure whether a practical improvement would result.
import numpy as np

def findthreshold(x, K):
    x = np.sort(np.array(x))[::-1]
    z = np.cumsum(np.array(x))
    l = 0
    u = x.size
    while u - l > 1:
        m = (l + u) // 2
        if z[m] - (m + 1) * x[m] >= K:
            u = m
        else:
            l = m
    return (K - z[l]) / (l + 1)

def test():
    x = np.random.rand(10)
    K = np.random.rand() * x.size
    c = findthreshold(x, K)
    assert np.abs(K - np.sum(np.clip(x + c, 0, np.inf))) / K <= 1e-8
Here's a randomized expected O(n) variant. It's faster (on my machine, for large inputs), but not dramatically so. Watch out for catastrophic cancellation in both versions.
def findthreshold2(x, K):
    sumincluded = 0
    includedsize = 0
    while x.size > 0:
        pivot = x[np.random.randint(x.size)]
        above = x[x > pivot]
        if sumincluded + np.sum(above) - (includedsize + above.size) * pivot >= K:
            x = above
        else:
            notbelow = x[x >= pivot]
            sumincluded += np.sum(notbelow)
            includedsize += notbelow.size
            x = x[x < pivot]
    return (K - sumincluded) / includedsize
You can sort x in descending order, loop over x and compute the required c thus far. If the next element plus c is positive, it should be included in the sum, so c gets smaller.
Note that it might be the case that there is no solution: if you include elements up to m, c might be such that element m+1 should also be included, but when you include m+1, c decreases and x[m+1] + c might become negative.
In pseudocode:
sortDescending(x)
i = 0, c = 0, sum = 0
while i < x.length and x[i] + c >= 0
    sum += x[i]
    i++
    c = (K - sum) / i
if i == 0 or x[i-1] + c < 0
    # no solution
The running time is obviously O(n log n) because it is dominated by the initial sort.
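For reference, a minimal runnable Python version of that pseudocode (assuming a NumPy-style 1D input; the name find_offset is just illustrative):

import numpy as np

def find_offset(x, K):
    # Sort descending, include elements one at a time, and re-solve for c each time.
    xs = np.sort(np.asarray(x, dtype=float))[::-1]
    s = 0.0
    c = 0.0
    i = 0
    while i < xs.size and xs[i] + c >= 0:
        s += xs[i]
        i += 1
        c = (K - s) / i
    if i == 0 or xs[i - 1] + c < 0:
        return None  # no solution
    return c

# usage sketch
x = np.random.rand(10) * 10 - 5
c = find_offset(x, 3.0)
if c is not None:
    assert abs(np.sum(np.clip(x + c, 0, None)) - 3.0) < 1e-9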
