I'm learning merge sort, and many tutorials I've seen merge by overwriting values of the original array, like the implementation linked here. I was wondering if my alternative implementation is correct; I have only seen one tutorial do the same. My implementation returns the sorted array and goes like this:
def mergesort(arr):
    if len(arr) == 1:
        return arr
    mid = len(arr) // 2
    left_arr = arr[:mid]
    right_arr = arr[mid:]
    return merge(mergesort(left_arr), mergesort(right_arr))
def merge(left_arr, right_arr):
    merged_arr = []  # put merge of left_arr & right_arr here
    i, j = 0, 0  # indices for left_arr & right_arr
    while i < len(left_arr) and j < len(right_arr):
        if left_arr[i] < right_arr[j]:
            merged_arr.append(left_arr[i])
            i += 1
        else:
            merged_arr.append(right_arr[j])
            j += 1
    # add remaining elements to resulting array
    merged_arr.extend(left_arr[i:])
    merged_arr.extend(right_arr[j:])
    return merged_arr
arr = [12, 11, 13, 5, 6, 7]
sorted_arr = mergesort(arr)
print(sorted_arr)
# Output: [5, 6, 7, 11, 12, 13]
To me, this is a more intuitive way of doing merge sort. Does this implementation break what merge sort should be? Is it less efficient speed-wise or space-wise (aside from creating the result arrays)?
If we are considering a merge sort with O(n) extra memory, then your implementation seems to be correct but inefficient. Let's take a look at these lines:
def mergesort(arr):
    ...
    mid = len(arr) // 2
    left_arr = arr[:mid]
    right_arr = arr[mid:]
You are actually creating two new arrays on each call to mergesort() and then copying elements from the original arr. That is two extra heap allocations and O(n) copies per call. Heap allocations are usually quite slow, due to complicated allocator algorithms.
Going further, let's consider this line:
merged_arr.append(left_arr[i])  # or similarly merged_arr.append(right_arr[j])
Here again a bunch of memory allocations happen, because you use a dynamically allocated array (aka list).
So, the most efficient way to merge sort would be to allocate one extra array, the size of the original array, once at the very beginning, and then use parts of it for temporary results.
def mergesort(arr):
    mergesort_helper(arr[:], arr, 0, len(arr))

def mergesort_helper(arr, aux, l, r):
    """ sorts from arr to aux """
    if l >= r - 1:
        return
    m = l + (r - l) // 2
    mergesort_helper(aux, arr, l, m)
    mergesort_helper(aux, arr, m, r)
    merge(arr, aux, l, m, r)
def merge(arr, aux, l, m, r):
    i = l
    j = m
    k = l
    while i < m and j < r:
        if arr[i] < arr[j]:
            aux[k] = arr[i]
            i += 1
        else:
            aux[k] = arr[j]
            j += 1
        k += 1
    while i < m:
        aux[k] = arr[i]
        i += 1
        k += 1
    while j < r:
        aux[k] = arr[j]
        j += 1
        k += 1
import random

def testit():
    for _ in range(1000):
        n = random.randint(1, 1000)
        arr = [0] * n
        for i in range(n):
            arr[i] = random.randint(0, 100)
        sarr = sorted(arr)
        mergesort(arr)
        assert sarr == arr

testit()
Do Python folks even worry about efficiency with their lists :)?
To get the best speed out of a classical merge sort in a compiled language, one should allocate the auxiliary buffer only once, to minimize allocation operations (memory throughput is frequently the limiting factor when the arithmetic is simple).
Perhaps this approach (preallocating the working space as a list the same size as the source) is useful in a Python implementation too.
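A quick, unscientific way to check that in Python (a sketch: mergesort_slicing and mergesort_prealloc are my placeholder names for the question's slicing version and the preallocated version above, renamed so they can coexist):
import random
import timeit

# hypothetical names: rename the question's mergesort to mergesort_slicing
# and the preallocation-based mergesort above to mergesort_prealloc first
data = [random.randint(0, 10**6) for _ in range(10**5)]
print(timeit.timeit(lambda: mergesort_slicing(list(data)), number=10))
print(timeit.timeit(lambda: mergesort_prealloc(list(data)), number=10))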
Your implementation of merge sort is right.
As you pointed out, you are using an extra array to merge your results. Using this alternative array adds a space complexity of O(n).
However, the first link you mentioned: https://www.geeksforgeeks.org/merge-sort/
also adds the same space complexity:
/* create temp arrays */
int L[n1], R[n2];
Note: In case you are interested, take a look at "in place" merge sort.
I think this is a good implementation of merge sort, because it satisfies the recurrence that defines merge sort's complexity: given n elements to be ordered,
T(n) = 2T(n/2) + n
which solves to O(n log n).
Related
I am looking at GeeksForGeeks problem Kth smallest element:
Given an array arr[] and an integer K where K is smaller than size of array, the task is to find the Kth smallest element in the given array. It is given that all array elements are distinct.
Expected Time Complexity: O(n)
Expected Auxiliary Space: O(log(n))
Constraints:
1 <= N <= 10^5
1 <= arr[i] <= 10^5
1 <= K <= N
My Code:
class Solution:
    def kthSmallest(self, arr, l, r, k):
        '''
        arr : given array
        l : starting index of the array i.e 0
        r : ending index of the array i.e size-1
        k : find kth smallest element and return using this function
        '''
        arr2 = arr[:k]
        arr2.insert(0, None)
        for i in range(k // 2, 0, -1):
            arr2 = self.heapify(arr2, i, k - 1)
        for i in arr[k:]:
            if i < arr2[1]:
                arr2[1] = i
                arr2 = self.heapify(arr2, 1, k - 1)
        return arr2[1]

    def heapify(self, arr, i, r):
        if 2 * i <= r + 1 and arr[2 * i] > arr[i]:
            arr[2 * i], arr[i] = arr[i], arr[i * 2]
            arr = self.heapify(arr, 2 * i, r)
        if 2 * i + 1 <= r + 1 and arr[2 * i + 1] > arr[i]:
            arr[2 * i + 1], arr[i] = arr[i], arr[i * 2 + 1]
            arr = self.heapify(arr, 2 * i + 1, r)
        return arr
I made a sub-array of the first K elements of the array and max-heapified it. Then, for each of the remaining elements, if the element is smaller than the first element of the heap, I replaced the top element and then max-heapified from the top. I am getting a time limit exceeded error. Any idea?
The problem is that your heapify function is not efficient. In the worst case it makes two recursive calls at the same recursion depth. This may even happen at several recursion depths, so that the number of times heapify is called recursively could become quite large. The goal is to have this only call heapify once (at the most) per recursion level.
It should first find the greatest child, and only then determine whether heapify should be called again, and make that single call if needed.
Some other remarks:
Instead of making heapify recursive, use an iterative solution. This will also save some execution time.
It is strange to pass k-1 to heapify as last argument, when the last element sits at index k, and so you get the weird comparison <= r + 1 in that function. It is more intuitive to pass k as argument, and work with <= r inside the function.
As arr is mutated in place by heapify, there is no need to return it; that is just useless overhead.
2 * i is calculated several times. It is better to calculate this only once.
arr[k:] makes a copy of that part of the list. This is not really needed. You could just iterate over the range and take the corresponding value from the array in the loop.
It is not clear why the main function needs to get l and r as arguments, since the comments explain that l will be 0 and r the index of the last element. But in my opinion, since you get them, you should use them, and not assume that l is 0, etc.
I would use a more descriptive name for arr2. Why not name it heap?
Here is the improved version of your code:
class Solution:
    def kthSmallest(self, arr, l, r, k):
        '''
        arr : given array
        l : starting index of the array i.e 0
        r : ending index of the array i.e size-1
        k : find kth smallest element and return using this function
        '''
        heap = [arr[i] for i in range(l, r + 1)]
        heap.insert(0, None)
        for i in range(k // 2, 0, -1):
            self.heapify(heap, i, k)
        for i in range(l + k, r + 1):
            val = arr[i]
            if val < heap[1]:
                heap[1] = val
                self.heapify(heap, 1, k)
        return heap[1]

    def heapify(self, arr, i, r):
        child = 2 * i
        while child <= r:
            if child + 1 <= r and arr[child + 1] > arr[child]:
                child += 1
            if arr[child] <= arr[i]:
                break
            arr[child], arr[i] = arr[i], arr[child]
            i = child
            child = 2 * i
Finally, there is a heapq module you can use, which simplifies your code:
from heapq import heapify, heapreplace

class Solution:
    def kthSmallest(self, arr, l, r, k):
        '''
        arr : given array
        l : starting index of the array i.e 0
        r : ending index of the array i.e size-1
        k : find kth smallest element and return using this function
        '''
        heap = [-arr[i] for i in range(l, l + k)]
        heapify(heap)
        for i in range(l + k, r + 1):
            val = -arr[i]
            if val > heap[0]:
                heapreplace(heap, val)
        return -heap[0]
The unary minus that occurs here and there is to make the native min-heap work as a max-heap.
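A quick sanity check with a made-up array (my example values, not from the problem statement):
# hypothetical usage: the 3rd smallest of [7, 10, 4, 3, 20, 15] is 7
print(Solution().kthSmallest([7, 10, 4, 3, 20, 15], 0, 5, 3))  # prints 7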
I am trying to solve a challenge in Python. The challenge consists of:
Given an array X of positive integers, its elements are to be transformed by running the following operation on them as many times as required:
if X[i] > X[j] then X[i] = X[i] - X[j]
When no more transformations are possible, return its sum ("smallest possible sum").
Basically, you pick two unequal numbers from the array and replace the larger of them with their difference. You repeat this until all numbers in the array are the same.
I tried a basic approach using min and max, but there is another constraint, which is time. I always get a timeout because my code is not optimized and takes too much time to execute. Can you please suggest some ways to make it run faster?
def solution(array):
    while len(set(array)) != 1:
        array[array.index(max(array))] = max(array) - min(array)
    return sum(array)
Thank you so much!
EDIT
I will avoid spoiling the challenge... because I didn't find the solution in Python. But here's the general design of an algorithm that works in Kotlin (in 538 ms). In Python I'm stuck in the middle of the performance tests.
Some thoughts:
First, the idea to reduce the other elements by the minimum is good: the modulo (we subtract the minimum as long as possible) will be small.
Second, if this minimum is 1, the array will soon be full of 1s and the result is N (the length of the array).
Third, if all elements are equal, the result is N times the value of one element.
The algorithm
The idea is to keep two indices: i is the current index that cycles over 0..N-1, and k is the index of the current minimum.
At the beginning, k = i = 0 and the minimum is m = arr[0]. We advance i until one of the following happens:
i == k => we made a full cycle without updating k; return N*m;
arr[i] == 1 => return N;
arr[i] < m => update k and m;
arr[i] > m => compute the new value of arr[i] (that is, arr[i] % m, or m if arr[i] is a multiple of m). If that is not m, then arr[i] % m < m: update k and m;
arr[i] == m => pass.
Basically, we use a rolling minimum and compute the modulos on the fly until all elements are the same. That spares periodically recomputing the minimum of the array.
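A minimal Python sketch of that design (my translation of the idea above; untested against the challenge's performance suite):
def smallest_sum(arr):
    n = len(arr)
    k = 0           # index of the current minimum
    m = arr[0]      # current minimum value
    i = 1 % n       # the modulo also handles the single-element case
    while i != k:   # a full cycle without updating k means we are done
        if arr[i] == 1:
            return n                # everything will reduce to 1
        if arr[i] > m:
            r = arr[i] % m
            arr[i] = r if r else m  # a multiple of m collapses to m
        if arr[i] < m:
            k, m = i, arr[i]        # new rolling minimum
        i = (i + 1) % n
    return n * m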
PREVIOUS ANSWER
As @BallpointBen wrote, you'll get n times the GCD of all numbers. But that's cheating ;)!
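For reference, that observation translates to a few lines (math.gcd and functools.reduce are in the standard library):
from functools import reduce
from math import gcd

def solution_gcd(arr):
    # n times the GCD of all numbers
    return len(arr) * reduce(gcd, arr)

print(solution_gcd([22, 14, 6, 2]))  # 8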
If you want to find a solution by hand, you can optimize your code. Until you find N identical numbers, you call the set, max (twice!), min, and index functions on the array on every iteration. Those functions are pretty expensive, and the number of iterations depends on the array.
Imagine the array is sorted in reverse order: [22, 14, 6, 2]. You can replace 22 by 22-14, 14 by 14-6, ..., and get: [8, 12, 4, 2]. Sort again: [12, 8, 4, 2]; replace again: [4, 4, 4, 2]. Sort again, replace again (if different): [4, 4, 2, 2], [4, 2, 2, 2], [2, 2, 2, 2]. Actually, in the first pass 14 could be replaced by 14 - 2*6 = 2 (as in the classic GCD computation), giving the following sequence:
[22, 14, 6, 2]
[8, 2, 2, 2]
[2, 2, 2, 2]
The convergence is fast.
def solution2(arr):
    N = len(arr)
    end = False
    while not end:
        arr = sorted(arr, reverse=True)
        end = True
        for i in range(1, N):
            while arr[i-1] > arr[i]:
                arr[i-1] -= arr[i]
                end = False
    return sum(arr)
A benchmark:
import random
import timeit
arr = [4*random.randint(1, 100000) for _ in range(100)] # GCD will be 4 or a multiple of 4
assert solution(list(arr)) == solution2(list(arr))
print(timeit.timeit(lambda: solution(list(arr)), number=100))
print(timeit.timeit(lambda: solution2(list(arr)), number=100))
Output:
2.5396839629975148
0.029025810996245127
def solution(a):
    N = len(a)
    end = False
    while not end:
        a = sorted(a, reverse=True)
        small = min(a)
        end = True
        for i in range(1, N):
            if a[i-1] > small:
                a[i-1] = a[i-1] % small if a[i-1] % small != 0 else small
                end = False
    return sum(a)
Made it faster with a slight change.
This solution worked for me. I iterate over the list only once. Initially I find the minimum; then, while iterating over the list, I replace each element with the remainder of its division by the minimum. If I find a remainder equal to 1, the result is trivially 1 multiplied by the length of the list; otherwise, if the remainder is less than the minimum, I replace the variable m with the new minimum and continue. Once the iteration is finished, the result is the minimum times the length of the list.
Here the code:
def solution(a):
    L = len(a)
    if L == 1:
        return a[0]
    m = min(a)
    for i in range(L):
        if a[i] != m:
            if a[i] % m != 0:
                a[i] = a[i] % m
                if a[i] < m:
                    m = a[i]
            elif a[i] % m == 0:
                a[i] -= m * (a[i] // m - 1)
        if a[i] == 1:
            return 1 * L
    return m * L
I'm trying to design a function that, given an array A of N integers, returns the smallest positive integer (greater than 0) that does not occur in A.
This code works fine, yet it has a high order of complexity. Is there another solution that reduces the order of complexity?
Note: the 10000000 number is the range of integers in array A. I tried the sort function, but does it reduce the complexity?
def solution(A):
    for i in range(1, 10000000):  # start at 1, since we want a positive integer
        if A.count(i) <= 0:
            return i
The following is O(n log n):
a = [2, 1, 10, 3, 2, 15]

a.sort()
if a[0] > 1:
    print(1)
else:
    for i in range(1, len(a)):
        if a[i] > a[i - 1] + 1:
            print(a[i - 1] + 1)
            break
If you don't like the special handling of 1, you could just append zero to the array and have the same logic handle both cases:
a = sorted(a + [0])
for i in range(1, len(a)):
    if a[i] > a[i - 1] + 1:
        print(a[i - 1] + 1)
        break
Caveats (all trivial to fix and all left as an exercise for the reader):
Neither version handles empty input.
The code assumes there are no negative numbers in the input.
Neither version prints anything when the input has no gap (e.g. [1, 2, 3]); the answer is then max(a) + 1.
O(n) time and O(n) space:
def solution(A):
    count = [0] * len(A)
    for x in A:
        if 0 < x <= len(A):
            count[x-1] = 1  # count[0] is to count 1
    for i in range(len(count)):
        if count[i] == 0:
            return i + 1
    return len(A) + 1  # only if A = [1, 2, ..., len(A)]
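A quick check with a made-up input (my example):
print(solution([2, 1, 10, 3, 2, 15]))  # prints 4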
This should be O(n). It utilizes a temporary set to speed things along.
a = [2, 1, 10, 3, 2, 15]

# use a set of only the positive numbers for lookup
temp_set = set()
for i in a:
    if i > 0:
        temp_set.add(i)

# iterate from 1 up to length of set + 1 (to ensure the edge case is handled)
for i in range(1, len(temp_set) + 2):
    if i not in temp_set:
        print(i)
        break
My proposal is a recursive function inspired by quicksort.
Each step divides the input sequence into two sublists (lt = less than pivot; ge = greater than or equal to pivot) and decides which of the sublists is to be processed in the next step. Note that there is no sorting.
The idea is that a set of integers with lo <= n < hi contains "gaps" only if it has fewer than (hi - lo) elements.
The input sequence must not contain duplicates. A set can be passed directly.
# all cseq items > 0 assumed, no duplicates!
def find(cseq, cmin=1):
    # cmin = possible minimum not ruled out yet
    size = len(cseq)
    if size <= 1:
        return cmin + 1 if cmin in cseq else cmin
    lt = []
    ge = []
    pivot = cmin + size // 2
    for n in cseq:
        (lt if n < pivot else ge).append(n)
    return find(lt, cmin) if cmin + len(lt) < pivot else find(ge, pivot)

test = set(range(1, 100))
print(find(test))  # 100
test.remove(42)
print(find(test))  # 42
test.remove(1)
print(find(test))  # 1
Inspired by various solutions and comments above, this is about 20%-50% faster in my (simplistic) tests than the fastest of them (though I'm sure it could be made faster), and it handles all the corner cases mentioned (non-positive numbers, duplicates, and an empty list):
import numpy

def firstNotPresent(l):
    positive = numpy.fromiter(set(l), dtype=int)  # deduplicate
    positive = positive[positive > 0]  # only keep positive numbers
    positive.sort()
    top = positive.size + 1
    if top == 1:  # empty list
        return 1
    sequence = numpy.arange(1, top)
    try:
        # first position where the 1-based index is smaller than the value;
        # index + 1 converts the 0-based index back to the missing number
        return numpy.where(sequence < positive)[0][0] + 1
    except IndexError:  # no numbers are missing, top is next
        return top
The idea is: if you enumerate the positive, deduplicated, sorted list starting from one, the first time the index is less than the list value, the index value is missing from the list, and hence is the lowest positive number missing from the list.
This and the other solutions I tested against (those from adrtam, Paritosh Singh, and VPfB) all appear to be roughly O(n), as expected. (It is, I think, fairly obvious that linear time is a lower bound, since every element in the list must be examined to find the answer.) Edit: looking at this again, of course the big-O for this approach is at least O(n log n), because of the sort. It's just that the sort is so fast comparatively speaking that it looked linear overall.
I have been asked to sort a k-messed array (an array in which each element is at most k positions away from its sorted position).
I have the code below, and I have to reduce the complexity from n log n to n log k.
arr = [3, 2, 1, 4, 5, 6, 8, 10, 9]
k = 2

def sortKmessedarr(arr, k):
    i = 1
    j = 0
    n = len(arr)
    while i < n:
        if arr[i] > arr[i-1]:
            pass
        else:
            arr[i-1:i+k].sort()  # How to sort elements between two specific indexes?
        i += 1

sortKmessedarr(arr, k)
print(arr)
I think if I apply this approach then it will become n log k, but how do I apply this sort() between two indexes?
I have also tried another approach like below:
arr = [3,2,1,4,5,6,8,10,9]
k = 2
def sortKmessedarr(arr, k):
def merge(arr):
arr.sort()
print(arr)
i = 1
j = 0
n = len(arr)
while i < n:
if arr[i] > arr[i-1]:
pass
else:
merge(arr[i-1:i+k])#.sort()
i += 1
sortKmessedarr(arr, k)
print(arr)
But still no luck
You can use sorted with slice assignment to get the intended effect syntactically, but I am unsure of the impact on performance (memory or speed):
arr[i-1:i+k] = sorted(arr[i-1:i+k])
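Applied to your first attempt, that would look something like this (a sketch; note that slicing still copies, so each fix-up costs O(k log k) and the memory use is not ideal):
arr = [3, 2, 1, 4, 5, 6, 8, 10, 9]
k = 2

def sort_k_messed(arr, k):
    i = 1
    while i < len(arr):
        if arr[i] < arr[i - 1]:
            # slice assignment writes the sorted copy back in place
            arr[i - 1:i + k] = sorted(arr[i - 1:i + k])
        i += 1

sort_k_messed(arr, k)
print(arr)  # [1, 2, 3, 4, 5, 6, 8, 9, 10]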
What is the fastest way to sort an array of whole integers bigger than 0 and less than 100000 in Python, without using built-in functions like sort?
I'm looking at the possibility of combining 2 sort functions depending on input size.
If you are interested in asymptotic time, then counting sort or radix sort provide good performance.
However, if you are interested in wall-clock time, you will need to compare performance between different algorithms using your particular data sets, as different algorithms perform differently with different datasets. In that case, it's always worth trying quicksort:
def qsort(inlist):
    if inlist == []:
        return []
    else:
        pivot = inlist[0]
        lesser = qsort([x for x in inlist[1:] if x < pivot])
        greater = qsort([x for x in inlist[1:] if x >= pivot])
        return lesser + [pivot] + greater
Source: http://rosettacode.org/wiki/Sorting_algorithms/Quicksort#Python
Since you know the range of the numbers, you can use counting sort, which will be linear in time.
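A minimal sketch under the question's constraints (values between 1 and 99999):
def counting_sort(arr, max_value=100000):
    counts = [0] * max_value           # one slot per possible value
    for x in arr:
        counts[x] += 1
    out = []
    for value, count in enumerate(counts):
        out.extend([value] * count)    # emit each value count times
    return out

print(counting_sort([5, 3, 99999, 3, 1]))  # [1, 3, 3, 5, 99999]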
Radix sort theoretically runs in linear time (sort time grows roughly in direct proportion to array size), but in practice quicksort is probably more suited, unless you're sorting absolutely massive arrays.
If you want to make quicksort a bit faster, you can use insertion sort when the array size becomes small (a sketch follows below).
It would probably be helpful to understand the concepts of algorithmic complexity and Big-O notation too.
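A sketch of that hybrid idea: a quicksort that falls back to insertion sort on small slices (the cutoff of 16 is an arbitrary choice of mine, worth tuning on real data):
def insertion_sort(a, lo, hi):
    # sorts a[lo..hi] inclusive
    for i in range(lo + 1, hi + 1):
        key = a[i]
        j = i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def hybrid_quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if hi - lo < 16:                 # small slice: insertion sort wins
        insertion_sort(a, lo, hi)
        return
    pivot = a[(lo + hi) // 2]
    i, j = lo, hi
    while i <= j:                    # Hoare-style partition
        while a[i] < pivot:
            i += 1
        while a[j] > pivot:
            j -= 1
        if i <= j:
            a[i], a[j] = a[j], a[i]
            i += 1
            j -= 1
    hybrid_quicksort(a, lo, j)
    hybrid_quicksort(a, i, hi)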
Early versions of Python used a hybrid of samplesort (a variant of quicksort with a large sample size) and binary insertion sort as the built-in sorting algorithm. This proved to be somewhat unstable, so from Python 2.3 onward the built-in sort uses an adaptive merge sort (Timsort).
Order of merge sort (average) = O(n log n).
Order of merge sort (worst) = O(n log n).
But order of quicksort (worst) = O(n^2).
If you use a list = [..............], then list.sort() uses this merge sort algorithm.
For a comparison between sorting algorithms, you can read the wiki; for a detailed comparison, see comp.
I might be a little late to the show, but there's an interesting article that compares different sorts at https://www.linkedin.com/pulse/sorting-efficiently-python-lakshmi-prakash
One of the main takeaways is that while the default sort does great, we can do a little better with a compiled version of quicksort. This requires the Numba package.
Here's a link to the Github repo:
https://github.com/lprakash/Sorting-Algorithms/blob/master/sorts.ipynb
We can use counting sort with a dictionary to minimize the additional space usage and keep the running time low as well. Counting sort is much slower for small input arrays because of the Python-vs-C implementation overhead; it starts to overtake the regular sort when the size of the array (COUNT) is about 1 million.
If you really want huge speedups for smaller inputs, implement the counting sort in C and call it from Python.
(Fixed a bug which Aaron (+1) helped catch ...)
The Python-only implementation below compares the 2 approaches...
import random
import time

COUNT = 3000000

array = [random.randint(1, 100000) for i in range(COUNT)]
random.shuffle(array)

array1 = array[:]
start = time.time()
array1.sort()
end = time.time()
time1 = end - start
print('Time to sort = ', time1 * 1000, 'ms')

array2 = array[:]
start = time.time()
ardict = {}
for a in array2:
    try:
        ardict[a] += 1
    except KeyError:
        ardict[a] = 1
indx = 0
for a in sorted(ardict.keys()):
    b = ardict[a]
    array2[indx:indx+b] = [a for i in range(b)]
    indx += b
end = time.time()
time2 = end - start
print('Time to count sort = ', time2 * 1000, 'ms')
print('Ratio =', time2 / time1)
The built-in functions are best, but since you can't use them, have a look at this:
http://en.wikipedia.org/wiki/Quicksort
def sort(l):
    p = 0
    while p < len(l) - 1:
        if l[p] > l[p+1]:
            l[p], l[p+1] = l[p+1], l[p]
            if not p == 0:
                p = p - 1
        else:
            p += 1
    return l
This is an algorithm that I created, but it is really fast. Just call sort(l), with l being the list that you want to sort.
@fmark: some benchmarking of a Python merge sort implementation I wrote, against the Python quicksorts from http://rosettacode.org/wiki/Sorting_algorithms/Quicksort#Python and from the top answer.
The size of the list and the size of the numbers in the list are irrelevant: merge sort wins. However, it uses the built-in int() to floor the midpoint.
import numpy as np

x = list(np.random.rand(100))

# TEST 1, merge_sort
def merge(l, p, q, r):
    n1 = q - p + 1
    n2 = r - q
    left = l[p : p + n1]
    right = l[q + 1 : q + 1 + n2]
    i = 0
    j = 0
    k = p
    while k < r + 1:
        if i == n1:
            l[k] = right[j]
            j += 1
        elif j == n2:
            l[k] = left[i]
            i += 1
        elif left[i] <= right[j]:
            l[k] = left[i]
            i += 1
        else:
            l[k] = right[j]
            j += 1
        k += 1

def _merge_sort(l, p, r):
    if p < r:
        q = int((p + r) / 2)
        _merge_sort(l, p, q)
        _merge_sort(l, q + 1, r)
        merge(l, p, q, r)

def merge_sort(l):
    _merge_sort(l, 0, len(l) - 1)
# TEST 2
def quicksort(array):
    _quicksort(array, 0, len(array) - 1)

def _quicksort(array, start, stop):
    if stop - start > 0:
        pivot, left, right = array[start], start, stop
        while left <= right:
            while array[left] < pivot:
                left += 1
            while array[right] > pivot:
                right -= 1
            if left <= right:
                array[left], array[right] = array[right], array[left]
                left += 1
                right -= 1
        _quicksort(array, start, right)
        _quicksort(array, left, stop)
# TEST 3
def qsort(inlist):
    if inlist == []:
        return []
    else:
        pivot = inlist[0]
        lesser = qsort([x for x in inlist[1:] if x < pivot])
        greater = qsort([x for x in inlist[1:] if x >= pivot])
        return lesser + [pivot] + greater

def test1():
    merge_sort(x)

def test2():
    quicksort(x)

def test3():
    qsort(x)

if __name__ == '__main__':
    import timeit
    print('merge_sort:', timeit.timeit("test1()", setup="from __main__ import test1, x;", number=10000))
    print('quicksort:', timeit.timeit("test2()", setup="from __main__ import test2, x;", number=10000))
    print('qsort:', timeit.timeit("test3()", setup="from __main__ import test3, x;", number=10000))
Bucket sort with bucket size = 1. Memory is O(m), where m is the range of values being sorted. Running time is O(n), where n is the number of items being sorted. When the integer type used to record counts is bounded, this approach will fail if any value appears more than MAXINT times.
def sort(items):
    seen = [0] * 100000
    for item in items:
        seen[item] += 1
    index = 0
    for value, count in enumerate(seen):
        for _ in range(count):
            items[index] = value
            index += 1
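Example usage (my toy input; values must stay within the stated 1..99999 range):
data = [42, 7, 99999, 7, 1]
sort(data)    # sorts in place
print(data)   # [1, 7, 7, 42, 99999]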