I am trying to figure out the time complexity of this algorithm, where A is an array input; the code is just for demonstration purposes.
def func(A):
    result = 0
    n = len(A)
    while n > 1:
        n = n // 2                    # integer halving
        result = result + min(A[:n])  # minimum of the first n elements
    return result
This assumes array A is of length n.
I would assume the time complexity of this to be O(n log n), since the while loop runs O(log n) times and the min function has complexity O(n). However, this function is apparently O(n), not O(n log n). I am wondering how that can be.
The total amount of work depends only linearly on n. The min calls scan n/2 + n/4 + n/8 + ... = n(1/2 + 1/4 + 1/8 + ...) elements in total, and since the geometric series in the parentheses sums to less than 1, the total is less than n, which is O(n).
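To convince yourself, here is a minimal sketch (Python 3; scanned is a helper name of my own) that counts how many elements the min calls examine in total:

def scanned(length):
    # Total number of elements examined by min(A[:n]) across all iterations.
    total = 0
    n = length
    while n > 1:
        n = n // 2
        total += n  # min over the first n elements scans n items
    return total

for length in (16, 1024, 1 << 20):
    print(length, scanned(length))  # always strictly below length

For every input size the count stays below length itself, matching the geometric-series bound.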
Here is my code:
my_list.sort()
runningSum = 0
for idx, query in enumerate(my_list):
    runningSum += sum(my_list[:idx])
As far as I know:
sort is O(n log n)
the for loop is O(n)
sum is O(n)
But how can I calculate the total time complexity?
Thanks
As you said, sorting is O(n log n) (given that an appropriate algorithm is used).
Then the for loop runs n times. In the first iteration the slice is empty, in the second there is one element to sum, in the third two, and so on, up to n-1 in the last.
Therefore the computational effort can be described as the sum of the first n-1 natural numbers:
0 + 1 + 2 + ... + (n-1) = (n * (n - 1)) / 2 = 0.5 * (n² - n) = O(n²)
A sum like this can be evaluated with the Gaussian summation formula; for example, for n = 5 it gives 0 + 1 + 2 + 3 + 4 = 10 = (5 * 4) / 2.
The rules of big-O notation dictate that constant factors and lower-order terms are dropped (see the Wikipedia article on big-O notation).
Therefore the overall runtime will be:
O(n log n) + O(n²) = O(n²)
Note: the algorithm you created is not the most efficient, since it computes the same prefix sum over and over again; see the sketch below.
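A minimal O(n) rework of the loop (my sketch, keeping your variable name runningSum): maintain a running prefix total instead of re-summing the slice on every iteration.

my_list = [5, 2, 8, 1]  # example data of my own, for illustration
my_list.sort()
prefix = 0       # sum of all elements before the current index
runningSum = 0
for value in my_list:
    runningSum += prefix  # same value as sum(my_list[:idx])
    prefix += value

With the loop reduced to O(n), the O(n log n) sort dominates the total running time.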
I have some functions here and I need to determine their efficiency. I tried to figure it out myself first; my explanation is below each program.
1.
def funct(n):
    tot = 0
    for i in list(range(0, n, 5)):
        for j in list(range(0, n, n//5)):
            tot = tot + i - j
    return tot
Here n is a natural number. I think the efficiency of the program is O(n^2), because the efficiency mainly depends on the for loops. The first for loop is O(n), which is normal for a for loop, and the second for loop is still O(n).
2.
def funct(L):
    n = len(L)
    tot = 0
    M = []
    for i in L[:n//2]:
        M.append(i)
    for i in L[n//2:]:
        M.extend(L)
    return sum(M)
Here L is a list of numbers. I think the efficiency of this one is O(log n): the two for loops each only run through half of the list, and they are not nested, which makes me think the efficiency is only O(log n).
3.
def fn_f(n):
    n = n % 116
    tot = 0
    for i in range(n):
        for j in range(n**2):
            tot = tot + 1
    return tot
n is a natural number here. Because of the nested for loop, I think the efficiency is O(n^2). But the second for loop runs n**2 times, which means the efficiency might be exponential, so I think O(2^n) could be right too.
Can someone verify my answer? Thanks :)
Not quite, for the first one. The outer loop is O(n), since it runs n/5 times. But the inner loop's step is n//5, so it runs only about five times no matter how large n is, which is O(1). The first function is therefore O(n), not O(n^2).
For the second function: the first loop is O(n). In the second loop, M.extend(L) is O(n) because it has to copy L, and this is done n//2 times, which is O(n). So the second loop is O(n^2), and O(n) + O(n^2) is O(n^2), not O(log n).
For the third function: n = n % 116 limits the value of n to at most 115. So no matter how big the parameter n originally is, the loops are capped at that value, and the function runs in O(1) constant time. Without that limit it would be O(n^3), because the outer loop is O(n) and the inner loop is O(n^2).
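Coming back to the first function: to see its linear behaviour concretely, here is a sketch (funct_counted is a name of my own) that counts the inner-loop iterations instead of accumulating tot:

def funct_counted(n):
    count = 0
    for i in range(0, n, 5):           # about n/5 outer iterations
        for j in range(0, n, n // 5):  # step n//5, so about 5 inner iterations
            count += 1
    return count

for n in (100, 1000, 10000):
    print(n, funct_counted(n))  # grows linearly, roughly equal to n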
def f1(n):
    for i in range(n):
        k = aux1(n - i)
        while k > 0:
            print(i*k)
            k //= 2

def aux1(m):
    jj = 0
    for j in range(m):
        jj += j
    return m
I am trying to calculate the time complexity of function f1, and it's not really working out for me. I would appreciate any feedback on my work.
What I'm doing: I tried at first to substitute i = 1 and walk through one iteration. The function calls aux1 with m = n-1, aux1 iterates n-1 times and returns m = n-1, so now in f1 we have k = n-1, and the while k > 0 loop runs log(n-1) times. So the first run of f1 costs O(n), coming from the call to aux1.
But as the loop continues we keep calling aux1 with n-1, n-2, n-3, ..., 1, and I am a little confused about how to continue the calculation from here, or whether I'm on the right track at all.
Thanks in advance for any help and explanation!
This is all very silly but it can be figured out step by step.
The inner loop halves k every time, so its time complexity is O(log(aux1(n-i))).
Now what's aux1(n-i)? Its return value is actually just n-i. But running it has time complexity O(n-i) because of that superfluous, weird extra loop.
Okay, so for the inner stuff we have one part with time complexity n-i and one part with log(n-i); using the rules of time complexity, we can ignore the smaller part (the log) and focus on the larger part, which is O(n-i).
And now the outer loop has i run from 0 to n-1, which means our time complexity is O(n^2), because 1 + 2 + 3 + ... + n = n(n+1)/2 = O(n^2).
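If you want to check this empirically, here is a sketch (f1_ops is a helper of my own) that counts the dominant operations instead of printing:

def f1_ops(n):
    ops = 0
    for i in range(n):
        m = n - i
        ops += m       # the loop inside aux1 runs m times
        k = m
        while k > 0:   # about log2(m) halvings
            ops += 1
            k //= 2
    return ops

for n in (100, 1000):
    print(n, f1_ops(n))  # grows quadratically, close to n**2 / 2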
To find the complexity, I wouldn't suggest the substitution approach for this type of question. Rather, take the approach where you calculate the order of the functions based on the number of operations they perform.
Let's analyze it by first checking the line below:
for i in range(n):
This will run O(n) times, without any doubt.
k = aux1(n - i)
The total complexity of the above line is O(n * complexity of aux1(n-i)).
Let's find the complexity of aux1(n-i): because it has only one for loop, it also runs in O(n), hence the total cost of the above line is O(n * n).
Similarly, the while loop will contribute O(n * complexity of one run of the while loop).
while k > 0:
    print(i*k)
    k //= 2
This runs log(k) times, but k equals (n-i), which is of order O(n),
hence log(k) is log(n), making one run of the while loop O(log(n)).
So across all n iterations, the while loops have a total complexity of O(n * log(n)).
Now, adding up the overall complexities:
O(n * n) (total cost of the aux1 calls) + O(n * log(n)) (total cost of the while loops)
The above can be described as O(n^2), since big-O keeps only the dominant term as the upper bound.
This is the function:
c = []

def badsort(l):
    v = 0
    m = len(l)
    while v < m:
        c.append(min(l))
        l.remove(min(l))
        v += 1
    return c
Although I realize that this is a very inefficient way to sort, I was wondering what the time complexity of such a function would be, because although it does not have nested loops, it repeats linear-time operations many times inside the single loop.
Terminology
Assume that n = len(l).
Iteration Count
The loop runs n times. min() is evaluated twice per iteration, and each call scans all of l (room for optimisation here; see the sketch after this answer), but over incrementally decreasing lengths: with each iteration of the loop, the length of l decreases by one, because you remove an item from the list every time.
That way the number of scanned elements is 2 * (n + (n-1) + (n-2) + ... + 1).
The sum in the parentheses is a triangular number and equals n*(n+1)/2.
Therefore your count equals 2 * (n*(n+1)/2),
which simplifies to n^2 + n.
BigO Notation
BigO notation is interested in the overall growth trend, rather than the precise growth rate of the function.
Drop Constants
In BigO notation, constant factors are dropped. This leaves us still with n^2 + n, since no constant factors remain.
Retain Only Dominant Terms
Also, in BigO notation only the dominant terms are considered. n^2 is dominant over n, so n is dropped.
Result
That means the answer in BigO notation is O(n^2), i.e. quadratic complexity.
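As flagged under Iteration Count, min(l) is computed twice per iteration; here is a small rework (my sketch, with a local result list) that computes it once, halving the number of scans while remaining O(n^2):

def badsort_once(l):
    out = []
    l = list(l)             # work on a copy so the caller's list survives
    while l:
        smallest = min(l)   # one O(len(l)) scan instead of two
        out.append(smallest)
        l.remove(smallest)  # still O(len(l)) in the worst case
    return out

print(badsort_once([3, 1, 2]))  # [1, 2, 3]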
Here are a couple of useful points to help you understand how to find the complexity of a function.
Measure the number of iterations
Measure the complexity of each operation at each iteration
For the first point: the terminating condition is v < m, where v is 0 initially and m is the size of the list. Since v increments by one at each iteration, the loop runs exactly N times, where N is the size of the list.
Now, for the second point. Per iteration we have:
c.append(min(l))
where min is a linear operation taking O(N) time, and append is an amortized constant-time operation.
Next,
l.remove(min(l))
Again, min is linear, and so is remove. So you have O(N) + O(N), which is O(N).
In summary, you have O(N) iterations, and O(N) per iteration, making it O(N ** 2), or quadratic.
The time complexity for this problem is O(n^2). While the code itself has only one obvious loop (the while loop), the min and max functions are both O(n) by implementation: in the worst case they have to scan the entire list to find the minimum or maximum value. list.remove is O(n) as well, because it has to traverse the list until it finds the first matching value, which in the worst case is at the end. list.append is amortized O(1) thanks to a clever implementation of the method: n appends take O(n) total time, so each one costs O(n)/n = O(1) (see the sketch after the annotated code below).
def badsort(l):
    v = 0
    m = len(l)
    while v < m:          # O(n) iterations
        c.append(min(l))  # O(n) + O(1)
        l.remove(min(l))  # O(n) + O(n)
        v += 1
    return c
Thus, there is:
Outer(O(n)) * Inner(O(n) + O(n) + O(n)) = Outer(O(n)) * Inner(O(n))
O(n) + O(n) + O(n) can be combined into simply O(n) because big-O drops constant factors. Thus, by combining the outer and inner complexities, the final complexity is O(n^2).
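As an aside on the amortized-O(1) claim for append: in CPython (an implementation detail, not a language guarantee), lists over-allocate, so reallocations become progressively rarer as the list grows. A small sketch that makes the allocation jumps visible:

import sys

lst = []
last_size = 0
for i in range(100):
    lst.append(i)
    size = sys.getsizeof(lst)  # allocated size in bytes, not element count
    if size != last_size:      # print only when the allocation grows
        print(len(lst), size)
        last_size = size

The reallocations get further and further apart, which is why n appends cost O(n) in total.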
def sort_list(lst):
    result = []
    result.append(lst[0])
    for i in range(1, len(lst)):
        insert_list(lst[i], result)
    return result

def insert_list(x, lst):
    a = search(x, lst)
    lst.insert(a, x)
    return lst

def search(x, seq):
    for i in seq:
        if x < i:
            return seq.index(i)
        elif x == i:
            return seq.index(i)
        elif x > seq[-1]:
            return (seq.index(seq[-1])) + 1
Is the time complexity of this code O(n)?
That depends on the complexity of insert_list. Since insert_list is called inside a for loop, the overall complexity is n * (complexity of insert_list).
Say the loop takes n units of time, since there are n elements in the list, and each of the remaining steps takes one unit. The total time is then 1 + 1 + n*(time for insert_list) + 1 = n*(time for insert_list) + 3.
Again, the time complexity further depends on the insert_list step.
Consider this step,
a = search(x, lst)
It has two parts: the search and the assignment.
The search is a linear search, so its time complexity is O(n), and the assignment takes one unit of time.
lst.insert(a, x) is also O(n), since the elements after position a have to be shifted. So for the insert_list function, the time taken is (search) + (assign) + (insert) = n + 1 + n = 2n + 1.
So the total time taken = n*(2n + 1) + 3 = 2n^2 + n + 3.
Since big-O is an asymptotic notation, n is taken to be very large; the lower-order terms and constant factors can then be dropped. So the time complexity is O(n^2), a polynomial (quadratic) time complexity, not O(n).
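For comparison, here is a minimal sketch (my example, not the code above) of the same insertion-sort idea using the standard bisect module. The search drops to O(log n), but list.insert is still O(n), so the overall complexity remains O(n^2):

import bisect

def sort_list_bisect(lst):
    result = []
    for x in lst:
        bisect.insort(result, x)  # O(log n) search + O(n) insert
    return result

print(sort_list_bisect([3, 1, 2]))  # [1, 2, 3]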