How to calculate time complexity of these functions - python

def f1(n):
    for i in range(n):
        k = aux1(n - i)
        while k > 0:
            print(i*k)
            k //= 2

def aux1(m):
    jj = 0
    for j in range(m):
        jj += j
    return m
I am trying to calculate the time complexity of function f1, and it's not really working out for me. I would appreciate any feedback on my work.
What I'm doing: I started by substituting i=1 and tracing one iteration: the function calls aux1 with m = n-1, aux1 iterates n-1 times and returns m = n-1, so back in f1 we have k = n-1, and the while k > 0 loop runs log(n-1) times. So the first iteration of f1 costs O(n), coming from the call to aux1.
But as the loop continues, we keep calling aux1 with n-1, n-2, n-3, ..., 1, and I'm a little confused about how to continue the calculation from here, or whether I'm on the right track at all.
Thanks in advance for any help and explanation!

This is all very silly but it can be figured out step by step.
The inner loop halves k every time, so its time complexity is O(log(aux1(n-i))).
Now what's aux1(n-i)? Its return value is simply n-i. But computing it takes time proportional to n-i, because of that superfluous extra loop inside aux1.
Okay, so for the body of the outer loop we have one part with cost n-i and one part with cost log(n-i). By the usual rules of time complexity we can ignore the smaller part (the log) and keep the larger one: O(n-i).
And now the outer loop runs i from 0 to n-1, which makes the total (n) + (n-1) + ... + 1, so the time complexity is O(n^2), because 1 + 2 + 3 + ... + n = n(n+1)/2 = O(n^2).
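If you want to sanity-check this, here's a small counting sketch (my own addition, not from the question): it mirrors f1 but tallies iterations instead of printing, and compares the total against n(n+1)/2, the dominant term.

def aux1_count(m):
    # mirrors aux1, returning (result, number of loop iterations)
    jj = 0
    for j in range(m):
        jj += j
    return m, m

def f1_count(n):
    ops = 0
    for i in range(n):
        k, cost = aux1_count(n - i)
        ops += cost              # n-i iterations inside aux1
        while k > 0:             # ~log2(n-i) halvings
            ops += 1
            k //= 2
    return ops

for n in (100, 200, 400):
    print(n, f1_count(n), n * (n + 1) // 2)

The count is dominated by the n(n+1)/2 term, and the ratio between the two printed columns approaches 1 as n grows.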

To find the complexity, I wouldn't suggest the substitution approach for this type of question. Instead, try the approach where you actually calculate the order of each function from the number of operations it performs.
Let's analyze it by first checking the line below:
for i in range(n):
This loop runs n times, so it is O(n), without any doubt.
k = aux1(n - i)
This line executes once per iteration of the for loop, so it contributes O(n * cost of aux1(n-i)).
Let's find the cost of aux1(n-i): it contains a single for loop over m = n-i items, so one call is O(n). Hence the line above contributes O(n * n) in total.
Similarly, the while loop contributes O(n * cost of one run of the while loop):
while k > 0:
    print(i*k)
    k //= 2
One run of this loop takes log(k) iterations, because k is halved each time. Since k = n-i is O(n), log(k) is O(log(n)), so one run costs O(log(n)),
and the while loop contributes O(n*log(n)) in total.
Now add up the overall complexities:
O(n * n) (from aux1) + O(n * log(n)) (from the while loop)
which can be described as O(n^2), since big-O keeps only the dominant upper bound.
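If you prefer the two contributions rolled into a single summation (my own formalization of the steps above, in the same ASCII notation used elsewhere in this thread):

T(n) = sum_(i=0)^(n-1) [ (n-i) + (log2(n-i) + 1) ] = n(n+1)/2 + O(n log n) = O(n^2)

The (n-i) term is the cost of aux1(n-i); the (log2(n-i) + 1) term is the while loop.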

Related

What is the efficiency of these 3 programs (in Python)

I have some functions here and I need to determine the efficiency for them. I tried to figure it out myself first and I have the explanation for my answers below each program.
1.
def funct(n):
    tot = 0
    for i in list(range(0, n, 5)):
        for j in list(range(0, n, n//5)):
            tot = tot + i - j
    return tot
Here n is a natural number. I think the efficiency of this program is O(n^2), because it mainly depends on the for loops: the first for loop is O(n), which is normal for a for loop, and the second for loop is still O(n).
2.
def funct(L):
    n = len(L)
    tot = 0
    M = []
    for i in L[:n//2]:
        M.append(i)
    for i in L[n//2:]:
        M.extend(L)
    return sum(M)
Here L is a list of numbers. I think the efficiency of this one is O(log n): the two for loops each go through only half of the list, and they are not nested, which makes me think the efficiency is only O(log n).
3.
def fn_f(n):
    n = n % 116
    tot = 0
    for i in range(n):
        for j in range(n**2):
            tot = tot + 1
    return tot
n is a natural number here. Because of the nested for loop, I think the efficiency is O(n^2). But the second for loop is n**2, which means that the efficiency might be exponential. So I think O(2^n) could be right too.
Can someone verify my answer? Thanks :)
Not quite on the first one: the inner loop's step is n//5, so it runs only about five times no matter how large n gets. The outer loop is O(n), so the first function is O(n), not O(n^2).
In the second function, the first loop is O(n). In the second loop, M.extend(L) is O(n) because it has to copy all of L, and this is done n//2 times, which is O(n) times. So the second loop is O(n^2), and O(n) + O(n^2) is O(n^2).
n = n % 116 caps the value of n at 115. So no matter how big the parameter n originally is, the loops are bounded by a constant, which makes this function O(1). Without that cap it would be O(n^3), because the outer loop is O(n) and the inner loop is O(n^2).
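If it helps, here's a quick counting sketch (my own addition) that makes all three verdicts visible. It counts loop iterations rather than timing, and for program 2 it reports the final size of M, which is proportional to the work done:

def count1(n):
    it = 0
    for i in range(0, n, 5):
        for j in range(0, n, n // 5):   # only ~5 values of j, whatever n is
            it += 1
    return it

def size2(L):
    n = len(L)
    M = []
    for i in L[:n // 2]:
        M.append(i)     # n//2 appends
    for i in L[n // 2:]:
        M.extend(L)     # n//2 extends, each copying all n items
    return len(M)       # ~ n/2 + n^2/2: quadratic

def count3(n):
    n = n % 116         # caps n at 115
    it = 0
    for i in range(n):
        for j in range(n ** 2):
            it += 1
    return it

for n in (100, 200, 400):
    print(n, count1(n), size2(list(range(n))), count3(n))

count1 doubles when n doubles (linear), size2 roughly quadruples (quadratic), and count3 never exceeds 115**3 no matter how large n gets (constant).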

What would be the worst case run time of my two algorithms

I'm having a really hard time understanding how to calculate worst-case run times, and run times in general. Since there is a while loop, would the run time have to be n+1 because the while loop must run one additional time to check whether the condition is still valid? I've also been searching online for a good explanation of, and practice on, how to calculate these run times, but I can't seem to find anything good. A link to something like this would be very much appreciated.
def reverse1(lst):
    rev_lst = []
    i = 0
    while i < len(lst):
        rev_lst.insert(0, lst[i])
        i += 1
    return rev_lst

def reverse2(lst):
    rev_lst = []
    i = len(lst) - 1
    while i >= 0:
        rev_lst.append(lst[i])
        i -= 1
    return rev_lst
Constant factors or added values don't matter for big-O run times, so you're over-complicating this. The run time is O(n) (linear) for reverse2, and O(n**2) (quadratic) for reverse1, because list.insert(0, x) is itself an O(n) operation, performed O(n) times.
Big-O runtime calculations are about how the algorithm behaves as the input size increases towards infinity, and the smaller factors don't matter here; O(n + 1) is the same as O(n) (as is O(5n) for that matter; as n increases, the constant multiplier of 5 is irrelevant to the change in runtime), O(n**2 + n) is still just O(n**2), etc.
Since the number of iterations is fixed for any given size of the input list for both functions, the "worst" time complexity would be the same as the "best" and the average here.
In reverse1, inserting an item into a list at index 0 costs O(n), because every existing item has to shift one position to the right. Coupled with the while loop that runs once per element of the input list, the time complexity of reverse1 is O(n^2).
There's no such issue in reverse2, however, since append costs only O(1) (amortized) per call, so its overall time complexity is O(n).
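A minimal timing sketch (my addition, not part of the original answer): doubling the input size roughly quadruples reverse1's time but only doubles reverse2's.

import time

def reverse1(lst):
    rev_lst = []
    i = 0
    while i < len(lst):
        rev_lst.insert(0, lst[i])   # shifts every existing item: O(n)
        i += 1
    return rev_lst

def reverse2(lst):
    rev_lst = []
    i = len(lst) - 1
    while i >= 0:
        rev_lst.append(lst[i])      # amortized O(1)
        i -= 1
    return rev_lst

for n in (20000, 40000):
    data = list(range(n))
    for f in (reverse1, reverse2):
        t0 = time.perf_counter()
        f(data)
        print(f.__name__, n, round(time.perf_counter() - t0, 4))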
I'm going to give you a mathematical explanation of why extra iterations and constant-time operations don't matter.
Formally, f(n) ∈ O(g(n)) means there exists some constant k such that f(n) < k·g(n) for all sufficiently large n.
Consider an algorithm with runtime f(n) = 10000n + 15000000. One way to simplify this is to factor out the n: f(n) = n(10000 + 15000000/n). For the worst-case runtime you only care about the performance of the algorithm for very large values of n, and as n gets really big, 15000000/n approaches 0, so the coefficient of n approaches 10000. Therefore, for n > N (that is, for a large enough value of n) there exists a constant k, for example k = 10001, such that f(n) < kn. Hence f(n) ∈ O(n): the algorithm has linear runtime.
With that said, you don't need to worry about constant differences in your runtime, even if you loop n+1 times. The only part that matters (for polynomial time) is the highest power of n in your code. reverse2 is O(n), and it would still be O(n) even if it iterated n + 1000 times. reverse1 is O(n^2) not because of any constant, but because each insert(0, ...) contributes an extra factor of n, as explained above.

Big-O of my function

I am trying to understand Big-O notation, so I made my own example of an O(n) function using a while loop, since I find while loops a bit confusing to reason about in Big-O terms. I defined a function called linear_example that takes in a list; the example is in Python.
So my code is :
def linear_example(l):
    n = 10
    while n > 1:
        n -= 1
        for i in l:
            print(i)
My thought process is that the code in the for loop runs in constant time, O(1), and the code in the while loop runs in O(n) time. So altogether it would be O(1) + O(n), which evaluates to O(n).
Feedback?
Think of a simple for-loop:
for i in l:
    print(i)
This will be O(n) since you’re iterating through the list for however many items exist in l. (Where n == len(l))
Now we add a while loop that repeats that pass a constant number of times (nine, since n counts down from 10 to 2), so:
n + n + ... + n (x9)
And the complexity is O(9n).
Since this is still a polynomial of degree one, we can simplify it down to O(n), yes.
Not quite. First of all, the n in your code is just the constant 10; there is no free variable n, so calling the while loop "O(n)" is meaningless. Let's make the starting value a parameter M, changing the first two lines:
def linear_example(l, M):
    n = M
The body of the for loop does run in O(1) time, provided that each element i of l takes bounded time to print. However, the loop iterates len(l) times, so one full pass is O(len(l)).
Now, that pass runs once for every value of n in the while loop, a total of M-1 times. Therefore, the complexity is the product of the loop complexities: O(M * len(l)).
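To see that product concretely, here is a counting variant (my own sketch) that returns the number of prints instead of printing:

def linear_example_count(l, M):
    n = M
    count = 0
    while n > 1:          # runs M-1 times
        n -= 1
        for i in l:       # len(l) iterations per pass
            count += 1
    return count

print(linear_example_count([0] * 50, 10))   # (10-1) * 50 = 450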

Understanding how to measure the time complexity of a function

This is the function:
c = []

def badsort(l):
    v = 0
    m = len(l)
    while v < m:
        c.append(min(l))
        l.remove(min(l))
        v += 1
    return c
Although I realize that this is a very inefficient way to sort, I was wondering what the time complexity of such a function would be, as although it does not have nested loops, it repeats the loop multiple times.
Terminology
Assume that n = len(l).
Iteration Count
The outer loop runs n times. min() is called twice in the loop body (room for optimisation here), and each call scans the current contents of l, whose length shrinks by one every iteration because you remove an item from the list each time.
That way the operation count is 2 * (n + (n-1) + (n-2) + ... + 1).
The sum in the parentheses is a triangular number, which equals n*(n+1)/2.
Therefore your count equals 2 * (n*(n+1)/2),
which simplifies to n^2 + n.
Big-O Notation
Big-O notation is interested in the overall growth trend, rather than the precise growth rate of the function.
Drop Constants
In Big-O notation, constant factors are dropped. Here the factor of 2 has already cancelled, so we are left with n^2 + n.
Retain Only Dominant Terms
Also, in Big-O notation only the dominant terms are kept. n^2 dominates n, so the n is dropped.
Result
That means the answer is O(n^2), i.e. quadratic complexity.
Here are a couple of useful points to help you understand how to find the complexity of a function.
1. Measure the number of iterations.
2. Measure the complexity of each operation at each iteration.
For the first point, the terminating condition is v < m, where v is 0 initially and m is the size of the list. Since v increments by one at each iteration, the loop runs exactly N times, where N is the size of the list.
Now, for the second point. Per iteration we have -
c.append(min(l))
where min is a linear operation, taking O(N) time, and append is a constant-time (amortized O(1)) operation.
Next,
l.remove(min(l))
Again, min is linear, and so is remove. So, you have O(N) + O(N) which is O(N).
In summary, you have O(N) iterations, and O(N) per iteration, making it O(N ** 2), or quadratic.
The time complexity of this function is O(n^2). While the code itself has only one obvious loop, the while loop, min is O(n) by implementation: in the worst case it has to scan the entire list to find the minimum value. list.remove is O(n) because it too has to traverse the list until it finds the first occurrence of the target value, which in the worst case could be at the end. list.append is amortized O(1), thanks to a clever implementation of the method: pushing n objects costs O(n) in total, i.e. O(n)/n = O(1) each:
c = []  # defined outside the function, as in the question

def badsort(l):
    v = 0
    m = len(l)
    while v < m:            # O(n) iterations
        c.append(min(l))    # O(n) + O(1)
        l.remove(min(l))    # O(n) + O(n)
        v += 1
    return c
Thus, there is:
Outer(O(n)) * Inner(O(n) + O(n) + O(n)) = Outer(O(n)) * Inner(O(n))
O(n) + O(n) + O(n) can be combined into simply O(n) because big-O drops constant factors. Thus, combining the outer and inner complexities, the final complexity is O(n^2).
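As a quick growth check (my own sketch, not from the answers above), count the elements scanned by min() and charge remove() its worst case, then compare against n^2. It also uses a local list instead of the global c, which in the original keeps growing across calls:

def badsort_count(l):
    l = list(l)           # work on a copy, since remove() mutates the input
    c = []                # local result list instead of the global
    scans = 0
    while l:
        m = min(l)        # scans len(l) elements
        scans += len(l)
        scans += len(l)   # worst-case charge for the remove() scan
        l.remove(m)
        c.append(m)
    return scans

for n in (100, 200, 400):
    print(n, badsort_count(range(n, 0, -1)), n * n)

The count works out to n(n+1), so doubling n roughly quadruples it: quadratic.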

Time complexity of a function

I'm trying to find out the time complexity (Big-O) of functions and trying to provide appropriate reason.
First function goes:
r = 0
# Assignment is constant time. Executed once. O(1)
for i in range(n):
    for j in range(i+1, n):
        for k in range(i, j):
            r += 1
            # Assignment and access are O(1). Executed ~n^3 times.
Like this. I see that this is a triple nested loop, so it must be O(n^3), but I think my reasoning here is very weak; I don't really get what is going on inside the triple nested loop.
Second function is:
i = n
# Assignment is constant time. Executed once. O(1)
while i > 0:
    k = 2 + 2
    i = i // 2
    # i is halved by the line above on each iteration,
    # so the O(1) assignment and access execute log n times??
I worked out this algorithm to be O(1), but like the first function, I don't really see what is going on in the while loop. Can someone explain the time complexity of these two functions thoroughly? Thanks!
For such a simple case, you could find the number of iterations of the innermost loop as a function of n exactly:
sum_(i=0)^(n-1) sum_(j=i+1)^(n-1) sum_(k=i)^(j-1) 1 = n(n^2 - 1)/6
i.e., Θ(n**3) time complexity (see Big Theta). This assumes r += 1 is O(1) even though r grows to O(log n) digits (the model has words of log n bits).
The second loop is even simpler: i //= 2 is i >>= 1. n has Θ(log n) binary digits, and each iteration drops one of them (a right shift), so the whole loop is Θ(log n) time complexity, assuming the i >> 1 shift of log(n) digits is an O(1) operation (same model as in the first example).
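Both counts are easy to verify empirically; here is a small sketch (my own addition):

def triple_count(n):
    r = 0
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(i, j):
                r += 1
    return r

def halving_count(n):
    i, steps = n, 0
    while i > 0:
        i //= 2
        steps += 1
    return steps

for n in (10, 20, 40):
    print(n, triple_count(n), n * (n * n - 1) // 6, halving_count(n))

triple_count matches n(n^2 - 1)/6 exactly, and halving_count comes out to floor(log2 n) + 1.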
Well, first of all, for the first function: the loop bounds do shrink as i and j advance, but that only cuts the constant factor, not the growth rate, so it is still Θ(n^3) (the exact count above, n(n^2 - 1)/6, is about a sixth of n^3).
For the second function, the runtime is O(log2 n). Note that i //= 2 is floor division, so when i reaches 1 the next value is 0 and the loop terminates; there are no fractional values like 0.5 to worry about.
For a rigorous mathematical approach to each function, you can go to https://www.coursera.org/course/algo. It's a great course for this sort of thing. I was sort of sloppy in my calculations above.
