I have some functions here and I need to determine their efficiency. I tried to figure it out myself first; my explanation for each answer is below the corresponding program.
1.
def funct(n):
    tot = 0
    for i in list(range(0, n, 5)):
        for j in list(range(0, n, n//5)):
            tot = tot + i - j
    return tot
Here n is a natural number. I think the efficiency of the program is O(n^2), because the efficiency mainly depends on the for loops. The first for loop is O(n), which is normal for a for loop, and the second for loop is still O(n).
2.
def funct(L):
    n = len(L)
    tot = 0
    M = []
    for i in L[:n//2]:
        M.append(i)
    for i in L[n//2:]:
        M.extend(L)
    return sum(M)
Here L is a list of numbers. I think the efficiency of this one is O(log n). The two for loops only loop through half of the list each, and they are not nested, which means that the efficiency is only O(log n).
3.
def fn_f(n):
    n = n % 116
    tot = 0
    for i in range(n):
        for j in range(n**2):
            tot = tot + 1
    return tot
n is a natural number here. Because of the nested for loops, I think the efficiency is O(n^2). But the second for loop runs n**2 times, which makes me think the efficiency might be exponential, so O(2^n) could be right too.
Can someone verify my answers? Thanks :)
Not quite for the first one. The outer loop runs n/5 times, which is O(n), but the inner loop's step is n//5, so it makes only about 5 passes no matter how large n is, which is O(1). That makes the first function O(n), not O(n^2). (Also note that n//5 is 0 for n < 5, in which case range() raises a ValueError.)
For the second one, the answer is O(n^2), not O(log n): the first loop is O(n). In the second loop, M.extend(L) is O(n) because it has to copy all of L, and this is done n//2 times, which is O(n) iterations. So the second loop is O(n^2), and O(n) + O(n^2) is O(n^2).
For the third, n = n % 116 limits the value of n to at most 115. So no matter how big the parameter n originally is, the loops are capped at that constant, and the function is O(1). If you didn't have that limit, it would be O(n^3), because the outer loop runs n times and the inner loop runs n^2 times.
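If you want to sanity-check the first verdict empirically, here is a minimal counting sketch (count_funct is just an illustrative helper name, not from the question) that tallies how many times the innermost statement of the first function executes:

def count_funct(n):
    # Same loop structure as the first function, body replaced by a counter.
    count = 0
    for i in range(0, n, 5):
        for j in range(0, n, n // 5):
            count += 1  # stands in for: tot = tot + i - j
    return count

for n in (100, 1000, 10000):
    print(n, count_funct(n))  # prints 100, 1000, 10000: linear growth

The count comes out to exactly n, because the outer loop contributes n/5 iterations and the inner loop contributes 5 per pass.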
Related
def f1(n):
    for i in range(n):
        k = aux1(n - i)
        while k > 0:
            print(i*k)
            k //= 2

def aux1(m):
    jj = 0
    for j in range(m):
        jj += j
    return m
I am trying to calculate the time complexity of function f1, and it's not really working out for me. I would appreciate any feedback on my work.
What I'm doing: I tried at first to substitute i = 1 and trace one iteration. The function calls aux1 with m = n - 1; aux1 iterates n - 1 times and returns m = n - 1, so back in f1 we have k = n - 1, and the while k > 0 loop runs log(n-1) times. So the first pass of f1 costs O(n), coming from the call to aux1.
But as the outer loop continues, we keep calling aux1 with m = n-1, n-2, n-3, ..., 1. I am a little confused about how to continue calculating the time complexity from here, or whether I'm on the right track.
Thanks in advance for any help and explanation!
This is all very silly but it can be figured out step by step.
The inner loop halves k every time, so its time complexity is O(log(aux1(n-i))).
Now what's aux1(n-i)? It is actually just n-i, because aux1 returns its argument. But running it takes O(n-i) time because of that superfluous extra loop.
Okay, so for the inner part we have one piece with cost n-i and one with cost log(n-i). Using the rules of time complexity, we can ignore the smaller part (the log) and focus on the larger part, which is O(n-i).
And now the outer loop has i run from 0 to n-1, which means our time complexity is O(n^2), because 1 + 2 + 3 + ... + n = O(n^2).
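A quick way to check that conclusion numerically is a sketch like the following (f1_ops is an illustrative helper of mine that mirrors f1 but counts operations instead of printing):

def f1_ops(n):
    ops = 0
    for i in range(n):
        ops += n - i   # cost of the loop inside aux1(n - i)
        k = n - i      # aux1 returns its argument unchanged
        while k > 0:
            ops += 1   # one halving step of the while loop
            k //= 2
    return ops

for n in (100, 200, 400):
    print(n, f1_ops(n))  # roughly quadruples each time n doubles, i.e. O(n^2)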
To find the complexity, I wouldn't suggest the substitution approach for this type of question. Instead, try the approach where you actually calculate the order of the functions based on the number of operations they perform.
Let's analyze it by first checking the line below:
for i in range(n):
This will run O(n) times, without any doubt.
k = aux1(n - i)
Over all iterations, the cost of the above line is O(n * complexity of aux1(n - i)).
Let's find the complexity of aux1(n - i): because it contains only one for loop, a single call runs in O(n). Hence the total cost of the above line is O(n * n).
Now the while loop will contribute O(n * complexity of one pass of the while loop):
while k > 0:
    print(i*k)
    k //= 2
One pass runs log(k) times, but k equals n - i, which is of order O(n);
hence log(k) is log(n), making one pass O(log(n)).
So the while loop overall has a complexity of O(n*log(n)).
Now, adding the overall complexities:
O(n*n) (complexity of the aux1 calls) + O(n*log(n)) (complexity of the while loop)
The above can be described as O(n^2), since Big O keeps only the dominant term as the upper bound.
Let's say there is a simple nested for loop:
for i in range(0, n):
    for j in range(0, n):
        print(i*j)
This is very easily seen to be O(n^2) by pretty much everyone. Now if we modify the nested for loop:
for i in range(0, n):
    for j in range(i, n):
        print(i*j)
It's going to be something along the lines of n x (n-1) x (n-2) x ... x 1, right? That would be the same as n!, which would be a horrendous upper bound. So what am I missing here? Why does the smaller version of the loop, which is clearly skipping a couple of iterations, result in a Big O that is worse?
That calculation should be n + n-1 + n-2 ... + 1, which is O(n²).
for i in range(0, n):
    for j in range(i, n):
        print(i*j)
On the first iteration of the outer loop, the inner loop does n operations.
On the second one, n-1 operations.
On the third one, n-2 operations.
...and so forth, until the inner loop does only 1 iteration.
n + n-1 + n-2 + ... + 1 = O(n^2), where would the multiplication come from?
Note that in a pedantic sense, O(n^2) is also O(n!). That is, O(n!) includes functions that are O(n^2) (and then some).
I am also new. From my understanding, I think:
for i in range(0,n) -> n times
for j in range(i,n) -> n-i times
Because there are just two loops, the whole thing is n * (n - i) operations, which is at most n * n. It should be only O(n^2).
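To convince yourself, you can count the inner-loop iterations directly and compare them with the closed form n(n+1)/2 (a small sketch of my own; count_triangle is just an illustrative name):

def count_triangle(n):
    count = 0
    for i in range(0, n):
        for j in range(i, n):
            count += 1  # stands in for print(i*j)
    return count

for n in (10, 100, 1000):
    print(n, count_triangle(n), n * (n + 1) // 2)  # the two counts always match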
I am trying to understand Big-O notation, so I was making my own example of O(n) using a while loop, since I find while loops a bit confusing to reason about in Big-O notation. I defined a function called linear_example that takes in a list. So my code is (in Python):
def linear_example(l):
    n = 10
    while n > 1:
        n -= 1
        for i in l:
            print(i)
My thought process is that the code in the for loop runs in constant time, O(1),
and the code in the while loop runs in O(n) time.
Therefore it would be O(1) + O(n), which evaluates to O(n).
Feedback?
Think of a simple for-loop:
for i in l:
    print(i)
This will be O(n) since you’re iterating through the list for however many items exist in l. (Where n == len(l))
Now we add a while loop which does the same thing nine times (n counts down from 10 to 2), so:
n + n + ... + n (x9)
And the complexity is O(9n).
Since this is still a polynomial with degree one, we can simplify this down to O(n), yes.
Not quite. First of all, n here is not an input to the function but a fixed local value, so expressing the complexity in terms of it is meaningless. Let's assume a given parameter M for it, changing the first two lines:
def linear_example(l, M):
    n = M
The code in the for loop does run in O(1) time per element, provided that printing each element i of l takes bounded time. However, the loop iterates len(l) times, so the for loop's complexity is O(len(l)).
Now, that loop runs once entirely through for each value of n in the while loop, a total of M - 1 times. Therefore, the complexity is the product of the loop complexities: O(M * len(l)).
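A counting sketch makes the product visible (my own illustration; linear_example_ops is a hypothetical variant with print replaced by a counter):

def linear_example_ops(l, M):
    ops = 0
    n = M
    while n > 1:
        n -= 1
        for i in l:
            ops += 1  # stands in for print(i)
    return ops

print(linear_example_ops(list(range(50)), 10))  # (10 - 1) * 50 = 450

Strictly the while loop body executes M - 1 times, but the constant offset disappears inside O(M * len(l)).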
I have two functions, both of which flatten an arbitrarily nested list of lists in Python.
I am trying to figure out the time complexity of both, to see which is more efficient, but I haven't found anything definitive on SO so far. There are lots of questions about lists of lists, but not about arbitrary depths of nesting.
function 1 (iterative)
def flattenIterative(arr):
    i = 0
    while i < len(arr):
        while isinstance(arr[i], list):
            if not arr[i]:
                arr.pop(i)
                i -= 1
                break
            else:
                arr[i: i + 1] = arr[i]
        i += 1
    return arr
function 2 (recursive)
def flattenRecursive(arr):
    if not arr:
        return arr
    if isinstance(arr[0], list):
        return flattenRecursive(arr[0]) + flattenRecursive(arr[1:])
    return arr[:1] + flattenRecursive(arr[1:])
My thoughts are below:
function 1 complexity
I think that the time complexity for the iterative version is O(n * m), where n is the length of the initial array and m is the depth of nesting. I think the space complexity is O(n), where n is the length of the initial array.
function 2 complexity
I think that the time complexity for the recursive version will be O(n), where n is the length of the input array. I think the space complexity is O(n * m), where n is the length of the initial array and m is the depth of nesting.
summary
So, to me it seems that the iterative function is slower, but more efficient with space. Conversely, the recursive function is faster, but less efficient with space. Is this correct?
I don't think so. There are N elements, so you will need to visit each element at least once. Overall, your algorithm will run for O(N) iterations. The deciding factor is what happens per iteration.
Your first algorithm has 2 loops, but if you observe carefully, it still visits each element O(1) times per iteration. However, as @abarnert pointed out, the slice assignment arr[i: i + 1] = arr[i] moves every element of arr[i+1:] up, which is O(N) again.
Your second algorithm is similar, but you are adding lists in this case (in the previous case, it was a simple slice assignment), and unfortunately, list addition is linear in complexity.
In summary, both your algorithms are quadratic.
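You can see this empirically with a rough timing sketch like the one below, assuming both functions from the question are defined earlier in the same script (absolute times vary by machine; watch the growth trend):

import sys
import time

sys.setrecursionlimit(10000)  # flattenRecursive recurses once per element

for size in (500, 1000, 2000):
    arr1 = [[i, i] for i in range(size)]  # shallow nesting is enough to show the trend
    arr2 = [[i, i] for i in range(size)]
    t0 = time.perf_counter()
    flattenIterative(arr1)
    t1 = time.perf_counter()
    flattenRecursive(arr2)
    t2 = time.perf_counter()
    print(size, t1 - t0, t2 - t1)  # both columns grow roughly 4x when size doubles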
This is the function:
c = []

def badsort(l):
    v = 0
    m = len(l)
    while v < m:
        c.append(min(l))
        l.remove(min(l))
        v += 1
    return c
Although I realize that this is a very inefficient way to sort, I was wondering what the time complexity of such a function would be, as although it does not have nested loops, it repeats the loop multiple times.
Terminology
Assume that n = len(l).
Iteration Count
The outer loop runs n times. min() runs twice per iteration of the loop body (room for optimisation here), and each run scans all of l, whose length decreases by one on every iteration because you remove an item from the list each time.
That way the cost is 2 * (n + (n-1) + (n-2) + ... + 1).
The series in parentheses is a triangular number, which equals n*(n+1)/2.
Therefore your cost equals 2 * n*(n+1)/2,
which simplifies to n^2 + n.
BigO Notation
BigO notation is interested in the overall growth trend, rather than the precise growth rate of the function.
Drop Constants
In BigO notation, constant factors are dropped. This leaves us with n^2 + n.
Retain Only Dominant Terms
Also, in BigO notation only the dominant terms are considered. n^2 is dominant over n, so n is dropped.
Result
That means the answer in BigO is O(n^2), i.e. quadratic complexity.
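A two-line check of the arithmetic above (my own sketch):

n = 1000
exact = 2 * sum(range(1, n + 1))  # 2 * (n + (n-1) + ... + 1)
print(exact == n**2 + n)          # True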
Here are a couple of useful points to help you understand how to find the complexity of a function.
Measure the number of iterations
Measure the complexity of each operation at each iteration
For the first point, you see that the terminating condition is v < m, where v is 0 initially and m is the size of the list. Since v increments by one at each iteration, the loop runs exactly N times, where N is the size of the list.
Now, for the second point. Per iteration we have -
c.append(min(l))
Where min is a linear operation, taking O(N) time. append is a constant operation.
Next,
l.remove(min(l))
Again, min is linear, and so is remove. So, you have O(N) + O(N) which is O(N).
In summary, you have O(N) iterations, and O(N) per iteration, making it O(N ** 2), or quadratic.
The time complexity for this problem is O(n^2). While the code itself has only one obvious loop, the while loop, the min function is O(n) by implementation, because in the worst case it has to scan the entire list to find the minimum value. list.remove is O(n) because it too has to traverse the list until it finds the first occurrence of the target value, which in the worst case could be at the end. list.append is amortized O(1), due to a clever implementation of the method: pushing n objects costs O(n) in total, i.e. O(n)/n = O(1) per push:
def badsort(l):
    v = 0
    m = len(l)
    while v < m:          # O(n)
        c.append(min(l))  # O(n) + O(1)
        l.remove(min(l))  # O(n) + O(n)
        v += 1
    return c
Thus, there is:
Outer(O(n)) * Inner(O(n)+O(n)+O(n)) = Outer(O(n)) * Inner(O(n))
O(n) + O(n) + O(n) can be combined into simply O(n) because Big O drops constant factors. Thus, by combining the outer and inner complexities, the final complexity is O(n^2).
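To see the quadratic growth concretely, here is a counting variant (a sketch of my own; badsort_scans is a hypothetical name, and it uses a local output list instead of the global c):

def badsort_scans(l):
    c, scans = [], 0
    v, m = 0, len(l)
    while v < m:
        scans += len(l)      # min(l) inside append scans the current list
        c.append(min(l))
        scans += 2 * len(l)  # min(l) again, plus remove() makes another linear pass
        l.remove(min(l))
        v += 1
    return scans

for n in (100, 200, 400):
    print(n, badsort_scans(list(range(n))))  # roughly quadruples when n doubles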