When calculating T(n) complexity, is 1n usually represented as just n?
For example in the following Python code:
def Average(aList):
    x = len(aList)
    total = 0
    for item in aList:
        total = total + item
    mean = total / x
    return mean
Now working out T(n): the function starts with 2 assignments, then a loop performing n assignments, and 1 assignment after the loop, giving
T(n) = 1n + 3
Would the 1 be dropped, giving n + 3 and therefore O(n)?
Order notation is about the growth of algorithmic complexity, not about the specific number of operations.
So O(3n) grows at the same rate as O(n), and the multiplicative and additive constants are eliminated. Think about ratios: if you double the value of "n", then in both cases the timings double.
Slower-growing components are ignored. In the limit, O(n + 3) grows at about the same rate as O(n). For that matter, it grows at about the same rate as O(10n + log(n) + 7).
The key idea in order notation is what happens as "n" grows. It is not about counting all the specific operations.
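To make the ratio argument concrete, here is a minimal sketch (my own illustration, not from the answer) that counts loop steps for n and 2n; whether the body does one step per item or three, doubling n doubles the count:

def count_ops(n, steps_per_item=1):
    # Count the work done by a loop that touches each of n items,
    # doing a fixed number of steps per item.
    ops = 0
    for _ in range(n):
        ops += steps_per_item
    return ops

print(count_ops(1000), count_ops(2000))        # 1000 -> 2000: doubles
print(count_ops(1000, 3), count_ops(2000, 3))  # 3000 -> 6000: still doubles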
"Big O" (or Landau) notation all constants are dropped because they do not influence the growth of a function/complexity. Therefore 1n + 3 would be O(n) and not O(1n+3) or O(n+3).
This is because of the linearity of that function. Something like 2n would also be O(n), because the factor 2 only scales the output of the function but does not affect the rate at which it grows.
It's "Big O" notation: addition of a constant drops out, as does multiplication by a constant.
I have the following recursive Python function:
def f(n):
    if n <= 2:
        result = n
    else:
        result = f(n-2) + f(n-3)
    return result
What would you say is the algorithmic complexity of it?
I was thinking this function is really similar to the Fibonacci recursive function which has a complexity of O(2^N) but I am not sure if that would be correct for this specific case.
Just write the complexity function.
Assume that each atomic operation here costs 1. (They are all O(1); even though it is not strictly true that they all cost the same, this makes no difference to the big-O of the overall complexity.) Then the complexity function is
def complexity(n):
    if n <= 2:
        return 1
    else:
        return 1 + complexity(n-2) + complexity(n-3)
So, the complexity of computing f is almost the same thing as f itself!
Which is not surprising: the base cases of f return only small constants (at most 2); every other value is a sum of other f values. And in the absence of any mechanism to avoid recomputing redundant values, you can say that if, say, f(n) = 2354964531714, then it took on the order of 2354964531714/2 additions of base-case values to get to that result.
So, the number of additions needed to compute f(n) is O(f(n)).
And since f(n) grows exponentially, with a base somewhere between 1.32 and 1.33 (f(n)/1.32ⁿ → ∞, while f(n)/1.33ⁿ → 0), it is an exponential complexity.
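As a quick empirical check (a sketch of my own, not from the answer), you can count the additions performed while evaluating f and compare them with f(n); both grow at the same exponential rate:

def f_counted(n):
    # Returns (value, additions) for the recursive f from the question.
    if n <= 2:
        return n, 0
    a, ca = f_counted(n - 2)
    b, cb = f_counted(n - 3)
    return a + b, ca + cb + 1

for n in (20, 25, 30):
    value, adds = f_counted(n)
    print(n, value, adds)  # the addition count tracks the value's growth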
Let's suppose, guessing wildly, that T(n) ~ rT(n-1) for large values of n. Then our recurrence relation...
T(n) = c + T(n-2) + T(n-3)
Can be rewritten (using T(n) ~ r^3 T(n-3) and T(n-2) ~ r T(n-3))...
r^3T(n-3) ~ c + rT(n-3) + T(n-3)
Letting x = T(n-3), we have
(r^3)x ~ c + rx + x
Rearranging, we get
x(r^3 - r - 1) ~ c
And finally:
r^3 - r - 1 ~ c/x
Assuming again that x is very large, c/x ~ 0, so
r^3 - r - 1 ~ 0
I didn't have much luck solving this analytically; however, as chrslg finds and Wolfram Alpha confirms, it has a real root around 1.32-1.33, so there is a real value of r that works, and the time complexity is bounded by an exponential function with that base. If I am able to find an analytical solution, I will post the details (or if anybody else can do it, please leave it here).
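For completeness, a quick numeric check (a sketch using plain bisection, standing in for the analytical solution) confirms the real root of r^3 - r - 1 = 0 sits between 1.32 and 1.33:

def characteristic_root(lo=1.0, hi=2.0, steps=60):
    # Bisection on g(r) = r**3 - r - 1:
    # g(1) = -1 < 0 and g(2) = 5 > 0, so a real root lies in between.
    for _ in range(steps):
        mid = (lo + hi) / 2
        if mid**3 - mid - 1 < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(characteristic_root())  # ~1.3247, matching the 1.32-1.33 range above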
here is my code:
my_list.sort()
runningSum = 0
for idx, query in enumerate(my_list):
    runningSum += sum(my_list[:idx])
As I know:
sort is O(nlogn)
for is O(n)
sum is O(n)
but how can I calculate the total time complexity?
thanks
As you said sorting is O(n log n) (given an appropriate algorithm is used).
Then the for loop runs n times. In the first iteration there is only one operation, in the second iteration there are two, in the third three, and so on.
Therefore the computational effort can be described as the sum of the first n natural numbers:
1 + 2 + 3 + ... + n = (n * (n + 1)) / 2 = 0.5 * (n² + n) = O(n²)
The sum of the first n natural numbers can be calculated using the Gaussian summation formula.
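A quick sanity check of that closed form (just a sketch, not part of the original answer):

n = 1000
# Gaussian summation: 1 + 2 + ... + n equals n * (n + 1) / 2
assert sum(range(1, n + 1)) == n * (n + 1) // 2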
The rules of big-O notation dictate that only the fastest-growing term is kept and constant factors are dropped (see the Wikipedia article on big-O notation).
Therefore the overall runtime will be:
O(n log n) + O(n²) = O(n²)
Note: The algorithm you created is not the most efficient as you compute the same sum over and over again.
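To illustrate that note, here is a hedged sketch of a more efficient variant (assuming my_list is defined as in the question): it maintains a running prefix sum in O(1) per iteration instead of recomputing sum(my_list[:idx]), so the loop becomes O(n) and the sort's O(n log n) dominates:

my_list.sort()                        # O(n log n)
runningSum = 0
prefix = 0                            # sum of my_list[:idx], kept up to date
for idx, query in enumerate(my_list):
    runningSum += prefix              # O(1) instead of O(idx)
    prefix += query                   # extend the prefix by the current element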
I'm having a really hard time understanding how to calculate worst-case run times and run times in general. Since there is a while loop, would the run time have to be n+1 because the while loop must run one additional time to check whether the condition is still valid? I've also been searching online for a good explanation/practice on how to calculate these run times, but I can't seem to find anything good. A link to something like this would be very much appreciated.
def reverse1(lst):
    rev_lst = []
    i = 0
    while i < len(lst):
        rev_lst.insert(0, lst[i])
        i += 1
    return rev_lst

def reverse2(lst):
    rev_lst = []
    i = len(lst) - 1
    while i >= 0:
        rev_lst.append(lst[i])
        i -= 1
    return rev_lst
Constant factors or added values don't matter for big-O run times, so you're over-complicating this. The run time is O(n) (linear) for reverse2, and O(n**2) (quadratic) for reverse1 (because list.insert(0, x) is itself an O(n) operation, performed O(n) times).
Big-O runtime calculations are about how the algorithm behaves as the input size increases towards infinity, and the smaller factors don't matter here; O(n + 1) is the same as O(n) (as is O(5n) for that matter; as n increases, the constant multiplier of 5 is irrelevant to the change in runtime), O(n**2 + n) is still just O(n**2), etc.
Since the number of iterations is fixed for any given size of the input list for both functions, the "worst" time complexity would be the same as the "best" and the average here.
In reverse1, the operation of inserting an item into a list at index 0 costs O(n) because it has to copy all the existing items to their following positions; coupled with the while loop that runs once per element of the input list, the time complexity of reverse1 is O(n^2).
There's no such issue in reverse2, however, since the append method costs just O(1) (amortized) per call, so its overall time complexity is O(n).
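To see the difference empirically, here is a hedged sketch using timeit and the reverse1/reverse2 definitions from the question (exact numbers depend on your machine): quadrupling the input size should roughly quadruple reverse2's time but multiply reverse1's by roughly sixteen.

import timeit

for size in (1_000, 4_000):
    data = list(range(size))
    t1 = timeit.timeit(lambda: reverse1(data), number=10)  # quadratic growth
    t2 = timeit.timeit(lambda: reverse2(data), number=10)  # linear growth
    print(size, round(t1, 4), round(t2, 4))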
I'm going to give you a mathematical explanation of why extra iterations and constant-time operations don't matter.
This is O(n), since the definition of Big-Oh is that f(n) ∈ O(g(n)) means there exists some constant k such that f(n) ≤ k·g(n) for all sufficiently large n.
Consider an algorithm with runtime represented as f(n) = 10000n + 15000000. One way to simplify this is to factor out the n: f(n) = n(10000 + 15000000/n). For the worst-case runtime, you only care about the algorithm's performance for very large values of n. Because the second factor contains 15000000/n, which approaches 0 as n gets enormous, the whole factor approaches 10000. Therefore, for n > N (that is, for a large enough value of n) there must exist a constant k such that f(n) < kn, for example k = 10001. Hence f(n) ∈ O(n): it has linear runtime efficiency.
With that being said, this means you don't need to worry about constant differences in your runtime, even if you loop n+1 times. The only part that matters (for polynomial time) is the highest degree of n in your code. Your reverse loops run O(n) iterations, and even if you iterated n + 1000 times, that would still be O(n) iterations.
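A small worked check of that bound (a sketch reusing the illustrative constants above): 10000n + 15000000 drops below 10001n once n exceeds 15,000,000.

def f(n):
    return 10000 * n + 15000000

for n in (10**6, 10**7, 10**8):
    print(n, f(n) < 10001 * n)  # False, False, True: the bound holds for large n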
This is the function:
c = []

def badsort(l):
    v = 0
    m = len(l)
    while v < m:
        c.append(min(l))
        l.remove(min(l))
        v += 1
    return c
Although I realize that this is a very inefficient way to sort, I was wondering what the time complexity of such a function would be, as although it does not have nested loops, it repeats the loop multiple times.
Terminology
Assume that n = len(l).
Iteration Count
The outer loop runs n times. min() is called twice in the loop body (room for optimisation here; a sketch of that optimisation follows this answer), each call scanning l, but over incrementally decreasing lengths (on each iteration the length of l decreases by one, because you remove an item from the list every time).
That way the complexity is 2 * (n + (n-1) + (n-2) + ... + 1).
The sum in the parentheses is the sum of the first n natural numbers, a triangular number, which equals n*(n+1)/2.
Therefore your complexity equals 2 * (n*(n+1)/2),
which expands and simplifies to n^2 + n.
BigO Notation
BigO notation is interested in the overall growth trend, rather than the precise growth rate of the function.
Drop Constants
In BigO notation, the constants are dropped. The factor 2 has already cancelled out, which leaves us with n^2 + n.
Retain Only Dominant Terms
Also, in BigO notation only the dominant terms are considered. n^2 is dominant over n, so n is dropped.
Result
That means the answer in BigO is O(n^2), i.e. quadratic complexity.
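Picking up the "room for optimisation" note from the iteration count, here is a hedged sketch (the name badsort_once is mine, not from the question) that calls min only once per iteration and keeps the result local instead of in the module-level c. The complexity stays O(n^2), because min and remove are each still linear:

def badsort_once(l):
    # Same selection-sort idea, but min() is evaluated once per iteration.
    result = []
    while l:
        smallest = min(l)         # O(n) scan
        l.remove(smallest)        # O(n) removal
        result.append(smallest)   # amortised O(1)
    return result

print(badsort_once([3, 1, 2]))    # [1, 2, 3]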
Here are a couple of useful points to help you understand how to find the complexity of a function.
Measure the number of iterations
Measure the complexity of each operation at each iteration
For the first point, you see the terminating condition is v < m, where v is 0 initially and m is the size of the list. Since v increments by one at each iteration, the loop runs exactly N times, where N is the size of the list.
Now, for the second point. Per iteration we have -
c.append(min(l))
Where min is a linear operation, taking O(N) time. append is a constant operation.
Next,
l.remove(min(l))
Again, min is linear, and so is remove. So, you have O(N) + O(N) which is O(N).
In summary, you have O(N) iterations, and O(N) per iteration, making it O(N ** 2), or quadratic.
The time complexity for this problem is O(n^2). While the code itself has only one obvious loop, the while loop, the min function is O(n) by implementation, because in the worst case it has to scan the entire list to find the minimum value. list.remove is O(n) because it too has to traverse the list until it finds the first matching value, which in the worst case could be at the end. list.append is amortized O(1), due to a clever implementation of the method: pushing n objects costs O(n) in total, i.e. O(n)/n = O(1) per push:
c = []

def badsort(l):
    v = 0
    m = len(l)
    while v < m:            # O(n) iterations
        c.append(min(l))    # O(n) + O(1)
        l.remove(min(l))    # O(n) + O(n)
        v += 1
    return c
Thus, there is:
Outer(O(n)) * Inner(O(n)+O(n)+O(n)) = Outer(O(n)) * Inner(O(n))
O(n) + O(n) + O(n) can be combined into simply O(n), because big-O ignores constant factors. Thus, by combining the outer and inner complexities, the final complexity is O(n^2).
result = 0
i = 0
while i < 2**n:
    result = result + i
    i += 1
# end while
I'm assuming O(2^n). Python code.
I think your code's time complexity is O(2^n log n), because you are computing 2^n on each of the 2^n iterations.
a^b can be computed in O(log b) multiplications using exponentiation by squaring, and I think Python's exponentiation uses an O(log n) algorithm here.
So, the time complexity is O(2^n log n).
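For reference, a hedged sketch of exponentiation by squaring (the general idea, not necessarily what Python's ** operator does internally); it uses O(log b) multiplications:

def power(a, b):
    # Exponentiation by squaring: O(log b) multiplications.
    result = 1
    while b > 0:
        if b & 1:        # lowest bit of the exponent is set
            result *= a
        a *= a           # square the base
        b >>= 1          # shift the exponent right by one bit
    return result

assert power(2, 10) == 1024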
The time complexity is O(2^n × n).
The reason is that numbers of size 2^O(n) take O(n) bits to represent as int objects. If the result of an arithmetic operation takes O(n) bits, then it can't be done in less than O(n) time, because Python's int objects are immutable.
So:
The addition result = result + i takes O(n) time because result uses O(n) bits.
The addition i += 1 takes O(n) time, because i uses O(n) bits.
It also takes O(n) time to compute 2**n, for the same reason: the result of this operation uses O(n) bits. The exponentiation algorithm only does O(log n) multiplications, but the time is dominated by the last multiplication, which is like 2^(n/2) * 2^(n/2).
And of course, the comparison i < 2**n takes O(n) time because both numbers use O(n) bits.
So, the loop iterates O(2^n) times and it does O(n) work on each iteration, hence the result.
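A hedged sketch of the same loop with 2**n hoisted into a variable (n is set to a small illustrative value here). This does not change the asymptotic class, since each iteration still performs O(n)-bit additions and comparisons, but it avoids recomputing the power every time the condition is checked:

n = 20                   # illustrative value, not from the question
limit = 2 ** n           # computed once; the result has O(n) bits
result = 0
i = 0
while i < limit:         # comparison of O(n)-bit integers
    result = result + i  # O(n)-bit addition
    i += 1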