def alg3(n):
    x = 0
    i = 1
    while i < n:
        for j in range(0, n**3, n*3):
            x += 1
        i *= 3
    return x
I don't really get the Big-O and exact runtime of this code. I first thought the Big-O was O(n^3 * log n) because of the n**3, but the n*3 step confuses me. Could someone please explain this problem? Thanks.
In order to compute the complexity we have to decompose the problem into two sub-problems:
the inner for loop: its complexity is n³/(3n) ~ O(n²)
the outer while loop: how many steps does i need to reach n?
first step i = 1, second step i = 3, third step i = 3*3 = 9, ..., kth step i = 3^k
i reaches n when 3^k >= n, i.e. after k = log3(n) steps, so the second sub-problem is ~ O(log(n))
the final complexity is the first sub-problem's complexity multiplied by the second's: O(n² log(n))
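To sanity-check this, here's a cleaned-up version of the function (assuming the intended name alg3) alongside a closed-form count built from the two sub-problems; the helper name predicted is mine, not from the question:

```python
import math

def alg3(n):
    x = 0
    i = 1
    while i < n:
        for j in range(0, n**3, n*3):
            x += 1
        i *= 3
    return x

def predicted(n):
    inner = math.ceil(n**2 / 3)   # len(range(0, n**3, 3*n)) = ceil(n^2/3) ~ O(n^2)
    outer = 0                     # how many times i = 1, 3, 9, ... stays below n ~ O(log n)
    i = 1
    while i < n:
        outer += 1
        i *= 3
    return inner * outer
```

For example, alg3(9) and predicted(9) both give 54: 27 inner iterations times 2 outer steps.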
The following algorithm needs O(n) time. Is this improvable?
def powerOfTwo(n):
    a = 1
    if n > 0:
        a = 2
    while n > 1:
        a = a + a
        n = n - 1
    return a
As mentioned in the comments (and pointed out by @Mark Ransom), you can simplify your code to:
def powerOfTwo(n):
    a = 1
    for _ in range(n):
        a += a
    return a
Unless you have pre-calculated values of 2^n for various n, there's no way to compute 2^n using only addition operations in less than O(n) time.
Proof
Let's prove it. Assume there are k initial values a[i] (in your example, k = 1) that are used for calculating 2^n.
Then, at the first iteration, you do some m additions, so the maximum number you can get is max(a[i])*m. At the second iteration you do m additions again and the maximum number is max(a[i])*m*m, and so on.
How many loop iterations do we need to reach 2^n? For this, we need to solve an inequality:
max(a[i]) * m^l >= 2^n              | take log of both sides
->
log(max(a[i]) * m^l) >= n
->
log(max(a[i])) + log(m^l) >= n
->
log(m^l) >= n - log(max(a[i]))      | drop log(max(a[i])) because it's a constant
->
log(m^l) >= n
->
l * log(m) >= n
->
l >= n / log(m)
The number of iterations l linearly depends on n since log(m) is a finite number. Hence, the time complexity remains O(n).
You can implement the school algorithm for multiplication (https://en.wikipedia.org/wiki/Multiplication_algorithm) using only additions and bit tests. If this is allowed, you can compute the power with exponentiation by squaring.
This approach is faster than O(n): exponentiation by squaring needs only O(log n) multiplications.
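A sketch of that idea, assuming bit tests and shifts are allowed alongside addition; the names add_mul and power_of_two are mine, not from the question. add_mul is school (shift-and-add) multiplication using only additions and bit operations, and power_of_two applies exponentiation by squaring on top of it:

```python
def add_mul(a, b):
    # school (shift-and-add) multiplication: only additions, bit tests, shifts
    result = 0
    while b > 0:
        if b & 1:               # bit test on the lowest bit of b
            result = result + a
        a = a + a               # doubling via addition
        b >>= 1                 # move on to the next bit
    return result

def power_of_two(n):
    # exponentiation by squaring: only O(log n) calls to add_mul
    result, base = 1, 2
    while n > 0:
        if n & 1:
            result = add_mul(result, base)
        base = add_mul(base, base)
        n >>= 1
    return result
```

Each loop iteration of power_of_two halves n, so the number of multiplications is O(log n); each add_mul itself costs additions proportional to the bit length of its second argument.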
def f1(n):
    for i in range(n):
        k = aux1(n - i)
        while k > 0:
            print(i*k)
            k //= 2

def aux1(m):
    jj = 0
    for j in range(m):
        jj += j
    return m
I am trying to calculate the time complexity of function f1, and it's not really working out for me. I would appreciate any feedback on my work.
What I'm doing: I tried substituting i = 1 and working through one iteration: the function calls aux1 with m = n-1, aux1 iterates n-1 times and returns m = n-1, so back in f1 we have k = n-1, and the while k > 0 loop runs about log(n-1) times. So the first run of f1 costs O(n), dominated by the call to aux1.
But the loop then continues calling aux1 with n-1, n-2, n-3, ..., 1. I am a little confused about how to continue calculating the time complexity from here, or whether I'm even on the right track.
Thanks in advance for any help and explanation!
This is all very silly but it can be figured out step by step.
The inner loop halves k every time, so its time complexity is O(log(aux1(n-i))).
Now what's aux1(n-i)? It is actually just n-i. But computing it takes time proportional to n-i because of that superfluous extra loop.
Okay, so for the inner part we have one piece of cost n-i and one piece of cost log(n-i); using the rules of time complexity we can ignore the smaller part (the log) and focus on the larger part, which is O(n-i).
And the outer loop has i run from 0 to n-1, so the total is n + (n-1) + ... + 1 = O(n^2).
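To make that concrete, you can count the dominant operations directly. This is a rough model I'm assuming (the name f1_ops is mine): one unit per iteration of aux1's loop, plus one per halving step of the while loop:

```python
def f1_ops(n):
    ops = 0
    for i in range(n):
        m = n - i
        ops += m            # aux1(m) loops m times
        k = m
        while k > 0:        # about log2(m) + 1 halving steps
            ops += 1
            k //= 2
    return ops
```

The aux1 part alone contributes n + (n-1) + ... + 1 = n(n+1)/2, so the total grows quadratically; the logarithmic halving terms are a lower-order correction.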
To find the factors, I won't suggest the substitution approach for this type of question. Instead, try the approach where you actually calculate the order of the functions based on the number of operations they perform.
Let's analyze it by first checking the below line
for i in range(n):
this will run for O(n) without any doubts.
k = aux1(n - i)
The above line executes once per outer iteration, so its total cost is O(n * complexity of aux1(n-i)).
Let's find the complexity of aux1(n-i): it contains only one for loop, so it runs in O(n); hence the line above contributes O(n * n) overall.
Similarly, the while loop contributes O(n * complexity of one pass of the while loop):
while k > 0:
    print(i*k)
    k //= 2
this will run log(k) times, but k equals n-i, which is of order O(n);
hence log(k) is log(n), making one pass of the loop O(log(n)).
So across all iterations the while loop has a complexity of O(n*log(n)).
Now adding the overall complexities
O(n*n) (complexity of aux1 across all iterations) + O(n*log(n)) (complexity of the while loop)
the above can be described as O(n^2), since the big-O bound keeps only the dominant term.
The 12th term, F12, is the first term to contain three digits.
What is the index of the first term in the Fibonacci sequence to contain 1000 digits?
a = 1
b = 1
i = 2
while 1:
    c = a + b
    i += 1
    length = len(str(c))
    if length == 1000:
        print(i)
        break
    a = b
    b = c
I got the answer (it runs fast enough). I'm just looking to see if there's a better way to solve this question.
If you've answered the question, you'll find plenty of explanations in the answers on the problem thread. The solution you posted is pretty much okay. You may get a slight speedup by simply checking that c >= 10**999 at every step instead of first converting it to a string.
The better method is to use the fact that once the Fibonacci numbers become large enough, F(n) converges to round(phi**n / 5**.5), where phi = 1.6180... is the golden ratio and round(x) rounds x to the nearest integer. Consider the general case of finding the first Fibonacci number with m digits: we are looking for n such that round(phi**n / 5**.5) >= 10**(m-1)
We can easily solve that by taking log10 of both sides, which gives
log10(phi)*n - log10(5)/2 >= m-1, and then solving for n.
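A sketch of that closed form (the function name is mine; it assumes the approximation has already converged, which it has long before 1000 digits):

```python
import math

def first_fib_index(m):
    """Index of the first Fibonacci number with m digits, using
    F(n) ~ phi**n / sqrt(5) and solving log10(phi)*n - log10(5)/2 >= m - 1."""
    phi = (1 + math.sqrt(5)) / 2
    return math.ceil((m - 1 + math.log10(5) / 2) / math.log10(phi))
```

first_fib_index(3) gives 12, matching the fact that F12 = 144 is the first term with three digits, and first_fib_index(1000) reproduces the Project Euler answer in constant time.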
If you're wondering "well how do I know that it has converged by the nth number?" Well, you can check for yourself, or you can look online.
Also, I think questions like these either belong on the Code Review SE or the Computer Science SE. Even Math Overflow might be a good place for Project Euler questions, since many are rooted in number theory.
Your solution is completely fine for #25 on Project Euler. However, if you really want to optimize for speed here, you can try to calculate Fibonacci numbers using the identities I have written about in this blog post: https://sloperium.github.io/calculating-the-last-digits-of-large-fibonacci-numbers.html
from functools import lru_cache

@lru_cache(maxsize=None)
def fib4(n):
    if n <= 1:
        return n
    if n % 2:
        m = (n + 1) // 2
        return fib4(m) ** 2 + fib4(m - 1) ** 2
    else:
        m = n // 2
        return (2 * fib4(m - 1) + fib4(m)) * fib4(m)

def binarySearch(length):
    first = 0
    last = 10**5
    while first <= last:
        midpoint = (first + last) // 2
        length_string = len(str(fib4(midpoint)))
        if length_string == length:
            return midpoint - 1
        elif length < length_string:
            last = midpoint - 1
        else:
            first = midpoint + 1

print(binarySearch(1000))
This code tests about 12 times faster than your solution. (it does require an initial guess about max size though)
Need help proving the time complexity of a recursive function.
Supposedly it's 2^n. I need to prove that this is the case.
def F(n):
    if n == 0:
        return 0
    else:
        result = 0
        for i in range(n):
            result += F(i)
        return n*result+n
Here's another version that does the same thing. The assignment said to use an array to store values in an attempt to reduce the time complexity, so what I did was this:
def F2(n, array):
    if n < len(array):
        answer = array[n]
    elif n == 0:
        answer = 0
        array.append(answer)
    else:
        result = 0
        for i in range(n):
            result += F2(i, array)
        answer = n*result+n
        array.append(answer)
    return answer
Again what I am looking for is the explanation of how to find the complexities of the two snippets of code, not interested in just knowing the answer.
All and any help greatly appreciated.
Okay, the first function you wrote is an example of Exhaustive Search: you explore every possible branch that can be formed from the set of whole numbers up to n (the argument you passed in, which the for loop iterates over). To explain the time complexity, I am going to treat the recursion call stack as a tree (to represent a recursive call stack you can use either a stack or an n-ary tree).
Let's call your first function F1:
For F1(3), three branches will be formed, one for each number in the set S (the whole numbers up to n). I have taken n = 3 because it is easy to draw the diagram for it. You can try with other, larger numbers and observe the recursion call stack.
        3
      / | \
     0  1  2      ----> the leftmost node returns 0 because n == 0 (the base case)
        |  |\
        0  0 1
             |
             0    ----> returns 0
So here you have explored every possible branch. If you try to write the recursive equation for the above problem:
T(n) = 1;                                   n is 0
     = T(n-1) + T(n-2) + ... + T(1);        otherwise
Here,
T(n-1) = T(n-2) + T(n-3) + ... T(1).
So, T(n-1) + T(n-2) + T(n-3) + ... + T(1) = T(n-1) + T(n-1)
So, the Recursive equation becomes:
T(n) = 1; n is 0
= 2*T(n-1); otherwise
Now you can easily solve this recurrence relation (or you can use the Master theorem for a quick solution). You will get the time complexity O(2^n).
Solving the recurrence relation:
T(n) = 2T(n-1)
     = 2(2T(n-2)) = 4T(n-2)
     = 4(2T(n-3)) = 8T(n-3)
     ...
     = 2^k T(n-k), for some integer k   ----> equation 1
Now we are given the base case where n is 0, so let,
n-k = 0 , i.e. k = n;
Put k = n in equation 1,
T(n) = 2^n * T(n-n)
= 2^n * T(0)
= 2^n * 1; // as T(0) is 1
= 2^n
So, T.C = O(2^n)
So this is how you get the time complexity for your first function. Next, if you observe the recursion tree formed above (each node in the tree is a subproblem of the main problem), you will see that the nodes repeat, i.e. the subproblems repeat. So in your second function F2 you used memory to store the already-computed values, and whenever a subproblem occurs again you reuse the pre-computed value, which saves the time of computing the subproblems over and over. This approach is also known as Dynamic Programming.
Let's now look at the second function, where you return answer. If you look at your function, you are building an array named array as you go, and that is where the main time cost lies. Calculating its time complexity is simple because there is effectively only one level of recursion involved: every number i in range(n) is less than n, so for it the first if condition holds and control returns immediately from F2. Hence no call can go deeper than 2 levels in the call stack.
So,
Time complexity of second function = time taken to build the array
                                   = 1 comparison + 1 comparison + 2 comparisons + ... + (n-1) comparisons
                                   = 1 + 2 + 3 + ... + (n-1)
                                   = O(n^2).
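You can verify both counts empirically by wrapping each function with a call counter (the counter plumbing here is mine; the function bodies are from the question):

```python
def count_F_calls(n):
    calls = 0
    def F(n):
        nonlocal calls
        calls += 1          # count every invocation
        if n == 0:
            return 0
        result = 0
        for i in range(n):
            result += F(i)
        return n * result + n
    F(n)
    return calls

def count_F2_calls(n):
    calls = 0
    def F2(n, array):
        nonlocal calls
        calls += 1          # count every invocation
        if n < len(array):
            answer = array[n]
        elif n == 0:
            answer = 0
            array.append(answer)
        else:
            result = 0
            for i in range(n):
                result += F2(i, array)
            answer = n * result + n
            array.append(answer)
        return answer
    F2(n, [])
    return calls
```

count_F_calls(n) comes out to exactly 2**n, matching the solved recurrence, while count_F2_calls(n) is 1 + n + n(n-1)/2, i.e. O(n²).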
Let me give you a simple way to observe such recursions more deeply. You can print the recursion stack on the console and observe how the function calls are being made. Below I have written your code where I am printing the function calls.
Code:
def indent(n):
    for i in range(n):
        print(' ' * i, end=' ')

# second argument rec_cnt is just taken to print the indented function calls properly
def F(n, rec_cnt):
    indent(rec_cnt)
    print('F(' + str(n) + ')')
    if n == 0:
        return 0
    else:
        result = 0
        for i in range(n):
            result += F(i, rec_cnt+1)
        return n*result+n

# third argument is just taken to print the indented function calls properly
def F2(n, array, rec_cnt):
    indent(rec_cnt)
    print('F2(' + str(n) + ')')
    if n < len(array):
        answer = array[n]
    elif n == 0:
        answer = 0
        array.append(answer)
    else:
        result = 0
        for i in range(n):
            result += F2(i, array, rec_cnt+1)
        answer = n*result+n
        array.append(answer)
    return answer

print(F(4, 1))
lis = []
print(F2(4, lis, 1))
Now observe the output:
F(4)
  F(0)
  F(1)
    F(0)
  F(2)
    F(0)
    F(1)
      F(0)
  F(3)
    F(0)
    F(1)
      F(0)
    F(2)
      F(0)
      F(1)
        F(0)
96
F2(4)
  F2(0)
  F2(1)
    F2(0)
  F2(2)
    F2(0)
    F2(1)
  F2(3)
    F2(0)
    F2(1)
    F2(2)
96
In the first call stack, i.e. F1's, you can see that every call is explored down to 0, i.e. we explore every possible branch until the base case, which is why we call it Exhaustive Search.
In the second call stack, you can see that the function calls go at most two levels deep, because the pre-computed values are used to solve the repeated subproblems. Thus its time complexity is lower than F1's.
I have problems measuring complexity with python. Given the next two scripts:
1 def program1(L):
2     multiples = []
3     for x in L:
4         for y in L:
5             multiples.append(x*y)
6     return multiples
1 def program3(L1, L2):
2     intersection = []
3     for elt in L1:
4         if elt in L2:
5             intersection.append(elt)
6     return intersection
In the first one, the best case (minimum steps to run the script) is an empty list L, so only the second and the sixth lines execute. The solution for the best-case scenario is 2.
In the worst-case scenario, L is a long list and the outer loop for x in L runs n times.
The inner loop performs three operations (assigning a value to y, computing x*y, and appending to the list), so it executes 3*n operations per iteration of the outer loop. Thus the nested loop structure executes n * (3*n + 1) = 3*n**2 + n operations. Adding the second and the sixth lines gives the solution 3n² + n + 2.
But my question is: where does the 1 in n(3n+1) come from?
According to me the solution is n(3n) + 2 = 3n² + 2, versus the given answer n(3n+1) + 2 = 3n² + n + 2.
Meanwhile, in the second one the worst-case scenario is n² + 2n + 2, but I don't understand why there is a quadratic term if there is only one loop.
According to you, there are three instructions in the innermost (y) loop of program1.
Assign to y.
Compute x*y.
Append to list.
By that same logic, there is one instruction in the outermost (x) loop:
Assign to x.
Perform the innermost loop, see above.
That would make the outer loop:
n * (1 {assign to x} + n * 3 {assign, multiply, append})
Or:
n * (1 + 3n)
Adding the init/return instructions gives:
2 + n + 3n²
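To see where those numbers come from in the code itself, here's program1 instrumented with the same counting model (the name program1_counted is mine; charging 1 per assignment and 3 per inner iteration is the question's model, not Python's real cost):

```python
def program1_counted(L):
    steps = 2                      # line 2 (init) + line 6 (return)
    multiples = []
    for x in L:
        steps += 1                 # assign a value to x
        for y in L:
            steps += 3             # assign to y, multiply, append
            multiples.append(x * y)
    return multiples, steps
```

For a list of length 5 this reports 3*5² + 5 + 2 = 82 steps, matching the formula 2 + n + 3n².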
In program3, there is a similar situation with a "hidden loop":
2 instructions for init/return, plus ...
Then you run for elt in L1, which is going to be n iterations (n is size of L1). Your inner code is an if statement. In the worst case, the if body always runs. In the best case, it never runs.
The if condition is testing elt in L2, which is going to run an iterative function, type(L2).__contains__() on L2. The simple case will be an O(m) operation, where m is the length of L2. It is possible that L2 is not a list but some type where the in operation does not require a linear scan. For example, it might be a B-tree, or a dict, or a set, or who knows what? So you could assume that the best-case scenario is that elt in L2 is O(1) and the answer is no, while the worst-case is that elt in L2 is O(m) and the answer is yes.
Best case: 2 + n * (1 {assign to elt} + 1 {search L2})
Best case if L2 is a list: 2 + n * (1 {assign to elt} + m {search L2})
Worst case: 2 + n * (1 {assign to elt} + m {search L2} + 1 {append})
Which gives you 2 + 2n best case, 2 + n + nm best case if L2 is a list, and 2 + 2n + nm worst case.
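Similarly, here's a worst-case counter for program3 under the same model (the name program3_counted is mine; it assumes L2 is a list, so elt in L2 is charged m, and that every element is found):

```python
def program3_counted(L1, L2):
    steps = 2                      # init + return
    intersection = []
    for elt in L1:
        steps += 1                 # assign a value to elt
        steps += len(L2)           # worst case: `elt in L2` scans all m elements
        if elt in L2:
            steps += 1             # worst case: the append always runs
            intersection.append(elt)
    return intersection, steps
```

With L1 = L2 of length n (so m = n and every element is found), this gives 2 + 2n + n², which is where the quadratic term in the question comes from.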
You may be inclined to treat m as equal to n. That's your call, but if you're counting assignment statements, I'd argue against it.