How to calculate the time complexity of these two functions? (recursion) - Python

The first function:
def f1(n):
    if n == 1:
        return 1
    return f1(f1(n - 1))
The second function:
def f2(n):
    if n == 1:
        return 1
    return 1 + f2(f2(n - 1))
Now I can see why both functions have O(n) space complexity, since the recursion depth is equal to n.
But for the time complexity, I can't calculate it the way I would for ordinary recursion. Say that instead of f1(f1(n-1)) we had f1(n-1) in the first function; then T(n) = 1 + T(n-1) = 2 + T(n-2) = ... = n, so O(n). I can intuitively see that the result might stay the same for f1, since every call returns 1, so I might have about 2n calls, which is O(n), but I have no idea how to handle it formally.
For f2 I couldn't work out the time complexity at all; intuition failed me here, and I would really appreciate any help in how to analyze these recursive calls.
The final answers are:
f1: time complexity O(n), space complexity O(n).
f2: time complexity O(2^n), space complexity O(n).

As I think you realize, in f1 you're performing two operations:
Call f1(n - 1)
Call f1 again with the result from the first operation.
You can see that calling f1 with anything (or at least anything greater than zero) returns 1.
So in step 2 you're just calling f1 again with 1, which returns immediately. Formally: since the inner call always returns 1, T(n) = T(n - 1) + T(1) + c = T(n - 1) + O(1), which unrolls to O(n).
On to f2. Let's examine the behavior of this function.
When n = 1, we return 1.
When n = 2, we return 1 + f2(1). We know that f2(1) returns 1, so f2(2) returns 2.
When n = 3, we return 1 + f2(2), so 3.
Etc.
So it looks like f2(n) just returns n.
Every time we call f2 we are doing the same things we were doing in f1: call self with n - 1, then call self again with whatever that call returns.
In other words, we're calling f2 twice with (effectively) n - 1: the inner call f2(n - 1) costs T(n - 1) and returns n - 1, and the outer call on that value costs another T(n - 1). So T(n) = 2T(n - 1) + c; the number of calls doubles for every increment of n, so O(2^n).
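A quick empirical check supports both bounds. The sketch below is my addition (the count_calls wrapper is hypothetical, not from the original post); it counts calls for both functions, and the count for f1 grows linearly while the count for f2 roughly doubles with each increment of n:

def count_calls(f):
    # Hypothetical helper: wrap f so every invocation bumps a counter.
    def wrapper(n):
        wrapper.calls += 1
        return f(n)
    wrapper.calls = 0
    return wrapper

@count_calls
def f1(n):
    if n == 1:
        return 1
    return f1(f1(n - 1))

@count_calls
def f2(n):
    if n == 1:
        return 1
    return 1 + f2(f2(n - 1))

for n in range(1, 15):
    f1.calls = f2.calls = 0
    f1(n)
    f2(n)
    # f1.calls comes out as 2n - 1; f2.calls comes out as 2^n - 1
    print(n, f1.calls, f2.calls)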

Related

What is the algorithmic complexity of this recursive function?

I have the following recursive Python function:
def f(n):
    if n <= 2:
        result = n
    else:
        result = f(n - 2) + f(n - 3)
    return result
What would you say is the algorithmic complexity of it?
I was thinking this function is really similar to the recursive Fibonacci function, which has O(2^N) complexity, but I am not sure if that would be correct for this specific case.
Just write the complexity function.
Assuming all atomic operations here cost 1 (they are O(1); even though they aren't literally all equal, for a big-O bound this simplification doesn't change the result), the complexity function is
def complexity(n):
    if n <= 2:
        return 1
    else:
        return 1 + complexity(n - 2) + complexity(n - 3)
So the cost of computing f is almost the same function as f itself!
Which is not surprising: the only values f returns directly are the base cases n <= 2; every other value is a sum of other calls to f. And in the absence of any mechanism to avoid recomputing redundant values, if f(1000) = 2354964531714, then it took on the order of 2354964531714/2 additions of base-case values to reach that result.
So the number of additions needed to compute f(n) is O(f(n)).
And since f(n) grows like r^n for some base r between 1.32 and 1.33 (f(n)/1.32^n → ∞, while f(n)/1.33^n → 0), the complexity is exponential, with a base somewhere between 1.32 and 1.33.
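As a sanity check (my addition, not part of the original answer), you can estimate the base empirically: the ratio f(n+1)/f(n) settles near the dominant root of the characteristic equation r^3 = r + 1, about 1.3247:

def f(n):
    if n <= 2:
        return n
    return f(n - 2) + f(n - 3)

# Consecutive ratios approach the dominant root of r^3 = r + 1 (~1.3247).
prev = f(30)
for n in range(31, 36):
    cur = f(n)
    print(n, cur / prev)
    prev = cur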
Let's suppose, guessing wildly, that T(n) ~ rT(n-1) for large values of n. Then our recurrence relation...
T(n) = c + T(n-2) + T(n-3)
Can be rewritten...
r^3T(n-3) ~ c + rT(n-3) + T(n-3)
Letting x = T(n-3), we have
(r^3)x ~ c + rx + x
Rearranging, we get
x(r^3 - r - 1) ~ c
And finally:
r^3 - r - 1 ~ c/x
Assuming again that x is very large, c/x ~ 0, so
r^3 - r - 1 ~ 0
I didn't have much luck solving this analytically; however, as chrslg finds and Wolfram Alpha confirms, it has a root around 1.32-1.33, so there is a real value of r that works, and the time complexity is bounded by an exponential function with that base. If I am able to find an analytical solution, I will post the details (or if anybody else can do it, please leave it here).
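In the meantime, a numeric sketch (my addition, plain bisection) pins the root down:

def g(r):
    return r**3 - r - 1

# g(1) = -1 < 0 and g(2) = 5 > 0, so a root lies in (1, 2).
lo, hi = 1.0, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    if g(mid) > 0:
        hi = mid
    else:
        lo = mid
print(lo)  # ~1.3247, the base of the exponential bound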

Time complexity for my max numbers implementation

def maxN(n: list[int]) -> int:
    if len(n) == 2:
        if n[0] > n[1]:
            return n[0]
        else:
            return n[1]
    if len(n) == 1:
        return n[0]
    else:
        mid = len(n) // 2
        first = maxN(n[0:mid])
        second = maxN(n[mid:])
        if first > second:
            return first
        else:
            return second
I'm struggling with my implementation, because I don't know if this is better than simply using a for or a while loop.
At every level, each call spawns two more calls until the sublists can no longer be split. Roughly, the total number of function calls will be 1 + 2 + 4 + 8 + ... + n.
The number of terms in this series is approximately log n, since the array is halved at every level.
To get the total number of function calls, we can sum this geometric series, which gives a total of about 2n - 1, i.e. O(n).
So there are O(n) function calls in total, and every call does a constant amount of work; thus the time complexity is O(n).
The time complexity matches the iterative version. However, recursion consumes space on the call stack. Since there can be at most about log n frames on the stack (the depth of the recursion tree), the space complexity is O(log n).
Thus the iterative version is the better choice: it has the same time complexity but better, O(1), space complexity.
Edit: since the list is split into sublists in every call, the slicing cost adds up too (each slice copies its elements, O(k) for a slice of length k). To avoid that, you can pass two indices into the function to track the list boundary, as in the sketch below.
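A minimal index-based sketch (my rewrite of the function above, assuming a non-empty list):

def maxN(n: list[int], lo=0, hi=None) -> int:
    # Recurse on the index range [lo, hi) instead of slicing, so no copies are made.
    if hi is None:
        hi = len(n)
    if hi - lo == 1:
        return n[lo]
    mid = (lo + hi) // 2
    first = maxN(n, lo, mid)
    second = maxN(n, mid, hi)
    return first if first > second else second

This keeps the O(n) time and O(log n) stack space, but drops the hidden slicing cost of the original.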

What is the time complexity of this function using a built-in function?

Assume you have the following compute function, which uses Python's built-in sum function:
def compute(a_list):
    n = len(a_list)  # n is taken to be the length of the input list
    for i in range(0, n):  # Line 1
        number = sum(a_list[0:i + 1]) / (i + 1)  # Line 2
    return number
What would the time-complexity for something like this look like?
Line 1 is executed n times, but Line 2 uses the built-in sum function (O(n)); does that amount to n^2 executions? Then the algorithm would be O(n^2).
Across the iterations of i, Line 2 processes 1 + 2 + 3 + ... + (n-1) + n elements in total. The sum of these terms is n(n + 1)/2.
Is this correct?
I'd say that Line 1 is executed once, and this causes Line 2 to be executed n times. The list slice is O(n), and the sum is also O(n); division, addition, and assignment are all O(1).
compute is therefore O(n^2), as the largest term is an O(n) operation evaluated O(n) times.
Note that, as written, it discards all intermediate results, so it could be rewritten as:
def compute(a_list):
    n = len(a_list)
    return sum(a_list[0:n]) / n
which would be O(n).
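If the intermediate averages are actually wanted, a running prefix sum avoids re-summing on every iteration. A minimal sketch (my addition; it returns all the averages rather than just the last one):

def compute_all(a_list):
    # Maintain a running sum so each average costs O(1): O(n) total.
    averages = []
    total = 0
    for i, x in enumerate(a_list):
        total += x
        averages.append(total / (i + 1))
    return averages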

Time complexity of a recursive function

Need help proving the time complexity of a recursive function.
Supposedly it's 2^n. I need to prove that this is the case.
def F(n):
    if n == 0:
        return 0
    else:
        result = 0
        for i in range(n):
            result += F(i)
        return n * result + n
Here's another version that does the same thing. The assignment said to use an array to store values in an attempt to reduce the time complexity, so what I did was this:
def F2(n, array):
    if n < len(array):
        answer = array[n]
    elif n == 0:
        answer = 0
        array.append(answer)
    else:
        result = 0
        for i in range(n):
            result += F2(i, array)
        answer = n * result + n
        array.append(answer)
    return answer
Again, what I am looking for is an explanation of how to find the complexities of the two snippets of code, not just the final answer.
Any and all help greatly appreciated.
Okay, the first function you wrote is an example of exhaustive search: you explore every possible branch that can be formed from the set of whole numbers below n (the for loop does this). To explain the time complexity, I am going to treat the recursion stack as a tree (a recursive call stack can be represented either as a stack or as an n-ary tree).
Let's call your first function F1.
For F1(3), three branches are formed, one for each number in the set S (the whole numbers below n). I have taken n = 3 because it is easy to draw the diagram; you can try larger numbers and observe the recursion call stack.
        3
      / | \
     0  1  2     ----> the leftmost node returns 0 (n == 0 is the base case)
        |  /\
        0 0  1
             |
             0   ----> returns 0
So here you have explored every possible branch. If you write the recurrence for the above problem:
T(n) = 1                                ; n is 0
     = T(n-1) + T(n-2) + ... + T(1)     ; otherwise
Here,
T(n-1) = T(n-2) + T(n-3) + ... + T(1).
So, T(n-1) + T(n-2) + T(n-3) + ... + T(1) = T(n-1) + T(n-1).
So the recurrence becomes:
T(n) = 1        ; n is 0
     = 2T(n-1)  ; otherwise
Now you can easily solve this recurrence by repeated expansion. You will get the time complexity O(2^n).
Solving the recurrence relation:
T(n) = 2T(n-1)
     = 2(2T(n-2)) = 4T(n-2)
     = 4(2T(n-3)) = 8T(n-3)
     ...
     = 2^k T(n-k), for some integer k   ----> equation 1
Now we are given the base case, where n is 0, so let
n - k = 0, i.e. k = n.
Put k = n in equation 1:
T(n) = 2^n * T(n-n)
     = 2^n * T(0)
     = 2^n * 1    // as T(0) is 1
     = 2^n
So the time complexity is O(2^n).
So this is how you get the time complexity of your first function. Next, if you observe the recursion tree above (each node in the tree is a subproblem of the main problem), you will see that nodes repeat, i.e. the subproblems repeat. That is why your second function F2 uses memory to store already-computed values: whenever a subproblem occurs again, the precomputed value is reused, which saves the time of solving the same subproblems over and over. This approach is known as dynamic programming (memoization).
Let's now look at the second function, which returns answer. If you trace it, the function is really building up the list named array, and that is where the main time cost goes. Calculating its time complexity is simple because the recursion effectively goes only one level down: every i in range(n) is less than n and has already been stored in array by the time it is needed, so the first if condition fires and the call returns immediately. No call goes more than two levels deep in the call stack.
So,
time complexity of F2 = time taken to build the array
                      = 1 + 1 + 2 + ... + (n-1) constant-time steps
                      = O(n^2)
Let me give you a simple way to observe such recursions more deeply: print the recursion stack on the console and watch how the function calls are made. Below is your code with the function calls printed.
Code:
def indent(n):
    # print indentation proportional to the recursion depth
    print(' ' * n, end='')

# second argument rec_cnt is just taken to print the function calls indented properly
def F(n, rec_cnt):
    indent(rec_cnt)
    print('F(' + str(n) + ')')
    if n == 0:
        return 0
    else:
        result = 0
        for i in range(n):
            result += F(i, rec_cnt + 1)
        return n * result + n

# third argument is just taken to print the function calls indented properly
def F2(n, array, rec_cnt):
    indent(rec_cnt)
    print('F2(' + str(n) + ')')
    if n < len(array):
        answer = array[n]
    elif n == 0:
        answer = 0
        array.append(answer)
    else:
        result = 0
        for i in range(n):
            result += F2(i, array, rec_cnt + 1)
        answer = n * result + n
        array.append(answer)
    return answer

print(F(4, 1))
lis = []
print(F2(4, lis, 1))
Now observe the output:
 F(4)
  F(0)
  F(1)
   F(0)
  F(2)
   F(0)
   F(1)
    F(0)
  F(3)
   F(0)
   F(1)
    F(0)
   F(2)
    F(0)
    F(1)
     F(0)
96
 F2(4)
  F2(0)
  F2(1)
   F2(0)
  F2(2)
   F2(0)
   F2(1)
  F2(3)
   F2(0)
   F2(1)
   F2(2)
96
In the first call stack, i.e. F, you see that each call is explored all the way down to 0, i.e. every possible branch is explored down to the base case; that is why we call it exhaustive search.
In the second call stack, you can see that the function calls go at most two levels deep, i.e. they use the precomputed values to solve the repeated subproblems. Thus its time complexity is lower than that of F.
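You can go one step further than F2: since F(n) = n * (F(0) + ... + F(n-1)) + n, keeping a running sum of the previous values produces each term in O(1), for O(n) total time. A minimal sketch (my addition, not from the original answer):

def F_linear(n):
    # total holds F(0) + ... + F(i-1); value holds F(i-1).
    total = 0
    value = 0  # F(0) = 0
    for i in range(1, n + 1):
        total += value          # total = F(0) + ... + F(i-1)
        value = i * total + i   # F(i) = i * (sum of previous values) + i
    return value

print(F_linear(4))  # 96, matching the traced runs above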

Time complexity of a function

I'm trying to find out the time complexity (Big-O) of functions and trying to provide appropriate reason.
First function goes:
r = 0
# Assignment is constant time. Executed once. O(1)
for i in range(n):
    for j in range(i + 1, n):
        for k in range(i, j):
            r += 1
            # Assignment and access are O(1). Executed n^3 times?
I see that this is a triple nested loop, so it must be O(n^3), but I think my reasoning here is very weak; I don't really get what is going on inside the triple nested loop.
Second function is:
i = n
# Assignment is constant time. Executed once. O(1)
while i > 0:
    k = 2 + 2
    i = i // 2
    # i is halved by the line above each iteration, so the O(1)
    # assignment and access execute log n times ??
I worked this algorithm out to be O(1), but as with the first function, I don't see what is going on in the while loop.
Can someone explain the time complexity of these two functions thoroughly? Thanks!
For such a simple case, you can find the number of iterations of the innermost loop, as a function of n, exactly:
sum_{i=0}^{n-1} sum_{j=i+1}^{n-1} sum_{k=i}^{j-1} 1 = n(n^2 - 1)/6
i.e., Θ(n^3) time complexity (see Big Theta). This assumes r += 1 is O(1), which holds if r has O(log n) digits (a machine model with words of log n bits).
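A quick check of the closed form against a direct count (my addition):

def triple_loop_count(n):
    # Count the innermost iterations directly.
    r = 0
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(i, j):
                r += 1
    return r

for n in range(1, 20):
    assert triple_loop_count(n) == n * (n**2 - 1) // 6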
The second loop is even simpler: i //= 2 is i >>= 1. n has Θ(log n) binary digits, and each iteration drops one binary digit (a shift right), so the whole loop is Θ(log n) time complexity, if we assume that the i >> 1 shift of a log(n)-digit number is an O(1) operation (same model as in the first example).
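To see the count concretely, here is a small check (my addition) that the halving loop runs floor(log2 n) + 1 times:

import math

def halving_iterations(n):
    # Count how many times the while loop from the question executes.
    i, count = n, 0
    while i > 0:
        i //= 2
        count += 1
    return count

for n in [1, 2, 7, 8, 1000]:
    assert halving_iterations(n) == math.floor(math.log2(n)) + 1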
Well, first of all, for the first function: although the bounds of the inner loops shrink as i and j grow, the total iteration count is still on the order of n^3 (the exact count, n(n^2 - 1)/6, is derived above), not O(n log n).
For the second function, the runtime is O(log2 n). Note that i // 2 is floor division, so with i == n == 2 the values go 2, 1, 0 and the loop terminates; i never becomes 0.5, and no int(i) is needed.
For a rigorous mathematical approach to each function, you can go to https://www.coursera.org/course/algo. It's a great course for this sort of thing.
