Time complexity: if/else under for loop - python

In a situation like the following (an if/else statement under a for loop), would the time complexity be O(n) or O(n^2)?
def power_dic(n, k):
    if k == 0:
        return 1
    elif k % 2 == 0:
        return power_dic(n * n, k // 2)
    else:
        return n * power_dic(n, k - 1)
The above code computes n^k.

In situations like this, you need to analyze how the code behaves as a whole: how many times each of the return statements will be called, and how those counts relate to the input.
In this specific example:
The time complexity is O(log k) (assuming all int multiplications are O(1)).
Each time return power_dic(n*n, k//2) is called, return n*power_dic(n, k-1) is called at most once (1).
In addition, return power_dic(n*n, k//2) is called O(log k) times, since k is halved on each such call.
This means your total number of calls is about 2*log k, which is O(log k).
(1) Except maybe the last bit, where you call power_dic(n,1), but that one time does not change the answer.
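If you want to sanity-check the call count empirically, here is a minimal instrumented copy of the function (power_dic_counted and its calls accumulator are my additions, not part of the original):
def power_dic_counted(n, k, calls=1):
    # Same recursion as power_dic, but also returns how many calls were made.
    if k == 0:
        return 1, calls
    elif k % 2 == 0:
        return power_dic_counted(n * n, k // 2, calls + 1)
    else:
        result, calls = power_dic_counted(n, k - 1, calls + 1)
        return n * result, calls

value, calls = power_dic_counted(2, 1000)
print(calls)  # 16 calls, comfortably within 2 * log2(1000) ~= 20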


Time complexity for recursive functions

How do I work out the time complexity of this? Should I count it for the two functions separately?
def recursive_function2(mystring, a, b):
    if a >= b:
        return True
    else:
        if mystring[a] != mystring[b]:
            return False
        else:
            return recursive_function2(mystring, a + 1, b - 1)

def function2(mystring):
    return recursive_function2(mystring, 0, len(mystring) - 1)
Big O describes the worst-case growth of how many times your loop (a recursive function in this case) iterates as a function of the input size N. In your recursive function, the worst case is that it recurses until a >= b. For an input string of size N, that means the recursive function runs N/2 times in the worst case; since we drop constant coefficients for arbitrarily large N, your function is O(N).
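A quick empirical check of the N/2 claim (palindrome_counted and its call counter are my additions):
def palindrome_counted(mystring, a, b, calls=1):
    # recursive_function2 with a call counter bolted on for illustration.
    if a >= b:
        return True, calls
    if mystring[a] != mystring[b]:
        return False, calls
    return palindrome_counted(mystring, a + 1, b - 1, calls + 1)

print(palindrome_counted("abcdcba", 0, 6))  # (True, 4): about N/2 calls for N = 7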

Time complexity for my max numbers implementation

def maxN(n: list[int]) -> int:
    if len(n) == 2:
        if n[0] > n[1]:
            return n[0]
        else:
            return n[1]
    if len(n) == 1:
        return n[0]
    else:
        mid = len(n) // 2
        first = maxN(n[0:mid])
        second = maxN(n[mid:])
        if first > second:
            return first
        else:
            return second
I'm struggling with my implementation, because I don't know if this is better than simply using a for or a while loop.
At every level, each call spawns two more calls until there are no more numbers to split. Roughly, the total number of function calls is 1 + 2 + 4 + 8 + ... + n. The series has approximately log n terms, since the array is halved at every level.
Summing this geometric series gives a total of about 2n - 1 calls, which is O(n).
So there are O(n) function calls in total, and every call does a constant amount of work. Thus, the time complexity is O(n).
The time complexity is the same as the iterative version's. However, recursion consumes space in the call stack. Since there can be at most log n frames on the call stack (the depth of the recursion tree), the space complexity is O(log n).
Thus, the iterative version is the better choice: it has the same time complexity but a better, O(1), space complexity.
Edit: since the list is split into sublists in every call, the slicing cost adds up too. To avoid that, you could pass two index variables to the function to track the list boundary, as in the sketch below.
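A minimal sketch of that index-based variant (maxN_indexed and the lo/hi parameter names are mine): each call now does O(1) work of its own, because no sublist copies are made.
def maxN_indexed(n: list[int], lo: int, hi: int) -> int:
    # Max of n[lo:hi] without creating any sublist copies.
    if hi - lo == 1:
        return n[lo]
    mid = (lo + hi) // 2
    first = maxN_indexed(n, lo, mid)
    second = maxN_indexed(n, mid, hi)
    return first if first > second else second

print(maxN_indexed([3, 1, 4, 1, 5, 9, 2, 6], 0, 8))  # 9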

What's the time complexity of this Python function?

def hasPairWithSum(arr, target):
    for i in range(len(arr)):
        if (target - arr[i]) in arr[i+1:]:
            return True
    return False
In Python, is the time complexity of this function O(n) or O(n^2)? In other words, is (target - arr[i]) in arr[i+1:] effectively another for loop or not?
Also, what about the following function: is it also O(n^2), and if not, why not?
def hasPairWithSum2(arr, target):
    seen = set()
    for num in arr:
        num2 = target - num
        if num2 in seen:
            return True
        seen.add(num)
    return False
Thanks!
The first version has an O(n²) time complexity:
Indeed, arr[i+1:] already creates a new list with n-1-i elements, which is not a constant-time operation. The in operator then scans that new list, and in the worst case visits each value in it.
If we count the number of elements copied into a new list (by arr[i+1:]), we can sum those counts over each iteration of the outer loop:
n-1
+ n-2
+ n-3
+ ...
+ 1
+ 0
This is a triangular number, and equals n(n-1)/2, which is O(n²).
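You can confirm that count with a quick one-liner (n = 10 is an arbitrary example):
n = 10
copied = sum(n - 1 - i for i in range(n))  # elements copied by arr[i+1:] per iteration
assert copied == n * (n - 1) // 2          # 45 == 45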
Second version
The second version, using a set, runs in O(n) average time complexity.
There is no list slicing here, and the in operator on a set has -- contrary to a list argument -- an average constant time complexity. So now every action within the loop has an (average) constant time complexity, giving the algorithm an average time complexity of O(n).
According to the Python docs, the in operator on a set has a worst-case time complexity of O(n). So in the worst case your algorithm's time complexity would still be O(n²).
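To see the gap concretely, here is an informal timing sketch (the array size and target are arbitrary choices of mine; target = -1 forces the worst case, where no pair matches):
import timeit

def hasPairWithSum(arr, target):
    # Quadratic version: slices and scans the rest of the list each time.
    for i in range(len(arr)):
        if (target - arr[i]) in arr[i+1:]:
            return True
    return False

def hasPairWithSum2(arr, target):
    # Average-linear version: set membership tests.
    seen = set()
    for num in arr:
        if target - num in seen:
            return True
        seen.add(num)
    return False

arr = list(range(10_000))
print(timeit.timeit(lambda: hasPairWithSum(arr, -1), number=1))   # noticeably slow
print(timeit.timeit(lambda: hasPairWithSum2(arr, -1), number=1))  # near-instant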
It's O(n^2).
The outer loop runs n times, and for each index i the inner membership test scans up to n-1-i elements.
So the whole thing does roughly (n-1) + (n-2) + ... + 1 + 0 = n(n-1)/2 steps, which is O(n^2).
But if it's critical to something, you might consult someone who is better at this.

Time complexity of a function

I'm trying to find the time complexity (Big-O) of these functions and to provide appropriate reasoning.
First function goes:
r = 0
# Assignment is constant time. Executed once. O(1)
for i in range(n):
    for j in range(i+1, n):
        for k in range(i, j):
            r += 1
            # Assignment and access are O(1). Executed n^3
That is how I've annotated it. I see that this is a triple nested loop, so it must be O(n^3), but I think my reasoning here is very weak: I don't really understand what is going on inside the triple nested loop.
Second function is:
i = n
# Assignment is constant time. Executed once. O(1)
while i > 0:
    k = 2 + 2
    i = i // 2
    # i is reduced by the line above on each iteration,
    # so the assignment and access, which are O(1), execute
    # log n times ??
I worked this algorithm out to be O(1), but as with the first function, I don't see what is going on in the while loop. Can someone explain the time complexity of these two functions thoroughly? Thanks!
For such a simple case, you could find the number of iterations of the innermost loop as a function of n exactly:
sum_{i=0}^{n-1} sum_{j=i+1}^{n-1} sum_{k=i}^{j-1} 1 = n(n^2 - 1)/6
i.e., Θ(n^3) time complexity (see Big Theta). This assumes r += 1 is O(1) even though r can have O(log n) digits (a machine model with log n-bit words).
The second loop is even simpler: i //= 2 is i >>= 1. n has Θ(log n) binary digits, and each iteration drops one of them (shift right), so the whole loop has Θ(log n) time complexity, assuming the i >> 1 shift of log n digits is an O(1) operation (same machine model as in the first example).
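Both closed forms are easy to check by brute force; a quick sketch of mine, not part of the original answer:
def triple_loop_count(n):
    # Count the innermost iterations of the first function directly.
    r = 0
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(i, j):
                r += 1
    return r

def halving_steps(n):
    # Count the iterations of the second function's while-loop.
    i, steps = n, 0
    while i > 0:
        i //= 2
        steps += 1
    return steps

for n in (5, 10, 20):
    assert triple_loop_count(n) == n * (n * n - 1) // 6

print(halving_steps(10**6))  # 20, i.e. floor(log2(n)) + 1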
Well, first of all, for the first function: although the bounds of the inner loops shrink as i and j advance, summing the iteration counts still gives a cubic total (about n^3/6), so the complexity is O(n^3), not anything like O(N log N).
For the second function, the runtime is O(log2 N). Note that because i = i // 2 uses integer division, i reaches exactly 0 (e.g. 2 -> 1 -> 0) rather than shrinking forever through fractions, so the loop terminates.
For a rigorous mathematical approach to each function, you can go to https://www.coursera.org/course/algo. It's a great course for this sort of thing. I was sort of sloppy in my calculations.

python 2.7 - Recursive Fibonacci blows up

I have two functions fib1 and fib2 to calculate Fibonacci.
def fib1(n):
    if n < 2:
        return 1
    else:
        return fib1(n-1) + fib1(n-2)

def fib2(n):
    def fib2h(s, c, n):
        if n < 1:
            return s
        else:
            return fib2h(c, s + c, n-1)
    return fib2h(1, 1, n)
fib2 works fine until it hits the recursion limit. If I understand correctly, Python doesn't optimize tail recursion. That is fine by me.
What gets me is that fib1 slows to a crawl even with fairly small values of n. Why is that happening? How come it doesn't hit the recursion limit before it gets sluggish?
Basically, you are wasting lots of time by computing fib1 for the same values of n over and over. You can easily memoize the function like this:
def fib1(n, memo={}):
    if n in memo:
        return memo[n]
    if n < 2:
        memo[n] = 1
    else:
        memo[n] = fib1(n-1) + fib1(n-2)
    return memo[n]
You'll notice that I am using an empty dict as a default argument. This is usually a bad idea, because the same dict is shared across every call of the function.
Here I am taking advantage of that behaviour, using the shared dict to memoize each result I calculate.
You can also prime the memo with 0 and 1 to avoid needing the n < 2 test:
def fib1(n, memo={0: 1, 1: 1}):
    if n in memo:
        return memo[n]
    else:
        memo[n] = fib1(n-1) + fib1(n-2)
        return memo[n]
Which becomes
def fib1(n, memo={0: 1, 1: 1}):
    return memo.setdefault(n, memo.get(n) or fib1(n-1) + fib1(n-2))
Your problem isn't Python, it's your algorithm. fib1 is a textbook example of tree recursion. You can find a proof here on Stack Overflow that this particular algorithm is roughly Θ(1.6^n).
n=30 (apparently from the comments) takes about a third of a second. If computational time scales up as 1.6^n, we'd expect n=100 to take about 2.1 million years.
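The arithmetic behind that estimate, as a quick back-of-envelope check:
# Scale the observed 1/3 second at n = 30 up to n = 100, at ~1.6x per increment of n.
seconds = (1.6 ** (100 - 30)) / 3
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.2e} years")  # ~2.1e+06 years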
Think of the recursion tree of each. The second version is a single chain of recursive calls, with the addition done in the argument expressions of each call, and the final value passed back up. As you noted, Python doesn't perform tail call optimization, but in an interpreter that did, this tail recursion would be optimized away.
The first version, on the other hand, branches into 2 recursive calls at EACH level, so the number of function executions skyrockets. Worse, most of the work is repeated: fib1(n) calls fib1(n-1), which in turn calls fib1(n-2); but fib1(n) also calls fib1(n-2) directly, so that entire subtree is computed twice, and the duplication compounds at every level.
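To put numbers on that duplication, here is a small counting sketch of mine (fib_with_call_count is not from the original answer):
def fib_with_call_count(n):
    # Returns (fib1(n), number of fib1 invocations the naive recursion makes).
    if n < 2:
        return 1, 1
    f1, c1 = fib_with_call_count(n - 1)
    f2, c2 = fib_with_call_count(n - 2)
    return f1 + f2, c1 + c2 + 1

print(fib_with_call_count(20))  # (10946, 21891): the call count grows like 1.6**n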
