Time complexity for recursive functions - Python

How do I count the time complexity for this? Should I count it for the two functions separately?
def recursive_function2(mystring, a, b):
    if a >= b:
        return True
    else:
        if mystring[a] != mystring[b]:
            return False
        else:
            return recursive_function2(mystring, a + 1, b - 1)

def function2(mystring):
    return recursive_function2(mystring, 0, len(mystring) - 1)

Big O describes the worst-case running time as a function of the input size N. In your recursive function, the worst case is that the recursion keeps going until a >= b. For an input string of size N, that means the function calls itself roughly N/2 times; since we drop constant coefficients for arbitrarily large N, your function is O(N).
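If you want to check that count empirically, here is a small sketch (mine, not from the question) that instruments the recursion with a call counter:

calls = 0

def counted_recursive_function2(mystring, a, b):
    global calls
    calls += 1  # count every invocation, including the base case
    if a >= b:
        return True
    if mystring[a] != mystring[b]:
        return False
    return counted_recursive_function2(mystring, a + 1, b - 1)

for n in (10, 100, 1000):
    calls = 0
    counted_recursive_function2("a" * n, 0, n - 1)
    print(n, calls)  # prints roughly n/2 + 1 calls for each palindrome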


Time complexity for my max numbers implementation

def maxN(n: list[int]) -> int:
    if len(n) == 2:
        if n[0] > n[1]:
            return n[0]
        else:
            return n[1]
    if len(n) == 1:
        return n[0]
    else:
        mid = len(n) // 2
        first = maxN(n[0:mid])
        second = maxN(n[mid:])
        if first > second:
            return first
        else:
            return second
I'm struggling with my implementation, because I don't know whether this is better than simply using a for or a while loop.
At every level, each call spawns two more calls until there are no more numbers to split. Roughly, the total number of calls is 1 + 2 + 4 + 8 + ... + n. The series has about log n terms, since the array is halved at every level.
To get the total number of calls, we can sum this geometric series: 1 + 2 + 4 + ... + n = 2n - 1, which is approximately n.
So there are O(n) calls in total, and every call does a constant amount of work. Thus, the time complexity is O(n).
The time complexity is similar to the iterative version. However, recursion consumes space on the call stack. Since there can be at most log n frames on the stack at once (the depth of the recursion tree), the space complexity is O(log n).
Thus, the iterative version is the better choice: it has the same time complexity but a better, O(1), space complexity.
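For comparison, here is a minimal iterative sketch (my wording; it is equivalent to the built-in max) with O(n) time and O(1) extra space:

def max_iterative(n: list[int]) -> int:
    best = n[0]  # assumes a non-empty list
    for x in n:
        if x > best:
            best = x
    return best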
Edit: since the list is split into new sublists in every call, the cost of that slicing (each slice copies its elements) adds up too. To avoid it, you can pass two indexes into the function to track the list boundary.
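A sketch of that fix (the inner helper _max is my own naming): the boundaries lo and hi are passed down instead of slices, so no sublists are built:

def maxN(n: list[int]) -> int:
    def _max(lo: int, hi: int) -> int:
        # maximum of n[lo:hi], assuming hi > lo
        if hi - lo == 1:
            return n[lo]
        mid = (lo + hi) // 2
        return max(_max(lo, mid), _max(mid, hi))
    return _max(0, len(n))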

Time complexity of finding max value in list using recursive bsearch

Sorry if this seems like a dumb question (I am new to Big O), but what is the difference between (a) the time complexity of this function based on its number of comparisons and (b) the time complexity of the function overall?
def bmax(list):
    if len(list) == 1:
        return list[0]
    else:
        middle = len(list) // 2
        max1 = bmax(list[0:middle])
        max2 = bmax(list[middle:len(list)])
        if max1 > max2:
            return max1
        else:
            return max2
I derived them to be (a) O(n) and (b) O(log N) respectively, but the second answer seems off: based on my understanding, although the list is divided into 2 subarrays at each recursive call, the number of comparisons is still n - 1?
The time complexity of this function based on its number of comparisons can be derived by "counting" how many comparisons are performed when calling the function on a list with N elements. There are two statements where you directly use comparisons here: len(list) == 1 and max1 > max2. There are clearly O(N) comparisons.
The time complexity of the function overall must take into account all the statements, so it will be at least equal to the previous complexity (therefore it can't be O(logN)). In this specific case, the slicing operations cost extra: in general, the operation l[i1:i2] costs O(i2-i1), because it copies that many elements. For more details, check out this question. Summing the copying over the recursion gives O(N) work per level across O(logN) levels, so the total time complexity is O(N logN) in this case. If you want to improve the performance, you can pass indexes instead of slicing:
def bmax(lst):
    def _bmax(start, end):
        if end - start <= 1:
            return lst[start]
        else:
            middle = start + (end - start) // 2
            max1 = _bmax(start, middle)
            max2 = _bmax(middle, end)
            if max1 > max2:
                return max1
            else:
                return max2
    return _bmax(0, len(lst))
If you want to simplify your code a bit:
def bmax(lst):
    def _bmax(start, end):
        if end - start <= 1:
            return lst[start]
        middle = start + (end - start) // 2
        return max(_bmax(start, middle), _bmax(middle, end))
    return _bmax(0, len(lst))
As you correctly pointed out, the complexity is still O(n) here, because you still have to do all the comparisons.
However, if I'm not mistaken, if the recursive calls to bmax were executed in parallel on separate threads, and you had unboundedly many threads available, the function would finish in O(log(n)) time (the depth of the recursion tree).

Time complexity: if/else under for loop

In a situation like the following (an if/else statement under a for loop), would the time complexity be O(n) or O(n^2):
def power_dic(n, k):
    if k == 0:
        return 1
    elif k % 2 == 0:
        return power_dic(n * n, k // 2)
    else:
        return n * power_dic(n, k - 1)
The above code computes n^k.
In situations like this, you need to analyze how the code behaves as a whole: how many times each of the return statements will be called, and how that count relates to the input.
In this specific example:
The time complexity is O(log k) (assuming all int multiplications are O(1)).
For each time return power_dic(n*n, k//2) is called, return n*power_dic(n, k-1) is called at most once(1), because the odd branch makes k even for the next step.
In addition, return power_dic(n*n, k//2) is called O(log k) times, since k is cut in half each time it runs.
This means your total number of calls is about 2*log k, which is O(log k).
(1) Except perhaps at the very end, where you call power_dic(n, 1), but that one extra call does not change the answer.
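To check the O(log k) bound empirically, here is a small sketch (the count_calls wrapper is my own) that counts the calls to the fixed power_dic above and compares them against 2*log2(k):

import math

def count_calls(f):
    def wrapper(*args):
        wrapper.calls += 1
        return f(*args)
    wrapper.calls = 0
    return wrapper

power_dic = count_calls(power_dic)  # rebinding the name routes the recursive calls through the wrapper
for k in (15, 1000, 10**6):
    power_dic.calls = 0
    power_dic(2, k)
    print(k, power_dic.calls, round(2 * math.log2(k), 1))  # the call count stays near 2*log2(k)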

Why is branching recursion faster than linear recursion (example: list inversion)

Yesterday I wrote two possible reverse functions for lists to demonstrate different ways to do list inversion. Then I noticed that the function using branching recursion (rev2) is actually faster than the function using linear recursion (rev1), even though it needs more calls in total to finish and makes almost as many (one fewer) non-trivial calls, each of which is actually more computation-intensive, as the linearly recursive function. I am not explicitly triggering any parallelism anywhere, so where does the performance difference come from that makes a function with more, and more involved, calls take less time?
from sys import argv
from time import time

nontrivial_rev1_call = 0  # counts calls involving concatenation, indexing and slicing
nontrivial_rev2_call = 0  # counts calls involving concatenation, a len() call, division and slicing

length = int(argv[1])

def rev1(l):
    global nontrivial_rev1_call
    if l == []:
        return []
    nontrivial_rev1_call += 1
    return rev1(l[1:]) + [l[0]]

def rev2(l):
    global nontrivial_rev2_call
    if l == []:
        return []
    elif len(l) == 1:
        return l
    nontrivial_rev2_call += 1
    return rev2(l[len(l)//2:]) + rev2(l[:len(l)//2])

lrev1 = rev1(list(range(length)))
print('Calls to rev1 for a list of length {}: {}'.format(length, nontrivial_rev1_call))
lrev2 = rev2(list(range(length)))
print('Calls to rev2 for a list of length {}: {}'.format(length, nontrivial_rev2_call))
print()

l = list(range(length))
start = time()
for i in range(1000):
    lrev1 = rev1(l)
end = time()
print("Average time taken for 1000 passes on a list of length {} with rev1: {} ms".format(length, (end-start)/1000*1000))
start = time()
for i in range(1000):
    lrev2 = rev2(l)
end = time()
print("Average time taken for 1000 passes on a list of length {} with rev2: {} ms".format(length, (end-start)/1000*1000))
Example call:
$ python reverse.py 996
Calls to rev1 for a list of length 996: 996
Calls to rev2 for a list of length 996: 995
Average time taken for 1000 passes on a list of length 996 with rev1: 7.90629506111145 ms
Average time taken for 1000 passes on a list of length 996 with rev2: 1.3290061950683594 ms
Short answer: it is not so much the number of calls here as the amount of list copying. As a result, the linear recursion has time complexity O(n^2), whereas the branching recursion has time complexity O(n log n).
The recursive call here does not operate in constant time: it takes time proportional to the length of the list it copies. Indeed, copying a list of n elements requires O(n) time.
Now if we perform the linear recursion, we make O(n) calls (the maximum call depth is O(n)). Each time, we copy the list entirely, except for one item. So the time complexity is:
  n
 ---
 \            n * (n+1)
 /      k  =  ---------
 ---              2
 k=1
So, given that each call itself does O(1) work apart from the copying, the time complexity of the algorithm is O(n^2).
In case we perform the branching recursion, we make two copies of the list, each of approximately half the length. So every level of recursion takes O(n) time in total (the halves are copies as well, and summed up they amount to one full copy of the list per level). But the number of levels scales logarithmically:
 log n
 -----
 \
 /      n  =  n log n
 -----
 k=1
So the time complexity here is O(n log n) (log being the base-2 logarithm, which does not matter in terms of big O).
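The linear cost of each copy is easy to confirm; here is a micro-benchmark sketch (my own, standard library only):

import timeit

for n in (1_000, 10_000, 100_000):
    t = timeit.timeit('l[1:]', setup='l = list(range({}))'.format(n), number=1000)
    print(n, round(t, 4))  # the time per slice grows roughly linearly with n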
Using views
Instead of copying lists, we can use views: here we keep a reference to the same list, but use two integers that specify the span of the list. For example:
def rev1(l, frm, to):
    global nontrivial_rev1_call
    if frm >= to:
        return []
    nontrivial_rev1_call += 1
    result = rev1(l, frm+1, to)
    result.append(l[frm])
    return result

def rev2(l, frm, to):
    global nontrivial_rev2_call
    if frm >= to:
        return []
    elif to - frm == 1:
        return [l[frm]]  # wrap the element in a list so the concatenation below works
    nontrivial_rev2_call += 1
    mid = (frm + to) // 2
    return rev2(l, mid, to) + rev2(l, frm, mid)
If we now run the timeit module, we obtain:
>>> timeit.timeit(partial(rev1, list(range(966)), 0, 966), number=10000)
2.176353386021219
>>> timeit.timeit(partial(rev2, list(range(966)), 0, 966), number=10000)
3.7402000919682905
This is because rev1 no longer copies the list, so its append(..) calls run in O(1) amortized time. The branching recursion, however, still concatenates two lists at every step, which costs O(k) with k the sum of the lengths of the two lists. So now we are comparing O(n) (linear recursion) with O(n log n) (branching recursion), and the linear version wins.
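For reference (my addition, not part of the original comparison), the idiomatic O(n) ways to reverse a list in Python avoid recursion altogether:

l = list(range(10))
rev_a = l[::-1]            # reversed copy via slicing, O(n)
rev_b = list(reversed(l))  # same result via the reversed() iterator
l.reverse()                # in-place reversal, O(n) time and O(1) extra space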

Time complexity of variable loops

I want to calculate the O(n) of my program (in Python). There are two problems:
1: I have only a very basic knowledge of O(n) [i.e., I know O(n) has to do with time and the number of calculations], and
2: none of the loops in my program are set to any particular value; they are based on the input data.
The n in O(n) means precisely the input size. So, if I have this code:
def findmax(l):
    maybemax = 0  # note: this starting value assumes the items are non-negative
    for i in l:
        if i > maybemax:
            maybemax = i
    return maybemax
Then I'd say that the complexity is O(n) -- how long it takes is proportional to the input size (since the loop loops as many times as the length of l).
If I had
def allbigger(l, m):
    for el in l:
        for el2 in m:
            if el < el2:
                return False
    return True
then, in the worst case (that is, when I return True), I have one loop of length len(l) and, inside it, one of length len(m), so the complexity is O(len(l) * len(m)), or O(n^2) if the lists are expected to be about the same length.
Try this out to start, then head to wiki:
Plain English Explanation of Big O Notation
