Recursive Fibonacci blows up (Python 2.7)

I have two functions fib1 and fib2 to calculate Fibonacci.
def fib1(n):
    if n < 2:
        return 1
    else:
        return fib1(n-1) + fib1(n-2)

def fib2(n):
    def fib2h(s, c, n):
        if n < 1:
            return s
        else:
            return fib2h(c, s + c, n-1)
    return fib2h(1, 1, n)
fib2 works fine until it blows up the recursion limit. If I understand correctly, Python doesn't optimize tail recursion. That is fine by me.
What gets me is that fib1 slows to a halt even with very small values of n. Why is that happening? How come it doesn't hit the recursion limit before it gets sluggish?

Basically, you are wasting lots of time by computing fib1 for the same values of n over and over. You can easily memoize the function like this:
def fib1(n, memo={}):
    if n in memo:
        return memo[n]
    if n < 2:
        memo[n] = 1
    else:
        memo[n] = fib1(n-1) + fib1(n-2)
    return memo[n]
You'll notice that I am using an empty dict as a default argument. This is usually a bad idea because the same dict is shared across every call to the function.
Here I am taking advantage of that behaviour, using it to memoize each result I calculate.
You can also prime the memo with 0 and 1 to avoid needing the n < 2 test:
def fib1(n, memo={0: 1, 1: 1}):
    if n in memo:
        return memo[n]
    else:
        memo[n] = fib1(n-1) + fib1(n-2)
        return memo[n]
Which becomes
def fib1(n, memo={0: 1, 1: 1}):
    return memo.setdefault(n, memo.get(n) or fib1(n-1) + fib1(n-2))
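As a side note (an addition, not from the original answer), modern Python can get the same effect with functools.lru_cache instead of the mutable-default trick; a minimal sketch, keeping the fib1(0) == fib1(1) == 1 convention used above:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib1(n):
    # Same convention as above: fib1(0) == fib1(1) == 1
    if n < 2:
        return 1
    return fib1(n - 1) + fib1(n - 2)

print(fib1(100))  # fast: each value is computed only once
```

The decorator keeps the cache out of the function's signature, and fib1.cache_clear() resets it if needed.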

Your problem isn't Python, it's your algorithm. fib1 is a good example of tree recursion. You can find a proof here on Stack Overflow that this particular algorithm is roughly Θ(1.6^n).
n=30 (apparently from the comments) takes about a third of a second. If computational time scales up as 1.6^n, we'd expect n=100 to take about 2.1 million years.

Think of the recursion trees in each. The second version is a single chain of recursion: the addition happens in the arguments of each call, and the result is simply passed back up. As you have noted, Python doesn't perform tail-call optimization, but in an interpreter that did, this tail-recursive form could be optimized away.
The first version, on the other hand, branches into 2 recursive calls at EACH level, so the number of function invocations skyrockets. Not only that, but most of the work is repeated: fib1(n-1) itself calls fib1(n-2), which computes exactly the same value as the fib1(n-2) called directly from the first call frame. But that value is recomputed from scratch anyway before the two are added. So the work is needlessly duplicated many times.
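To make the duplication concrete, here is an instrumented sketch (added for illustration, not from the original answer) that counts how often fib1 is entered for each argument:

```python
from collections import Counter

calls = Counter()  # times fib1 is entered, keyed by argument

def fib1(n):
    calls[n] += 1
    if n < 2:
        return 1
    return fib1(n - 1) + fib1(n - 2)

fib1(10)
print(sum(calls.values()))  # 177 calls in total just for n=10
print(calls[2])             # fib1(2) alone is recomputed 34 times
```

The per-argument counts themselves grow like Fibonacci numbers, which is exactly where the exponential blow-up comes from.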

Related

Why is a fibonacci function using while loops faster than else: return fib(n-1) + fib(n-2)?

So I watched this video: https://www.youtube.com/watch?v=OnXlnd3JB-8
He is giving a 16-year-old a brief lesson in coding and asks him to make a Fibonacci function. At about 10:37 he shows the 16-year-old the correct code that tells you the nth number in the Fibonacci sequence. Before he revealed this code, I thought that this might be interesting to try myself, considering that I am very much a beginner at coding. I wrote this:
def fibonacci(n):
    a = []
    while len(a) == 0:
        a.append(0)
    while len(a) == 1:
        a.append(1)
    while n > len(a):
        a.append(a[len(a) - 1] + a[len(a) - 2])
    return a[n-1]
And in the video he writes this:
def fib(n):
    if n == 0:
        return 1
    if n == 1:
        return 1
    else:
        return fib(n-1) + fib(n - 2)
What I don't understand is why the code that I wrote seems to be so much faster than his in the video. It is my understanding that they do essentially the same thing? But if you go to decently big Fibonacci numbers (like the 35th in the sequence) there is a very apparent speed difference. I even went up to the 150th number in the sequence with mine and it still finishes in less than a second. So as a newbie, I don't understand exactly how these processes differ and what is making mine faster, so my question is: why is it so fast? Thanks
The second method uses recursion - calling the same function within the function.
Why is it slower?
Recursion (in your code) involves a lot of function calls, which take time. Of course, you could cache the results and avoid them.
Its time complexity is O(2^n).
It recalculates values that were already calculated before. Say n = 5:
fib(5) = fib(4) + fib(3)
fib(4) = fib(3) + fib(2)
fib(3) = fib(2) + fib(1)
fib(2) = fib(1) + fib(0)
You can see fib(3), fib(2) and fib(1) are calculated again and again.
You don't see these issues in the first code.
For n = 5, you are just adding the previous two values (a[4] and a[3]) and storing the result in a[5].
Its time complexity is O(n).
There are no repeated function calls and no recalculation - that's why it's faster.
Recursion can be elegant, but it's not the most efficient solution. There's overhead in making function calls and passing parameters. Also, your code is not producing the right answers. You'd need something like this to match their code:
def fibonacci(n):
    a = [1,1]
    while n >= len(a):
        a.append(a[-1] + a[-2])
    return a[-1]
In the end, there's really no point in keeping the whole list. All you need are the two most recent entries.
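For instance (a sketch added here, using the same fib(0) == fib(1) == 1 convention as the code above), the whole list can be replaced by two variables:

```python
def fibonacci(n):
    # Keep only the two most recent values instead of the whole list.
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fibonacci(5))   # 8
print(fibonacci(12))  # 233
```

This keeps the O(n) running time but drops the memory use from O(n) to O(1).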

Recursion with memory vs loop

I've made two functions for computing the Fibonacci sequence, one using recursion with memory and one using a loop:
def fib_rec(n, dic = {0 : 0, 1 : 1}):
    if n in dic:
        return dic[n]
    else:
        fib = fib_rec(n - 2, dic) + fib_rec(n - 1, dic)
        dic[n] = fib
        return fib
def fib_loop(n):
    if n == 0 or n == 1:
        return n
    else:
        smaller = 0
        larger = 1
        for i in range(1, n):
            smaller, larger = larger, smaller + larger
        return larger
I've heard that the Fibonacci sequence is often solved using recursion, but I'm wondering why. Both my algorithms have linear time complexity, but the one using a loop doesn't have to carry a dictionary of all past Fibonacci numbers, and it also won't exceed Python's recursion depth.
Is this problem solved using recursion only to teach recursion or am I missing something?
The usual recursive O(N) Fibonacci implementation is more like this:
def fib(n, a=0, b=1):
    if n == 0: return a
    if n == 1: return b
    return fib(n - 1, b, a + b)
The advantage with this approach (aside from the fact that it uses O(1) memory) is that it is tail-recursive: some compilers and/or runtimes can take advantage of that to secretly convert it to a simple JUMP instruction. This is called tail-call optimization.
Python, sadly, doesn't use this strategy, so it will use extra memory for the call stack, which as you noted quickly runs into Python's recursion depth limit.
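Since Python won't do the optimization for us, the tail-recursive version can be converted to a loop by hand - a mechanical rewrite (sketch added here for illustration, same a=0, b=1 convention as above):

```python
def fib(n):
    # The tail-recursive fib(n, a, b) rewritten as a loop: the
    # recursive call becomes a rebinding of the parameters.
    a, b = 0, 1
    while n > 0:
        n, a, b = n - 1, b, a + b
    return a

print(fib(10))  # 55
```

This is essentially what a tail-call-optimizing runtime would do automatically, and it never touches the recursion limit.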
The Fibonacci sequence is mostly a toy problem, used for teaching people how to write algorithms and about big-O notation. It has elegant functional solutions as well as showing the strengths of dynamic programming (basically your dictionary-based solution), but it's also practically a solved problem.
We can also go a lot faster. The page https://www.nayuki.io/page/fast-fibonacci-algorithms describes how. It includes a fast doubling algorithm written in Python:
#
# Fast doubling Fibonacci algorithm (Python)
# by Project Nayuki, 2015. Public domain.
# https://www.nayuki.io/page/fast-fibonacci-algorithms
#

# (Public) Returns F(n).
def fibonacci(n):
    if n < 0:
        raise ValueError("Negative arguments not implemented")
    return _fib(n)[0]

# (Private) Returns the tuple (F(n), F(n+1)).
def _fib(n):
    if n == 0:
        return (0, 1)
    else:
        a, b = _fib(n // 2)
        c = a * (b * 2 - a)
        d = a * a + b * b
        if n % 2 == 0:
            return (c, d)
        else:
            return (d, c + d)
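A quick sanity check of the fast-doubling algorithm (usage added here for illustration; the functions are repeated in condensed form so the example is self-contained):

```python
def fibonacci(n):
    if n < 0:
        raise ValueError("Negative arguments not implemented")
    return _fib(n)[0]

def _fib(n):  # returns the tuple (F(n), F(n+1))
    if n == 0:
        return (0, 1)
    a, b = _fib(n // 2)
    c = a * (b * 2 - a)
    d = a * a + b * b
    return (c, d) if n % 2 == 0 else (d, c + d)

# Cross-check the first 30 values against a straightforward loop.
a, b = 0, 1
for i in range(30):
    assert fibonacci(i) == a
    a, b = b, a + b

print(fibonacci(100))  # 354224848179261915075
```

Because n is halved on every level, this makes only O(log n) recursive calls, versus O(n) for the tail-recursive version.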

Time complexity: if/else under for loop

In a situation like the following (an if/else statement driving the recursion), would the time complexity be O(n) or O(n^2)?
def power_dic(n, k):
    if k == 0:
        return 1
    elif k % 2 == 0:
        return power_dic(n * n, k // 2)
    else:
        return n * power_dic(n, k - 1)
The above code computes n^k.
In situations like this, you need to analyze how the code behaves as a whole: how many times each of the return statements is going to be called, and how that relates to the input.
In this specific example:
The time complexity is O(log k) (assuming all int multiplications are O(1)).
Each time return power_dic(n*n, k//2) is called, return n*power_dic(n, k-1) is called at most once(1).
In addition, return power_dic(n*n, k//2) is called O(log k) times, since k is halved each time it is called.
This means your total cost is about 2*log k, which is O(log k).
(1) Except perhaps at the very end, where power_dic(n, 1) is called, but that one extra call does not change the answer.
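To see the O(log k) behaviour empirically, here is an instrumented version (a sketch added for illustration, with the typos in the question's pseudocode fixed) that counts the recursive calls:

```python
calls = 0  # total number of times power_dic is entered

def power_dic(n, k):
    global calls
    calls += 1
    if k == 0:
        return 1
    elif k % 2 == 0:
        return power_dic(n * n, k // 2)
    else:
        return n * power_dic(n, k - 1)

assert power_dic(3, 1000) == 3 ** 1000
print(calls)  # 16 calls for k=1000, on the order of 2*log2(1000)
```

The k values visited are 1000, 500, 250, 125, 124, 62, 31, ... - one halving step per bit of k, plus one decrement step per 1-bit.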

Finding Time and Space Complexity Of Recursive Functions

I am stumbling at analyzing the time and space complexities of recursive functions.
Consider:
def power(a, n):
    if n == 0:
        return 1
    else:
        return a*power(a, n-1)
When finding the time complexity of this, I think T(n) = c + T(n-1), where c is the constant cost of the multiplication.
This leads to a cost of c*n, i.e. linear O(n). But recursions are usually said to be exponential in cost.
Also, consider this function:
def power(a, n):
    if n == 0:
        return 1
    if n % 2 == 0:
        return power(a*a, n//2)
    else:
        return a*power(a*a, n//2)
The above function satisfies T(n) = c + T(n/2), which means the cost is c*log(n), i.e. O(log n) complexity.
If the analysis is correct then recursion looks to be as fast as iterative algorithms, so where does the overhead come from and are there any exponential recursive algorithms?
It is not true that recursion is exponential in complexity. In fact, there is a theorem that every recursive algorithm has an iterative analogue and vice versa (possibly using additional memory). For an explanation of how to do this, see here, for instance. Also have a look at the section on Wikipedia that compares recursion and iteration.
When a recursive function calls itself more than once in some of its flows, you may end up with exponential complexity, as in the famous example of Fibonacci numbers:
def fib(n):
    if n < 2:
        return 1
    return fib(n - 1) + fib(n - 2)
But this does not mean there is no faster recursive implementation. For instance, using memoization you can get that down to linear complexity.
Still, recursive implementations really are a bit slower, because a stack frame must be stored on each recursive call and restored when the value is returned.

Python 3.3.4 function not computing past a certain argument

I'm solving Project Euler problem #2, and I have found the function relevant to what I want to do - well, partially at least, since I'll need to make modifications for the even numbers in the Fibonacci sequence. I'm trying to print out Fibonacci numbers, up to a certain point, n.
def fib(n):
    if n == 1:
        return 1
    elif n == 0:
        return 0
    else:
        return fib(n-1) + fib(n-2)
Giving
>>> fib(13)
233
However
>>> fib(200)
returns nothing. I'm not sure if it is taking long to compute, or whatnot. I might try this in C++, but would like some knowledge from here first.
It's just taking a long time to compute, because you're recursively calling fib() many, many times. If you add a line as follows, you can get a status update as it runs. It will take a very long time because of the amount of recursion (if it even finishes).
def fib(n):
    print("Fibonacci: {0}".format(n))
    if n == 1:
        return 1
    elif n == 0:
        return 0
    else:
        return fib(n-1) + fib(n-2)
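For contrast (an addition, not part of the original answer), memoizing the same function makes fib(200) return immediately - a sketch using a dictionary cache with the question's fib(0) == 0, fib(1) == 1 convention:

```python
def fib(n, memo={0: 0, 1: 1}):
    # The shared default dict caches every result, so each
    # value is computed exactly once.
    if n not in memo:
        memo[n] = fib(n - 1) + fib(n - 2)
    return memo[n]

print(fib(13))   # 233
print(fib(200))  # answers instantly
```

The recursion depth is only n here, so fib(200) stays well under Python's default recursion limit.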
