Fibonacci numbers not getting printed beyond F(996) - python

I wrote this small snippet to calculate Fibonacci numbers. It works well for numbers up to 996, and from 997 onwards a traceback is printed. I can't figure out what the problem is. Does it have something to do with a maximum recursion limit?
def fib(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fib(n-1)+n

Probably. Take a look at sys.getrecursionlimit(). The default value is 1000, which sounds like it just might be causing the problem you're seeing: once there are 1000 frames on the stack (i.e. slightly less than 1000 recursive function calls), you'll get an error on the next function call.
You can set the recursion limit to a larger value using sys.setrecursionlimit, but there is a maximum value which is platform-dependent (which means you might have to figure out what it is by trial and error).
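For instance, a minimal sketch of checking and raising the limit (the value 3000 is an arbitrary guess, not a recommendation; the real ceiling depends on your platform's stack size):

import sys

print(sys.getrecursionlimit())   # typically 1000 by default

# Raising the limit only postpones the problem; it does not remove it.
sys.setrecursionlimit(3000)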

There is a wonderful Fibonacci function implementation here that doesn't use recursion.
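The linked code isn't reproduced here, but an iterative version can look roughly like this (a sketch under the usual convention F(0) = 0, F(1) = 1, not the implementation behind the link):

def fib_iter(n):
    # Walk the sequence pairwise; no recursion, so the depth limit never applies.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a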

Your code may be running up against the call-stack limit.

You say "It works well for numbers up to 996" ... No, it doesn't, it generates the wrong results. The last line should be:
return fib(n - 1) + fib(n - 2)

You have reached the maximum recursion depth limit. As far as I know, its default value is 1000. You can change it with sys.setrecursionlimit() and read it with sys.getrecursionlimit().

Related

Converting recursive function to completely iterative function without using extra space

Is it possible to convert a recursive function like the one below to a completely iterative function?
def fact(n):
    if n <= 1:
        return
    for i in range(n):
        fact(n-1)
        doSomethingFunc()
It seems pretty easy to do given extra space like a stack or a queue, but I was wondering if we can do this in O(1) space complexity?
Note, we cannot do something like:
def fact(n):
    for i in range(factorial(n)):
        doSomethingFunc()
since it takes a non-constant amount of memory to store the result of factorial(n).
Well, generally speaking, no.
I mean, the space taken on the stack by recursive functions is not just an inconvenience of this programming style. It is memory needed for the computation.
So, sure, for a lot of algorithms that space is unnecessary and could be spared. For a classical factorial, for example:
def fact(n):
    if n <= 1:
        return 1
    else:
        return n*fact(n-1)
the stacking of all the n, n-1, n-2, ..., 1 arguments is not really necessary.
So, sure, you can find an implementation that gets rid of it. But that is an optimization (for example, in the specific case of tail recursion; I am pretty sure you added that "doSomething" precisely to make clear that you don't want to focus on that case).
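For that classical factorial, an iterative version that spares the stack might look like this (a sketch; the result itself still grows with n, but no call frames pile up):

def fact_iter(n):
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result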
You cannot assume in general that an algorithm that doesn't need all those values exists, recursive or iterative. Otherwise you would be saying that every algorithm has an O(1)-space version.
Example: base representation of a positive integer
def baseRepr(num, base):
    if num >= base:
        s = baseRepr(num//base, base)
    else:
        s = ''
    return s + chr(48 + num%base)
Not claiming it is optimal, or even well written.
But, the stacking of the arguments is needed. It is the way you implicitly store the digits that you compute in the reverse order.
An iterative function would also need some memory to store those digits, since you have to compute the last one first.
Well, I am pretty sure that for this simple example you could find a way to compute from left to right, for example using a log computation to know the number of digits in advance. But that's not the point. Just imagine that no algorithm is known other than the one computing digits from right to left. Then you need to store them: either implicitly on the stack using recursion, or explicitly in allocated memory. So again, memory used on the stack is not just an inconvenience of recursion. It is the way a recursive algorithm stores things that an iterative algorithm would otherwise have to store explicitly.
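To make that explicit, an iterative sketch (the name base_repr_iter is made up here) that stores the digits in a list instead of on the call stack:

def base_repr_iter(num, base):
    # The digits still have to live somewhere: here an explicit list plays
    # the role the call stack played in the recursive version.
    digits = []
    while True:
        digits.append(chr(48 + num % base))
        num //= base
        if num == 0:
            break
    return ''.join(reversed(digits))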
Note, we cannot do something like:
def fact(n):
    for i in range(factorial(n)):
        doSomethingFunc()
since it takes a non-constant amount of memory to store the result of factorial(n).
Yes.
I was wondering if we can do this in O(1) space complexity?
So, no.

Why is "maximum recursion depth exceeded" not (always) raised in a Python list comprehension?

I noticed that "maximum recursion depth exceeded" does not happen when a basic recursive function is used inside a list comprehension, while it does when the function is used outside of it. I would like to understand why, to get a better understanding of how list comprehensions work and then use them efficiently.
I tried it with a basic Fibonacci function applied over a range.
from functools import lru_cache

@lru_cache(maxsize=2048)
def fib(n): return n if n < 2 else fib(n-1) + fib(n-2)

# The following will be calculated (and 5000 can be replaced by a much bigger integer)
fb = [fib(n) for n in range(5000)]
print(fb[-1])

# but the next line:
print(fib(500))
# will cause a RecursionError: maximum recursion depth exceeded
# and will need this to be enabled:
import sys
sys.setrecursionlimit(1024)
print(fib(500))
Every time the comprehension evaluates fib(n), it saves that result in the cache. By the time it gets to fib(500), fib(499) and fib(498) are already cached, so they don’t run again. The stack goes 1 call of fib deep.
When you run fib(500) immediately, the first thing it evaluates is fib(499), which isn’t cached and evaluates fib(498), which isn’t cached and evaluates fib(497)… all the way down to fib(1). The stack goes 499 calls of fib deep.
You should be able to see the same thing by running:
print(fib(250))
print(fib(500))
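If you want to watch the cache doing that work, functools.lru_cache exposes cache_info() on the decorated function (a quick sketch, assuming the fib defined above):

fb = [fib(n) for n in range(5000)]   # warms the cache one small step at a time
print(fib.cache_info())              # many hits: earlier results are being reused
print(fib(4999))                     # answered from the cache, no deep recursion needed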

Recursive Factorial Calculator RecursionError

This recursive factorial calculator runs fine all the way up to an input of 994, when I receive this error: "RecursionError: maximum recursion depth exceeded in comparison". Can someone please explain what is meant by this? How can there be a maximum number of recursions? Thanks in advance.
def factorial(x):
    if x == 0:
        return 1
    else:
        return x * factorial(x - 1)

while True:
    u_input = input("")
    print(factorial(int(u_input)))
def calc_factorial(num):
    fact_total = 1
    while num > 0:
        fact_total *= num
        num -= 1
    return fact_total
EDIT:
I understand that recursion is re-using a function from within that function as a loop but I do not understand what recursion depth is and would like that explained. I could not tell from the answers to the other question. Apologies for the confusion.
Recursive calls are just like any other function calls, and function calls use memory to keep track of the state inside each function. You will notice that you get a very long traceback showing all the nested function calls that are on the stack. Since memory is finite, recursion depth (the number of nested function calls) is inherently limited even without python's enforced limit.
The error means what it says: Python limits the depth of how many recursive calls you can make. The default is 1000, which was chosen as a number that means you most likely have infinite recursion somewhere. Since no computer can keep track of an infinite amount of recursive calls (and such a program would never finish anyway), halting with this error message is seen as preferable to recurring as deeply as the computer can handle, which eventually results in a stack overflow.
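If you just want the program to survive the failure instead of dying with a traceback, you can catch the exception (a sketch, assuming the recursive factorial and input loop from the question):

while True:
    u_input = input("")
    try:
        print(factorial(int(u_input)))
    except RecursionError:
        print("Too large for the recursive version; use an iterative one instead.")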
You can change this limit if you wish with sys.setrecursionlimit, but the best way to avoid this issue is to change your program to work iteratively instead of recursively. Fortunately, this is easy for a factorial calculation:
def factorial(x):
    result = 1
    for num in range(1, x+1):
        result *= num
    return result
There is also a built-in function in the math library. It uses an improved algorithm to compute factorials quickly. A hand-written recursive version runs into the recursion limit, so using the built-in function avoids that problem.
import math
math.factorial(5)
Answer: 120

Quicksort algorithm as Wikipedia describes it, in Python

On Wikipedia there is pseudo-code for the quicksort algorithm:
So I tried to implement it in Python, but it does not sort anything.
def partition(A, lo, hi):
    pivot = A[hi]
    i = lo
    # Swap
    for j in range(lo, len(A)-1):
        if (A[j] <= pivot):
            val = A[i]
            A[i] = A[j]
            A[j] = val
            i = i+1
    val = A[i]
    A[i] = A[hi]
    A[hi] = val
    return (A, i)
def quicksort(A, lo, hi):
    if (lo < hi):
        [A, p] = partition(A, lo, hi)
        quicksort(A, lo, p-1)
        quicksort(A, p+1, hi)

A = [5, 3, 2, 6, 8, 9, 1]
A = quicksort(A, 0, len(A)-1)
print(A)
ORIGINAL: It does not throw an error, so I do not know where I made a mistake.
UPDATE: It now goes into infinite recursion.
It doesn't throw an error or print anything because there is no main program to run anything. You indented what should be the main program, so that is now part of the quicksort function.
Also, this code does throw an error, because you left a comment in what you posted. I'll clean up the code and edit your posting.
I corrected several code errors:
Removed "enter code here" text, which caused an obvious compilation error.
Corrected indentation, so that the last three lines are now your main program.
Corrected the main-program call: quicksort takes the bounds (subscripts) of the array, but you were passing in the array elements themselves.
That fixes your given problem. You now have infinite recursion due to not handling the return values properly. Also, your main program destroys the sorted array, since quicksort doesn't return anything. The final print statement will give None as its result.
You haven't quite implemented the given algorithm. The most important problem is the for loop's upper limit.
Python loops do not include the end value. Your given loop will run j through the values lo through len(A)-2, so you'll never treat the last value of the list.
The upper limit given in Wikipedia is hi, not the list end.
Fix those, and you'll be close to a solution.
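As a point of reference, a corrected partition might look roughly like this (a sketch, combining the bound fix with the swap idiom mentioned below and returning only the pivot index, since the list is modified in place):

def partition(A, lo, hi):
    pivot = A[hi]
    i = lo
    for j in range(lo, hi):          # hi is excluded, so j runs lo .. hi-1 as in the pseudo-code
        if A[j] <= pivot:
            A[i], A[j] = A[j], A[i]  # swap without a temporary variable
            i = i + 1
    A[i], A[hi] = A[hi], A[i]        # put the pivot into its final position
    return i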
Also, I strongly recommend that you stick in a couple of tracing print statements, so you can see follow how the program works. For instance, as the first statement of each function, print the function name and the input parameters.
Why do you return A from the function? The Wikipedia algorithm doesn't do that, and it's not necessary: you altered the list in place.
Since you already know about multiple assignment, note that there's an easier way to swap two values:
a, b = b, a
Does this get you moving through the current problems, enough to get you back on the learning track you want?

seeking convergence with optimize.fmin on scipy

I have a function I want to minimize with scipy.optimize.fmin. Note that I force a print when my function is evaluated.
My problem is: when I start the minimization, the printed value decreases until it reaches a certain point (the value 46700222.800). From there it continues to decrease in very small steps, e.g., 46700222.797, 46700222.765, 46700222.745, 46700222.699, 46700222.688, 46700222.678.
So intuitively, I feel I have reached the minimum, since each step changes the value by less than 1. But the algorithm keeps running until I get a "Maximum number of function evaluations has been exceeded" error.
My question is: how can I force the algorithm to accept the parameter value once the function evaluation no longer really evolves (say, I don't gain more than 1 per iteration)? I read that the option ftol could be used, but it has absolutely no effect on my code. In fact, I don't even know what value to put for ftol. I tried everything from 0.00001 to 10000 and there is still no convergence.
There is actually no need to see your code to explain what is happening. I will answer point by point quoting you.
My problem is: when I start the minimization, the printed value decreases
until it reaches a certain point (the value 46700222.800). From there it
continues to decrease in very small steps, e.g.,
46700222.797, 46700222.765, 46700222.745, 46700222.699, 46700222.688, 46700222.678.
Notice that the difference between the last two values is -0.009999997913837433, i.e. about 1e-2. In the conventions of minimization algorithms, what you call values is usually labelled x. The algorithm stops if these two conditions are respected AT THE SAME TIME at the n-th iteration:
convergence on x: the absolute value of the difference between x[n] and the next iteration x[n+1] is smaller than xtol
convergence on f(x): the absolute value of the difference between f[n] and f[n+1] is smaller than ftol.
Moreover, the algorithm stops also if the maximum number of iterations is reached.
Now notice that xtol defaults to a value of 1e-4, about 100 times smaller than the value 1e-2 that appears for your case. The algorithm then does not stop, because the first condition on xtol is not respected, until it reaches the maximum number of iterations.
I read that the option ftol could be used but it has absolutely no
effect on my code. In fact, I don't even know what value to put for
ftol. I tried everything from 0.00001 to 10000 and there is still no
convergence.
This helped you respecting the second condition on ftol, but again the first condition was never reached.
To reach your aim, increase xtol as well.
The following methods will also help you more generally when debugging the convergence of an optimization routine.
Inside the function you want to minimize, print the value of x and the value of f(x) before returning it, then run the optimization routine. From these prints you can decide on sensible values for xtol and ftol (see the sketch after this list).
Consider nondimensionalizing the problem. There is a reason that ftol and xtol both default to 1e-4: they expect you to formulate the problem so that x and f(x) are of order O(1) or O(10), say numbers between -100 and +100. If you carry out the nondimensionalization you handle a simpler problem, in the sense that you often know what values to expect and what tolerances you are after.
If you are interested only in a rough calculation, can't estimate typical values for xtol and ftol, and know (or hope) that your problem is well behaved, i.e. that it will converge, you can run fmin in a try block, pass to fmin only maxiter=20 (say), and catch the error regarding "Maximum number of function evaluations has been exceeded".
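A minimal sketch of that first suggestion (the quadratic objective and the 1e-2 tolerances are made up for illustration; substitute your own function and values you read off from the prints):

import numpy as np
from scipy.optimize import fmin

def objective(x):
    value = np.sum((x - 3.0) ** 2)      # toy function, replace with your own
    print("x =", x, "f(x) =", value)    # watch how much x and f(x) actually move
    return value

x0 = np.array([0.0, 0.0])
# Loosen both tolerances so the run stops once neither x nor f(x) changes much.
best_x = fmin(objective, x0, xtol=1e-2, ftol=1e-2, maxfun=2000)
print(best_x)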
I just spent three hours digging into the source code of scipy.minimize. In it, the "while" loop in function "_minimize_neldermead" deals with the convergence rule:
if (numpy.max(numpy.ravel(numpy.abs(sim[1:] - sim[0]))) <= xtol and
        numpy.max(numpy.abs(fsim[0] - fsim[1:])) <= ftol):
    break
"fsim" is the variable that stores results from functional evaluation. However, I found that fsim[0] = f(x0) which is the function evaluation of the initial value, and it never changes during the "while" loop. fsim[1:] updates itself all the time. The second condition of the while loop was never satisfied. It might be a bug. But my knowledge of mathematical optimization is far from enough to judge it.
My current solution: design your own system to control the convergence. Add this in your function:
global x_old, Q_old
if (np.absolute(x_old - x).sum() <= 1e-4) and (np.absolute(Q_old - Q).sum() <= 1e-4):
    return None
x_old = x; Q_old = Q
Here Q=f(x). Don't forget to give them an initial value.
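Filled out into a self-contained example, that workaround might look roughly like this (the toy objective, the 1e-4 thresholds, and raising a custom exception to break out instead of returning None are all illustrative choices, not part of the original snippet):

import numpy as np
from scipy.optimize import fmin

class Converged(Exception):
    pass

x_old = np.array([np.inf, np.inf])   # initial values so the first check never fires
Q_old = np.inf

def objective(x):
    global x_old, Q_old
    Q = np.sum((x - 3.0) ** 2)        # toy function, replace with your own
    # Hand-rolled convergence test on both x and f(x).
    if np.absolute(x_old - x).sum() <= 1e-4 and np.absolute(Q_old - Q).sum() <= 1e-4:
        raise Converged
    x_old, Q_old = x.copy(), Q
    return Q

try:
    res = fmin(objective, np.array([0.0, 0.0]))
    print("fmin converged on its own:", res)
except Converged:
    print("stopped by the custom criterion near", x_old)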
Update 01/30/15:
I got it! This should be the correct code for the second line of the if function (i.e. remove numpy.absolute):
numpy.max(fsim[0] - fsim[1:]) <= ftol)
By the way, this is my first time debugging an open-source project. I just created an issue on GitHub.
Update 01/31/15 - 1:
I don't think my previous update is correct. Nevertheless, here is a screenshot of the iterations of a function using the original code.
It prints the values of the sim and fsim variables for each iteration. As you can see, the changes at each iteration are smaller than both the xtol and ftol values, but it just keeps going without stopping. The original code compares the difference between fsim[0] and the rest of the fsim values, i.e. the value here is always 87.63228689 - 87.61312213 = 0.01916476, which is greater than ftol=1e-2.
Update 01/31/15 - 2:
Here is the data and code that I used to reproduce the previous results. It includes two data files and one iPython Notebook file.
From the documentation it looks like you DO want to change the ftol arg.
Post your code so we can look at your progress.
edit: Try increasing xtol as well.
Your question is a bit ambiguous. Are you printing the value of your function, or the point where it is evaluated?
My understanding of xtol and ftol is as follows. The iteration stops
when the change in the value of the function between iterations is less than ftol
AND
when the change in x between successive iterations is less than xtol
When you say "...accept the value of the parameter...", this suggests you should change xtol.
